| language | repo | path | class_span | source | target |
|---|---|---|---|---|---|
python | altair-viz__altair | altair/vegalite/v6/schema/channels.py | {
"start": 178177,
"end": 207600
} | class ____(
FieldChannelMixin,
core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull,
):
r"""
Fill schema wrapper.
Parameters
----------
shorthand : str, dict, Sequence[str], :class:`RepeatRef`
shorthand for field, aggregate, and type
aggregate : dict, :class:`Aggregate`, :class:`ArgmaxDef`, :class:`ArgminDef`, :class:`NonArgAggregateOp`, Literal['average', 'count', 'distinct', 'max', 'mean', 'median', 'min', 'missing', 'product', 'q1', 'q3', 'ci0', 'ci1', 'stderr', 'stdev', 'stdevp', 'sum', 'valid', 'values', 'variance', 'variancep', 'exponential', 'exponentialb']
Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
``"min"``, ``"max"``, ``"count"``).
**Default value:** ``undefined`` (None)
**See also:** `aggregate <https://vega.github.io/vega-lite/docs/aggregate.html>`__
documentation.
bandPosition : float
Relative position on a band of a stacked, binned, time unit, or band scale. For
example, the marks will be positioned at the beginning of the band if set to ``0``,
and at the middle of the band if set to ``0.5``.
bin : bool, dict, :class:`BinParams`, None
A flag for binning a ``quantitative`` field, `an object defining binning parameters
<https://vega.github.io/vega-lite/docs/bin.html#bin-parameters>`__, or indicating
that the data for ``x`` or ``y`` channel are binned before they are imported into
Vega-Lite (``"binned"``).
* If ``true``, default `binning parameters
<https://vega.github.io/vega-lite/docs/bin.html#bin-parameters>`__ will be
applied.
* If ``"binned"``, this indicates that the data for the ``x`` (or ``y``) channel are
already binned. You can map the bin-start field to ``x`` (or ``y``) and the
bin-end field to ``x2`` (or ``y2``). The scale and axis will be formatted similar
to binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can
also set the axis's `tickMinStep
<https://vega.github.io/vega-lite/docs/axis.html#ticks>`__ property.
**Default value:** ``false``
**See also:** `bin <https://vega.github.io/vega-lite/docs/bin.html>`__
documentation.
condition : dict, :class:`ConditionalValueDefGradientstringnullExprRef`, :class:`ConditionalParameterValueDefGradientstringnullExprRef`, :class:`ConditionalPredicateValueDefGradientstringnullExprRef`, Sequence[dict, :class:`ConditionalValueDefGradientstringnullExprRef`, :class:`ConditionalParameterValueDefGradientstringnullExprRef`, :class:`ConditionalPredicateValueDefGradientstringnullExprRef`]
One or more value definition(s) with `a parameter or a test predicate
<https://vega.github.io/vega-lite/docs/condition.html>`__.
**Note:** A field definition's ``condition`` property can only contain `conditional
value definitions <https://vega.github.io/vega-lite/docs/condition.html#value>`__
since Vega-Lite only allows at most one encoded field per encoding channel.
field : str, dict, :class:`Field`, :class:`FieldName`, :class:`RepeatRef`
**Required.** A string defining the name of the field from which to pull a data
value or an object defining iterated values from the `repeat
<https://vega.github.io/vega-lite/docs/repeat.html>`__ operator.
**See also:** `field <https://vega.github.io/vega-lite/docs/field.html>`__
documentation.
**Notes:** 1) Dots (``.``) and brackets (``[`` and ``]``) can be used to access
nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"``). If
field names contain dots or brackets but are not nested, you can use ``\\`` to
escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"``). See more details
about escaping in the `field documentation
<https://vega.github.io/vega-lite/docs/field.html>`__. 2) ``field`` is not required
if ``aggregate`` is ``count``.
legend : dict, :class:`Legend`, None
An object defining properties of the legend. If ``null``, the legend for the
encoding channel will be removed.
**Default value:** If undefined, default `legend properties
<https://vega.github.io/vega-lite/docs/legend.html>`__ are applied.
**See also:** `legend <https://vega.github.io/vega-lite/docs/legend.html>`__
documentation.
scale : dict, :class:`Scale`, None
An object defining properties of the channel's scale, which is the function that
transforms values in the data domain (numbers, dates, strings, etc) to visual values
(pixels, colors, sizes) of the encoding channels.
If ``null``, the scale will be `disabled and the data value will be directly encoded
<https://vega.github.io/vega-lite/docs/scale.html#disable>`__.
**Default value:** If undefined, default `scale properties
<https://vega.github.io/vega-lite/docs/scale.html>`__ are applied.
**See also:** `scale <https://vega.github.io/vega-lite/docs/scale.html>`__
documentation.
sort : dict, :class:`Sort`, Sequence[str], Sequence[bool], Sequence[float], :class:`SortArray`, :class:`SortOrder`, :class:`AllSortString`, :class:`SortByChannel`, :class:`SortByEncoding`, :class:`EncodingSortField`, :class:`SortByChannelDesc`, Sequence[dict, :class:`DateTime`], Literal['-x', '-y', '-color', '-fill', '-stroke', '-strokeWidth', '-size', '-shape', '-fillOpacity', '-strokeOpacity', '-opacity', '-text', 'ascending', 'descending', 'x', 'y', 'color', 'fill', 'stroke', 'strokeWidth', 'size', 'shape', 'fillOpacity', 'strokeOpacity', 'opacity', 'text'], None
Sort order for the encoded field.
For continuous fields (quantitative or temporal), ``sort`` can be either
``"ascending"`` or ``"descending"``.
For discrete fields, ``sort`` can be one of the following:
* ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
JavaScript.
* `A string indicating an encoding channel name to sort by
<https://vega.github.io/vega-lite/docs/sort.html#sort-by-encoding>`__ (e.g.,
``"x"`` or ``"y"``) with an optional minus prefix for descending sort (e.g.,
``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
sort-by-encoding definition
<https://vega.github.io/vega-lite/docs/sort.html#sort-by-encoding>`__. For
example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
"descending"}``.
* `A sort field definition
<https://vega.github.io/vega-lite/docs/sort.html#sort-field>`__ for sorting by
another field.
* `An array specifying the field values in preferred order
<https://vega.github.io/vega-lite/docs/sort.html#sort-array>`__. In this case, the
sort order will obey the values in the array, followed by any unspecified values
in their original order. For discrete time field, values in the sort array can be
`date-time definition objects
<https://vega.github.io/vega-lite/docs/datetime.html>`__. In addition, for time
units ``"month"`` and ``"day"``, the values can be the month or day names (case
insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"``).
* ``null`` indicating no sort.
**Default value:** ``"ascending"``
**Note:** ``null`` and sorting by another channel is not supported for ``row`` and
``column``.
**See also:** `sort <https://vega.github.io/vega-lite/docs/sort.html>`__
documentation.
timeUnit : dict, :class:`TimeUnit`, :class:`MultiTimeUnit`, :class:`BinnedTimeUnit`, :class:`SingleTimeUnit`, :class:`TimeUnitParams`, :class:`UtcMultiTimeUnit`, :class:`UtcSingleTimeUnit`, :class:`LocalMultiTimeUnit`, :class:`LocalSingleTimeUnit`, Literal['binnedyear', 'binnedyearquarter', 'binnedyearquartermonth', 'binnedyearmonth', 'binnedyearmonthdate', 'binnedyearmonthdatehours', 'binnedyearmonthdatehoursminutes', 'binnedyearmonthdatehoursminutesseconds', 'binnedyearweek', 'binnedyearweekday', 'binnedyearweekdayhours', 'binnedyearweekdayhoursminutes', 'binnedyearweekdayhoursminutesseconds', 'binnedyeardayofyear', 'binnedutcyear', 'binnedutcyearquarter', 'binnedutcyearquartermonth', 'binnedutcyearmonth', 'binnedutcyearmonthdate', 'binnedutcyearmonthdatehours', 'binnedutcyearmonthdatehoursminutes', 'binnedutcyearmonthdatehoursminutesseconds', 'binnedutcyearweek', 'binnedutcyearweekday', 'binnedutcyearweekdayhours', 'binnedutcyearweekdayhoursminutes', 'binnedutcyearweekdayhoursminutesseconds', 'binnedutcyeardayofyear', 'utcyear', 'utcquarter', 'utcmonth', 'utcweek', 'utcday', 'utcdayofyear', 'utcdate', 'utchours', 'utcminutes', 'utcseconds', 'utcmilliseconds', 'year', 'quarter', 'month', 'week', 'day', 'dayofyear', 'date', 'hours', 'minutes', 'seconds', 'milliseconds', 'utcyearquarter', 'utcyearquartermonth', 'utcyearmonth', 'utcyearmonthdate', 'utcyearmonthdatehours', 'utcyearmonthdatehoursminutes', 'utcyearmonthdatehoursminutesseconds', 'utcyearweek', 'utcyearweekday', 'utcyearweekdayhours', 'utcyearweekdayhoursminutes', 'utcyearweekdayhoursminutesseconds', 'utcyeardayofyear', 'utcquartermonth', 'utcmonthdate', 'utcmonthdatehours', 'utcmonthdatehoursminutes', 'utcmonthdatehoursminutesseconds', 'utcweekday', 'utcweekdayhours', 'utcweekdayhoursminutes', 'utcweekdayhoursminutesseconds', 'utcdayhours', 'utcdayhoursminutes', 'utcdayhoursminutesseconds', 'utchoursminutes', 'utchoursminutesseconds', 'utcminutesseconds', 'utcsecondsmilliseconds', 'yearquarter', 
'yearquartermonth', 'yearmonth', 'yearmonthdate', 'yearmonthdatehours', 'yearmonthdatehoursminutes', 'yearmonthdatehoursminutesseconds', 'yearweek', 'yearweekday', 'yearweekdayhours', 'yearweekdayhoursminutes', 'yearweekdayhoursminutesseconds', 'yeardayofyear', 'quartermonth', 'monthdate', 'monthdatehours', 'monthdatehoursminutes', 'monthdatehoursminutesseconds', 'weekday', 'weekdayhours', 'weekdayhoursminutes', 'weekdayhoursminutesseconds', 'dayhours', 'dayhoursminutes', 'dayhoursminutesseconds', 'hoursminutes', 'hoursminutesseconds', 'minutesseconds', 'secondsmilliseconds']
Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours``) for a temporal
field, or `a temporal field that gets cast as ordinal
<https://vega.github.io/vega-lite/docs/type.html#cast>`__.
**Default value:** ``undefined`` (None)
**See also:** `timeUnit <https://vega.github.io/vega-lite/docs/timeunit.html>`__
documentation.
title : str, :class:`Text`, Sequence[str], None
A title for the field. If ``null``, the title will be removed.
**Default value:** derived from the field's name and transformation function
(``aggregate``, ``bin`` and ``timeUnit``). If the field has an aggregate function,
the function is displayed as part of the title (e.g., ``"Sum of Profit"``). If the
field is binned or has a time unit applied, the applied function is shown in
parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"``).
Otherwise, the title is simply the field name.
**Notes**:
1) You can customize the default field title format by providing the `fieldTitle
<https://vega.github.io/vega-lite/docs/config.html#top-level-config>`__ property in
the `config <https://vega.github.io/vega-lite/docs/config.html>`__ or `fieldTitle
function via the compile function's options
<https://vega.github.io/vega-lite/usage/compile.html#field-title>`__.
2) If both field definition's ``title`` and axis, header, or legend ``title`` are
defined, axis/header/legend title will be used.
type : :class:`StandardType`, Literal['quantitative', 'ordinal', 'temporal', 'nominal']
The type of measurement (``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
``"nominal"``) for the encoded field or constant value (``datum``). It can also be a
``"geojson"`` type for encoding `'geoshape'
<https://vega.github.io/vega-lite/docs/geoshape.html>`__.
Vega-Lite automatically infers data types in many cases as discussed below. However,
type is required for a field if: (1) the field is not nominal and the field encoding
has no specified ``aggregate`` (except ``argmin`` and ``argmax``), ``bin``, scale
type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
scale for a field with ``bin`` or ``timeUnit``.
**Default value:**
1) For a data ``field``, ``"nominal"`` is the default data type unless the field
encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
``timeUnit`` that satisfies the following criteria:
* ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
quantitative scale <https://vega.github.io/vega-lite/docs/scale.html#type>`__.
* ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
or (2) the specified scale type is a time or utc scale
* ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
order
<https://vega.github.io/vega-lite/docs/sort.html#specifying-custom-sort-order>`__,
(2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
channel is ``order``.
2) For a constant value in data domain (``datum``):
* ``"quantitative"`` if the datum is a number
* ``"nominal"`` if the datum is a string
* ``"temporal"`` if the datum is `a date time object
<https://vega.github.io/vega-lite/docs/datetime.html>`__
**Note:**
* Data ``type`` describes the semantics of the data rather than the primitive data
types (number, string, etc.). The same primitive data type can have different
types of measurement. For example, numeric data can represent quantitative,
ordinal, or nominal data.
* Data values for a temporal field can be either a date-time string (e.g.,
``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"``) or a
timestamp number (e.g., ``1552199579097``).
* When using with `bin <https://vega.github.io/vega-lite/docs/bin.html>`__, the
``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
or `"ordinal" (for using an ordinal bin scale)
<https://vega.github.io/vega-lite/docs/type.html#cast-bin>`__.
* When using with `timeUnit
<https://vega.github.io/vega-lite/docs/timeunit.html>`__, the ``type`` property
can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
(for using an ordinal scale)
<https://vega.github.io/vega-lite/docs/type.html#cast-bin>`__.
* When using with `aggregate
<https://vega.github.io/vega-lite/docs/aggregate.html>`__, the ``type`` property
refers to the post-aggregation data type. For example, we can calculate count
``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
"field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
* Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError``) do not have
``type`` as they must have exactly the same type as their primary channels (e.g.,
``x``, ``y``).
**See also:** `type <https://vega.github.io/vega-lite/docs/type.html>`__
documentation.
"""
_class_is_valid_at_instantiation = False
_encoding_name = "fill"
@overload
def aggregate(self, _: NonArgAggregateOp_T, /) -> Fill: ...
@overload
def aggregate(self, *, argmax: Optional[str | SchemaBase] = Undefined) -> Fill: ...
@overload
def aggregate(self, *, argmin: Optional[str | SchemaBase] = Undefined) -> Fill: ...
@overload
def bandPosition(self, _: float, /) -> Fill: ...
@overload
def bin(self, _: bool | Bin | None, /) -> Fill: ...
@overload
def bin(
self,
*,
anchor: Optional[float] = Undefined,
base: Optional[float] = Undefined,
binned: Optional[bool] = Undefined,
divide: Optional[Sequence[float]] = Undefined,
extent: Optional[Parameter | SchemaBase | Sequence[float] | Map] = Undefined,
maxbins: Optional[float] = Undefined,
minstep: Optional[float] = Undefined,
nice: Optional[bool] = Undefined,
step: Optional[float] = Undefined,
steps: Optional[Sequence[float]] = Undefined,
) -> Fill: ...
@overload
def condition(
self,
*,
test: Optional[str | SchemaBase | Map] = Undefined,
value: Optional[str | Parameter | SchemaBase | Map | None] = Undefined,
) -> Fill: ...
@overload
def condition(
self,
*,
empty: Optional[bool] = Undefined,
param: Optional[str | SchemaBase] = Undefined,
value: Optional[str | Parameter | SchemaBase | Map | None] = Undefined,
) -> Fill: ...
@overload
def condition(
self, _: list[core.ConditionalValueDefGradientstringnullExprRef], /
) -> Fill: ...
@overload
def field(self, _: str | RepeatRef, /) -> Fill: ...
@overload
def field(
self,
*,
repeat: Optional[Literal["row", "column", "repeat", "layer"]] = Undefined,
) -> Fill: ...
@overload
def legend(self, _: Legend | None, /) -> Fill: ...
@overload
def legend(
self,
*,
aria: Optional[bool | Parameter | SchemaBase | Map] = Undefined,
clipHeight: Optional[float | Parameter | SchemaBase | Map] = Undefined,
columnPadding: Optional[float | Parameter | SchemaBase | Map] = Undefined,
columns: Optional[float | Parameter | SchemaBase | Map] = Undefined,
cornerRadius: Optional[float | Parameter | SchemaBase | Map] = Undefined,
description: Optional[str | Parameter | SchemaBase | Map] = Undefined,
direction: Optional[SchemaBase | Orientation_T] = Undefined,
fillColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
format: Optional[str | SchemaBase | Map] = Undefined,
formatType: Optional[str] = Undefined,
gradientLength: Optional[float | Parameter | SchemaBase | Map] = Undefined,
gradientOpacity: Optional[float | Parameter | SchemaBase | Map] = Undefined,
gradientStrokeColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
gradientStrokeWidth: Optional[float | Parameter | SchemaBase | Map] = Undefined,
gradientThickness: Optional[float | Parameter | SchemaBase | Map] = Undefined,
gridAlign: Optional[Parameter | SchemaBase | Map | LayoutAlign_T] = Undefined,
labelAlign: Optional[Parameter | SchemaBase | Map | Align_T] = Undefined,
labelBaseline: Optional[
Parameter | SchemaBase | Map | TextBaseline_T
] = Undefined,
labelColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
labelExpr: Optional[str] = Undefined,
labelFont: Optional[str | Parameter | SchemaBase | Map] = Undefined,
labelFontSize: Optional[float | Parameter | SchemaBase | Map] = Undefined,
labelFontStyle: Optional[str | Parameter | SchemaBase | Map] = Undefined,
labelFontWeight: Optional[
Parameter | SchemaBase | Map | FontWeight_T
] = Undefined,
labelLimit: Optional[float | Parameter | SchemaBase | Map] = Undefined,
labelOffset: Optional[float | Parameter | SchemaBase | Map] = Undefined,
labelOpacity: Optional[float | Parameter | SchemaBase | Map] = Undefined,
labelOverlap: Optional[
bool | Parameter | SchemaBase | Literal["greedy", "parity"] | Map
] = Undefined,
labelPadding: Optional[float | Parameter | SchemaBase | Map] = Undefined,
labelSeparation: Optional[float | Parameter | SchemaBase | Map] = Undefined,
legendX: Optional[float | Parameter | SchemaBase | Map] = Undefined,
legendY: Optional[float | Parameter | SchemaBase | Map] = Undefined,
offset: Optional[float | Parameter | SchemaBase | Map] = Undefined,
orient: Optional[SchemaBase | LegendOrient_T] = Undefined,
padding: Optional[float | Parameter | SchemaBase | Map] = Undefined,
rowPadding: Optional[float | Parameter | SchemaBase | Map] = Undefined,
strokeColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
symbolDash: Optional[
Parameter | SchemaBase | Sequence[float] | Map
] = Undefined,
symbolDashOffset: Optional[float | Parameter | SchemaBase | Map] = Undefined,
symbolFillColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
symbolLimit: Optional[float | Parameter | SchemaBase | Map] = Undefined,
symbolOffset: Optional[float | Parameter | SchemaBase | Map] = Undefined,
symbolOpacity: Optional[float | Parameter | SchemaBase | Map] = Undefined,
symbolSize: Optional[float | Parameter | SchemaBase | Map] = Undefined,
symbolStrokeColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
symbolStrokeWidth: Optional[float | Parameter | SchemaBase | Map] = Undefined,
symbolType: Optional[str | Parameter | SchemaBase | Map] = Undefined,
tickCount: Optional[
float | Parameter | SchemaBase | Map | TimeInterval_T
] = Undefined,
tickMinStep: Optional[float | Parameter | SchemaBase | Map] = Undefined,
title: Optional[str | SchemaBase | Sequence[str] | None] = Undefined,
titleAlign: Optional[Parameter | SchemaBase | Map | Align_T] = Undefined,
titleAnchor: Optional[Parameter | SchemaBase | Map | TitleAnchor_T] = Undefined,
titleBaseline: Optional[
Parameter | SchemaBase | Map | TextBaseline_T
] = Undefined,
titleColor: Optional[
str | Parameter | SchemaBase | Map | ColorName_T | None
] = Undefined,
titleFont: Optional[str | Parameter | SchemaBase | Map] = Undefined,
titleFontSize: Optional[float | Parameter | SchemaBase | Map] = Undefined,
titleFontStyle: Optional[str | Parameter | SchemaBase | Map] = Undefined,
titleFontWeight: Optional[
Parameter | SchemaBase | Map | FontWeight_T
] = Undefined,
titleLimit: Optional[float | Parameter | SchemaBase | Map] = Undefined,
titleLineHeight: Optional[float | Parameter | SchemaBase | Map] = Undefined,
titleOpacity: Optional[float | Parameter | SchemaBase | Map] = Undefined,
titleOrient: Optional[Parameter | SchemaBase | Map | Orient_T] = Undefined,
titlePadding: Optional[float | Parameter | SchemaBase | Map] = Undefined,
type: Optional[Literal["symbol", "gradient"]] = Undefined,
values: Optional[
Parameter
| SchemaBase
| Sequence[str]
| Sequence[bool]
| Sequence[float]
| Sequence[Temporal | SchemaBase | Map]
| Map
] = Undefined,
zindex: Optional[float] = Undefined,
) -> Fill: ...
@overload
def scale(self, _: Scale | None, /) -> Fill: ...
@overload
def scale(
self,
*,
align: Optional[float | Parameter | SchemaBase | Map] = Undefined,
base: Optional[float | Parameter | SchemaBase | Map] = Undefined,
bins: Optional[SchemaBase | Sequence[float] | Map] = Undefined,
clamp: Optional[bool | Parameter | SchemaBase | Map] = Undefined,
constant: Optional[float | Parameter | SchemaBase | Map] = Undefined,
domain: Optional[
Parameter
| SchemaBase
| Literal["unaggregated"]
| Sequence[
str | bool | float | Temporal | Parameter | SchemaBase | Map | None
]
| Map
] = Undefined,
domainMax: Optional[
float | Temporal | Parameter | SchemaBase | Map
] = Undefined,
domainMid: Optional[float | Parameter | SchemaBase | Map] = Undefined,
domainMin: Optional[
float | Temporal | Parameter | SchemaBase | Map
] = Undefined,
domainRaw: Optional[Parameter | SchemaBase | Map] = Undefined,
exponent: Optional[float | Parameter | SchemaBase | Map] = Undefined,
interpolate: Optional[
Parameter | SchemaBase | Map | ScaleInterpolateEnum_T
] = Undefined,
nice: Optional[
bool | float | Parameter | SchemaBase | Map | TimeInterval_T
] = Undefined,
padding: Optional[float | Parameter | SchemaBase | Map] = Undefined,
paddingInner: Optional[float | Parameter | SchemaBase | Map] = Undefined,
paddingOuter: Optional[float | Parameter | SchemaBase | Map] = Undefined,
range: Optional[
SchemaBase
| Sequence[str | float | Parameter | SchemaBase | Sequence[float] | Map]
| Map
| RangeEnum_T
] = Undefined,
rangeMax: Optional[str | float | Parameter | SchemaBase | Map] = Undefined,
rangeMin: Optional[str | float | Parameter | SchemaBase | Map] = Undefined,
reverse: Optional[bool | Parameter | SchemaBase | Map] = Undefined,
round: Optional[bool | Parameter | SchemaBase | Map] = Undefined,
scheme: Optional[Parameter | SchemaBase | Map | ColorScheme_T] = Undefined,
type: Optional[SchemaBase | ScaleType_T] = Undefined,
zero: Optional[bool | Parameter | SchemaBase | Map] = Undefined,
) -> Fill: ...
@overload
def sort(
self,
_: Sequence[str]
| Sequence[bool]
| Sequence[float]
| Sequence[DateTime | Temporal]
| AllSortString_T
| None,
/,
) -> Fill: ...
@overload
def sort(
self,
*,
field: Optional[str | SchemaBase | Map] = Undefined,
op: Optional[SchemaBase | NonArgAggregateOp_T] = Undefined,
order: Optional[SchemaBase | SortOrder_T | None] = Undefined,
) -> Fill: ...
@overload
def sort(
self,
*,
encoding: Optional[SchemaBase | SortByChannel_T] = Undefined,
order: Optional[SchemaBase | SortOrder_T | None] = Undefined,
) -> Fill: ...
@overload
def timeUnit(
self,
_: TimeUnitParams | MultiTimeUnit_T | BinnedTimeUnit_T | SingleTimeUnit_T,
/,
) -> Fill: ...
@overload
def timeUnit(
self,
*,
binned: Optional[bool] = Undefined,
maxbins: Optional[float] = Undefined,
step: Optional[float] = Undefined,
unit: Optional[SchemaBase | MultiTimeUnit_T | SingleTimeUnit_T] = Undefined,
utc: Optional[bool] = Undefined,
) -> Fill: ...
@overload
def title(self, _: str | Sequence[str] | None, /) -> Fill: ...
@overload
def type(self, _: StandardType_T, /) -> Fill: ...
def __init__(
self,
shorthand: Optional[str | SchemaBase | Sequence[str] | Map] = Undefined,
aggregate: Optional[SchemaBase | Map | NonArgAggregateOp_T] = Undefined,
bandPosition: Optional[float] = Undefined,
bin: Optional[bool | SchemaBase | Map | None] = Undefined,
condition: Optional[SchemaBase | Sequence[SchemaBase | Map] | Map] = Undefined,
field: Optional[str | SchemaBase | Map] = Undefined,
legend: Optional[SchemaBase | Map | None] = Undefined,
scale: Optional[SchemaBase | Map | None] = Undefined,
sort: Optional[
SchemaBase
| Sequence[str]
| Sequence[bool]
| Sequence[float]
| Sequence[Temporal | SchemaBase | Map]
| Map
| AllSortString_T
| None
] = Undefined,
timeUnit: Optional[
SchemaBase | Map | MultiTimeUnit_T | BinnedTimeUnit_T | SingleTimeUnit_T
] = Undefined,
title: Optional[str | SchemaBase | Sequence[str] | None] = Undefined,
type: Optional[SchemaBase | StandardType_T] = Undefined,
**kwds,
):
super().__init__(
shorthand=shorthand,
aggregate=aggregate,
bandPosition=bandPosition,
bin=bin,
condition=condition,
field=field,
legend=legend,
scale=scale,
sort=sort,
timeUnit=timeUnit,
title=title,
type=type,
**kwds,
)
@with_property_setters
| Fill |
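Each row above pairs a ``source`` snippet whose class name has been masked with ``____`` against the original name stored in ``target``. Restoring a sample is a single first-occurrence substitution; the helper name below is my own, not part of the dataset:

```python
def restore_class_name(source: str, target: str) -> str:
    # The mask appears exactly once, at the class definition, so a
    # first-occurrence replace is sufficient and cannot clobber later text.
    return source.replace("____", target, 1)

masked = "class ____(FieldChannelMixin):"
print(restore_class_name(masked, "Fill"))  # class Fill(FieldChannelMixin):
```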
python | python__mypy | mypyc/test/test_alwaysdefined.py | {
"start": 485,
"end": 1528
} | class ____(MypycDataSuite):
files = files
base_path = test_temp_dir
def run_case(self, testcase: DataDrivenTestCase) -> None:
"""Perform a runtime checking transformation test case."""
options = infer_ir_build_options_from_test_name(testcase.name)
if options is None:
# Skipped test case
return
with use_custom_builtins(os.path.join(self.data_prefix, ICODE_GEN_BUILTINS), testcase):
try:
ir = build_ir_for_single_file2(testcase.input, options)[0]
except CompileError as e:
actual = e.messages
else:
actual = []
for cl in ir.classes:
if cl.name.startswith("_"):
continue
actual.append(
"{}: [{}]".format(cl.name, ", ".join(sorted(cl._always_initialized_attrs)))
)
assert_test_output(testcase, actual, "Invalid test output", testcase.output)
| TestAlwaysDefined |
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/protocol28.py | {
"start": 370,
"end": 576
} | class ____(Protocol[_T2]):
def __call__(self, __x: Callable1[_T2]) -> Any: ...
def decorator1(__x: Decorator1[_T3]) -> Decorator1[_T3]: ...
def func1(__x: _T4) -> _T4: ...
decorator1(func1)
| Decorator1 |
python | PrefectHQ__prefect | src/integrations/prefect-bitbucket/prefect_bitbucket/repository.py | {
"start": 1736,
"end": 7442
} | class ____(ReadableDeploymentStorage):
"""Interact with files stored in BitBucket repositories.
An accessible installation of git is required for this block to function
properly.
"""
_block_type_name = "BitBucket Repository"
_logo_url = "https://cdn.sanity.io/images/3ugk85nk/production/5d729f7355fb6828c4b605268ded9cfafab3ae4f-250x250.png" # noqa
_description = "Interact with files stored in BitBucket repositories."
repository: str = Field(
default=...,
description="The URL of a BitBucket repository to read from in HTTPS format",
)
reference: Optional[str] = Field(
default=None,
description="An optional reference to pin to; can be a branch or tag.",
)
bitbucket_credentials: Optional[BitBucketCredentials] = Field(
default=None,
description=(
"An optional BitBucketCredentials block for authenticating with "
"private BitBucket repos."
),
)
@model_validator(mode="after")
def _ensure_credentials_go_with_https(self) -> Self:
"""Ensure that credentials are not provided with 'SSH' formatted BitBucket URLs.
Validators are by default only called on provided arguments.
Note: validates `credentials` specifically so that it only fires when private
repositories are used.
"""
if self.bitbucket_credentials is not None:
if urlparse(self.repository).scheme != "https":
raise InvalidRepositoryURLError(
(
"Credentials can only be used with BitBucket repositories "
"using the 'HTTPS' format. You must either remove the "
"credential if you wish to use the 'SSH' format and are not "
"using a private repository, or you must change the repository "
"URL to the 'HTTPS' format."
)
)
return self
def _create_repo_url(self) -> str:
"""Format the URL provided to the `git clone` command.
For private repos in the cloud:
https://x-token-auth:<access-token>@bitbucket.org/<user>/<repo>.git
For private repos with a local bitbucket server:
https://<username>:<access-token>@<server>/scm/<project>/<repo>.git
All other repos should be the same as `self.repository`.
"""
url_components = urlparse(self.repository)
token_is_set = (
self.bitbucket_credentials is not None and self.bitbucket_credentials.token
)
# Need a token for private repos
if url_components.scheme == "https" and token_is_set:
token = self.bitbucket_credentials.token.get_secret_value()
username = self.bitbucket_credentials.username
if username is None:
username = "x-token-auth"
# Encode special characters in username and token
safe_username = _quote_credential(username or "")
safe_token = _quote_credential(token or "")
updated_components = url_components._replace(
netloc=f"{safe_username}:{safe_token}@{url_components.netloc}"
)
full_url = urlunparse(updated_components)
else:
full_url = self.repository
return full_url
@staticmethod
def _get_paths(
dst_dir: Union[str, None], src_dir: str, sub_directory: Optional[str]
) -> Tuple[str, str]:
"""Return the fully formed paths for BitBucketRepository contents.
Return will take the form of (content_source, content_destination).
"""
if dst_dir is None:
content_destination = Path(".").absolute()
else:
content_destination = Path(dst_dir)
content_source = Path(src_dir)
if sub_directory:
content_destination = content_destination.joinpath(sub_directory)
content_source = content_source.joinpath(sub_directory)
return str(content_source), str(content_destination)
@sync_compatible
async def get_directory(
self, from_path: Optional[str] = None, local_path: Optional[str] = None
) -> None:
"""Clones a BitBucket project within `from_path` to the provided `local_path`.
This defaults to cloning the repository reference configured on the
Block to the present working directory.
Args:
from_path: If provided, interpreted as a subdirectory of the underlying
repository that will be copied to the provided local path.
local_path: A local path to clone to; defaults to present working directory.
"""
# Construct command
cmd = ["git", "clone", self._create_repo_url()]
if self.reference:
cmd += ["-b", self.reference]
# Limit git history
cmd += ["--depth", "1"]
# Clone to a temporary directory and move the subdirectory over
with TemporaryDirectory(suffix="prefect") as tmp_dir:
cmd.append(tmp_dir)
err_stream = io.StringIO()
out_stream = io.StringIO()
process = await run_process(cmd, stream_output=(out_stream, err_stream))
if process.returncode != 0:
err_stream.seek(0)
raise OSError(f"Failed to pull from remote:\n {err_stream.read()}")
content_source, content_destination = self._get_paths(
dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path
)
copytree(src=content_source, dst=content_destination, dirs_exist_ok=True)
| BitBucketRepository |
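The ``_create_repo_url`` method above rewrites the URL's ``netloc`` to embed credentials. ``_quote_credential`` is a helper not shown in this snippet; the sketch below assumes it URL-encodes with ``urllib.parse.quote`` so that characters like ``:`` or ``@`` in a token cannot corrupt the resulting URL:

```python
from urllib.parse import quote, urlparse, urlunparse


def add_basic_credentials(repo_url: str, username: str, token: str) -> str:
    """Embed URL-encoded credentials into an HTTPS clone URL, mirroring
    the netloc rewrite performed by the block above."""
    parts = urlparse(repo_url)
    if parts.scheme != "https":
        raise ValueError("credentials only make sense for HTTPS URLs")
    # safe="" percent-encodes every reserved character, including ":" and "@"
    safe_user = quote(username, safe="")
    safe_token = quote(token, safe="")
    return urlunparse(
        parts._replace(netloc=f"{safe_user}:{safe_token}@{parts.netloc}")
    )


print(add_basic_credentials("https://bitbucket.org/acme/repo.git",
                            "x-token-auth", "s3cr:t"))
# https://x-token-auth:s3cr%3At@bitbucket.org/acme/repo.git
```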
python | pallets__werkzeug | src/werkzeug/sansio/multipart.py | {
"start": 743,
"end": 1519
} | class ____(Enum):
PREAMBLE = auto()
PART = auto()
DATA = auto()
DATA_START = auto()
EPILOGUE = auto()
COMPLETE = auto()
# Multipart line breaks MUST be CRLF (\r\n) by RFC-7578, except that
# many implementations break this and either use CR or LF alone.
LINE_BREAK = b"(?:\r\n|\n|\r)"
BLANK_LINE_RE = re.compile(b"(?:\r\n\r\n|\r\r|\n\n)", re.MULTILINE)
LINE_BREAK_RE = re.compile(LINE_BREAK, re.MULTILINE)
# Header values can be continued via a space or tab after the linebreak, as
# per RFC2231
HEADER_CONTINUATION_RE = re.compile(b"%s[ \t]" % LINE_BREAK, re.MULTILINE)
# This must be long enough to contain any line breaks plus any
# additional boundary markers (--) such that they will be found in a
# subsequent search
SEARCH_EXTRA_LENGTH = 8
| State |
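The comments above note that RFC 7578 mandates CRLF line breaks while real-world clients also emit bare CR or LF, so the regexes accept all three. A quick demonstration of the three constants:

```python
import re

LINE_BREAK = b"(?:\r\n|\n|\r)"
BLANK_LINE_RE = re.compile(b"(?:\r\n\r\n|\r\r|\n\n)", re.MULTILINE)
HEADER_CONTINUATION_RE = re.compile(b"%s[ \t]" % LINE_BREAK, re.MULTILINE)

# All three line-break conventions are recognized.
for nl in (b"\r\n", b"\n", b"\r"):
    assert re.search(LINE_BREAK, b"part" + nl + b"data")

# A doubled break of the same kind separates part headers from data.
assert BLANK_LINE_RE.search(b"Content-Type: text/plain\r\n\r\nhello")

# A break followed by space/tab marks a folded header line (RFC 2231).
assert HEADER_CONTINUATION_RE.search(b"X-Long-Header: value\r\n continued")
```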
python | getsentry__sentry | src/sentry/api/serializers/models/team.py | {
"start": 11796,
"end": 11988
} | class ____(TeamSerializer):
"""@deprecated Use `expand` instead."""
def __init__(self) -> None:
super().__init__(expand=["projects", "externalTeams"])
| TeamWithProjectsSerializer |
python | MongoEngine__mongoengine | mongoengine/fields.py | {
"start": 79972,
"end": 80440
} | class ____(GeoJsonBaseField):
"""A GeoJSON field storing a list of LineStrings.
The data is represented as:
.. code-block:: js
{'type' : 'MultiLineString' ,
'coordinates' : [[[x1, y1], [x1, y1] ... [xn, yn]],
[[x1, y1], [x1, y1] ... [xn, yn]]]}
You can either pass a dict with the full information or a list of points.
Requires mongodb >= 2.6
"""
_type = "MultiLineString"
| MultiLineStringField |
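The docstring above gives the stored GeoJSON shape; a literal example document matching it (coordinate values are illustrative):

```python
# A MultiLineString with two line strings, following the schema above.
multi_line_string = {
    "type": "MultiLineString",
    "coordinates": [
        [[0.0, 0.0], [1.0, 1.0], [2.0, 1.0]],  # first LineString
        [[5.0, 5.0], [6.0, 7.0]],              # second LineString
    ],
}

assert multi_line_string["type"] == "MultiLineString"
# Each coordinate is an [x, y] pair.
assert all(
    len(point) == 2
    for line in multi_line_string["coordinates"]
    for point in line
)
```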
python | mkdocs__mkdocs | mkdocs/tests/search_tests.py | {
"start": 492,
"end": 2330
} | class ____(unittest.TestCase):
def test_lang_default(self):
option = search.LangOption(default=['en'])
value = option.validate(None)
self.assertEqual(['en'], value)
def test_lang_str(self):
option = search.LangOption()
value = option.validate('en')
self.assertEqual(['en'], value)
def test_lang_list(self):
option = search.LangOption()
value = option.validate(['en'])
self.assertEqual(['en'], value)
def test_lang_multi_list(self):
option = search.LangOption()
value = option.validate(['en', 'es', 'fr'])
self.assertEqual(['en', 'es', 'fr'], value)
def test_lang_no_default_none(self):
option = search.LangOption()
value = option.validate(None)
self.assertIsNone(value)
def test_lang_no_default_str(self):
option = search.LangOption(default=[])
value = option.validate('en')
self.assertEqual(['en'], value)
def test_lang_no_default_list(self):
option = search.LangOption(default=[])
value = option.validate(['en'])
self.assertEqual(['en'], value)
def test_lang_bad_type(self):
option = search.LangOption()
with self.assertRaises(ValidationError):
option.validate({})
def test_lang_bad_code(self):
option = search.LangOption()
value = option.validate(['foo'])
self.assertEqual(['en'], value)
def test_lang_good_and_bad_code(self):
option = search.LangOption()
value = option.validate(['en', 'foo'])
self.assertEqual(['en'], value)
def test_lang_missing_and_with_territory(self):
option = search.LangOption()
value = option.validate(['cs_CZ', 'pt_BR', 'fr'])
self.assertEqual(['fr', 'en', 'pt'], value)
| SearchConfigTests |
python | numpy__numpy | numpy/f2py/tests/test_character.py | {
"start": 14990,
"end": 19833
} | class ____(util.F2PyTest):
# options = ['--debug-capi', '--build-dir', '/tmp/test-build-f2py']
suffix = '.f90'
fprefix = 'test_misc_character'
code = textwrap.dedent(f"""
subroutine {fprefix}_gh18684(x, y, m)
character(len=5), dimension(m), intent(in) :: x
character*5, dimension(m), intent(out) :: y
integer i, m
!f2py integer, intent(hide), depend(x) :: m = f2py_len(x)
do i=1,m
y(i) = x(i)
end do
end subroutine {fprefix}_gh18684
subroutine {fprefix}_gh6308(x, i)
integer i
!f2py check(i>=0 && i<12) i
character*5 name, x
common name(12)
name(i + 1) = x
end subroutine {fprefix}_gh6308
subroutine {fprefix}_gh4519(x)
character(len=*), intent(in) :: x(:)
!f2py intent(out) x
integer :: i
! Uncomment for debug printing:
!do i=1, size(x)
! print*, "x(",i,")=", x(i)
!end do
end subroutine {fprefix}_gh4519
pure function {fprefix}_gh3425(x) result (y)
character(len=*), intent(in) :: x
character(len=len(x)) :: y
integer :: i
do i = 1, len(x)
j = iachar(x(i:i))
if (j>=iachar("a") .and. j<=iachar("z") ) then
y(i:i) = achar(j-32)
else
y(i:i) = x(i:i)
endif
end do
end function {fprefix}_gh3425
subroutine {fprefix}_character_bc_new(x, y, z)
character, intent(in) :: x
character, intent(out) :: y
!f2py character, depend(x) :: y = x
!f2py character, dimension((x=='a'?1:2)), depend(x), intent(out) :: z
character, dimension(*) :: z
!f2py character, optional, check(x == 'a' || x == 'b') :: x = 'a'
!f2py callstatement (*f2py_func)(&x, &y, z)
!f2py callprotoargument character*, character*, character*
if (y.eq.x) then
y = x
else
y = 'e'
endif
z(1) = 'c'
end subroutine {fprefix}_character_bc_new
subroutine {fprefix}_character_bc_old(x, y, z)
character, intent(in) :: x
character, intent(out) :: y
!f2py character, depend(x) :: y = x[0]
!f2py character, dimension((*x=='a'?1:2)), depend(x), intent(out) :: z
character, dimension(*) :: z
!f2py character, optional, check(*x == 'a' || x[0] == 'b') :: x = 'a'
!f2py callstatement (*f2py_func)(x, y, z)
!f2py callprotoargument char*, char*, char*
if (y.eq.x) then
y = x
else
y = 'e'
endif
z(1) = 'c'
end subroutine {fprefix}_character_bc_old
""")
@pytest.mark.slow
def test_gh18684(self):
# Test character(len=5) and character*5 usages
f = getattr(self.module, self.fprefix + '_gh18684')
x = np.array(["abcde", "fghij"], dtype='S5')
y = f(x)
assert_array_equal(x, y)
def test_gh6308(self):
# Test character string array in a common block
f = getattr(self.module, self.fprefix + '_gh6308')
assert_equal(self.module._BLNK_.name.dtype, np.dtype('S5'))
assert_equal(len(self.module._BLNK_.name), 12)
f("abcde", 0)
assert_equal(self.module._BLNK_.name[0], b"abcde")
f("12345", 5)
assert_equal(self.module._BLNK_.name[5], b"12345")
def test_gh4519(self):
# Test array of assumed length strings
f = getattr(self.module, self.fprefix + '_gh4519')
for x, expected in [
('a', {'shape': (), 'dtype': np.dtype('S1')}),
('text', {'shape': (), 'dtype': np.dtype('S4')}),
(np.array(['1', '2', '3'], dtype='S1'),
{'shape': (3,), 'dtype': np.dtype('S1')}),
(['1', '2', '34'],
{'shape': (3,), 'dtype': np.dtype('S2')}),
(['', ''], {'shape': (2,), 'dtype': np.dtype('S1')})]:
r = f(x)
for k, v in expected.items():
assert_equal(getattr(r, k), v)
def test_gh3425(self):
# Test returning a copy of assumed length string
f = getattr(self.module, self.fprefix + '_gh3425')
# f is equivalent to bytes.upper
assert_equal(f('abC'), b'ABC')
assert_equal(f(''), b'')
assert_equal(f('abC12d'), b'ABC12D')
@pytest.mark.parametrize("state", ['new', 'old'])
def test_character_bc(self, state):
f = getattr(self.module, self.fprefix + '_character_bc_' + state)
c, a = f()
assert_equal(c, b'a')
assert_equal(len(a), 1)
c, a = f(b'b')
assert_equal(c, b'b')
assert_equal(len(a), 2)
assert_raises(Exception, lambda: f(b'c'))
| TestMiscCharacter |
python | scipy__scipy | benchmarks/benchmarks/go_benchmark_functions/go_funcs_B.py | {
"start": 2329,
"end": 3526
} | class ____(Benchmark):
r"""
BiggsExp02 objective function.
The BiggsExp02 [1]_ global optimization problem is a multimodal minimization
problem defined as follows
.. math::
\begin{matrix}
f_{\text{BiggsExp02}}(x) = \sum_{i=1}^{10} (e^{-t_i x_1}
- 5 e^{-t_i x_2} - y_i)^2 \\
t_i = 0.1 i\\
y_i = e^{-t_i} - 5 e^{-10t_i}\\
\end{matrix}
with :math:`x_i \in [0, 20]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 10]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([0] * 2,
[20] * 2))
self.global_optimum = [[1., 10.]]
self.fglob = 0
def fun(self, x, *args):
self.nfev += 1
t = arange(1, 11.) * 0.1
y = exp(-t) - 5 * exp(-10 * t)
vec = (exp(-t * x[0]) - 5 * exp(-t * x[1]) - y) ** 2
return sum(vec)
| BiggsExp02 |
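The BiggsExp02 formula in the docstring can be transcribed directly with NumPy; at the documented global optimum `x = [1, 10]` every residual cancels term by term, so the sum of squares is zero:

```python
import numpy as np

def biggs_exp02(x):
    # Direct transcription of the sum-of-squares formula above.
    t = np.arange(1, 11) * 0.1
    y = np.exp(-t) - 5 * np.exp(-10 * t)
    return float(np.sum((np.exp(-t * x[0]) - 5 * np.exp(-t * x[1]) - y) ** 2))

# At the global optimum the residuals vanish; elsewhere f is positive.
assert abs(biggs_exp02([1.0, 10.0])) < 1e-12
assert biggs_exp02([2.0, 3.0]) > 0.0
```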
python | huggingface__transformers | src/transformers/modeling_outputs.py | {
"start": 3215,
"end": 5306
} | class ____(ModelOutput):
"""
Base class for model's outputs that also contains a pooling of the last hidden states.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
pooler_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`):
Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
"""
last_hidden_state: Optional[torch.FloatTensor] = None
pooler_output: Optional[torch.FloatTensor] = None
hidden_states: Optional[tuple[torch.FloatTensor, ...]] = None
attentions: Optional[tuple[torch.FloatTensor, ...]] = None
@dataclass
| BaseModelOutputWithPooling |
python | tensorflow__tensorflow | tensorflow/python/tpu/tests/tpu_embedding_v2_correctness_sequence_feature_test.py | {
"start": 1270,
"end": 5586
} | class ____(
tpu_embedding_v2_correctness_base_test.TPUEmbeddingCorrectnessBaseTest):
@parameterized.parameters([True, False])
def test_sequence_embeddings(self, sparse):
feature_config = (
tpu_embedding_v2_utils.FeatureConfig(
table=self.table_video, name='watched',
max_sequence_length=2),
tpu_embedding_v2_utils.FeatureConfig(
table=self.table_video, name='favorited',
max_sequence_length=2),
tpu_embedding_v2_utils.FeatureConfig(
table=self.table_user, name='friends',
max_sequence_length=3))
optimizer = tpu_embedding_v2_utils.SGD(learning_rate=0.1)
strategy = self._get_strategy()
num_replicas = strategy.num_replicas_in_sync
with strategy.scope():
mid_level = tpu_embedding_v2.TPUEmbedding(
feature_config=feature_config,
optimizer=optimizer)
# Call build here. We call 'next' outside of the tf.function and this
# results in data where the shape of the sparse tensor is a tensor which we
# can't tell the shape of at tracing time.
mid_level.build(self.batch_size)
if sparse:
dataset = self._create_sparse_dataset(strategy)
else:
dataset = self._create_ragged_dataset(strategy)
data = next(
iter(
strategy.experimental_distribute_dataset(
dataset,
options=distribute_lib.InputOptions(
experimental_fetch_to_device=False))))
@def_function.function
def embedding_and_set_gradients(data):
def tpu_fn():
activations = mid_level.dequeue()
mid_level.apply_gradients(nest.map_structure(array_ops.ones_like,
activations))
return activations
mid_level.enqueue(data)
return strategy.run(tpu_fn)
@def_function.function
def embedding_only(data):
def tpu_fn():
return mid_level.dequeue()
mid_level.enqueue(data, training=False)
return strategy.run(tpu_fn)
# Only check core 0.
before_update = self._get_replica_numpy(
embedding_and_set_gradients(data), strategy, 0)
after_update = self._get_replica_numpy(embedding_only(data), strategy, 0)
# For videos table, row 0 and row 1 are looked up 3*num_replicas times as
# they occur 3 times per replica (considering the features 0 and 1 which are
# both looked up in the videos table).
# Feature 0 has ids [0, 0, 1], [0, 1, 1], ... repeated over num_replicas
# Feature 1 has ids [0, 1, 1], [0, 0, 1], ... repeated over num_replicas
# This means that both rows 0 and 1 get a -0.1*3*num_replicas update
# For users table, each row is looked up twice:
# Feature 2 has ids [3, 0, 1, 2], .. repeated over num_replicas
# This means that we get a -0.1*num_replicas update to the third feature.
# In general this means that after the update, if we lookup feature 0 and 1
# the values will be 0.3*num_replicas lower per entry and for feature 2 they
# will be 0.1*num_replicas lower.
# The one issue is that these lookups contain padding values.
# For core 0, we get the first 2 elements of the 4 element batch.
# For feature 0, the indices are [[0, 0], [1, 0], [1, 1]] with max sequence
# length of 2, which means that [0, 1] will be 0s.
# For feature 1, the indices are [[0, 0], [0, 1], [1, 0]] with max sequence
# length of 2, which means that [1, 1] will be 0s.
# For feature 2, the indices are [[0, 0], [1, 0], [1, 1], [1, 2]] with max
# sequence length of 3, which means that [0, 1], [0, 2] will be 0s.
# The following masks represent that so that we only apply the above updates
# to the non-padding rows:
masks = (
np.array([[[1], [0]], [[1], [1]]]),
np.array([[[1], [1]], [[1], [0]]]),
np.array([[[1], [0], [0]], [[1], [1], [1]]]))
per_row_update = (0.3 * num_replicas,
0.3 * num_replicas,
0.1 * num_replicas)
golden = tuple([before - update * mask for before, update, mask in
zip(before_update, per_row_update, masks)])
self.assertAllClose(golden, after_update)
if __name__ == '__main__':
v2_compat.enable_v2_behavior()
test.main()
| TPUEmbeddingCorrectnessTest |
python | joke2k__faker | tests/providers/test_ssn.py | {
"start": 20127,
"end": 22339
} | class ____(unittest.TestCase):
def setUp(self):
self._NUIP_REGEX: Pattern = re.compile(r"1[012]\d{8}|[1-9]\d{6,7}")
self._NATURAL_PERSON_NIT_REGEX: Pattern = self._NUIP_REGEX
self._CHECK_DIGIT_REGEX: Pattern = re.compile(r"\d")
self._LEGAL_PERSON_NIT_REGEX: Pattern = re.compile(r"[89]\d{8}")
self.fake = Faker("es_CO")
Faker.seed(0)
def test_nuip(self):
for _ in range(100):
assert self._NUIP_REGEX.fullmatch(self.fake.nuip())
assert self._NUIP_REGEX.fullmatch(self.fake.natural_person_nit())
def test_natural_person_nit_with_check_digit(self):
for _ in range(100):
natural_person_nit, check_digit = self.fake.natural_person_nit_with_check_digit().split("-")
assert self._NATURAL_PERSON_NIT_REGEX.fullmatch(natural_person_nit)
assert self._CHECK_DIGIT_REGEX.fullmatch(check_digit)
assert nit_check_digit(natural_person_nit) == check_digit
def test_legal_person_nit(self):
for _ in range(100):
assert self._LEGAL_PERSON_NIT_REGEX.fullmatch(self.fake.legal_person_nit())
def test_legal_person_nit_with_check_digit(self):
for _ in range(100):
legal_person_nit, check_digit = self.fake.legal_person_nit_with_check_digit().split("-")
assert self._LEGAL_PERSON_NIT_REGEX.fullmatch(legal_person_nit)
assert self._CHECK_DIGIT_REGEX.fullmatch(check_digit)
assert nit_check_digit(legal_person_nit) == check_digit
def test_nit_check_digit(self):
# NITs and check digits of some Colombian state entities.
# Source: <https://www.funcionpublica.gov.co/web/sigep/entidades>
for nit, check_digit in (
("830040256", "0"),
("899999003", "1"),
("892301483", "2"),
("800194600", "3"),
("899999403", "4"),
("860042945", "5"),
("830114475", "6"),
("811000231", "7"),
("899999027", "8"),
("900639630", "9"),
):
with self.subTest(nit=nit, check_digit=check_digit):
assert nit_check_digit(nit) == check_digit
| TestEsCO |
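The table-driven test above exercises a `nit_check_digit` helper. The standard DIAN check-digit algorithm it verifies can be sketched as follows (this reproduces the published algorithm, not necessarily Faker's exact implementation):

```python
# Weights are applied from the rightmost digit leftwards, per DIAN.
NIT_WEIGHTS = [3, 7, 13, 17, 19, 23, 29, 37, 41, 43, 47, 53, 59, 67, 71]

def nit_check_digit(nit):
    # Weighted digit sum reduced modulo 11; remainders 0 and 1 map to
    # themselves, anything else to 11 - remainder.
    total = sum(int(d) * w for d, w in zip(reversed(nit), NIT_WEIGHTS))
    remainder = total % 11
    return str(remainder if remainder < 2 else 11 - remainder)

# Spot-checks against the table-driven cases used in the test above.
assert nit_check_digit("830040256") == "0"
assert nit_check_digit("899999003") == "1"
assert nit_check_digit("900639630") == "9"
```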
python | kamyu104__LeetCode-Solutions | Python/equalize-strings-by-adding-or-removing-characters-at-ends.py | {
"start": 90,
"end": 1444
} | class ____(object):
def minOperations(self, initial, target):
"""
:type initial: str
:type target: str
:rtype: int
"""
def binary_search_right(left, right, check):
while left <= right:
mid = left+(right-left)//2
if not check(mid):
right = mid-1
else:
left = mid+1
return right
def rolling_hash(s, l, lookup, check):
MOD, P = 10**9+7, 113
h = 0
pw = pow(P, l-1, MOD)
for i in xrange(len(s)):
h = (h*P+(ord(s[i])-ord('a')))%MOD
if i < l-1:
continue
if not check:
lookup.add(h)
elif h in lookup:
return True
h = (h-(ord(s[i-(l-1)])-ord('a'))*pw)%MOD
return False
def check(l):
lookup = set()
rolling_hash(target, l, lookup, False)
return rolling_hash(initial, l, lookup, True)
if len(initial) < len(target):
initial, target = target, initial
return len(initial)+len(target)-2*binary_search_right(1, min(len(initial), len(target)), check)
# Time: O(n * m)
# Space: O(1)
# dp
| Solution |
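The solution above binary-searches the longest common substring length `L` over rolling hashes and returns `len(initial) + len(target) - 2 * L`. The same quantity can be checked with a direct O(n * m) dynamic program (a reference sketch, not the optimized approach):

```python
def min_operations_bruteforce(initial, target):
    n, m = len(initial), len(target)
    best = 0
    dp = [0] * (m + 1)  # dp[j]: common suffix length ending at (i, j)
    for i in range(1, n + 1):
        prev = 0        # holds dp value for (i - 1, j - 1)
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = prev + 1 if initial[i - 1] == target[j - 1] else 0
            best = max(best, dp[j])
            prev = cur
    # Everything outside the longest common substring is added/removed.
    return n + m - 2 * best

assert min_operations_bruteforce("abcde", "cdef") == 3  # keep "cde"
assert min_operations_bruteforce("abc", "abc") == 0
```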
python | pytorch__pytorch | test/torch_np/numpy_tests/lib/test_type_check.py | {
"start": 7980,
"end": 8701
} | class ____(TestCase):
# Fixme, wrong place, isfinite now ufunc
def test_goodvalues(self):
z = np.array((-1.0, 0.0, 1.0))
res = np.isfinite(z) == 1
assert_all(np.all(res, axis=0))
def test_posinf(self):
assert_all(np.isfinite(np.array((1.0,)) / 0.0) == 0)
def test_neginf(self):
assert_all(np.isfinite(np.array((-1.0,)) / 0.0) == 0)
def test_ind(self):
assert_all(np.isfinite(np.array((0.0,)) / 0.0) == 0)
def test_integer(self):
assert_all(np.isfinite(1) == 1)
def test_complex(self):
assert_all(np.isfinite(1 + 1j) == 1)
def test_complex1(self):
assert_all(np.isfinite(np.array(1 + 1j) / 0.0) == 0)
| TestIsfinite |
python | pytorch__pytorch | torch/_functorch/_aot_autograd/descriptors.py | {
"start": 20625,
"end": 20879
} | class ____(AOTInput):
"""The offset for functionalized Philox RNG calls, specifically for backward graph."""
def expr(self) -> str:
return "__philox_backward_base_offset"
@dataclasses.dataclass(frozen=True)
| PhiloxBackwardBaseOffsetAOTInput |
python | openai__openai-python | src/openai/types/evals/create_eval_completions_run_data_source_param.py | {
"start": 1467,
"end": 1594
} | class ____(TypedDict, total=False):
item: Required[Dict[str, object]]
sample: Dict[str, object]
| SourceFileContentContent |
python | doocs__leetcode | solution/3500-3599/3566.Partition Array into Two Equal Product Subsets/Solution.py | {
"start": 0,
"end": 409
} | class ____:
def checkEqualPartitions(self, nums: List[int], target: int) -> bool:
n = len(nums)
for i in range(1 << n):
x = y = 1
for j in range(n):
if i >> j & 1:
x *= nums[j]
else:
y *= nums[j]
if x == target and y == target:
return True
return False
| Solution |
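The bitmask enumeration above can be restated as a standalone function: each of the `2^n` masks splits `nums` into two groups, and the solution succeeds if both products equal `target`:

```python
def check_equal_partitions(nums, target):
    n = len(nums)
    for mask in range(1 << n):
        x = y = 1
        for j in range(n):
            # Bit j of the mask decides which group nums[j] joins.
            if mask >> j & 1:
                x *= nums[j]
            else:
                y *= nums[j]
        if x == target and y == target:
            return True
    return False

assert check_equal_partitions([3, 1, 6, 8, 4], 24) is True   # {3, 8} vs {1, 6, 4}
assert check_equal_partitions([2, 5, 3, 7], 15) is False     # 2*5*3*7 != 15*15
```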
python | Netflix__metaflow | metaflow/user_decorators/user_flow_decorator.py | {
"start": 358,
"end": 3786
} | class ____(type):
_all_registered_decorators = ClassPath_Trie()
_do_not_register = set()
_import_modules = set()
def __new__(mcs, name, bases, namespace):
cls = super().__new__(mcs, name, bases, namespace)
cls.decorator_name = getattr(
cls, "_decorator_name", f"{cls.__module__}.{cls.__name__}"
)
if not cls.__module__.startswith("metaflow.") and not cls.__module__.startswith(
"metaflow_extensions."
):
mcs._import_modules.add(cls.__module__)
if name == "FlowMutator" or cls.decorator_name in mcs._do_not_register:
return cls
# We inject a __init_subclass__ method so we can figure out if there
# are subclasses. We want to register as decorators only the ones that do
# not have a subclass. The logic is that everything is registered and if
# a subclass shows up, we will unregister the parent class leaving only those
# classes that do not have any subclasses registered.
@classmethod
def do_unregister(cls_, **_kwargs):
for base in cls_.__bases__:
if isinstance(base, FlowMutatorMeta):
# If the base is a FlowMutatorMeta, we unregister it
# so that we don't have any decorators that are not the
# most derived one.
mcs._all_registered_decorators.remove(base.decorator_name)
# Also make sure we don't register again
mcs._do_not_register.add(base.decorator_name)
cls.__init_subclass__ = do_unregister
mcs._all_registered_decorators.insert(cls.decorator_name, cls)
return cls
@classmethod
def all_decorators(mcs) -> Dict[str, "FlowMutatorMeta"]:
mcs._check_init()
return mcs._all_registered_decorators.get_unique_prefixes()
def __str__(cls):
return "FlowMutator(%s)" % cls.decorator_name
@classmethod
def get_decorator_by_name(
mcs, decorator_name: str
) -> Optional[Union["FlowDecoratorMeta", "metaflow.decorators.Decorator"]]:
"""
Get a decorator by its name.
Parameters
----------
decorator_name: str
The name of the decorator to retrieve.
Returns
-------
Optional[FlowDecoratorMeta]
The decorator class if found, None otherwise.
"""
mcs._check_init()
return mcs._all_registered_decorators.unique_prefix_value(decorator_name)
@classmethod
def get_decorator_name(mcs, decorator_type: type) -> Optional[str]:
"""
Get the minimally unique classpath name for a decorator type.
Parameters
----------
decorator_type: type
The type of the decorator to retrieve the name for.
Returns
-------
Optional[str]
The minimally unique classpath name if found, None otherwise.
"""
mcs._check_init()
return mcs._all_registered_decorators.unique_prefix_for_type(decorator_type)
@classmethod
def _check_init(mcs):
# Delay importing STEP_DECORATORS until we actually need it
if not mcs._all_registered_decorators.inited:
from metaflow.plugins import FLOW_DECORATORS
mcs._all_registered_decorators.init([(t.name, t) for t in FLOW_DECORATORS])
| FlowMutatorMeta |
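The metaclass above registers every mutator and injects an `__init_subclass__` hook so that, once a subclass appears, its bases are unregistered, leaving only the most-derived classes. A minimal standalone sketch of that pattern (class names are illustrative):

```python
class RegistryMeta(type):
    registry = {}

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        if name == "Base":  # the abstract root is never registered
            return cls

        @classmethod
        def _unregister_bases(cls_, **_kwargs):
            # Called with the newly created subclass: drop its bases from
            # the registry so only the most-derived classes remain.
            for base in cls_.__bases__:
                if isinstance(base, RegistryMeta):
                    mcs.registry.pop(base.__name__, None)

        cls.__init_subclass__ = _unregister_bases
        mcs.registry[name] = cls
        return cls


class Base(metaclass=RegistryMeta):
    pass


class Parent(Base):
    pass


class Child(Parent):
    pass


# Parent was unregistered when Child was defined; only the leaf remains.
assert set(RegistryMeta.registry) == {"Child"}
```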
python | tensorflow__tensorflow | tensorflow/compiler/tests/stateless_random_ops_test.py | {
"start": 2214,
"end": 16361
} | class ____(xla_test.XLATestCase, parameterized.TestCase):
"""Test cases for stateless random-number generator operators."""
def _random_types(self, include_int=False):
return self.all_tf_types & _allowed_types(include_int)
@test_util.run_v2_only
def testForcedCompile(self):
"""Tests whole-function forced-compilation.
This test checks that stateless_random_* can be used in forced-compilation
scenarios (e.g. TPU). The new version of stateless_random_* requires the
intermediate tensor `alg` to be compile-time constant, so we need to check
that this requirement won't prevent `seed` from depending on variables.
"""
if config.list_logical_devices('TPU'):
self.skipTest('To accommodate OSS, experimental_compile support for TPU '
'is not linked in.')
# GPU doesn't support int32 variables, so we use int64.
v = variables.Variable([1, 2], dtype=dtypes.int64)
@def_function.function(experimental_compile=True)
def f():
key, counter = (
gen_stateless_random_ops_v2.stateless_random_get_key_counter(
seed=math_ops.cast(v.read_value(), dtypes.int32)))
alg = gen_stateless_random_ops_v2.stateless_random_get_alg()
return gen_stateless_random_ops_v2.stateless_random_normal_v2(
shape=[], key=key, counter=counter, alg=alg)
f()
@test_util.run_v2_only
def testGetKeyCounterAlg(self):
seed = [1, 2]
key, counter = gen_stateless_random_ops_v2.stateless_random_get_key_counter(
seed)
self.assertAllEqual(key.shape, [1])
self.assertAllEqual(counter.shape, [2])
alg = gen_stateless_random_ops_v2.stateless_random_get_alg()
self.assertAllEqual(alg.shape, [])
@parameterized.named_parameters(
('_%s_%s' % (op_id, alg_id), op, alg_group) # pylint: disable=g-complex-comprehension
for alg_id, alg_group in enumerate([
[
random_ops_util.Algorithm.PHILOX,
random_ops_util.Algorithm.PHILOX.value,
'philox',
],
[
random_ops_util.Algorithm.THREEFRY,
random_ops_util.Algorithm.THREEFRY.value,
'threefry',
],
[
random_ops_util.Algorithm.AUTO_SELECT,
random_ops_util.Algorithm.AUTO_SELECT.value,
'auto_select',
None,
],
])
for op_id, op in enumerate([
stateless.stateless_random_normal,
stateless.stateless_truncated_normal,
functools.partial(
stateless.stateless_random_uniform,
dtype=dtypes.uint32,
minval=None,
maxval=None,
),
functools.partial(
stateless.stateless_random_uniform, dtype=dtypes.int32, maxval=100
),
functools.partial(
stateless.stateless_random_uniform, dtype=dtypes.float32
),
])
)
@test_util.run_v2_only
def testAlg(self, op, alg_group):
"""Tests all values of `alg`."""
if config.list_logical_devices('TPU') or config.list_logical_devices('GPU'):
self.skipTest('Only _cpu tests linked in support for jit_compile on CPU.')
seed = [1, 2]
shape = [2, 3]
outputs = []
for alg in alg_group:
with ops.device('CPU'):
output = def_function.function(jit_compile=True)(op)(
shape=shape, seed=seed, alg=alg)
self.assertEqual(output.shape, shape)
outputs.append(output)
x = outputs[0]
for y in outputs[1:]:
self.assertAllEqual(x, y)
def testLargeNormal(self):
"""Tests an OOM bug of StatelessRandomNormalV2 on TPU."""
with self.session() as sess, self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
key, counter, alg = (gen_stateless_random_ops_v2.
stateless_random_get_key_counter_alg(seed_t))
x = gen_stateless_random_ops_v2.stateless_random_normal_v2(
shape=[1024, 32000], key=key, counter=counter, dtype=dtypes.float32,
alg=alg)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
self.assertAllEqual([1024, 32000], y.shape)
key, counter = (gen_stateless_random_ops_v2.
stateless_random_get_key_counter(seed_t))
alg = gen_stateless_random_ops_v2.stateless_random_get_alg()
x = gen_stateless_random_ops_v2.stateless_random_normal_v2(
shape=[1024, 32000], key=key, counter=counter, dtype=dtypes.float32,
alg=alg)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
self.assertAllEqual([1024, 32000], y.shape)
@parameterized.named_parameters(
(f'_{op_name}_{shape}_{dtype.name}', stateless_op, shape, dtype) # pylint: disable=g-complex-comprehension
for dtype in _allowed_types() for shape in ((), (3,), (2, 5))
for op_name, stateless_op in (
('uniform', stateless.stateless_random_uniform),
('normal', stateless.stateless_random_normal),
))
def testDeterminism(self, stateless_op, shape, dtype):
# Stateless values should be equal iff the seeds are equal (roughly)
seeds = [(x, y) for x in range(-2, 3) for y in range(-2, 3)] * 3 # pylint: disable=g-complex-comprehension
with self.session(), self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
pure = stateless_op(shape, seed=seed_t, dtype=dtype)
values = [(seed, pure.eval(feed_dict={seed_t: seed})) for seed in seeds]
for s0, v0 in values:
for s1, v1 in values:
if s0 == s1:
self.assertAllEqual(v0, v1)
else:
# The resolutions of float16 and bfloat16 are too low, so
# in some cases (e.g. scalar shape) different seeds may
# lead to the same output. So we skip those dtypes.
if not (dtype in (dtypes.bfloat16, dtypes.float16) and shape == ()): # pylint: disable=g-explicit-bool-comparison
self.assertNotAllEqual(v0, v1)
def testRandomUniformIsInRange(self):
with self.session() as sess, self.test_scope():
for dtype in self._random_types(include_int=True):
maxval = 1
if dtype.is_integer:
maxval = 100
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
x = stateless.stateless_random_uniform(
shape=[1000], seed=seed_t, maxval=maxval, dtype=dtype)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
self.assertTrue(np.all(y >= 0))
self.assertTrue(np.all(y < maxval))
@parameterized.named_parameters(
(f'_{alg.name}_{dtype.name}_{seed}', alg, dtype, seed) # pylint: disable=g-complex-comprehension
for seed in ([1, 2], [12, 23], [123, 456], [565656, 121212])
for dtype in _allowed_types(include_int=True)
for alg in list(random_ops_util.Algorithm)
)
def testDistributionOfStatelessRandomUniform(self, alg, dtype, seed):
"""Use Pearson's Chi-squared test to test for uniformity."""
philox = random_ops_util.Algorithm.PHILOX
auto_select = random_ops_util.Algorithm.AUTO_SELECT
device = xla_device()
if 'CPU' in device.device_type:
device_type = 'CPU'
elif 'GPU' in device.device_type:
device_type = 'GPU'
elif device.device_type == 'TPU':
device_type = 'TPU'
else:
device_type = None
bad_combos1 = [
(dtypes.int32, [123, 456]),
(dtypes.int64, [123, 456]),
(dtypes.float16, [565656, 121212]),
(dtypes.bfloat16, [1, 2]),
]
bad_combos2 = [
(dtypes.int32, [1, 2]),
(dtypes.int32, [12, 23]),
]
# TODO(b/244649364): Investigate why these combinations fail.
if (device_type in ('CPU', 'GPU') and alg in (philox, auto_select) and
(dtype, seed) in bad_combos1 or device_type == 'TPU' and
(alg == philox and
(dtype, seed) in bad_combos1 or alg == auto_select and
(dtype, seed) in bad_combos2)):
self.skipTest(
'This (device, alg, dtype, seed) combination fails (b/244649364).')
with self.session() as sess, self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
n = 1000
maxval = 1
if dtype.is_integer:
maxval = 100
x = stateless.stateless_random_uniform(
shape=[n], seed=seed_t, maxval=maxval, dtype=dtype, alg=alg)
y = sess.run(x, {seed_t: seed})
# Convert y to float and normalize its value to range [0, 1) when
# maxval != 1.
y = y.astype(float) / maxval
# Tests that the values are distributed amongst 10 bins with equal
# probability. 27.88 is the Chi^2 value for 9 degrees of freedom with
# p=0.001. This test is probabilistic and would be flaky if the random
# seed were not fixed.
bins = 10
self.assertLess(random_test_util.chi_squared(y, bins), 27.88)
def testRandomNormalIsFinite(self):
with self.session() as sess, self.test_scope():
for dtype in self._random_types():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
x = stateless.stateless_random_normal(
shape=[10000], seed=seed_t, dtype=dtype)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
self.assertTrue(np.all(np.isfinite(y)))
@parameterized.named_parameters(
(f'_{dtype.name}_{seed}', dtype, seed) # pylint: disable=g-complex-comprehension
for seed in ([1, 2], [12, 23], [25252, 314159])
for dtype in _allowed_types()
)
def testDistributionOfStatelessRandomNormal(self, dtype, seed):
"""Use Anderson-Darling test to test distribution appears normal."""
with self.session() as sess, self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
n = 1000
x = stateless.stateless_random_normal(shape=[n], seed=seed_t, dtype=dtype)
y = sess.run(x, {seed_t: seed})
# The constant 2.492 is the 5% critical value for the Anderson-Darling
# test where the mean and variance are known. This test is probabilistic
# so to avoid flakiness the seed is fixed.
self.assertLess(random_test_util.anderson_darling(y.astype(float)), 2.492)
@parameterized.named_parameters(
(f'_{dtype.name}', dtype) for dtype in _allowed_types())
def testTruncatedNormal(self, dtype):
with self.session() as sess, self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
n = 10000000
x = stateless.stateless_truncated_normal(
shape=[n], seed=seed_t, dtype=dtype)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
if dtype == dtypes.float16:
mean_atol = 2e-3
else:
mean_atol = 5e-4
if dtype == dtypes.float16:
median_atol = 2e-3
else:
median_atol = 8e-4
if dtype == dtypes.bfloat16:
variance_rtol = 6e-3
elif dtype == dtypes.float16:
variance_rtol = 3e-3
else:
variance_rtol = 1e-3
random_test_util.test_truncated_normal(
self.assertEqual,
self.assertAllClose,
n,
y,
mean_atol=mean_atol,
median_atol=median_atol,
variance_rtol=variance_rtol)
def _testParameterizedTruncatedNormal(self,
means,
stddevs,
minvals,
maxvals,
variance_rtol=None):
if 'CPU' in xla_device().device_type:
n = int(1e7)
else:
n = int(10e7)
for dtype in self._random_types():
with self.session() as sess, self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
x = stateless.stateless_parameterized_truncated_normal(
shape=[n],
seed=seed_t,
means=means,
stddevs=stddevs,
minvals=minvals,
maxvals=maxvals)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
if variance_rtol is None:
variance_rtol = 6e-3 if dtype == dtypes.bfloat16 else 1e-3
random_test_util.test_truncated_normal(
self.assertEqual,
self.assertAllClose,
n,
y,
means=means,
stddevs=stddevs,
minvals=minvals,
maxvals=maxvals,
mean_atol=1e-3,
median_atol=1e-3,
variance_rtol=variance_rtol)
def testParameterizedTruncatedNormalDefault(self):
self._testParameterizedTruncatedNormal(0., 1., -2., 2.)
def testParameterizedTruncatedNormalShifted(self):
self._testParameterizedTruncatedNormal(-1., 1., -2., 2.)
def testParameterizedTruncatedNormalRightTail(self):
self.skipTest('b/276957102')
self._testParameterizedTruncatedNormal(0., 1., 4., 20., variance_rtol=2e-2)
def testParameterizedTruncatedNormalLeftTail(self):
self.skipTest('b/276957102')
self._testParameterizedTruncatedNormal(
0., 1., -20., -4., variance_rtol=5e-2)
def testParameterizedTruncatedNormalLeftTailTwoSidedBounds(self):
self._testParameterizedTruncatedNormal(
0., 1., -6., -3., variance_rtol=5e-2)
def testParameterizedTruncatedNormalSmallStddev(self):
self._testParameterizedTruncatedNormal(0., 0.1, 0.05, 0.10)
def testParameterizedTruncatedNormalBroadcast(self):
with self.session() as sess, self.test_scope():
seed_t = array_ops.placeholder(dtypes.int32, shape=[2])
means = array_ops.zeros([2], dtype=dtypes.float32)
stddevs = array_ops.ones([3, 1], dtype=dtypes.float32)
minvals = -array_ops.ones([5, 1, 1], dtype=dtypes.float32)
maxvals = array_ops.ones([7, 1, 1, 1], dtype=dtypes.float32)
shape = [11, 7, 5, 3, 2]
x = stateless.stateless_parameterized_truncated_normal(
shape=shape,
seed=seed_t,
means=means,
stddevs=stddevs,
minvals=minvals,
maxvals=maxvals)
y = sess.run(x, {seed_t: [0x12345678, 0xabcdef1]})
self.assertEqual((11, 7, 5, 3, 2), y.shape)
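The broadcast test above relies on NumPy-style shape broadcasting of the four parameter tensors. A hypothetical pure-Python sketch of that rule (not TensorFlow's implementation): shapes [2], [3, 1], [5, 1, 1], [7, 1, 1, 1] broadcast together to (7, 5, 3, 2), which the requested output shape must end with.

```python
def broadcast_shapes(*shapes):
    # Right-align shapes by left-padding with 1s, then reconcile each axis:
    # all non-1 sizes on an axis must agree.
    ndim = max(len(s) for s in shapes)
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError(f"incompatible dimensions: {dims}")
        out.append(sizes.pop() if sizes else 1)
    return tuple(out)

params = broadcast_shapes([2], [3, 1], [5, 1, 1], [7, 1, 1, 1])
assert params == (7, 5, 3, 2)
assert (11, 7, 5, 3, 2)[-len(params):] == params  # trailing dims must match
```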
| StatelessRandomOpsTest |
python | sphinx-doc__sphinx | sphinx/domains/c/_ast.py | {
"start": 25818,
"end": 26989
} | class ____(ASTTrailingTypeSpec):
def __init__(self, prefix: str, nestedName: ASTNestedName) -> None:
self.prefix = prefix
self.nestedName = nestedName
def __eq__(self, other: object) -> bool:
if not isinstance(other, ASTTrailingTypeSpecName):
return NotImplemented
return self.prefix == other.prefix and self.nestedName == other.nestedName
def __hash__(self) -> int:
return hash((self.prefix, self.nestedName))
@property
def name(self) -> ASTNestedName:
return self.nestedName
def _stringify(self, transform: StringifyTransform) -> str:
res: list[str] = []
if self.prefix:
res.extend((self.prefix, ' '))
res.append(transform(self.nestedName))
return ''.join(res)
def describe_signature(
self, signode: TextElement, mode: str, env: BuildEnvironment, symbol: Symbol
) -> None:
if self.prefix:
signode += addnodes.desc_sig_keyword(self.prefix, self.prefix)
signode += addnodes.desc_sig_space()
self.nestedName.describe_signature(signode, mode, env, symbol=symbol)
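A minimal standalone sketch of the prefix handling in `_stringify` above, with plain strings in place of `ASTNestedName` (the function name here is hypothetical):

```python
def stringify_trailing_type_spec(prefix, nested_name):
    # Mirrors _stringify: optional keyword prefix, a separating space,
    # then the stringified nested name.
    res = []
    if prefix:
        res.extend((prefix, " "))
    res.append(nested_name)
    return "".join(res)

assert stringify_trailing_type_spec("struct", "Node") == "struct Node"
assert stringify_trailing_type_spec("", "Node") == "Node"
```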
| ASTTrailingTypeSpecName |
python | langchain-ai__langchain | libs/core/langchain_core/language_models/chat_models.py | {
"start": 8592,
"end": 69310
} | class ____(BaseLanguageModel[AIMessage], ABC):
r"""Base class for chat models.
Key imperative methods:
Methods that actually call the underlying model.
This table provides a brief overview of the main imperative methods. Please see the base `Runnable` reference for full documentation.
| Method | Input | Output | Description |
| ---------------------- | ------------------------------------------------------------ | ---------------------------------------------------------- | -------------------------------------------------------------------------------- |
| `invoke` | `str` \| `list[dict | tuple | BaseMessage]` \| `PromptValue` | `BaseMessage` | A single chat model call. |
| `ainvoke` | `'''` | `BaseMessage` | Defaults to running `invoke` in an async executor. |
| `stream` | `'''` | `Iterator[BaseMessageChunk]` | Defaults to yielding output of `invoke`. |
| `astream` | `'''` | `AsyncIterator[BaseMessageChunk]` | Defaults to yielding output of `ainvoke`. |
| `astream_events` | `'''` | `AsyncIterator[StreamEvent]` | Event types: `on_chat_model_start`, `on_chat_model_stream`, `on_chat_model_end`. |
| `batch` | `list[''']` | `list[BaseMessage]` | Defaults to running `invoke` in concurrent threads. |
| `abatch` | `list[''']` | `list[BaseMessage]` | Defaults to running `ainvoke` in concurrent threads. |
| `batch_as_completed` | `list[''']` | `Iterator[tuple[int, Union[BaseMessage, Exception]]]` | Defaults to running `invoke` in concurrent threads. |
| `abatch_as_completed` | `list[''']` | `AsyncIterator[tuple[int, Union[BaseMessage, Exception]]]` | Defaults to running `ainvoke` in concurrent threads. |
Key declarative methods:
Methods for creating another `Runnable` using the chat model.
This table provides a brief overview of the main declarative methods. Please see the reference for each method for full documentation.
| Method | Description |
| ---------------------------- | ------------------------------------------------------------------------------------------ |
| `bind_tools` | Create chat model that can call tools. |
| `with_structured_output` | Create wrapper that structures model output using schema. |
| `with_retry` | Create wrapper that retries model calls on failure. |
| `with_fallbacks` | Create wrapper that falls back to other models on failure. |
| `configurable_fields` | Specify init args of the model that can be configured at runtime via the `RunnableConfig`. |
| `configurable_alternatives` | Specify alternative models which can be swapped in at runtime via the `RunnableConfig`. |
Creating custom chat model:
Custom chat model implementations should inherit from this class.
Please reference the table below for information about which
methods and properties are required or optional for implementations.
| Method/Property | Description | Required |
| -------------------------------- | ------------------------------------------------------------------ | ----------------- |
| `_generate` | Use to generate a chat result from a prompt | Required |
| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |
| `_stream` | Use to implement streaming | Optional |
| `_agenerate` | Use to implement a native async method | Optional |
| `_astream` | Use to implement async version of `_stream` | Optional |
""" # noqa: E501
rate_limiter: BaseRateLimiter | None = Field(default=None, exclude=True)
"An optional rate limiter to use for limiting the number of requests."
disable_streaming: bool | Literal["tool_calling"] = False
"""Whether to disable streaming for this model.
If streaming is bypassed, then `stream`/`astream`/`astream_events` will
defer to `invoke`/`ainvoke`.
- If `True`, will always bypass streaming case.
- If `'tool_calling'`, will bypass streaming case only when the model is called
with a `tools` keyword argument. In other words, LangChain will automatically
switch to non-streaming behavior (`invoke`) only when the tools argument is
provided. This offers the best of both worlds.
- If `False` (Default), will always use streaming case if available.
    The main reason for this flag is that code might be written using `stream` and
    a user may want to swap out a given model for another model whose implementation
    does not properly support streaming.
"""
output_version: str | None = Field(
default_factory=from_env("LC_OUTPUT_VERSION", default=None)
)
"""Version of `AIMessage` output format to store in message content.
`AIMessage.content_blocks` will lazily parse the contents of `content` into a
standard format. This flag can be used to additionally store the standard format
in message content, e.g., for serialization purposes.
Supported values:
- `'v0'`: provider-specific format in content (can lazily-parse with
`content_blocks`)
- `'v1'`: standardized format in content (consistent with `content_blocks`)
Partner packages (e.g.,
[`langchain-openai`](https://pypi.org/project/langchain-openai)) can also use this
field to roll out new content formats in a backward-compatible way.
!!! version-added "Added in `langchain-core` 1.0.0"
"""
profile: ModelProfile | None = Field(default=None, exclude=True)
"""Profile detailing model capabilities.
!!! warning "Beta feature"
This is a beta feature. The format of model profiles is subject to change.
If not specified, automatically loaded from the provider package on initialization
if data is available.
Example profile data includes context window sizes, supported modalities, or support
for tool calling, structured output, and other features.
!!! version-added "Added in `langchain-core` 1.1.0"
"""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
@cached_property
def _serialized(self) -> dict[str, Any]:
return dumpd(self)
# --- Runnable methods ---
@property
@override
def OutputType(self) -> Any:
"""Get the output type for this `Runnable`."""
return AnyMessage
def _convert_input(self, model_input: LanguageModelInput) -> PromptValue:
if isinstance(model_input, PromptValue):
return model_input
if isinstance(model_input, str):
return StringPromptValue(text=model_input)
if isinstance(model_input, Sequence):
return ChatPromptValue(messages=convert_to_messages(model_input))
msg = (
f"Invalid input type {type(model_input)}. "
"Must be a PromptValue, str, or list of BaseMessages."
)
raise ValueError(msg)
@override
def invoke(
self,
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage:
config = ensure_config(config)
return cast(
"AIMessage",
cast(
"ChatGeneration",
self.generate_prompt(
[self._convert_input(input)],
stop=stop,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.pop("run_id", None),
**kwargs,
).generations[0][0],
).message,
)
@override
async def ainvoke(
self,
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AIMessage:
config = ensure_config(config)
llm_result = await self.agenerate_prompt(
[self._convert_input(input)],
stop=stop,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.pop("run_id", None),
**kwargs,
)
return cast(
"AIMessage", cast("ChatGeneration", llm_result.generations[0][0]).message
)
def _should_stream(
self,
*,
async_api: bool,
run_manager: CallbackManagerForLLMRun
| AsyncCallbackManagerForLLMRun
| None = None,
**kwargs: Any,
) -> bool:
"""Determine if a given model call should hit the streaming API."""
sync_not_implemented = type(self)._stream == BaseChatModel._stream # noqa: SLF001
async_not_implemented = type(self)._astream == BaseChatModel._astream # noqa: SLF001
# Check if streaming is implemented.
if (not async_api) and sync_not_implemented:
return False
# Note, since async falls back to sync we check both here.
if async_api and async_not_implemented and sync_not_implemented:
return False
# Check if streaming has been disabled on this instance.
if self.disable_streaming is True:
return False
# We assume tools are passed in via "tools" kwarg in all models.
if self.disable_streaming == "tool_calling" and kwargs.get("tools"):
return False
# Check if a runtime streaming flag has been passed in.
if "stream" in kwargs:
return kwargs["stream"]
if "streaming" in self.model_fields_set:
streaming_value = getattr(self, "streaming", None)
if isinstance(streaming_value, bool):
return streaming_value
# Check if any streaming callback handlers have been passed in.
handlers = run_manager.handlers if run_manager else []
return any(isinstance(h, _StreamingCallbackHandler) for h in handlers)
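The decision order in `_should_stream` can be summarized as: capability check, then the `disable_streaming` flag, then an explicit runtime `stream` kwarg, then an instance-level `streaming` field, and finally the presence of streaming callback handlers. A hypothetical standalone sketch of that precedence (flattened to plain arguments, not the actual method):

```python
def should_stream(stream_implemented, disable_streaming, kwargs,
                  instance_streaming, has_streaming_handler):
    if not stream_implemented:            # no streaming capability at all
        return False
    if disable_streaming is True:         # hard opt-out
        return False
    if disable_streaming == "tool_calling" and kwargs.get("tools"):
        return False                      # opt-out only when tools are bound
    if "stream" in kwargs:                # explicit runtime flag wins next
        return kwargs["stream"]
    if isinstance(instance_streaming, bool):
        return instance_streaming         # instance-level `streaming` field
    return has_streaming_handler          # finally, streaming callbacks

assert should_stream(True, False, {}, None, True) is True
assert should_stream(True, "tool_calling", {"tools": [object()]}, None, True) is False
assert should_stream(True, False, {"stream": False}, True, True) is False
```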
@override
def stream(
self,
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[AIMessageChunk]:
if not self._should_stream(async_api=False, **{**kwargs, "stream": True}):
# Model doesn't implement streaming, so use default implementation
yield cast(
"AIMessageChunk",
self.invoke(input, config=config, stop=stop, **kwargs),
)
else:
config = ensure_config(config)
messages = self._convert_input(input).to_messages()
ls_structured_output_format = kwargs.pop(
"ls_structured_output_format", None
) or kwargs.pop("structured_output_format", None)
ls_structured_output_format_dict = _format_ls_structured_output(
ls_structured_output_format
)
params = self._get_invocation_params(stop=stop, **kwargs)
options = {"stop": stop, **kwargs, **ls_structured_output_format_dict}
inheritable_metadata = {
**(config.get("metadata") or {}),
**self._get_ls_params(stop=stop, **kwargs),
}
callback_manager = CallbackManager.configure(
config.get("callbacks"),
self.callbacks,
self.verbose,
config.get("tags"),
self.tags,
inheritable_metadata,
self.metadata,
)
(run_manager,) = callback_manager.on_chat_model_start(
self._serialized,
[_format_for_tracing(messages)],
invocation_params=params,
options=options,
name=config.get("run_name"),
run_id=config.pop("run_id", None),
batch_size=1,
)
chunks: list[ChatGenerationChunk] = []
if self.rate_limiter:
self.rate_limiter.acquire(blocking=True)
try:
input_messages = _normalize_messages(messages)
run_id = "-".join((LC_ID_PREFIX, str(run_manager.run_id)))
yielded = False
index = -1
index_type = ""
for chunk in self._stream(input_messages, stop=stop, **kwargs):
if chunk.message.id is None:
chunk.message.id = run_id
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
chunk.message, "v1"
)
for block in cast(
"list[types.ContentBlock]", chunk.message.content
):
if block["type"] != index_type:
index_type = block["type"]
index = index + 1
if "index" not in block:
block["index"] = index
run_manager.on_llm_new_token(
cast("str", chunk.message.content), chunk=chunk
)
chunks.append(chunk)
yield cast("AIMessageChunk", chunk.message)
yielded = True
# Yield a final empty chunk with chunk_position="last" if not yet
# yielded
if (
yielded
and isinstance(chunk.message, AIMessageChunk)
and not chunk.message.chunk_position
):
empty_content: str | list = (
"" if isinstance(chunk.message.content, str) else []
)
msg_chunk = AIMessageChunk(
content=empty_content, chunk_position="last", id=run_id
)
run_manager.on_llm_new_token(
"", chunk=ChatGenerationChunk(message=msg_chunk)
)
yield msg_chunk
except BaseException as e:
generations_with_error_metadata = _generate_response_from_error(e)
chat_generation_chunk = merge_chat_generation_chunks(chunks)
if chat_generation_chunk:
generations = [
[chat_generation_chunk],
generations_with_error_metadata,
]
else:
generations = [generations_with_error_metadata]
run_manager.on_llm_error(
e,
response=LLMResult(generations=generations),
)
raise
generation = merge_chat_generation_chunks(chunks)
if generation is None:
err = ValueError("No generation chunks were returned")
run_manager.on_llm_error(err, response=LLMResult(generations=[]))
raise err
run_manager.on_llm_end(LLMResult(generations=[[generation]]))
@override
async def astream(
self,
input: LanguageModelInput,
config: RunnableConfig | None = None,
*,
stop: list[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[AIMessageChunk]:
if not self._should_stream(async_api=True, **{**kwargs, "stream": True}):
# No async or sync stream is implemented, so fall back to ainvoke
yield cast(
"AIMessageChunk",
await self.ainvoke(input, config=config, stop=stop, **kwargs),
)
return
config = ensure_config(config)
messages = self._convert_input(input).to_messages()
ls_structured_output_format = kwargs.pop(
"ls_structured_output_format", None
) or kwargs.pop("structured_output_format", None)
ls_structured_output_format_dict = _format_ls_structured_output(
ls_structured_output_format
)
params = self._get_invocation_params(stop=stop, **kwargs)
options = {"stop": stop, **kwargs, **ls_structured_output_format_dict}
inheritable_metadata = {
**(config.get("metadata") or {}),
**self._get_ls_params(stop=stop, **kwargs),
}
callback_manager = AsyncCallbackManager.configure(
config.get("callbacks"),
self.callbacks,
self.verbose,
config.get("tags"),
self.tags,
inheritable_metadata,
self.metadata,
)
(run_manager,) = await callback_manager.on_chat_model_start(
self._serialized,
[_format_for_tracing(messages)],
invocation_params=params,
options=options,
name=config.get("run_name"),
run_id=config.pop("run_id", None),
batch_size=1,
)
if self.rate_limiter:
await self.rate_limiter.aacquire(blocking=True)
chunks: list[ChatGenerationChunk] = []
try:
input_messages = _normalize_messages(messages)
run_id = "-".join((LC_ID_PREFIX, str(run_manager.run_id)))
yielded = False
index = -1
index_type = ""
async for chunk in self._astream(
input_messages,
stop=stop,
**kwargs,
):
if chunk.message.id is None:
chunk.message.id = run_id
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
chunk.message, "v1"
)
for block in cast(
"list[types.ContentBlock]", chunk.message.content
):
if block["type"] != index_type:
index_type = block["type"]
index = index + 1
if "index" not in block:
block["index"] = index
await run_manager.on_llm_new_token(
cast("str", chunk.message.content), chunk=chunk
)
chunks.append(chunk)
yield cast("AIMessageChunk", chunk.message)
yielded = True
# Yield a final empty chunk with chunk_position="last" if not yet yielded
if (
yielded
and isinstance(chunk.message, AIMessageChunk)
and not chunk.message.chunk_position
):
empty_content: str | list = (
"" if isinstance(chunk.message.content, str) else []
)
msg_chunk = AIMessageChunk(
content=empty_content, chunk_position="last", id=run_id
)
await run_manager.on_llm_new_token(
"", chunk=ChatGenerationChunk(message=msg_chunk)
)
yield msg_chunk
except BaseException as e:
generations_with_error_metadata = _generate_response_from_error(e)
chat_generation_chunk = merge_chat_generation_chunks(chunks)
if chat_generation_chunk:
generations = [[chat_generation_chunk], generations_with_error_metadata]
else:
generations = [generations_with_error_metadata]
await run_manager.on_llm_error(
e,
response=LLMResult(generations=generations),
)
raise
generation = merge_chat_generation_chunks(chunks)
if not generation:
err = ValueError("No generation chunks were returned")
await run_manager.on_llm_error(err, response=LLMResult(generations=[]))
raise err
await run_manager.on_llm_end(
LLMResult(generations=[[generation]]),
)
# --- Custom methods ---
def _combine_llm_outputs(self, llm_outputs: list[dict | None]) -> dict: # noqa: ARG002
return {}
def _convert_cached_generations(self, cache_val: list) -> list[ChatGeneration]:
"""Convert cached Generation objects to ChatGeneration objects.
Handle case where cache contains Generation objects instead of
ChatGeneration objects. This can happen due to serialization/deserialization
issues or legacy cache data (see #22389).
Args:
cache_val: List of cached generation objects.
Returns:
List of ChatGeneration objects.
"""
converted_generations = []
for gen in cache_val:
if isinstance(gen, Generation) and not isinstance(gen, ChatGeneration):
# Convert Generation to ChatGeneration by creating AIMessage
# from the text content
chat_gen = ChatGeneration(
message=AIMessage(content=gen.text),
generation_info=gen.generation_info,
)
converted_generations.append(chat_gen)
else:
# Already a ChatGeneration or other expected type
if hasattr(gen, "message") and isinstance(gen.message, AIMessage):
# We zero out cost on cache hits
gen.message = gen.message.model_copy(
update={
"usage_metadata": {
**(gen.message.usage_metadata or {}),
"total_cost": 0,
}
}
)
converted_generations.append(gen)
return converted_generations
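The cache-hit handling above does two things: legacy text-only `Generation` entries are wrapped into chat form, and usage cost is zeroed on a hit. A hypothetical sketch using plain dicts in place of the `Generation`/`ChatGeneration` classes:

```python
def convert_cached(cache_val):
    out = []
    for gen in cache_val:
        if "message" not in gen:
            # Legacy Generation-style entry: wrap its text as a message.
            out.append({"message": {"content": gen["text"]},
                        "generation_info": gen.get("generation_info")})
        else:
            # ChatGeneration-style entry: zero out cost on the cache hit.
            msg = gen["message"]
            usage = dict(msg.get("usage_metadata") or {})
            usage["total_cost"] = 0
            out.append({**gen, "message": {**msg, "usage_metadata": usage}})
    return out

hit = convert_cached([{"message": {"content": "hi",
                                   "usage_metadata": {"total_cost": 5}}}])
assert hit[0]["message"]["usage_metadata"]["total_cost"] == 0
```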
def _get_invocation_params(
self,
stop: list[str] | None = None,
**kwargs: Any,
) -> dict:
params = self.dict()
params["stop"] = stop
return {**params, **kwargs}
def _get_ls_params(
self,
stop: list[str] | None = None,
**kwargs: Any,
) -> LangSmithParams:
"""Get standard params for tracing."""
# get default provider from class name
default_provider = self.__class__.__name__
if default_provider.startswith("Chat"):
default_provider = default_provider[4:].lower()
elif default_provider.endswith("Chat"):
default_provider = default_provider[:-4]
default_provider = default_provider.lower()
ls_params = LangSmithParams(ls_provider=default_provider, ls_model_type="chat")
if stop:
ls_params["ls_stop"] = stop
# model
if "model" in kwargs and isinstance(kwargs["model"], str):
ls_params["ls_model_name"] = kwargs["model"]
elif hasattr(self, "model") and isinstance(self.model, str):
ls_params["ls_model_name"] = self.model
elif hasattr(self, "model_name") and isinstance(self.model_name, str):
ls_params["ls_model_name"] = self.model_name
# temperature
if "temperature" in kwargs and isinstance(kwargs["temperature"], float):
ls_params["ls_temperature"] = kwargs["temperature"]
elif hasattr(self, "temperature") and isinstance(self.temperature, float):
ls_params["ls_temperature"] = self.temperature
# max_tokens
if "max_tokens" in kwargs and isinstance(kwargs["max_tokens"], int):
ls_params["ls_max_tokens"] = kwargs["max_tokens"]
elif hasattr(self, "max_tokens") and isinstance(self.max_tokens, int):
ls_params["ls_max_tokens"] = self.max_tokens
return ls_params
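The default provider name above is derived by stripping a `Chat` prefix or suffix from the class name and lowercasing. A small standalone sketch of just that derivation (helper name is hypothetical):

```python
def default_provider_from_class_name(name):
    # "ChatOpenAI" -> "openai"; "AnthropicChat" -> "anthropic".
    if name.startswith("Chat"):
        name = name[4:]
    elif name.endswith("Chat"):
        name = name[:-4]
    return name.lower()

assert default_provider_from_class_name("ChatOpenAI") == "openai"
assert default_provider_from_class_name("AnthropicChat") == "anthropic"
```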
def _get_llm_string(self, stop: list[str] | None = None, **kwargs: Any) -> str:
if self.is_lc_serializable():
params = {**kwargs, "stop": stop}
param_string = str(sorted(params.items()))
# This code is not super efficient as it goes back and forth between
# json and dict.
serialized_repr = self._serialized
_cleanup_llm_representation(serialized_repr, 1)
llm_string = json.dumps(serialized_repr, sort_keys=True)
return llm_string + "---" + param_string
params = self._get_invocation_params(stop=stop, **kwargs)
params = {**params, **kwargs}
return str(sorted(params.items()))
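For serializable models, the cache key above is the sorted JSON of the model's serialized representation joined to the sorted call parameters with `"---"`. A hypothetical sketch of that key shape (plain dict in place of the serialized model):

```python
import json

def cache_key(serialized_model, stop=None, **kwargs):
    # Sorting both halves makes the key insensitive to keyword order.
    params = {**kwargs, "stop": stop}
    return (json.dumps(serialized_model, sort_keys=True)
            + "---" + str(sorted(params.items())))

k1 = cache_key({"model": "m"}, stop=["\n"], temperature=0.0)
k2 = cache_key({"model": "m"}, temperature=0.0, stop=["\n"])
assert k1 == k2  # keyword order does not change the key
```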
def generate(
self,
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: uuid.UUID | None = None,
**kwargs: Any,
) -> LLMResult:
"""Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
        Use this method when you:
        1. Want to take advantage of batched calls,
        2. Need more output from the model than just the top generated value,
        3. Are building chains that are agnostic to the underlying language model
            type (e.g., pure text completion models vs chat models).
Args:
messages: List of list of messages.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generations` for each
input prompt and additional model provider-specific output.
"""
ls_structured_output_format = kwargs.pop(
"ls_structured_output_format", None
) or kwargs.pop("structured_output_format", None)
ls_structured_output_format_dict = _format_ls_structured_output(
ls_structured_output_format
)
params = self._get_invocation_params(stop=stop, **kwargs)
options = {"stop": stop, **ls_structured_output_format_dict}
inheritable_metadata = {
**(metadata or {}),
**self._get_ls_params(stop=stop, **kwargs),
}
callback_manager = CallbackManager.configure(
callbacks,
self.callbacks,
self.verbose,
tags,
self.tags,
inheritable_metadata,
self.metadata,
)
messages_to_trace = [
_format_for_tracing(message_list) for message_list in messages
]
run_managers = callback_manager.on_chat_model_start(
self._serialized,
messages_to_trace,
invocation_params=params,
options=options,
name=run_name,
run_id=run_id,
batch_size=len(messages),
)
results = []
input_messages = [
_normalize_messages(message_list) for message_list in messages
]
for i, m in enumerate(input_messages):
try:
results.append(
self._generate_with_cache(
m,
stop=stop,
run_manager=run_managers[i] if run_managers else None,
**kwargs,
)
)
except BaseException as e:
if run_managers:
generations_with_error_metadata = _generate_response_from_error(e)
run_managers[i].on_llm_error(
e,
response=LLMResult(
generations=[generations_with_error_metadata]
),
)
raise
flattened_outputs = [
LLMResult(generations=[res.generations], llm_output=res.llm_output)
for res in results
]
llm_output = self._combine_llm_outputs([res.llm_output for res in results])
generations = [res.generations for res in results]
output = LLMResult(generations=generations, llm_output=llm_output)
if run_managers:
run_infos = []
for manager, flattened_output in zip(
run_managers, flattened_outputs, strict=False
):
manager.on_llm_end(flattened_output)
run_infos.append(RunInfo(run_id=manager.run_id))
output.run = run_infos
return output
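The assembly at the end of `generate` produces a nested structure: the combined output's `generations` field is a list of lists, one inner list of candidate generations per input prompt, while each flattened per-run result wraps a single prompt's generations. A sketch with plain dicts standing in for `ChatResult`/`LLMResult`:

```python
per_prompt_results = [
    {"generations": ["hello"], "llm_output": {"tokens": 3}},
    {"generations": ["world", "earth"], "llm_output": {"tokens": 5}},
]
# Combined output: one inner list per input prompt.
generations = [res["generations"] for res in per_prompt_results]
# Per-run-manager outputs: each wraps a single prompt's generations.
flattened = [{"generations": [res["generations"]],
              "llm_output": res["llm_output"]}
             for res in per_prompt_results]
assert generations == [["hello"], ["world", "earth"]]
assert flattened[1]["generations"] == [["world", "earth"]]
```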
async def agenerate(
self,
messages: list[list[BaseMessage]],
stop: list[str] | None = None,
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
run_id: uuid.UUID | None = None,
**kwargs: Any,
) -> LLMResult:
"""Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
        Use this method when you:
        1. Want to take advantage of batched calls,
        2. Need more output from the model than just the top generated value,
        3. Are building chains that are agnostic to the underlying language model
            type (e.g., pure text completion models vs chat models).
Args:
messages: List of list of messages.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generations` for each
input prompt and additional model provider-specific output.
"""
ls_structured_output_format = kwargs.pop(
"ls_structured_output_format", None
) or kwargs.pop("structured_output_format", None)
ls_structured_output_format_dict = _format_ls_structured_output(
ls_structured_output_format
)
params = self._get_invocation_params(stop=stop, **kwargs)
options = {"stop": stop, **ls_structured_output_format_dict}
inheritable_metadata = {
**(metadata or {}),
**self._get_ls_params(stop=stop, **kwargs),
}
callback_manager = AsyncCallbackManager.configure(
callbacks,
self.callbacks,
self.verbose,
tags,
self.tags,
inheritable_metadata,
self.metadata,
)
messages_to_trace = [
_format_for_tracing(message_list) for message_list in messages
]
run_managers = await callback_manager.on_chat_model_start(
self._serialized,
messages_to_trace,
invocation_params=params,
options=options,
name=run_name,
batch_size=len(messages),
run_id=run_id,
)
input_messages = [
_normalize_messages(message_list) for message_list in messages
]
results = await asyncio.gather(
*[
self._agenerate_with_cache(
m,
stop=stop,
run_manager=run_managers[i] if run_managers else None,
**kwargs,
)
for i, m in enumerate(input_messages)
],
return_exceptions=True,
)
exceptions = []
for i, res in enumerate(results):
if isinstance(res, BaseException):
if run_managers:
generations_with_error_metadata = _generate_response_from_error(res)
await run_managers[i].on_llm_error(
res,
response=LLMResult(
generations=[generations_with_error_metadata]
),
)
exceptions.append(res)
if exceptions:
if run_managers:
await asyncio.gather(
*[
run_manager.on_llm_end(
LLMResult(
generations=[res.generations], # type: ignore[union-attr]
llm_output=res.llm_output, # type: ignore[union-attr]
)
)
for run_manager, res in zip(run_managers, results, strict=False)
if not isinstance(res, Exception)
]
)
raise exceptions[0]
flattened_outputs = [
LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[union-attr]
for res in results
]
llm_output = self._combine_llm_outputs([res.llm_output for res in results]) # type: ignore[union-attr]
generations = [res.generations for res in results] # type: ignore[union-attr]
output = LLMResult(generations=generations, llm_output=llm_output)
await asyncio.gather(
*[
run_manager.on_llm_end(flattened_output)
for run_manager, flattened_output in zip(
run_managers, flattened_outputs, strict=False
)
]
)
if run_managers:
output.run = [
RunInfo(run_id=run_manager.run_id) for run_manager in run_managers
]
return output
@override
def generate_prompt(
self,
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult:
prompt_messages = [p.to_messages() for p in prompts]
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
@override
async def agenerate_prompt(
self,
prompts: list[PromptValue],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult:
prompt_messages = [p.to_messages() for p in prompts]
return await self.agenerate(
prompt_messages, stop=stop, callbacks=callbacks, **kwargs
)
def _generate_with_cache(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
run_manager: CallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> ChatResult:
llm_cache = self.cache if isinstance(self.cache, BaseCache) else get_llm_cache()
# We should check the cache unless it's explicitly set to False
# A None cache means we should use the default global cache
# if it's configured.
check_cache = self.cache or self.cache is None
if check_cache:
if llm_cache:
llm_string = self._get_llm_string(stop=stop, **kwargs)
prompt = dumps(messages)
cache_val = llm_cache.lookup(prompt, llm_string)
if isinstance(cache_val, list):
converted_generations = self._convert_cached_generations(cache_val)
return ChatResult(generations=converted_generations)
elif self.cache is None:
pass
else:
msg = "Asked to cache, but no cache found at `langchain.cache`."
raise ValueError(msg)
# Apply the rate limiter after checking the cache, since
# we usually don't want to rate limit cache lookups, but
# we do want to rate limit API requests.
if self.rate_limiter:
self.rate_limiter.acquire(blocking=True)
# If stream is not explicitly set, check if implicitly requested by
# astream_events() or astream_log(). Bail out if _stream not implemented
if self._should_stream(
async_api=False,
run_manager=run_manager,
**kwargs,
):
chunks: list[ChatGenerationChunk] = []
run_id: str | None = (
f"{LC_ID_PREFIX}-{run_manager.run_id}" if run_manager else None
)
yielded = False
index = -1
index_type = ""
for chunk in self._stream(messages, stop=stop, **kwargs):
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
chunk.message, "v1"
)
for block in cast(
"list[types.ContentBlock]", chunk.message.content
):
if block["type"] != index_type:
index_type = block["type"]
index = index + 1
if "index" not in block:
block["index"] = index
if run_manager:
if chunk.message.id is None:
chunk.message.id = run_id
run_manager.on_llm_new_token(
cast("str", chunk.message.content), chunk=chunk
)
chunks.append(chunk)
yielded = True
# Yield a final empty chunk with chunk_position="last" if not yet yielded
if (
yielded
and isinstance(chunk.message, AIMessageChunk)
and not chunk.message.chunk_position
):
empty_content: str | list = (
"" if isinstance(chunk.message.content, str) else []
)
chunk = ChatGenerationChunk(
message=AIMessageChunk(
content=empty_content, chunk_position="last", id=run_id
)
)
if run_manager:
run_manager.on_llm_new_token("", chunk=chunk)
chunks.append(chunk)
result = generate_from_stream(iter(chunks))
elif inspect.signature(self._generate).parameters.get("run_manager"):
result = self._generate(
messages, stop=stop, run_manager=run_manager, **kwargs
)
else:
result = self._generate(messages, stop=stop, **kwargs)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
for generation in result.generations:
generation.message = _update_message_content_to_blocks(
generation.message, "v1"
)
# Add response metadata to each generation
for idx, generation in enumerate(result.generations):
if run_manager and generation.message.id is None:
generation.message.id = f"{LC_ID_PREFIX}-{run_manager.run_id}-{idx}"
generation.message.response_metadata = _gen_info_and_msg_metadata(
generation
)
if len(result.generations) == 1 and result.llm_output is not None:
result.generations[0].message.response_metadata = {
**result.llm_output,
**result.generations[0].message.response_metadata,
}
if check_cache and llm_cache:
llm_cache.update(prompt, llm_string, result.generations)
return result
async def _agenerate_with_cache(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
run_manager: AsyncCallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> ChatResult:
llm_cache = self.cache if isinstance(self.cache, BaseCache) else get_llm_cache()
# We should check the cache unless it's explicitly set to False
# A None cache means we should use the default global cache
# if it's configured.
check_cache = self.cache or self.cache is None
if check_cache:
if llm_cache:
llm_string = self._get_llm_string(stop=stop, **kwargs)
prompt = dumps(messages)
cache_val = await llm_cache.alookup(prompt, llm_string)
if isinstance(cache_val, list):
converted_generations = self._convert_cached_generations(cache_val)
return ChatResult(generations=converted_generations)
elif self.cache is None:
pass
else:
msg = "Asked to cache, but no cache found at `langchain.cache`."
raise ValueError(msg)
# Apply the rate limiter after checking the cache, since
# we usually don't want to rate limit cache lookups, but
# we do want to rate limit API requests.
if self.rate_limiter:
await self.rate_limiter.aacquire(blocking=True)
# If stream is not explicitly set, check if implicitly requested by
# astream_events() or astream_log(). Bail out if _astream not implemented
if self._should_stream(
async_api=True,
run_manager=run_manager,
**kwargs,
):
chunks: list[ChatGenerationChunk] = []
run_id: str | None = (
f"{LC_ID_PREFIX}-{run_manager.run_id}" if run_manager else None
)
yielded = False
index = -1
index_type = ""
async for chunk in self._astream(messages, stop=stop, **kwargs):
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
chunk.message, "v1"
)
for block in cast(
"list[types.ContentBlock]", chunk.message.content
):
if block["type"] != index_type:
index_type = block["type"]
index = index + 1
if "index" not in block:
block["index"] = index
if run_manager:
if chunk.message.id is None:
chunk.message.id = run_id
await run_manager.on_llm_new_token(
cast("str", chunk.message.content), chunk=chunk
)
chunks.append(chunk)
yielded = True
# Yield a final empty chunk with chunk_position="last" if not yet yielded
if (
yielded
and isinstance(chunk.message, AIMessageChunk)
and not chunk.message.chunk_position
):
empty_content: str | list = (
"" if isinstance(chunk.message.content, str) else []
)
chunk = ChatGenerationChunk(
message=AIMessageChunk(
content=empty_content, chunk_position="last", id=run_id
)
)
if run_manager:
await run_manager.on_llm_new_token("", chunk=chunk)
chunks.append(chunk)
result = generate_from_stream(iter(chunks))
elif inspect.signature(self._agenerate).parameters.get("run_manager"):
result = await self._agenerate(
messages, stop=stop, run_manager=run_manager, **kwargs
)
else:
result = await self._agenerate(messages, stop=stop, **kwargs)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
for generation in result.generations:
generation.message = _update_message_content_to_blocks(
generation.message, "v1"
)
# Add response metadata to each generation
for idx, generation in enumerate(result.generations):
if run_manager and generation.message.id is None:
generation.message.id = f"{LC_ID_PREFIX}-{run_manager.run_id}-{idx}"
generation.message.response_metadata = _gen_info_and_msg_metadata(
generation
)
if len(result.generations) == 1 and result.llm_output is not None:
result.generations[0].message.response_metadata = {
**result.llm_output,
**result.generations[0].message.response_metadata,
}
if check_cache and llm_cache:
await llm_cache.aupdate(prompt, llm_string, result.generations)
return result
@abstractmethod
def _generate(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
run_manager: CallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> ChatResult:
"""Generate the result.
Args:
messages: The messages to generate from.
stop: Optional list of stop words to use when generating.
run_manager: Optional callback manager to use for this call.
**kwargs: Additional keyword arguments to pass to the model.
Returns:
The chat result.
"""
async def _agenerate(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
run_manager: AsyncCallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> ChatResult:
"""Generate the result.
Args:
messages: The messages to generate from.
stop: Optional list of stop words to use when generating.
run_manager: Optional callback manager to use for this call.
**kwargs: Additional keyword arguments to pass to the model.
Returns:
The chat result.
"""
return await run_in_executor(
None,
self._generate,
messages,
stop,
run_manager.get_sync() if run_manager else None,
**kwargs,
)
def _stream(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
run_manager: CallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
"""Stream the output of the model.
Args:
messages: The messages to generate from.
stop: Optional list of stop words to use when generating.
run_manager: Optional callback manager to use for this call.
**kwargs: Additional keyword arguments to pass to the model.
Yields:
The chat generation chunks.
"""
raise NotImplementedError
async def _astream(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
run_manager: AsyncCallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
"""Stream the output of the model.
Args:
messages: The messages to generate from.
stop: Optional list of stop words to use when generating.
run_manager: Optional callback manager to use for this call.
**kwargs: Additional keyword arguments to pass to the model.
Yields:
The chat generation chunks.
"""
iterator = await run_in_executor(
None,
self._stream,
messages,
stop,
run_manager.get_sync() if run_manager else None,
**kwargs,
)
done = object()
while True:
item = await run_in_executor(
None,
next,
iterator,
done,
)
if item is done:
break
yield item # type: ignore[misc]
async def _call_async(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> BaseMessage:
result = await self.agenerate(
[messages], stop=stop, callbacks=callbacks, **kwargs
)
generation = result.generations[0][0]
if isinstance(generation, ChatGeneration):
return generation.message
msg = "Unexpected generation type"
raise ValueError(msg)
@property
@abstractmethod
def _llm_type(self) -> str:
"""Return type of chat model."""
@override
def dict(self, **kwargs: Any) -> dict:
"""Return a dictionary of the LLM."""
starter_dict = dict(self._identifying_params)
starter_dict["_type"] = self._llm_type
return starter_dict
def bind_tools(
self,
tools: Sequence[
typing.Dict[str, Any] | type | Callable | BaseTool # noqa: UP006
],
*,
tool_choice: str | None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]:
"""Bind tools to the model.
Args:
tools: Sequence of tools to bind to the model.
tool_choice: The tool to use. If "any" then any tool can be used.
Returns:
A Runnable that returns a message.
"""
raise NotImplementedError
def with_structured_output(
self,
schema: typing.Dict | type, # noqa: UP006
*,
include_raw: bool = False,
**kwargs: Any,
) -> Runnable[LanguageModelInput, typing.Dict | BaseModel]: # noqa: UP006
"""Model wrapper that returns outputs formatted to match the given schema.
Args:
schema: The output schema. Can be passed in as:
- An OpenAI function/tool schema,
- A JSON Schema,
- A `TypedDict` class,
- Or a Pydantic class.
If `schema` is a Pydantic class then the model output will be a
Pydantic instance of that class, and the model-generated fields will be
validated by the Pydantic class. Otherwise the model output will be a
dict and will not be validated.
See `langchain_core.utils.function_calling.convert_to_openai_tool` for
more on how to properly specify types and descriptions of schema fields
when specifying a Pydantic or `TypedDict` class.
include_raw:
If `False` then only the parsed structured output is returned.
If an error occurs during model output parsing it will be raised.
If `True` then both the raw model response (a `BaseMessage`) and the
parsed model response will be returned.
If an error occurs during output parsing it will be caught and returned
as well.
The final output is always a `dict` with keys `'raw'`, `'parsed'`, and
`'parsing_error'`.
Raises:
ValueError: If there are any unsupported `kwargs`.
NotImplementedError: If the model does not implement
`with_structured_output()`.
Returns:
A `Runnable` that takes same inputs as a
`langchain_core.language_models.chat.BaseChatModel`. If `include_raw` is
`False` and `schema` is a Pydantic class, `Runnable` outputs an instance
of `schema` (i.e., a Pydantic object). Otherwise, if `include_raw` is
`False` then `Runnable` outputs a `dict`.
If `include_raw` is `True`, then `Runnable` outputs a `dict` with keys:
- `'raw'`: `BaseMessage`
- `'parsed'`: `None` if there was a parsing error, otherwise the type
depends on the `schema` as described above.
- `'parsing_error'`: `BaseException | None`
Example: Pydantic schema (`include_raw=False`):
```python
from pydantic import BaseModel
class AnswerWithJustification(BaseModel):
'''An answer to the user question along with justification for the answer.'''
answer: str
justification: str
model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification)
structured_model.invoke(
"What weighs more a pound of bricks or a pound of feathers"
)
# -> AnswerWithJustification(
# answer='They weigh the same',
# justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
```
Example: Pydantic schema (`include_raw=True`):
```python
from pydantic import BaseModel
class AnswerWithJustification(BaseModel):
'''An answer to the user question along with justification for the answer.'''
answer: str
justification: str
model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(
AnswerWithJustification, include_raw=True
)
structured_model.invoke(
"What weighs more a pound of bricks or a pound of feathers"
)
# -> {
# 'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
# 'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
# 'parsing_error': None
# }
```
Example: Dictionary schema (`include_raw=False`):
```python
from pydantic import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_tool
class AnswerWithJustification(BaseModel):
'''An answer to the user question along with justification for the answer.'''
answer: str
justification: str
dict_schema = convert_to_openai_tool(AnswerWithJustification)
model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(dict_schema)
structured_model.invoke(
"What weighs more a pound of bricks or a pound of feathers"
)
# -> {
# 'answer': 'They weigh the same',
# 'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
```
!!! warning "Behavior changed in `langchain-core` 0.2.26"
Added support for `TypedDict` class.
""" # noqa: E501
_ = kwargs.pop("method", None)
_ = kwargs.pop("strict", None)
if kwargs:
msg = f"Received unsupported arguments {kwargs}"
raise ValueError(msg)
if type(self).bind_tools is BaseChatModel.bind_tools:
msg = "with_structured_output is not implemented for this model."
raise NotImplementedError(msg)
llm = self.bind_tools(
[schema],
tool_choice="any",
ls_structured_output_format={
"kwargs": {"method": "function_calling"},
"schema": schema,
},
)
if isinstance(schema, type) and is_basemodel_subclass(schema):
output_parser: OutputParserLike = PydanticToolsParser(
tools=[cast("TypeBaseModel", schema)], first_tool_only=True
)
else:
key_name = convert_to_openai_tool(schema)["function"]["name"]
output_parser = JsonOutputKeyToolsParser(
key_name=key_name, first_tool_only=True
)
if include_raw:
parser_assign = RunnablePassthrough.assign(
parsed=itemgetter("raw") | output_parser, parsing_error=lambda _: None
)
parser_none = RunnablePassthrough.assign(parsed=lambda _: None)
parser_with_fallback = parser_assign.with_fallbacks(
[parser_none], exception_key="parsing_error"
)
return RunnableMap(raw=llm) | parser_with_fallback
return llm | output_parser
| BaseChatModel |
python | chroma-core__chroma | chromadb/rate_limit/simple_rate_limit/__init__.py | {
"start": 723,
"end": 1177
} | class ____(RateLimitEnforcer):
"""
A naive implementation of a rate limit enforcer that allows all requests.
"""
def __init__(self, system: System) -> None:
super().__init__(system)
@override
def rate_limit(self, func: A) -> A:
@wraps(func)
async def wrapper(*args: Any, **kwargs: Any) -> Any:
return await func(*args, **kwargs)
return wrapper # type: ignore
| SimpleAsyncRateLimitEnforcer |
python | getsentry__sentry | src/sentry/deletions/defaults/monitor.py | {
"start": 170,
"end": 761
} | class ____(ModelDeletionTask[Monitor]):
def get_child_relations(self, instance: Monitor) -> list[BaseRelation]:
from sentry.monitors import models
return [
ModelRelation(models.MonitorIncident, {"monitor_id": instance.id}),
# Use BulkModelDeletionTask here since MonitorIncidents are already handled above
ModelRelation(
models.MonitorCheckIn, {"monitor_id": instance.id}, BulkModelDeletionTask
),
ModelRelation(models.MonitorEnvironment, {"monitor_id": instance.id}),
]
| MonitorDeletionTask |
python | fluentpython__example-code | 20-descriptor/bulkfood/bulkfood_v4.py | {
"start": 1615,
"end": 1916
} | class ____:
weight = Quantity() # <8>
price = Quantity()
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
# END LINEITEM_V4
| LineItem |
python | HypothesisWorks__hypothesis | hypothesis-python/src/hypothesis/internal/conjecture/engine.py | {
"start": 6834,
"end": 7553
} | class ____(Exception):
pass
def _get_provider(backend: str) -> PrimitiveProvider | type[PrimitiveProvider]:
provider_cls = AVAILABLE_PROVIDERS[backend]
if isinstance(provider_cls, str):
module_name, class_name = provider_cls.rsplit(".", 1)
provider_cls = getattr(importlib.import_module(module_name), class_name)
if provider_cls.lifetime == "test_function":
return provider_cls(None)
elif provider_cls.lifetime == "test_case":
return provider_cls
else:
raise InvalidArgument(
f"invalid lifetime {provider_cls.lifetime} for provider {provider_cls.__name__}. "
"Expected one of 'test_function', 'test_case'."
)
| RunIsComplete |
python | sympy__sympy | sympy/functions/special/error_functions.py | {
"start": 35564,
"end": 42483
} | class ____(DefinedFunction):
r"""
Generalized exponential integral.
Explanation
===========
This function is defined as
.. math:: \operatorname{E}_\nu(z) = z^{\nu - 1} \Gamma(1 - \nu, z),
where $\Gamma(1 - \nu, z)$ is the upper incomplete gamma function
(``uppergamma``).
Hence for $z$ with positive real part we have
.. math:: \operatorname{E}_\nu(z)
= \int_1^\infty \frac{e^{-zt}}{t^\nu} \mathrm{d}t,
which explains the name.
The representation as an incomplete gamma function provides an analytic
continuation for $\operatorname{E}_\nu(z)$. If $\nu$ is a
non-positive integer, the exponential integral is thus an unbranched
function of $z$, otherwise there is a branch point at the origin.
Refer to the incomplete gamma function documentation for details of the
branching behavior.
Examples
========
>>> from sympy import expint, S
>>> from sympy.abc import nu, z
Differentiation is supported. Differentiation with respect to $z$ further
explains the name: for integral orders, the exponential integral is an
iterated integral of the exponential function.
>>> expint(nu, z).diff(z)
-expint(nu - 1, z)
Differentiation with respect to $\nu$ has no classical expression:
>>> expint(nu, z).diff(nu)
-z**(nu - 1)*meijerg(((), (1, 1)), ((0, 0, 1 - nu), ()), z)
At non-postive integer orders, the exponential integral reduces to the
exponential function:
>>> expint(0, z)
exp(-z)/z
>>> expint(-1, z)
exp(-z)/z + exp(-z)/z**2
At half-integers it reduces to error functions:
>>> expint(S(1)/2, z)
sqrt(pi)*erfc(sqrt(z))/sqrt(z)
At positive integer orders it can be rewritten in terms of exponentials
and ``expint(1, z)``. Use ``expand_func()`` to do this:
>>> from sympy import expand_func
>>> expand_func(expint(5, z))
z**4*expint(1, z)/24 + (-z**3 + z**2 - 2*z + 6)*exp(-z)/24
The generalised exponential integral is essentially equivalent to the
incomplete gamma function:
>>> from sympy import uppergamma
>>> expint(nu, z).rewrite(uppergamma)
z**(nu - 1)*uppergamma(1 - nu, z)
As such it is branched at the origin:
>>> from sympy import exp_polar, pi, I
>>> expint(4, z*exp_polar(2*pi*I))
I*pi*z**3/3 + expint(4, z)
>>> expint(nu, z*exp_polar(2*pi*I))
z**(nu - 1)*(exp(2*I*pi*nu) - 1)*gamma(1 - nu) + expint(nu, z)
See Also
========
Ei: Another related function called exponential integral.
E1: The classical case, returns expint(1, z).
li: Logarithmic integral.
Li: Offset logarithmic integral.
Si: Sine integral.
Ci: Cosine integral.
Shi: Hyperbolic sine integral.
Chi: Hyperbolic cosine integral.
uppergamma
References
==========
.. [1] https://dlmf.nist.gov/8.19
.. [2] https://functions.wolfram.com/GammaBetaErf/ExpIntegralE/
.. [3] https://en.wikipedia.org/wiki/Exponential_integral
"""
@classmethod
def eval(cls, nu, z):
from sympy.functions.special.gamma_functions import (gamma, uppergamma)
nu2 = unpolarify(nu)
if nu != nu2:
return expint(nu2, z)
if nu.is_Integer and nu <= 0 or (not nu.is_Integer and (2*nu).is_Integer):
return unpolarify(expand_mul(z**(nu - 1)*uppergamma(1 - nu, z)))
# Extract branching information. This can be deduced from what is
# explained in lowergamma.eval().
z, n = z.extract_branch_factor()
if n is S.Zero:
return
if nu.is_integer:
if not nu > 0:
return
return expint(nu, z) \
- 2*pi*I*n*S.NegativeOne**(nu - 1)/factorial(nu - 1)*unpolarify(z)**(nu - 1)
else:
return (exp(2*I*pi*nu*n) - 1)*z**(nu - 1)*gamma(1 - nu) + expint(nu, z)
def fdiff(self, argindex):
nu, z = self.args
if argindex == 1:
return -z**(nu - 1)*meijerg([], [1, 1], [0, 0, 1 - nu], [], z)
elif argindex == 2:
return -expint(nu - 1, z)
else:
raise ArgumentIndexError(self, argindex)
def _eval_rewrite_as_uppergamma(self, nu, z, **kwargs):
from sympy.functions.special.gamma_functions import uppergamma
return z**(nu - 1)*uppergamma(1 - nu, z)
def _eval_rewrite_as_Ei(self, nu, z, **kwargs):
if nu == 1:
return -Ei(z*exp_polar(-I*pi)) - I*pi
elif nu.is_Integer and nu > 1:
# DLMF, 8.19.7
x = -unpolarify(z)
return x**(nu - 1)/factorial(nu - 1)*E1(z).rewrite(Ei) + \
exp(x)/factorial(nu - 1) * \
Add(*[factorial(nu - k - 2)*x**k for k in range(nu - 1)])
else:
return self
def _eval_expand_func(self, **hints):
return self.rewrite(Ei).rewrite(expint, **hints)
def _eval_rewrite_as_Si(self, nu, z, **kwargs):
if nu != 1:
return self
return Shi(z) - Chi(z)
_eval_rewrite_as_Ci = _eval_rewrite_as_Si
_eval_rewrite_as_Chi = _eval_rewrite_as_Si
_eval_rewrite_as_Shi = _eval_rewrite_as_Si
def _eval_nseries(self, x, n, logx, cdir=0):
if not self.args[0].has(x):
nu = self.args[0]
if nu == 1:
f = self._eval_rewrite_as_Si(*self.args)
return f._eval_nseries(x, n, logx)
elif nu.is_Integer and nu > 1:
f = self._eval_rewrite_as_Ei(*self.args)
return f._eval_nseries(x, n, logx)
return super()._eval_nseries(x, n, logx)
def _eval_aseries(self, n, args0, x, logx):
from sympy.series.order import Order
point = args0[1]
nu = self.args[0]
if point is S.Infinity:
z = self.args[1]
s = [S.NegativeOne**k * RisingFactorial(nu, k) / z**k for k in range(n)] + [Order(1/z**n, x)]
return (exp(-z)/z) * Add(*s)
return super(expint, self)._eval_aseries(n, args0, x, logx)
def _eval_rewrite_as_Integral(self, *args, **kwargs):
from sympy.integrals.integrals import Integral
n, x = self.args
t = Dummy(uniquely_named_symbol('t', args).name)
return Integral(t**-n * exp(-t*x), (t, 1, S.Infinity))
def E1(z):
"""
Classical case of the generalized exponential integral.
Explanation
===========
This is equivalent to ``expint(1, z)``.
Examples
========
>>> from sympy import E1
>>> E1(0)
expint(1, 0)
>>> E1(5)
expint(1, 5)
See Also
========
Ei: Exponential integral.
expint: Generalised exponential integral.
li: Logarithmic integral.
Li: Offset logarithmic integral.
Si: Sine integral.
Ci: Cosine integral.
Shi: Hyperbolic sine integral.
Chi: Hyperbolic cosine integral.
"""
return expint(1, z)
| expint |
python | ray-project__ray | python/ray/dashboard/modules/reporter/tests/test_profile_manager.py | {
"start": 1108,
"end": 6249
} | class ____:
async def test_basic_attach_profiler(self, setup_memory_profiler, shutdown_only):
# test basic attach profiler to running process
actor, memory_profiler = setup_memory_profiler
pid = ray.get(actor.getpid.remote())
actor.long_run.remote()
success, profiler_filename, message = await memory_profiler.attach_profiler(
pid, verbose=True
)
assert success, message
assert f"Success attaching memray to process {pid}" in message
assert profiler_filename in os.listdir(memory_profiler.profile_dir_path)
async def test_profiler_multiple_attach(self, setup_memory_profiler, shutdown_only):
# test multiple attaches
actor, memory_profiler = setup_memory_profiler
pid = ray.get(actor.getpid.remote())
actor.long_run.remote()
success, profiler_filename, message = await memory_profiler.attach_profiler(
pid, verbose=True
)
assert success, message
assert f"Success attaching memray to process {pid}" in message
assert profiler_filename in os.listdir(memory_profiler.profile_dir_path)
success, _, message = await memory_profiler.attach_profiler(pid)
assert success, message
assert f"Success attaching memray to process {pid}" in message
async def test_detach_profiler_successful(
self, setup_memory_profiler, shutdown_only
):
# test basic detach profiler
actor, memory_profiler = setup_memory_profiler
pid = ray.get(actor.getpid.remote())
actor.long_run.remote()
success, _, message = await memory_profiler.attach_profiler(pid, verbose=True)
assert success, message
success, message = await memory_profiler.detach_profiler(pid, verbose=True)
assert success, message
assert f"Success detaching memray from process {pid}" in message
async def test_detach_profiler_without_attach(
self, setup_memory_profiler, shutdown_only
):
# test detach profiler from unattached process
actor, memory_profiler = setup_memory_profiler
pid = ray.get(actor.getpid.remote())
success, message = await memory_profiler.detach_profiler(pid)
assert not success, message
assert "Failed to execute" in message
assert "no previous `memray attach`" in message
async def test_profiler_memray_not_installed(
self, setup_memory_profiler, shutdown_only
):
# test profiler when memray is not installed
actor, memory_profiler = setup_memory_profiler
pid = ray.get(actor.getpid.remote())
with patch("shutil.which", return_value=None):
success, _, message = await memory_profiler.attach_profiler(pid)
assert not success
assert "memray is not installed" in message
async def test_profiler_attach_process_not_found(
self, setup_memory_profiler, shutdown_only
):
# test attaching the profiler to a non-existent process
_, memory_profiler = setup_memory_profiler
pid = 123456
success, _, message = await memory_profiler.attach_profiler(pid)
assert not success, message
assert "Failed to execute" in message
assert "The given process ID does not exist" in message
async def test_profiler_get_profiler_result(
self, setup_memory_profiler, shutdown_only
):
# test get profiler result from running process
actor, memory_profiler = setup_memory_profiler
pid = ray.get(actor.getpid.remote())
actor.long_run.remote()
success, profiler_filename, message = await memory_profiler.attach_profiler(
pid, verbose=True
)
assert success, message
assert f"Success attaching memray to process {pid}" in message
# get profiler result in flamegraph and table format
supported_formats = ["flamegraph", "table"]
unsupported_formats = ["json"]
for format in supported_formats + unsupported_formats:
success, message = await memory_profiler.get_profile_result(
pid, profiler_filename=profiler_filename, format=format
)
if format in supported_formats:
assert success, message
assert f"{format} report" in message.decode("utf-8")
else:
assert not success, message
assert f"{format} is not supported" in message
async def test_profiler_result_not_exist(
self, setup_memory_profiler, shutdown_only
):
# test getting the profiler result for a non-existent process
_, memory_profiler = setup_memory_profiler
pid = 123456
profiler_filename = "non-existing-file"
        success, message = await memory_profiler.get_profile_result(
            # Use a concrete format here; the outer `format` name is the builtin,
            # not the loop variable from the previous test.
            pid, profiler_filename=profiler_filename, format="flamegraph"
        )
assert not success, message
assert f"process {pid} has not been profiled" in message
if __name__ == "__main__":
sys.exit(pytest.main(["-v", __file__]))
| TestMemoryProfiling |
python | vyperlang__vyper | vyper/builtins/functions.py | {
"start": 28423,
"end": 31638
} | class ____(BuiltinFunctionT):
_id = "extract32"
_inputs = [("b", BytesT.any()), ("start", IntegerT.unsigneds())]
_kwargs = {"output_type": KwargSettings(TYPE_T.any(), BYTES32_T)}
def fetch_call_return(self, node):
self._validate_arg_types(node)
return_type = self.infer_kwarg_types(node)["output_type"].typedef
return return_type
def infer_arg_types(self, node, expected_return_typ=None):
self._validate_arg_types(node)
input_type = get_possible_types_from_node(node.args[0]).pop()
return [input_type, UINT256_T]
def infer_kwarg_types(self, node):
if node.keywords:
output_type = type_from_annotation(node.keywords[0].value)
if not isinstance(output_type, (AddressT, BytesM_T, IntegerT)):
raise InvalidType(
"Output type must be one of integer, bytes32 or address", node.keywords[0].value
)
output_typedef = TYPE_T(output_type)
node.keywords[0].value._metadata["type"] = output_typedef
else:
output_typedef = TYPE_T(BYTES32_T)
return {"output_type": output_typedef}
@process_inputs
def build_IR(self, expr, args, kwargs, context):
bytez, index = args
ret_type = kwargs["output_type"]
if potential_overlap(bytez, index):
bytez = create_memory_copy(bytez, context)
def finalize(ret):
annotation = "extract32"
ret = IRnode.from_list(ret, typ=ret_type, annotation=annotation)
return clamp_basetype(ret)
with bytez.cache_when_complex("_sub") as (b1, bytez):
# merge
length = get_bytearray_length(bytez)
index = clamp2(0, index, ["sub", length, 32], signed=True)
with index.cache_when_complex("_index") as (b2, index):
assert not index.typ.is_signed
# "easy" case, byte- addressed locations:
if bytez.location.word_scale == 32:
word = LOAD(add_ofst(bytes_data_ptr(bytez), index))
return finalize(b1.resolve(b2.resolve(word)))
# storage and transient storage, word-addressed
assert bytez.location.word_scale == 1
slot = IRnode.from_list(["div", index, 32])
# byte offset within the slot
byte_ofst = IRnode.from_list(["mod", index, 32])
with byte_ofst.cache_when_complex("byte_ofst") as (
b3,
byte_ofst,
), slot.cache_when_complex("slot") as (b4, slot):
# perform two loads and merge
w1 = LOAD(add_ofst(bytes_data_ptr(bytez), slot))
w2 = LOAD(add_ofst(bytes_data_ptr(bytez), ["add", slot, 1]))
left_bytes = shl(["mul", 8, byte_ofst], w1)
right_bytes = shr(["mul", 8, ["sub", 32, byte_ofst]], w2)
merged = ["or", left_bytes, right_bytes]
ret = ["if", byte_ofst, merged, left_bytes]
return finalize(b1.resolve(b2.resolve(b3.resolve(b4.resolve(ret)))))
| Extract32 |
python | pandas-dev__pandas | pandas/tests/indexing/multiindex/test_loc.py | {
"start": 25224,
"end": 33452
} | class ____:
def test_missing_keys_raises_keyerror(self):
# GH#27420 KeyError, not TypeError
df = DataFrame(np.arange(12).reshape(4, 3), columns=["A", "B", "C"])
df2 = df.set_index(["A", "B"])
with pytest.raises(KeyError, match="6"):
df2.loc[(1, 6)]
def test_missing_key_raises_keyerror2(self):
# GH#21168 KeyError, not "IndexingError: Too many indexers"
ser = Series(-1, index=MultiIndex.from_product([[0, 1]] * 2))
with pytest.raises(KeyError, match=r"\(0, 3\)"):
ser.loc[0, 3]
def test_missing_key_combination(self):
# GH: 19556
mi = MultiIndex.from_arrays(
[
np.array(["a", "a", "b", "b"]),
np.array(["1", "2", "2", "3"]),
np.array(["c", "d", "c", "d"]),
],
names=["one", "two", "three"],
)
df = DataFrame(np.random.default_rng(2).random((4, 3)), index=mi)
msg = r"\('b', '1', slice\(None, None, None\)\)"
with pytest.raises(KeyError, match=msg):
df.loc[("b", "1", slice(None)), :]
with pytest.raises(KeyError, match=msg):
df.index.get_locs(("b", "1", slice(None)))
with pytest.raises(KeyError, match=r"\('b', '1'\)"):
df.loc[("b", "1"), :]
def test_getitem_loc_commutability(multiindex_year_month_day_dataframe_random_data):
df = multiindex_year_month_day_dataframe_random_data
ser = df["A"]
result = ser[2000, 5]
expected = df.loc[2000, 5]["A"]
tm.assert_series_equal(result, expected)
def test_loc_with_nan():
# GH: 27104
df = DataFrame(
{"col": [1, 2, 5], "ind1": ["a", "d", np.nan], "ind2": [1, 4, 5]}
).set_index(["ind1", "ind2"])
result = df.loc[["a"]]
expected = DataFrame(
{"col": [1]}, index=MultiIndex.from_tuples([("a", 1)], names=["ind1", "ind2"])
)
tm.assert_frame_equal(result, expected)
result = df.loc["a"]
expected = DataFrame({"col": [1]}, index=Index([1], name="ind2"))
tm.assert_frame_equal(result, expected)
def test_getitem_non_found_tuple():
# GH: 25236
df = DataFrame([[1, 2, 3, 4]], columns=["a", "b", "c", "d"]).set_index(
["a", "b", "c"]
)
with pytest.raises(KeyError, match=r"\(2\.0, 2\.0, 3\.0\)"):
df.loc[(2.0, 2.0, 3.0)]
def test_get_loc_datetime_index():
# GH#24263
index = pd.date_range("2001-01-01", periods=100)
mi = MultiIndex.from_arrays([index])
# Check if get_loc matches for Index and MultiIndex
assert mi.get_loc("2001-01") == slice(0, 31, None)
assert index.get_loc("2001-01") == slice(0, 31, None)
loc = mi[::2].get_loc("2001-01")
expected = index[::2].get_loc("2001-01")
assert loc == expected
loc = mi.repeat(2).get_loc("2001-01")
expected = index.repeat(2).get_loc("2001-01")
assert loc == expected
loc = mi.append(mi).get_loc("2001-01")
expected = index.append(index).get_loc("2001-01")
# TODO: standardize return type for MultiIndex.get_loc
tm.assert_numpy_array_equal(loc.nonzero()[0], expected)
def test_loc_setitem_indexer_differently_ordered():
# GH#34603
mi = MultiIndex.from_product([["a", "b"], [0, 1]])
df = DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], index=mi)
indexer = ("a", [1, 0])
df.loc[indexer, :] = np.array([[9, 10], [11, 12]])
expected = DataFrame([[11, 12], [9, 10], [5, 6], [7, 8]], index=mi)
tm.assert_frame_equal(df, expected)
def test_loc_getitem_index_differently_ordered_slice_none():
# GH#31330
df = DataFrame(
[[1, 2], [3, 4], [5, 6], [7, 8]],
index=[["a", "a", "b", "b"], [1, 2, 1, 2]],
columns=["a", "b"],
)
result = df.loc[(slice(None), [2, 1]), :]
expected = DataFrame(
[[3, 4], [7, 8], [1, 2], [5, 6]],
index=[["a", "b", "a", "b"], [2, 2, 1, 1]],
columns=["a", "b"],
)
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("indexer", [[1, 2, 7, 6, 2, 3, 8, 7], [1, 2, 7, 6, 3, 8]])
def test_loc_getitem_index_differently_ordered_slice_none_duplicates(indexer):
# GH#40978
df = DataFrame(
[1] * 8,
index=MultiIndex.from_tuples(
[(1, 1), (1, 2), (1, 7), (1, 6), (2, 2), (2, 3), (2, 8), (2, 7)]
),
columns=["a"],
)
result = df.loc[(slice(None), indexer), :]
expected = DataFrame(
[1] * 8,
index=[[1, 1, 2, 1, 2, 1, 2, 2], [1, 2, 2, 7, 7, 6, 3, 8]],
columns=["a"],
)
tm.assert_frame_equal(result, expected)
result = df.loc[df.index.isin(indexer, level=1), :]
tm.assert_frame_equal(result, df)
def test_loc_getitem_drops_levels_for_one_row_dataframe():
# GH#10521 "x" and "z" are both scalar indexing, so those levels are dropped
mi = MultiIndex.from_arrays([["x"], ["y"], ["z"]], names=["a", "b", "c"])
df = DataFrame({"d": [0]}, index=mi)
expected = df.droplevel([0, 2])
result = df.loc["x", :, "z"]
tm.assert_frame_equal(result, expected)
ser = Series([0], index=mi)
result = ser.loc["x", :, "z"]
expected = Series([0], index=Index(["y"], name="b"))
tm.assert_series_equal(result, expected)
def test_mi_columns_loc_list_label_order():
# GH 10710
cols = MultiIndex.from_product([["A", "B", "C"], [1, 2]])
df = DataFrame(np.zeros((5, 6)), columns=cols)
result = df.loc[:, ["B", "A"]]
expected = DataFrame(
np.zeros((5, 4)),
columns=MultiIndex.from_tuples([("B", 1), ("B", 2), ("A", 1), ("A", 2)]),
)
tm.assert_frame_equal(result, expected)
def test_mi_partial_indexing_list_raises():
# GH 13501
frame = DataFrame(
np.arange(12).reshape((4, 3)),
index=[["a", "a", "b", "b"], [1, 2, 1, 2]],
columns=[["Ohio", "Ohio", "Colorado"], ["Green", "Red", "Green"]],
)
frame.index.names = ["key1", "key2"]
frame.columns.names = ["state", "color"]
with pytest.raises(KeyError, match="\\[2\\] not in index"):
frame.loc[["b", 2], "Colorado"]
def test_mi_indexing_list_nonexistent_raises():
# GH 15452
s = Series(range(4), index=MultiIndex.from_product([[1, 2], ["a", "b"]]))
with pytest.raises(KeyError, match="\\['not' 'found'\\] not in index"):
s.loc[["not", "found"]]
def test_mi_add_cell_missing_row_non_unique():
# GH 16018
result = DataFrame(
[[1, 2, 5, 6], [3, 4, 7, 8]],
index=["a", "a"],
columns=MultiIndex.from_product([[1, 2], ["A", "B"]]),
)
result.loc["c"] = -1
result.loc["c", (1, "A")] = 3
result.loc["d", (1, "A")] = 3
expected = DataFrame(
[
[1.0, 2.0, 5.0, 6.0],
[3.0, 4.0, 7.0, 8.0],
[3.0, -1.0, -1, -1],
[3.0, np.nan, np.nan, np.nan],
],
index=["a", "a", "c", "d"],
columns=MultiIndex.from_product([[1, 2], ["A", "B"]]),
)
tm.assert_frame_equal(result, expected)
def test_loc_get_scalar_casting_to_float():
# GH#41369
df = DataFrame(
{"a": 1.0, "b": 2}, index=MultiIndex.from_arrays([[3], [4]], names=["c", "d"])
)
result = df.loc[(3, 4), "b"]
assert result == 2
assert isinstance(result, np.int64)
result = df.loc[[(3, 4)], "b"].iloc[0]
assert result == 2
assert isinstance(result, np.int64)
def test_loc_empty_single_selector_with_names():
# GH 19517
idx = MultiIndex.from_product([["a", "b"], ["A", "B"]], names=[1, 0])
s2 = Series(index=idx, dtype=np.float64)
result = s2.loc["a"]
expected = Series([np.nan, np.nan], index=Index(["A", "B"], name=0))
tm.assert_series_equal(result, expected)
def test_loc_keyerror_rightmost_key_missing():
# GH 20951
df = DataFrame(
{
"A": [100, 100, 200, 200, 300, 300],
"B": [10, 10, 20, 21, 31, 33],
"C": range(6),
}
)
df = df.set_index(["A", "B"])
with pytest.raises(KeyError, match="^1$"):
df.loc[(100, 1)]
def test_multindex_series_loc_with_tuple_label():
# GH#43908
mi = MultiIndex.from_tuples([(1, 2), (3, (4, 5))])
ser = Series([1, 2], index=mi)
result = ser.loc[(3, (4, 5))]
assert result == 2
| TestKeyErrorsWithMultiIndex |
python | airbytehq__airbyte | airbyte-ci/connectors/pipelines/pipelines/models/steps.py | {
"start": 2106,
"end": 3008
} | class ____(Result):
"""A dataclass to capture the result of a step."""
step: Step
consider_in_overall_status: bool = True
def __repr__(self) -> str: # noqa D105
return f"{self.step.title}: {self.status.value}"
def __str__(self) -> str: # noqa D105
return f"{self.step.title}: {self.status.value}\n\nSTDOUT:\n{self.stdout}\n\nSTDERR:\n{self.stderr}"
def __post_init__(self) -> None:
if self.stderr:
object.__setattr__(self, "stderr", self.redact_secrets_from_string(self.stderr))
if self.stdout:
object.__setattr__(self, "stdout", self.redact_secrets_from_string(self.stdout))
def redact_secrets_from_string(self, value: str) -> str:
for secret in self.step.context.secrets_to_mask:
value = value.replace(secret, "********")
return value
@dataclass(kw_only=True, frozen=True)
| StepResult |
python | crytic__slither | slither/detectors/statements/controlled_delegatecall.py | {
"start": 707,
"end": 2578
} | class ____(AbstractDetector):
ARGUMENT = "controlled-delegatecall"
HELP = "Controlled delegatecall destination"
IMPACT = DetectorClassification.HIGH
CONFIDENCE = DetectorClassification.MEDIUM
WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#controlled-delegatecall"
WIKI_TITLE = "Controlled Delegatecall"
WIKI_DESCRIPTION = "`Delegatecall` or `callcode` to an address controlled by the user."
# region wiki_exploit_scenario
WIKI_EXPLOIT_SCENARIO = """
```solidity
contract Delegatecall{
function delegate(address to, bytes data){
to.delegatecall(data);
}
}
```
Bob calls `delegate` and delegates the execution to his malicious contract. As a result, Bob withdraws the funds of the contract and destructs it."""
# endregion wiki_exploit_scenario
WIKI_RECOMMENDATION = "Avoid using `delegatecall`. Use only trusted destinations."
def _detect(self) -> List[Output]:
results = []
for contract in self.compilation_unit.contracts_derived:
for f in contract.functions:
# If its an upgradeable proxy, do not report protected function
# As functions to upgrades the destination lead to too many FPs
if contract.is_upgradeable_proxy and f.is_protected():
continue
nodes = controlled_delegatecall(f)
if nodes:
func_info: DETECTOR_INFO = [
f,
" uses delegatecall to a input-controlled function id\n",
]
for node in nodes:
node_info: DETECTOR_INFO = func_info + ["\t- ", node, "\n"]
res = self.generate_result(node_info)
results.append(res)
return results
| ControlledDelegateCall |
python | google__jax | jax/_src/config.py | {
"start": 2520,
"end": 2852
} | class ____(Protocol[_T]):
"""A holder for a configuration value.
There are two kinds of value holders: ``Flag``, which is assigned exactly
once and never modified after; and ``State``, which can be changed locally
within a thread via a context manager.
"""
value: _T
def _set(self, value: _T) -> None: ...
| ValueHolder |
python | GoogleCloudPlatform__python-docs-samples | run/django/polls/models.py | {
"start": 746,
"end": 937
} | class ____(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField(default=0)
| Choice |
python | allegroai__clearml | clearml/backend_api/services/v2_20/tasks.py | {
"start": 346308,
"end": 347161
} | class ____(Request):
"""
Refresh the task's last update time
:param task: Task ID
:type task: str
"""
_service = "tasks"
_action = "ping"
_version = "2.20"
_schema = {
"definitions": {},
"properties": {"task": {"description": "Task ID", "type": "string"}},
"required": ["task"],
"type": "object",
}
def __init__(self, task: str, **kwargs: Any) -> None:
super(PingRequest, self).__init__(**kwargs)
self.task = task
@schema_property("task")
def task(self) -> str:
return self._property_task
@task.setter
def task(self, value: str) -> None:
if value is None:
self._property_task = None
return
self.assert_isinstance(value, "task", six.string_types)
self._property_task = value
| PingRequest |
python | openai__openai-python | src/openai/types/webhooks/response_completed_webhook_event.py | {
"start": 326,
"end": 776
} | class ____(BaseModel):
id: str
"""The unique ID of the event."""
created_at: int
"""The Unix timestamp (in seconds) of when the model response was completed."""
data: Data
"""Event data payload."""
type: Literal["response.completed"]
"""The type of the event. Always `response.completed`."""
object: Optional[Literal["event"]] = None
"""The object of the event. Always `event`."""
| ResponseCompletedWebhookEvent |
python | keon__algorithms | tests/test_array.py | {
"start": 11725,
"end": 12128
} | class ____(unittest.TestCase):
def test_summarize_ranges(self):
self.assertListEqual(
summarize_ranges([0, 1, 2, 4, 5, 7]), [(0, 2), (4, 5), (7, 7)]
)
self.assertListEqual(
summarize_ranges([-5, -4, -3, 1, 2, 4, 5, 6]), [(-5, -3), (1, 2), (4, 6)]
)
self.assertListEqual(summarize_ranges([-2, -1, 0, 1, 2]), [(-2, 2)])
| TestSummaryRanges |
python | ray-project__ray | python/ray/tune/tests/test_trial_scheduler_pbt.py | {
"start": 3391,
"end": 6055
} | class ____(unittest.TestCase):
def setUp(self):
ray.init(num_cpus=2)
os.environ["TUNE_GLOBAL_CHECKPOINT_S"] = "1"
def tearDown(self):
ray.shutdown()
def testFileFree(self):
class MyTrainable(Trainable):
def setup(self, config):
self.iter = 0
self.a = config["a"]
def step(self):
self.iter += 1
return {"metric": self.iter + self.a}
def save_checkpoint(self, checkpoint_dir):
file_path = os.path.join(checkpoint_dir, "model.mock")
with open(file_path, "wb") as fp:
pickle.dump((self.iter, self.a), fp)
def load_checkpoint(self, checkpoint_dir):
file_path = os.path.join(checkpoint_dir, "model.mock")
with open(file_path, "rb") as fp:
self.iter, self.a = pickle.load(fp)
from ray.tune.callback import Callback
class FileCheck(Callback):
def __init__(self, verbose=False):
self.iter_ = 0
self.process = psutil.Process()
self.verbose = verbose
def on_trial_result(self, *args, **kwargs):
self.iter_ += 1
all_files = self.process.open_files()
if self.verbose:
print("Iteration", self.iter_)
print("=" * 10)
print("Object memory use: ", object_memory_usage())
print("Virtual Mem:", self.get_virt_mem() >> 30, "gb")
print("File Descriptors:", len(all_files))
assert len(all_files) < 20
@classmethod
def get_virt_mem(cls):
return psutil.virtual_memory().used
param_a = MockParam([1, -1])
pbt = PopulationBasedTraining(
time_attr="training_iteration",
metric="metric",
mode="max",
perturbation_interval=1,
quantile_fraction=0.5,
hyperparam_mutations={"b": [-1]},
)
checkpoint_config = CheckpointConfig(
num_to_keep=3,
checkpoint_frequency=2,
)
tune.run(
MyTrainable,
name="ray_demo",
scheduler=pbt,
stop={"training_iteration": 10},
num_samples=4,
checkpoint_config=checkpoint_config,
verbose=False,
fail_fast=True,
config={"a": tune.sample_from(lambda _: param_a())},
callbacks=[FileCheck()],
)
| PopulationBasedTrainingFileDescriptorTest |
python | joke2k__faker | faker/providers/color/el_GR/__init__.py | {
"start": 98,
"end": 3552
} | class ____(ColorProvider):
"""
Implement color provider for ``el_GR`` locale.
Naming and hex codes are based on https://encycolorpedia.gr/named
"""
all_colors = OrderedDict(
(
("άσιντ πράσινο", "#B0BF1A"),
("άσπρο", "#FFFFFF"),
("άστριοι", "#FDD5B1"),
("αβοκάντο", "#568203"),
("αγκινάρα", "#8F9779"),
("αζούρ", "#8AB9F1"),
("ακαζού", "#4C2F27"),
("ασημένιο", "#C0C0C0"),
("βαθύ κόκκινο", "#850101"),
("βερικοκί", "#FBCEB1"),
("βερμιγιόν", "#E34234"),
("βιολετί", "#7F00FF"),
("βρύο", "#8A9A5B"),
("βυσσινί", "#DC143C"),
("γαλάζιο", "#ADD8E6"),
("γκρι", "#808080"),
("γλαυκό", "#6082B6"),
("εκρού", "#C2B280"),
("ιβουάρ", "#FFFFF0"),
("ινδικό", "#4B0082"),
("κίτρινο", "#9B870C"),
("καμηλό", "#C19A6B"),
("κανέλα", "#D2691E"),
("καστανέρυθρο", "#8B0000"),
("καστανό", "#954535"),
("καφέ", "#A52A2A"),
("καφές", "#6F4E37"),
("κυανό", "#800080"),
("κεχριμπάρι", "#FFBF00"),
("κόκκινο", "#FF0000"),
("λάβα", "#CF1020"),
("λαδί", "#3B3C36"),
("λευκό", "#DBE9F4"),
("μαρόν", "#800000"),
("ματζέντα", "#CC00CC"),
("μαόνι", "#CD4A4C"),
("μαύρο", "#000000"),
("μπέιμπι μπλου", "#89CFF0"),
("μπεζ", "#F5F5DC"),
("μπλε", "#0000FF"),
("μπλε μαρέν", "#1974D2"),
("μπορντό", "#7F1734"),
("μπουργκουντί", "#900020"),
("μυρτιά", "#317873"),
("μωβ", "#B19CD9"),
("ορείχαλκος", "#B5A642"),
("πέρλα", "#EAE0C8"),
("πεύκο", "#01796F"),
("πλατίνα", "#E5E4E2"),
("πορτοκαλί", "#FF7F00"),
("πορτοκαλοκίτρινο", "#DAA520"),
("πράσινο", "#000FF0"),
("πράσινο chartreuse", "#7FFF00"),
("πράσινο αγκινάρας", "#4B6F44"),
("πράσινο ανοιχτό", "#90EE90"),
("πράσινο ζούγκλας", "#29AB87"),
("πράσινο λαουρέλ", "#A9BA9D"),
("πράσινο σκούρο", "#013220"),
("πράσινο της άνοιξης", "#00FF7F"),
("πράσινο της μέντας", "#98FB98"),
("πράσινο της φτέρης", "#4F7942"),
("πράσινο του δάσους", "#228B22"),
("πράσινο τσάι", "#D0F0C0"),
("πράσινο χούκερ", "#49796B"),
("ραφ", "#5D8AA8"),
("ροζ", "#FFC0CB"),
("ροζέ", "#FF007F"),
("σέπια", "#704214"),
("σαμπανιζέ", "#F7E7CE"),
("σκάρλετ", "#FF2400"),
("σκούρο βρύο", "#4A5D23"),
("σπαραγγί", "#87A96B"),
("ταν", "#D2B48C"),
("φλαμίνγκο", "#FC8EAC"),
("φούξια", "#F400A1"),
("φτέρη", "#71BC78"),
("χλωροφύλλη", "#4AFF00"),
("χρυσαφένιο", "#FFD700"),
("χρυσό", "#808000"),
("ώχρα", "#E97451"),
)
)
safe_colors = (
"μαύρο",
"πράσινο",
"μπλε",
"κίτρινο",
"κόκκινο",
"μωβ",
"άσπρο",
"γκρι",
"ασημένιο",
"καφέ",
"λαδί",
"χρυσό",
"ροζ",
)
| Provider |
python | apache__airflow | providers/google/tests/unit/google/cloud/hooks/test_dataprep.py | {
"start": 1972,
"end": 25093
} | class ____:
def setup_method(self):
with mock.patch(f"{BASEHOOK_PATCH_PATH}.get_connection") as conn:
conn.return_value.extra_dejson = EXTRA
self.hook = GoogleDataprepHook(dataprep_conn_id="dataprep_default")
self._imported_dataset_id = 12345
self._create_imported_dataset_body_request = {
"uri": "gs://test/uri",
"name": "test_name",
}
self._create_wrangled_dataset_body_request = {
"importedDataset": {"id": "test_dataset_id"},
"flow": {"id": "test_flow_id"},
"name": "test_dataset_name",
}
self._create_output_object_body_request = {
"execution": "dataflow",
"profiler": False,
"flowNodeId": "test_flow_node_id",
}
self._create_write_settings_body_request = {
"path": "gs://test/path",
"action": "create",
"format": "csv",
"outputObjectId": "test_output_object_id",
}
self._expected_create_imported_dataset_hook_data = json.dumps(
self._create_imported_dataset_body_request
)
self._expected_create_wrangled_dataset_hook_data = json.dumps(
self._create_wrangled_dataset_body_request
)
self._expected_create_output_object_hook_data = json.dumps(self._create_output_object_body_request)
self._expected_create_write_settings_hook_data = json.dumps(self._create_write_settings_body_request)
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.get")
def test_get_jobs_for_job_group_should_be_called_once_with_params(self, mock_get_request):
self.hook.get_jobs_for_job_group(JOB_ID)
mock_get_request.assert_called_once_with(
f"{URL_JOB_GROUPS}/{JOB_ID}/jobs",
headers={"Content-Type": "application/json", "Authorization": f"Bearer {TOKEN}"},
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_get_jobs_for_job_group_should_pass_after_retry(self, mock_get_request):
self.hook.get_jobs_for_job_group(JOB_ID)
assert mock_get_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_get_jobs_for_job_group_should_not_retry_after_success(self, mock_get_request):
self.hook.get_jobs_for_job_group.retry.sleep = mock.Mock()
self.hook.get_jobs_for_job_group(JOB_ID)
assert mock_get_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), mock.MagicMock()],
)
def test_get_jobs_for_job_group_should_retry_after_four_errors(self, mock_get_request):
self.hook.get_jobs_for_job_group.retry.sleep = mock.Mock()
self.hook.get_jobs_for_job_group(JOB_ID)
assert mock_get_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_get_jobs_for_job_group_raise_error_after_five_calls(self, mock_get_request):
self.hook.get_jobs_for_job_group.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.get_jobs_for_job_group(JOB_ID)
assert "HTTPError" in str(ctx.value)
assert mock_get_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.get")
def test_get_job_group_should_be_called_once_with_params(self, mock_get_request):
self.hook.get_job_group(JOB_ID, EMBED, INCLUDE_DELETED)
mock_get_request.assert_called_once_with(
f"{URL_JOB_GROUPS}/{JOB_ID}",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
params={"embed": "", "includeDeleted": False},
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_get_job_group_should_pass_after_retry(self, mock_get_request):
self.hook.get_job_group(JOB_ID, EMBED, INCLUDE_DELETED)
assert mock_get_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_get_job_group_should_not_retry_after_success(self, mock_get_request):
self.hook.get_job_group.retry.sleep = mock.Mock()
self.hook.get_job_group(JOB_ID, EMBED, INCLUDE_DELETED)
assert mock_get_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_get_job_group_should_retry_after_four_errors(self, mock_get_request):
self.hook.get_job_group.retry.sleep = mock.Mock()
self.hook.get_job_group(JOB_ID, EMBED, INCLUDE_DELETED)
assert mock_get_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_get_job_group_raise_error_after_five_calls(self, mock_get_request):
self.hook.get_job_group.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.get_job_group(JOB_ID, EMBED, INCLUDE_DELETED)
assert "HTTPError" in str(ctx.value)
assert mock_get_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.post")
def test_run_job_group_should_be_called_once_with_params(self, mock_get_request):
self.hook.run_job_group(body_request=DATA)
mock_get_request.assert_called_once_with(
f"{URL_JOB_GROUPS}",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
data=json.dumps(DATA),
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_run_job_group_should_pass_after_retry(self, mock_get_request):
self.hook.run_job_group(body_request=DATA)
assert mock_get_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_run_job_group_should_not_retry_after_success(self, mock_get_request):
self.hook.run_job_group.retry.sleep = mock.Mock()
self.hook.run_job_group(body_request=DATA)
assert mock_get_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_run_job_group_should_retry_after_four_errors(self, mock_get_request):
self.hook.run_job_group.retry.sleep = mock.Mock()
self.hook.run_job_group(body_request=DATA)
assert mock_get_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_run_job_group_raise_error_after_five_calls(self, mock_get_request):
self.hook.run_job_group.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.run_job_group(body_request=DATA)
assert "HTTPError" in str(ctx.value)
assert mock_get_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.get")
def test_get_job_group_status_should_be_called_once_with_params(self, mock_get_request):
self.hook.get_job_group_status(job_group_id=JOB_ID)
mock_get_request.assert_called_once_with(
f"{URL_JOB_GROUPS}/{JOB_ID}/status",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_get_job_group_status_should_pass_after_retry(self, mock_get_request):
self.hook.get_job_group_status(job_group_id=JOB_ID)
assert mock_get_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_get_job_group_status_retry_after_success(self, mock_get_request):
        self.hook.get_job_group_status.retry.sleep = mock.Mock()
self.hook.get_job_group_status(job_group_id=JOB_ID)
assert mock_get_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_get_job_group_status_four_errors(self, mock_get_request):
        self.hook.get_job_group_status.retry.sleep = mock.Mock()
self.hook.get_job_group_status(job_group_id=JOB_ID)
assert mock_get_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.get",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_get_job_group_status_five_calls(self, mock_get_request):
self.hook.get_job_group_status.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.get_job_group_status(job_group_id=JOB_ID)
assert "HTTPError" in str(ctx.value)
assert mock_get_request.call_count == 5
@pytest.mark.parametrize(
"uri",
[
pytest.param("a://?extra__dataprep__token=abc&extra__dataprep__base_url=abc", id="prefix"),
pytest.param("a://?token=abc&base_url=abc", id="no-prefix"),
],
)
def test_conn_extra_backcompat_prefix(self, uri):
with patch.dict(os.environ, {"AIRFLOW_CONN_MY_CONN": uri}):
hook = GoogleDataprepHook("my_conn")
assert hook._token == "abc"
assert hook._base_url == "abc"
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.post")
def test_create_imported_dataset_should_be_called_once_with_params(self, mock_post_request):
self.hook.create_imported_dataset(body_request=self._create_imported_dataset_body_request)
mock_post_request.assert_called_once_with(
URL_IMPORTED_DATASETS,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
data=self._expected_create_imported_dataset_hook_data,
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_create_imported_dataset_should_pass_after_retry(self, mock_post_request):
self.hook.create_imported_dataset(body_request=self._create_imported_dataset_body_request)
assert mock_post_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_create_imported_dataset_retry_after_success(self, mock_post_request):
self.hook.create_imported_dataset.retry.sleep = mock.Mock()
self.hook.create_imported_dataset(body_request=self._create_imported_dataset_body_request)
assert mock_post_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_create_imported_dataset_four_errors(self, mock_post_request):
self.hook.create_imported_dataset.retry.sleep = mock.Mock()
self.hook.create_imported_dataset(body_request=self._create_imported_dataset_body_request)
assert mock_post_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_create_imported_dataset_five_calls(self, mock_post_request):
self.hook.create_imported_dataset.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.create_imported_dataset(body_request=self._create_imported_dataset_body_request)
assert "HTTPError" in str(ctx.value)
assert mock_post_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.post")
def test_create_wrangled_dataset_should_be_called_once_with_params(self, mock_post_request):
self.hook.create_wrangled_dataset(body_request=self._create_wrangled_dataset_body_request)
mock_post_request.assert_called_once_with(
URL_WRANGLED_DATASETS,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
data=self._expected_create_wrangled_dataset_hook_data,
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_create_wrangled_dataset_should_pass_after_retry(self, mock_post_request):
self.hook.create_wrangled_dataset(body_request=self._create_wrangled_dataset_body_request)
assert mock_post_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_create_wrangled_dataset_retry_after_success(self, mock_post_request):
self.hook.create_wrangled_dataset.retry.sleep = mock.Mock()
self.hook.create_wrangled_dataset(body_request=self._create_wrangled_dataset_body_request)
assert mock_post_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_create_wrangled_dataset_four_errors(self, mock_post_request):
self.hook.create_wrangled_dataset.retry.sleep = mock.Mock()
self.hook.create_wrangled_dataset(body_request=self._create_wrangled_dataset_body_request)
assert mock_post_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_create_wrangled_dataset_five_calls(self, mock_post_request):
self.hook.create_wrangled_dataset.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.create_wrangled_dataset(body_request=self._create_wrangled_dataset_body_request)
assert "HTTPError" in str(ctx.value)
assert mock_post_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.post")
def test_create_output_object_should_be_called_once_with_params(self, mock_post_request):
self.hook.create_output_object(body_request=self._create_output_object_body_request)
mock_post_request.assert_called_once_with(
URL_OUTPUT_OBJECTS,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
data=self._expected_create_output_object_hook_data,
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_create_output_objects_should_pass_after_retry(self, mock_post_request):
self.hook.create_output_object(body_request=self._create_output_object_body_request)
assert mock_post_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_create_output_objects_retry_after_success(self, mock_post_request):
self.hook.create_output_object.retry.sleep = mock.Mock()
self.hook.create_output_object(body_request=self._create_output_object_body_request)
assert mock_post_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_create_output_objects_four_errors(self, mock_post_request):
self.hook.create_output_object.retry.sleep = mock.Mock()
self.hook.create_output_object(body_request=self._create_output_object_body_request)
assert mock_post_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_create_output_objects_five_calls(self, mock_post_request):
self.hook.create_output_object.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.create_output_object(body_request=self._create_output_object_body_request)
assert "HTTPError" in str(ctx.value)
assert mock_post_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.post")
def test_create_write_settings_should_be_called_once_with_params(self, mock_post_request):
self.hook.create_write_settings(body_request=self._create_write_settings_body_request)
mock_post_request.assert_called_once_with(
URL_WRITE_SETTINGS,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
data=self._expected_create_write_settings_hook_data,
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_create_write_settings_should_pass_after_retry(self, mock_post_request):
self.hook.create_write_settings(body_request=self._create_write_settings_body_request)
assert mock_post_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_create_write_settings_retry_after_success(self, mock_post_request):
self.hook.create_write_settings.retry.sleep = mock.Mock()
self.hook.create_write_settings(body_request=self._create_write_settings_body_request)
assert mock_post_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_create_write_settings_four_errors(self, mock_post_request):
self.hook.create_write_settings.retry.sleep = mock.Mock()
self.hook.create_write_settings(body_request=self._create_write_settings_body_request)
assert mock_post_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.post",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_create_write_settings_five_calls(self, mock_post_request):
self.hook.create_write_settings.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.create_write_settings(body_request=self._create_write_settings_body_request)
assert "HTTPError" in str(ctx.value)
assert mock_post_request.call_count == 5
@patch("airflow.providers.google.cloud.hooks.dataprep.requests.delete")
def test_delete_imported_dataset_should_be_called_once_with_params(self, mock_delete_request):
self.hook.delete_imported_dataset(dataset_id=self._imported_dataset_id)
mock_delete_request.assert_called_once_with(
f"{URL_IMPORTED_DATASETS}/{self._imported_dataset_id}",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {TOKEN}",
},
)
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.delete",
side_effect=[HTTPError(), mock.MagicMock()],
)
def test_delete_imported_dataset_should_pass_after_retry(self, mock_delete_request):
self.hook.delete_imported_dataset(dataset_id=self._imported_dataset_id)
assert mock_delete_request.call_count == 2
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.delete",
side_effect=[mock.MagicMock(), HTTPError()],
)
def test_delete_imported_dataset_retry_after_success(self, mock_delete_request):
self.hook.delete_imported_dataset.retry.sleep = mock.Mock()
self.hook.delete_imported_dataset(dataset_id=self._imported_dataset_id)
assert mock_delete_request.call_count == 1
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.delete",
side_effect=[
HTTPError(),
HTTPError(),
HTTPError(),
HTTPError(),
mock.MagicMock(),
],
)
def test_delete_imported_dataset_four_errors(self, mock_delete_request):
self.hook.delete_imported_dataset.retry.sleep = mock.Mock()
self.hook.delete_imported_dataset(dataset_id=self._imported_dataset_id)
assert mock_delete_request.call_count == 5
@patch(
"airflow.providers.google.cloud.hooks.dataprep.requests.delete",
side_effect=[HTTPError(), HTTPError(), HTTPError(), HTTPError(), HTTPError()],
)
def test_delete_imported_dataset_five_calls(self, mock_delete_request):
self.hook.delete_imported_dataset.retry.sleep = mock.Mock()
with pytest.raises(RetryError) as ctx:
self.hook.delete_imported_dataset(dataset_id=self._imported_dataset_id)
assert "HTTPError" in str(ctx.value)
assert mock_delete_request.call_count == 5
| TestGoogleDataprepHook |
python | coleifer__peewee | tests/sqlite.py | {
"start": 2236,
"end": 2287
} | class ____(TestModel):
message = TextField()
| Post |
python | spack__spack | lib/spack/spack/test/concretization/core.py | {
"start": 6616,
"end": 7066
} | class ____(Package):
homepage = "http://www.example.com"
url = "http://www.example.com/root-1.0.tar.gz"
version("1.0", sha256="abcde")
depends_on("changing")
"""
package_py = packages_dir / "middle" / "package.py"
package_py.parent.mkdir(parents=True)
package_py.write_text(middle_pkg_str)
changing_template = """
from spack_repo.builtin_mock.build_systems.generic import Package
from spack.package import *
| Middle |
python | sqlalchemy__sqlalchemy | test/orm/test_unitofworkv2.py | {
"start": 56335,
"end": 65692
} | class ____(fixtures.MappedTest):
__sparse_driver_backend__ = True
@classmethod
def define_tables(cls, metadata):
Table(
"parent",
metadata,
Column("id", Integer, primary_key=True),
Column("data", Integer),
)
Table(
"child",
metadata,
Column("id", Integer, ForeignKey("parent.id"), primary_key=True),
Column("data", Integer),
)
def _fixture(self, confirm_deleted_rows=True):
parent, child = self.tables.parent, self.tables.child
class Parent(BasicEntity):
pass
class Child(BasicEntity):
pass
self.mapper_registry.map_imperatively(
Parent,
parent,
properties={
"child": relationship(
Child,
uselist=False,
cascade="all, delete-orphan",
backref="parent",
)
},
confirm_deleted_rows=confirm_deleted_rows,
)
self.mapper_registry.map_imperatively(Child, child)
return Parent, Child
@testing.requires.sane_rowcount
def test_update_single_missing(self):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2)
sess.add(p1)
sess.flush()
sess.execute(self.tables.parent.delete())
p1.data = 3
assert_raises_message(
orm_exc.StaleDataError,
r"UPDATE statement on table 'parent' expected to "
r"update 1 row\(s\); 0 were matched.",
sess.flush,
)
@testing.requires.sane_rowcount
def test_update_single_missing_broken_multi_rowcount(self):
@util.memoized_property
def rowcount(self):
if len(self.context.compiled_parameters) > 1:
return -1
else:
return self.context.rowcount
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
with patch(
"sqlalchemy.engine.cursor.CursorResult.rowcount", rowcount
):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2)
sess.add(p1)
sess.flush()
sess.execute(self.tables.parent.delete())
p1.data = 3
assert_raises_message(
orm_exc.StaleDataError,
r"UPDATE statement on table 'parent' expected to "
r"update 1 row\(s\); 0 were matched.",
sess.flush,
)
def test_update_multi_missing_broken_multi_rowcount(self):
@util.memoized_property
def rowcount(self):
if len(self.context.compiled_parameters) > 1:
return -1
else:
return self.context.rowcount
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
with patch(
"sqlalchemy.engine.cursor.CursorResult.rowcount", rowcount
):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2)
p2 = Parent(id=2, data=3)
sess.add_all([p1, p2])
sess.flush()
sess.execute(self.tables.parent.delete().where(Parent.id == 1))
p1.data = 3
p2.data = 4
sess.flush() # no exception
# update occurred for remaining row
eq_(sess.query(Parent.id, Parent.data).all(), [(2, 4)])
def test_update_value_missing_broken_multi_rowcount(self):
@util.memoized_property
def rowcount(self):
if len(self.context.compiled_parameters) > 1:
return -1
else:
return self.context.rowcount
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
with patch(
"sqlalchemy.engine.cursor.CursorResult.rowcount", rowcount
):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=1)
sess.add(p1)
sess.flush()
sess.execute(self.tables.parent.delete())
p1.data = literal(1)
assert_raises_message(
orm_exc.StaleDataError,
r"UPDATE statement on table 'parent' expected to "
r"update 1 row\(s\); 0 were matched.",
sess.flush,
)
@testing.requires.sane_rowcount
def test_delete_twice(self):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
sess.add(p1)
sess.commit()
sess.delete(p1)
sess.flush()
sess.delete(p1)
assert_warns_message(
exc.SAWarning,
r"DELETE statement on table 'parent' expected to "
r"delete 1 row\(s\); 0 were matched.",
sess.commit,
)
@testing.requires.sane_multi_rowcount
def test_delete_multi_missing_warning(self):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
p2 = Parent(id=2, data=3, child=None)
sess.add_all([p1, p2])
sess.flush()
sess.execute(self.tables.parent.delete())
sess.delete(p1)
sess.delete(p2)
assert_warns_message(
exc.SAWarning,
r"DELETE statement on table 'parent' expected to "
r"delete 2 row\(s\); 0 were matched.",
sess.flush,
)
def test_update_single_broken_multi_rowcount_still_raises(self):
# raise occurs for single row UPDATE that misses even if
# supports_sane_multi_rowcount is False
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
sess.add(p1)
sess.flush()
sess.execute(self.tables.parent.delete())
p1.data = 3
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
assert_raises_message(
orm_exc.StaleDataError,
r"UPDATE statement on table 'parent' expected to "
r"update 1 row\(s\); 0 were matched.",
sess.flush,
)
def test_update_multi_broken_multi_rowcount_doesnt_raise(self):
# raise does not occur for multirow UPDATE that misses if
# supports_sane_multi_rowcount is False, even if rowcount is still
# correct
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
p2 = Parent(id=2, data=3, child=None)
sess.add_all([p1, p2])
sess.flush()
sess.execute(self.tables.parent.delete())
p1.data = 3
p2.data = 4
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
# no raise
sess.flush()
def test_delete_single_broken_multi_rowcount_still_warns(self):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
sess.add(p1)
            sess.flush()
sess.execute(self.tables.parent.delete())
sess.delete(p1)
# only one row, so it warns
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
assert_warns_message(
exc.SAWarning,
r"DELETE statement on table 'parent' expected to "
r"delete 1 row\(s\); 0 were matched.",
sess.flush,
)
def test_delete_multi_broken_multi_rowcount_doesnt_warn(self):
Parent, Child = self._fixture()
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
p2 = Parent(id=2, data=3, child=None)
sess.add_all([p1, p2])
sess.flush()
sess.execute(self.tables.parent.delete())
sess.delete(p1)
sess.delete(p2)
# if the dialect reports supports_sane_multi_rowcount as false,
# if there were more than one row deleted, need to ensure the
# rowcount result is ignored. psycopg2 + batch mode reports the
# wrong number, not -1. see issue #4661
with patch.object(
config.db.dialect, "supports_sane_multi_rowcount", False
):
# no warning
sess.flush()
def test_delete_multi_missing_allow(self):
Parent, Child = self._fixture(confirm_deleted_rows=False)
sess = fixture_session()
p1 = Parent(id=1, data=2, child=None)
p2 = Parent(id=2, data=3, child=None)
sess.add_all([p1, p2])
sess.flush()
sess.execute(self.tables.parent.delete())
sess.delete(p1)
sess.delete(p2)
sess.flush()
| BasicStaleChecksTest |
python | getsentry__sentry | tests/sentry/relocation/api/endpoints/test_retry.py | {
"start": 1817,
"end": 18284
} | class ____(APITestCase):
endpoint = "sentry-api-0-relocations-retry"
method = "POST"
def setUp(self) -> None:
super().setUp()
self.owner = self.create_user(
email="owner", is_superuser=False, is_staff=True, is_active=True
)
self.superuser = self.create_user(is_superuser=True)
self.staff_user = self.create_user(is_staff=True)
self.relocation: Relocation = Relocation.objects.create(
date_added=TEST_DATE_ADDED,
creator_id=self.superuser.id,
owner_id=self.owner.id,
status=Relocation.Status.FAILURE.value,
step=Relocation.Step.PREPROCESSING.value,
provenance=Relocation.Provenance.SELF_HOSTED.value,
want_org_slugs=["foo", "bar"],
want_usernames=["alice", "bob"],
scheduled_pause_at_step=Relocation.Step.IMPORTING.value,
scheduled_cancel_at_step=Relocation.Step.NOTIFYING.value,
latest_notified=Relocation.EmailKind.FAILED.value,
latest_task=OrderedTask.PREPROCESSING_SCAN.name,
latest_task_attempts=1,
)
# Make two files - one to be referenced by our existing `Relocation`, the other not.
self.file: File = File.objects.create(
name="raw-relocation-data.tar", type=RELOCATION_FILE_TYPE
)
self.file.putfile(get_test_tarball())
other_file: File = File.objects.create(
name="raw-relocation-data.tar", type=RELOCATION_FILE_TYPE
)
other_file.putfile(get_test_tarball())
self.relocation_file = RelocationFile.objects.create(
relocation=self.relocation,
file=self.file,
kind=RelocationFile.Kind.RAW_USER_DATA.value,
)
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_good_simple(self, uploading_start_mock: Mock, analytics_record_mock: Mock) -> None:
self.login_as(user=self.owner, superuser=False)
relocation_count = Relocation.objects.count()
relocation_file_count = RelocationFile.objects.count()
file_count = File.objects.count()
response = self.get_success_response(self.relocation.uuid, status_code=201)
assert response.data["uuid"] != self.relocation.uuid
assert self.relocation.date_added is not None
assert response.data["dateAdded"] > self.relocation.date_added
assert response.data["dateUpdated"] > self.relocation.date_updated
assert response.data["status"] == Relocation.Status.IN_PROGRESS.name
assert response.data["step"] == Relocation.Step.UPLOADING.name
assert response.data["wantOrgSlugs"] == self.relocation.want_org_slugs
assert response.data["creator"]["id"] == str(self.owner.id)
assert response.data["creator"]["email"] == str(self.owner.email)
assert response.data["creator"]["username"] == str(self.owner.username)
assert response.data["owner"]["id"] == str(self.owner.id)
assert response.data["owner"]["email"] == str(self.owner.email)
assert response.data["owner"]["username"] == str(self.owner.username)
assert response.data["latestNotified"] is None
assert response.data["latestUnclaimedEmailsSentAt"] is None
assert response.data["scheduledPauseAtStep"] is None
assert response.data["wantUsernames"] is None
assert response.data["importedUserIds"] == []
assert response.data["importedOrgIds"] == []
assert (
Relocation.objects.filter(owner_id=self.owner.id)
.exclude(uuid=self.relocation.uuid)
.exists()
)
assert Relocation.objects.count() == relocation_count + 1
assert RelocationFile.objects.count() == relocation_file_count + 1
assert File.objects.count() == file_count
assert uploading_start_mock.call_count == 1
assert_last_analytics_event(
analytics_record_mock,
RelocationCreatedEvent(
creator_id=int(response.data["creator"]["id"]),
owner_id=int(response.data["owner"]["id"]),
uuid=response.data["uuid"],
),
)
@override_options(
{"relocation.enabled": False, "relocation.daily-limit.small": 2, "staff.ga-rollout": True}
)
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_good_staff_when_feature_disabled(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.staff_user, staff=True)
relocation_count = Relocation.objects.count()
relocation_file_count = RelocationFile.objects.count()
file_count = File.objects.count()
response = self.get_success_response(self.relocation.uuid, status_code=201)
assert response.data["uuid"] != self.relocation.uuid
assert response.data["creator"]["id"] == str(self.staff_user.id)
assert response.data["creator"]["email"] == str(self.staff_user.email)
assert response.data["creator"]["username"] == str(self.staff_user.username)
assert response.data["owner"]["id"] == str(self.owner.id)
assert response.data["owner"]["email"] == str(self.owner.email)
assert response.data["owner"]["username"] == str(self.owner.username)
assert (
Relocation.objects.filter(owner_id=self.owner.id)
.exclude(uuid=self.relocation.uuid)
.exists()
)
assert Relocation.objects.count() == relocation_count + 1
assert RelocationFile.objects.count() == relocation_file_count + 1
assert File.objects.count() == file_count
assert uploading_start_mock.call_count == 1
assert_last_analytics_event(
analytics_record_mock,
RelocationCreatedEvent(
creator_id=int(response.data["creator"]["id"]),
owner_id=int(response.data["owner"]["id"]),
uuid=response.data["uuid"],
),
)
@override_options({"relocation.enabled": False, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_good_superuser_when_feature_disabled(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.superuser, superuser=True)
relocation_count = Relocation.objects.count()
relocation_file_count = RelocationFile.objects.count()
file_count = File.objects.count()
response = self.get_success_response(self.relocation.uuid, status_code=201)
assert response.data["uuid"] != self.relocation.uuid
assert response.data["creator"]["id"] == str(self.superuser.id)
assert response.data["creator"]["email"] == str(self.superuser.email)
assert response.data["creator"]["username"] == str(self.superuser.username)
assert response.data["owner"]["id"] == str(self.owner.id)
assert response.data["owner"]["email"] == str(self.owner.email)
assert response.data["owner"]["username"] == str(self.owner.username)
assert (
Relocation.objects.filter(owner_id=self.owner.id)
.exclude(uuid=self.relocation.uuid)
.exists()
)
assert Relocation.objects.count() == relocation_count + 1
assert RelocationFile.objects.count() == relocation_file_count + 1
assert File.objects.count() == file_count
assert uploading_start_mock.call_count == 1
assert_last_analytics_event(
analytics_record_mock,
RelocationCreatedEvent(
creator_id=int(response.data["creator"]["id"]),
owner_id=int(response.data["owner"]["id"]),
uuid=response.data["uuid"],
),
)
@override_options({"relocation.enabled": False, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_without_superuser_when_feature_disabled(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.owner, superuser=False)
relocation_count = Relocation.objects.count()
relocation_file_count = RelocationFile.objects.count()
file_count = File.objects.count()
response = self.get_error_response(self.relocation.uuid, status_code=403)
assert response.data.get("detail") == ERR_FEATURE_DISABLED
assert not (
Relocation.objects.filter(owner_id=self.owner.id)
.exclude(uuid=self.relocation.uuid)
.exists()
)
assert Relocation.objects.count() == relocation_count
assert RelocationFile.objects.count() == relocation_file_count
assert File.objects.count() == file_count
assert uploading_start_mock.call_count == 0
@override_options({"relocation.enabled": False, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_expired_superuser_when_feature_disabled(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.owner, superuser=True)
relocation_count = Relocation.objects.count()
relocation_file_count = RelocationFile.objects.count()
file_count = File.objects.count()
response = self.get_error_response(self.relocation.uuid, status_code=403)
assert response.data.get("detail") == ERR_FEATURE_DISABLED
assert not (
Relocation.objects.filter(owner_id=self.owner.id)
.exclude(uuid=self.relocation.uuid)
.exists()
)
assert Relocation.objects.count() == relocation_count
assert RelocationFile.objects.count() == relocation_file_count
assert File.objects.count() == file_count
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_relocation_not_found(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.owner, superuser=False)
self.get_error_response(str(uuid4().hex), status_code=404)
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_relocation_file_not_found(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.owner, superuser=False)
RelocationFile.objects.all().delete()
response = self.get_error_response(self.relocation.uuid, status_code=400)
assert response.data.get("detail") == ERR_FILE_NO_LONGER_EXISTS
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_file_not_found(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.owner, superuser=False)
File.objects.all().delete()
response = self.get_error_response(self.relocation.uuid, status_code=400)
assert response.data.get("detail") == ERR_FILE_NO_LONGER_EXISTS
assert uploading_start_mock.call_count == 0
@override_options(
{"relocation.enabled": True, "relocation.daily-limit.small": 2, "staff.ga-rollout": True}
)
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_staff_owner_not_found(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.staff_user, staff=True)
with assume_test_silo_mode(SiloMode.CONTROL):
User.objects.filter(id=self.owner.id).delete()
response = self.get_error_response(self.relocation.uuid, status_code=400)
assert response.data.get("detail") == ERR_OWNER_NO_LONGER_EXISTS
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_superuser_owner_not_found(
self, uploading_start_mock: Mock, analytics_record_mock: Mock
) -> None:
self.login_as(user=self.superuser, superuser=True)
with assume_test_silo_mode(SiloMode.CONTROL):
User.objects.filter(id=self.owner.id).delete()
response = self.get_error_response(self.relocation.uuid, status_code=400)
assert response.data.get("detail") == ERR_OWNER_NO_LONGER_EXISTS
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 1})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_throttled(self, uploading_start_mock: Mock, analytics_record_mock: Mock) -> None:
self.login_as(user=self.owner, superuser=False)
response = self.get_error_response(self.relocation.uuid, status_code=429)
assert response.data.get("detail") == ERR_THROTTLED_RELOCATION
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
for stat in [
Relocation.Status.IN_PROGRESS,
Relocation.Status.PAUSE,
]:
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 2})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_relocation_still_ongoing(
self,
uploading_start_mock: Mock,
analytics_record_mock: Mock,
stat: Relocation.Status = stat,
) -> None:
self.login_as(user=self.owner, superuser=False)
self.relocation.status = stat.value
self.relocation.latest_notified = Relocation.EmailKind.STARTED.value
self.relocation.save()
response = self.get_error_response(self.relocation.uuid, status_code=400)
assert response.data.get("detail") == ERR_NOT_RETRYABLE_STATUS.substitute(
status=stat.name
)
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
for stat in [
Relocation.Status.IN_PROGRESS,
Relocation.Status.PAUSE,
]:
@override_options({"relocation.enabled": True, "relocation.daily-limit.small": 3})
@patch("sentry.relocation.tasks.process.uploading_start.delay")
def test_bad_owner_has_another_active_relocation(
self,
uploading_start_mock: Mock,
analytics_record_mock: Mock,
stat: Relocation.Status = stat,
) -> None:
self.login_as(user=self.owner, superuser=False)
Relocation.objects.create(
date_added=TEST_DATE_ADDED,
creator_id=self.superuser.id,
owner_id=self.owner.id,
status=stat.value,
step=Relocation.Step.PREPROCESSING.value,
want_org_slugs=["foo", "bar"],
want_usernames=["alice", "bob"],
scheduled_pause_at_step=Relocation.Step.IMPORTING.value,
scheduled_cancel_at_step=Relocation.Step.NOTIFYING.value,
latest_notified=Relocation.EmailKind.STARTED.value,
latest_task=OrderedTask.PREPROCESSING_SCAN.name,
latest_task_attempts=1,
)
response = self.get_error_response(self.relocation.uuid, status_code=409)
assert response.data.get("detail") == ERR_DUPLICATE_RELOCATION
assert uploading_start_mock.call_count == 0
analytics_record_mock.assert_not_called()
| RetryRelocationTest |
python | run-llama__llama_index | llama-index-integrations/readers/llama-index-readers-opensearch/llama_index/readers/opensearch/base.py | {
"start": 244,
"end": 2397
} | class ____(BaseReader):
"""
Read documents from an Opensearch index.
These documents can then be used in a downstream Llama Index data structure.
Args:
endpoint (str): URL (http/https) of cluster without port
index (str): Name of the index (required)
basic_auth (set): basic authentication username password
"""
def __init__(
self, host: str, port: int, index: str, basic_auth: Optional[set] = None
):
"""Initialize with parameters."""
from opensearchpy import OpenSearch
self._opster_client = OpenSearch(
hosts=[{"host": host, "port": port}],
http_compress=True, # enables gzip compression for request bodies
http_auth=basic_auth,
use_ssl=True,
verify_certs=False,
ssl_assert_hostname=False,
ssl_show_warn=False,
)
self._index = index
def load_data(
self,
field: str,
query: Optional[dict] = None,
embedding_field: Optional[str] = None,
) -> List[Document]:
"""
Read data from the Opensearch index.
Args:
field (str): Field in the document to retrieve text from
query (Optional[dict]): Opensearch JSON query DSL object.
For example:
{ "query" : {"match": {"message": {"query": "this is a test"}}}}
embedding_field (Optional[str]): If there are embeddings stored in
this index, this field can be used
to set the embedding field on the returned Document list.
Returns:
List[Document]: A list of documents.
"""
res = self._opster_client.search(body=query, index=self._index)
documents = []
for hit in res["hits"]["hits"]:
value = hit["_source"][field]
_ = hit["_source"].pop(field)
embedding = hit["_source"].get(embedding_field or "", None)
documents.append(
Document(text=value, extra_info=hit["_source"], embedding=embedding)
)
return documents
| OpensearchReader |
python | dagster-io__dagster | python_modules/libraries/dagster-gcp/dagster_gcp/gcs/file_manager.py | {
"start": 1034,
"end": 3860
} | class ____(FileManager):
def __init__(self, client, gcs_bucket, gcs_base_key):
self._client = check.inst_param(client, "client", storage.client.Client)
self._gcs_bucket = check.str_param(gcs_bucket, "gcs_bucket")
self._gcs_base_key = check.str_param(gcs_base_key, "gcs_base_key")
self._local_handle_cache = {}
self._temp_file_manager = TempfileManager()
def copy_handle_to_local_temp(self, file_handle):
self._download_if_not_cached(file_handle)
return self._get_local_path(file_handle)
def _download_if_not_cached(self, file_handle):
if not self._file_handle_cached(file_handle):
# instigate download
temp_file_obj = self._temp_file_manager.tempfile()
temp_name = temp_file_obj.name
bucket_obj = self._client.bucket(file_handle.gcs_bucket)
bucket_obj.blob(file_handle.gcs_key).download_to_file(temp_file_obj)
temp_file_obj.flush()
self._local_handle_cache[file_handle.gcs_path] = temp_name
return file_handle
@contextmanager
def read(self, file_handle, mode="rb"): # pyright: ignore[reportIncompatibleMethodOverride]
check.inst_param(file_handle, "file_handle", GCSFileHandle)
check.str_param(mode, "mode")
check.param_invariant(mode in {"r", "rb"}, "mode")
self._download_if_not_cached(file_handle)
encoding = None if mode == "rb" else "utf-8"
with open(self._get_local_path(file_handle), mode, encoding=encoding) as file_obj:
yield file_obj
def _file_handle_cached(self, file_handle):
return file_handle.gcs_path in self._local_handle_cache
def _get_local_path(self, file_handle):
return self._local_handle_cache[file_handle.gcs_path]
def read_data(self, file_handle):
with self.read(file_handle, mode="rb") as file_obj:
return file_obj.read()
def write_data(self, data, ext=None, key: Optional[str] = None):
key = check.opt_str_param(key, "key", default=str(uuid.uuid4()))
check.inst_param(data, "data", bytes)
return self.write(io.BytesIO(data), mode="wb", key=key, ext=ext)
def write(self, file_obj, mode="wb", ext=None, key: Optional[str] = None):
key = check.opt_str_param(key, "key", default=str(uuid.uuid4()))
check_file_like_obj(file_obj)
gcs_key = self.get_full_key(key + (("." + ext) if ext is not None else ""))
bucket_obj = self._client.bucket(self._gcs_bucket)
bucket_obj.blob(gcs_key).upload_from_file(file_obj)
return GCSFileHandle(self._gcs_bucket, gcs_key)
def get_full_key(self, file_key):
return f"{self._gcs_base_key}/{file_key}"
def delete_local_temp(self):
self._temp_file_manager.close()
| GCSFileManager |
python | numpy__numpy | numpy/distutils/system_info.py | {
"start": 86930,
"end": 87226
} | class ____(openblas_ilp64_info):
_require_symbols = ['dgemm_', 'cblas_dgemm', 'zungqr_', 'LAPACKE_zungqr']
def _calc_info(self):
info = super()._calc_info()
if info:
info['define_macros'] += [('HAVE_LAPACKE', None)]
return info
| openblas_ilp64_lapack_info |
python | huggingface__transformers | tests/models/layoutlmv2/test_tokenization_layoutlmv2.py | {
"start": 1342,
"end": 115213
} | class ____(TokenizerTesterMixin, unittest.TestCase):
from_pretrained_id = "microsoft/layoutlmv2-base-uncased"
tokenizer_class = LayoutLMv2Tokenizer
rust_tokenizer_class = LayoutLMv2Tokenizer
test_rust_tokenizer = False
space_between_special_tokens = True
from_pretrained_filter = filter_non_english
test_seq2seq = False
def get_words_and_boxes(self):
words = ["a", "weirdly", "test"]
boxes = [[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]]
return words, boxes
def get_words_and_boxes_batch(self):
words = [["a", "weirdly", "test"], ["hello", "my", "name", "is", "bob"]]
boxes = [
[[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]],
[[961, 885, 992, 912], [256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57], [34, 42, 66, 69]],
]
return words, boxes
def get_question_words_and_boxes(self):
question = "what's his name?"
words = ["a", "weirdly", "test"]
boxes = [[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]]
return question, words, boxes
def get_question_words_and_boxes_batch(self):
questions = ["what's his name?", "how is he called?"]
words = [["a", "weirdly", "test"], ["what", "a", "laif", "gastn"]]
boxes = [
[[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]],
[[256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57], [34, 42, 66, 69]],
]
return questions, words, boxes
def get_empty_words_and_boxes(self):
words = ["test", "empty", ""]
boxes = [[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]]
return words, boxes
def get_empty_words_and_boxes_batch(self):
words = [["test", "empty", ""], ["one", "more", "empty", ""]]
boxes = [
[[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]],
[[961, 885, 992, 912], [256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57]],
]
return words, boxes
def get_empty_question_words_and_boxes(self):
question = ""
words = ["test", "empty", ""]
boxes = [[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]]
return question, words, boxes
def get_empty_question_words_and_boxes_batch(self):
questions = ["what's his name?", ""]
words = [["test", "empty", ""], ["one", "more", "empty", ""]]
boxes = [
[[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]],
[[961, 885, 992, 912], [256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57]],
]
return questions, words, boxes
@classmethod
def setUpClass(cls):
super().setUpClass()
vocab_tokens = [
"[UNK]",
"[CLS]",
"[SEP]",
"[PAD]",
"[MASK]",
"what",
"s",
"his",
"name",
"?",
"a",
"weird",
"##ly",
"test",
"lowest",
]
cls.vocab_file = os.path.join(cls.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
with open(cls.vocab_file, "w", encoding="utf-8") as vocab_writer:
vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))
# Load vocab from file and pass to tokenizer
vocab = {}
with open(cls.vocab_file, "r", encoding="utf-8") as reader:
for index, line in enumerate(reader):
token = line.rstrip("\n")
vocab[token] = index
tokenizer = cls.tokenizer_class(vocab=vocab)
tokenizer.save_pretrained(cls.tmpdirname)
def get_input_output_texts(self, tokenizer):
input_text = "UNwant\u00e9d,running"
output_text = "unwanted, running"
return input_text, output_text
def convert_batch_encode_plus_format_to_encode_plus(self, batch_encode_plus_sequences):
"""Helper method to convert batch_encode_plus output to list of encode_plus outputs"""
# Get the batch size
first_key = list(batch_encode_plus_sequences.keys())[0]
batch_size = len(batch_encode_plus_sequences[first_key])
# Convert to list of dicts
encode_plus_sequences = []
for i in range(batch_size):
single_sequence = {}
for key, value in batch_encode_plus_sequences.items():
if key != "encodings": # Skip the encodings attribute
single_sequence[key] = value[i]
encode_plus_sequences.append(single_sequence)
return encode_plus_sequences
@unittest.skip(reason="Chat template tests don't play well with table/layout models.")
def test_chat_template_batched(self):
pass
@unittest.skip(reason="LayoutLMv2 requires pre-tokenized words, not strings.")
def test_bos_token_with_add_bos_token_false(self):
pass
@unittest.skip(reason="LayoutLMv2 requires pre-tokenized words, not strings.")
def test_bos_token_with_add_bos_token_true(self):
pass
@unittest.skip(reason="LayoutLMv2 requires pre-tokenized words with boxes.")
def test_encode_basic_padding(self):
pass
@unittest.skip(reason="LayoutLMv2 requires pre-tokenized words with boxes.")
def test_pad_token_initialization(self):
pass
def test_clean_text(self):
tokenizer = self.get_tokenizer()
# Example taken from the issue https://github.com/huggingface/tokenizers/issues/340
self.assertListEqual([tokenizer.tokenize(t) for t in ["Hello", "\xad", "hello"]], [["[UNK]"], [], ["[UNK]"]])
@slow
def test_sequence_builders(self):
tokenizer = self.tokenizer_class.from_pretrained("microsoft/layoutlmv2-base-uncased")
question, words, boxes = self.get_question_words_and_boxes()
text = tokenizer.encode(
question.split(),
boxes=[tokenizer.pad_token_box for _ in range(len(question.split()))],
add_special_tokens=False,
)
text_2 = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
encoded_pair = tokenizer.build_inputs_with_special_tokens(text, text_2)
assert encoded_pair == [101] + text + [102] + text_2 + [102]
def test_offsets_with_special_characters(self):
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"):
tokenizer_r = self.get_tokenizer(pretrained_name, **kwargs)
words, boxes = self.get_words_and_boxes()
words[1] = tokenizer_r.mask_token
tokens = tokenizer_r(
words,
boxes=boxes,
return_attention_mask=False,
return_token_type_ids=False,
return_offsets_mapping=True,
add_special_tokens=True,
)
expected_results = [
((0, 0), tokenizer_r.cls_token),
((0, 1), "a"),
((0, 6), tokenizer_r.mask_token),
((0, 4), "test"),
((0, 0), tokenizer_r.sep_token),
]
self.assertEqual(
[e[1] for e in expected_results], tokenizer_r.convert_ids_to_tokens(tokens["input_ids"])
)
self.assertEqual([e[0] for e in expected_results], tokens["offset_mapping"])
def test_add_special_tokens(self):
tokenizers: list[LayoutLMv2Tokenizer] = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
special_token = "[SPECIAL_TOKEN]"
special_token_box = [1000, 1000, 1000, 1000]
tokenizer.add_special_tokens({"cls_token": special_token})
encoded_special_token = tokenizer.encode(
[special_token], boxes=[special_token_box], add_special_tokens=False
)
self.assertEqual(len(encoded_special_token), 1)
decoded = tokenizer.decode(encoded_special_token, skip_special_tokens=True)
self.assertNotIn(special_token, decoded)
def test_add_tokens_tokenizer(self):
tokenizers: list[LayoutLMv2Tokenizer] = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
vocab_size = tokenizer.vocab_size
all_size = len(tokenizer)
self.assertNotEqual(vocab_size, 0)
# We usually have added tokens from the start in tests because our vocab fixtures are
# smaller than the original vocabs - let's not assert this
# self.assertEqual(vocab_size, all_size)
new_toks = ["aaaaa", "bbbbbb", "cccccccccdddddddd"]
added_toks = tokenizer.add_tokens(new_toks)
vocab_size_2 = tokenizer.vocab_size
all_size_2 = len(tokenizer)
self.assertNotEqual(vocab_size_2, 0)
self.assertEqual(vocab_size, vocab_size_2)
self.assertEqual(added_toks, len(new_toks))
self.assertEqual(all_size_2, all_size + len(new_toks))
words = "aaaaa bbbbbb low cccccccccdddddddd l".split()
boxes = [[1000, 1000, 1000, 1000] for _ in range(len(words))]
tokens = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
self.assertGreaterEqual(len(tokens), 4)
self.assertGreater(tokens[0], tokenizer.vocab_size - 1)
self.assertGreater(tokens[-2], tokenizer.vocab_size - 1)
new_toks_2 = {"eos_token": ">>>>|||<||<<|<<", "pad_token": "<<<<<|||>|>>>>|>"}
added_toks_2 = tokenizer.add_special_tokens(new_toks_2)
vocab_size_3 = tokenizer.vocab_size
all_size_3 = len(tokenizer)
self.assertNotEqual(vocab_size_3, 0)
self.assertEqual(vocab_size, vocab_size_3)
self.assertEqual(added_toks_2, len(new_toks_2))
self.assertEqual(all_size_3, all_size_2 + len(new_toks_2))
words = ">>>>|||<||<<|<< aaaaabbbbbb low cccccccccdddddddd <<<<<|||>|>>>>|> l".split()
boxes = [[1000, 1000, 1000, 1000] for _ in range(len(words))]
tokens = tokenizer.encode(
words,
boxes=boxes,
add_special_tokens=False,
)
self.assertGreaterEqual(len(tokens), 6)
self.assertGreater(tokens[0], tokenizer.vocab_size - 1)
self.assertGreater(tokens[0], tokens[1])
self.assertGreater(tokens[-2], tokenizer.vocab_size - 1)
self.assertGreater(tokens[-2], tokens[-3])
self.assertEqual(tokens[0], tokenizer.eos_token_id)
self.assertEqual(tokens[-2], tokenizer.pad_token_id)
@require_tokenizers
def test_encode_decode_with_spaces(self):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
new_toks = [AddedToken("[ABC]", normalized=False), AddedToken("[DEF]", normalized=False)]
tokenizer.add_tokens(new_toks)
input = "[ABC][DEF][ABC][DEF]"
if self.space_between_special_tokens:
output = "[ABC] [DEF] [ABC] [DEF]"
else:
output = input
encoded = tokenizer.encode(input.split(), boxes=boxes, add_special_tokens=False)
decoded = tokenizer.decode(encoded, spaces_between_special_tokens=self.space_between_special_tokens)
self.assertIn(decoded, [output, output.lower()])
@unittest.skip(reason="Not implemented")
def test_right_and_left_truncation(self):
pass
@unittest.skip(reason="Not implemented")
def test_split_special_tokens(self):
pass
@parameterized.expand([(True,), (False,)])
def test_encode_plus_with_padding(self, use_padding_as_call_kwarg: bool):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
# check correct behaviour if no pad_token_id exists, and add one if needed
self._check_no_pad_token_padding(tokenizer, words)
padding_size = 10
padding_idx = tokenizer.pad_token_id
encoded_sequence = tokenizer(words, boxes=boxes, return_special_tokens_mask=True)
input_ids = encoded_sequence["input_ids"]
special_tokens_mask = encoded_sequence["special_tokens_mask"]
sequence_length = len(input_ids)
# Test 'longest' and 'no_padding' don't do anything
tokenizer.padding_side = "right"
not_padded_sequence = tokenizer(
words,
boxes=boxes,
padding=True,
return_special_tokens_mask=True,
)
not_padded_input_ids = not_padded_sequence["input_ids"]
not_padded_special_tokens_mask = not_padded_sequence["special_tokens_mask"]
not_padded_sequence_length = len(not_padded_input_ids)
self.assertTrue(sequence_length == not_padded_sequence_length)
self.assertTrue(input_ids == not_padded_input_ids)
self.assertTrue(special_tokens_mask == not_padded_special_tokens_mask)
not_padded_sequence = tokenizer(
words,
boxes=boxes,
padding=False,
return_special_tokens_mask=True,
)
not_padded_input_ids = not_padded_sequence["input_ids"]
not_padded_special_tokens_mask = not_padded_sequence["special_tokens_mask"]
not_padded_sequence_length = len(not_padded_input_ids)
self.assertTrue(sequence_length == not_padded_sequence_length)
self.assertTrue(input_ids == not_padded_input_ids)
self.assertTrue(special_tokens_mask == not_padded_special_tokens_mask)
# Test right padding
tokenizer_kwargs_right = {
"max_length": sequence_length + padding_size,
"padding": "max_length",
"return_special_tokens_mask": True,
}
if not use_padding_as_call_kwarg:
tokenizer.padding_side = "right"
else:
tokenizer_kwargs_right["padding_side"] = "right"
right_padded_sequence = tokenizer(words, boxes=boxes, **tokenizer_kwargs_right)
right_padded_input_ids = right_padded_sequence["input_ids"]
right_padded_special_tokens_mask = right_padded_sequence["special_tokens_mask"]
right_padded_sequence_length = len(right_padded_input_ids)
self.assertTrue(sequence_length + padding_size == right_padded_sequence_length)
self.assertTrue(input_ids + [padding_idx] * padding_size == right_padded_input_ids)
self.assertTrue(special_tokens_mask + [1] * padding_size == right_padded_special_tokens_mask)
# Test left padding
tokenizer_kwargs_left = {
"max_length": sequence_length + padding_size,
"padding": "max_length",
"return_special_tokens_mask": True,
}
if not use_padding_as_call_kwarg:
tokenizer.padding_side = "left"
else:
tokenizer_kwargs_left["padding_side"] = "left"
left_padded_sequence = tokenizer(words, boxes=boxes, **tokenizer_kwargs_left)
left_padded_input_ids = left_padded_sequence["input_ids"]
left_padded_special_tokens_mask = left_padded_sequence["special_tokens_mask"]
left_padded_sequence_length = len(left_padded_input_ids)
self.assertTrue(sequence_length + padding_size == left_padded_sequence_length)
self.assertTrue([padding_idx] * padding_size + input_ids == left_padded_input_ids)
self.assertTrue([1] * padding_size + special_tokens_mask == left_padded_special_tokens_mask)
if "token_type_ids" in tokenizer.model_input_names:
token_type_ids = encoded_sequence["token_type_ids"]
left_padded_token_type_ids = left_padded_sequence["token_type_ids"]
right_padded_token_type_ids = right_padded_sequence["token_type_ids"]
assert token_type_ids + [0] * padding_size == right_padded_token_type_ids
assert [0] * padding_size + token_type_ids == left_padded_token_type_ids
if "attention_mask" in tokenizer.model_input_names:
attention_mask = encoded_sequence["attention_mask"]
right_padded_attention_mask = right_padded_sequence["attention_mask"]
left_padded_attention_mask = left_padded_sequence["attention_mask"]
self.assertTrue(attention_mask + [0] * padding_size == right_padded_attention_mask)
self.assertTrue([0] * padding_size + attention_mask == left_padded_attention_mask)
def test_internal_consistency(self):
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
tokens = []
for word in words:
tokens.extend(tokenizer.tokenize(word))
ids = tokenizer.convert_tokens_to_ids(tokens)
ids_2 = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
self.assertListEqual(ids, ids_2)
tokens_2 = tokenizer.convert_ids_to_tokens(ids)
self.assertNotEqual(len(tokens_2), 0)
text_2 = tokenizer.decode(ids)
self.assertIsInstance(text_2, str)
output_text = "a weirdly test"
self.assertEqual(text_2, output_text)
def test_mask_output(self):
tokenizers = self.get_tokenizers(fast=False, do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
if (
tokenizer.build_inputs_with_special_tokens.__qualname__.split(".")[0] != "PreTrainedTokenizer"
and "token_type_ids" in tokenizer.model_input_names
):
information = tokenizer(words, boxes=boxes, add_special_tokens=True)
sequences, mask = information["input_ids"], information["token_type_ids"]
self.assertEqual(len(sequences), len(mask))
def test_number_of_added_tokens(self):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
# test 1: single sequence
words, boxes = self.get_words_and_boxes()
sequences = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
attached_sequences = tokenizer.encode(words, boxes=boxes, add_special_tokens=True)
# Method is implemented (e.g. not GPT-2)
if len(attached_sequences) != 2:
self.assertEqual(
tokenizer.num_special_tokens_to_add(pair=False), len(attached_sequences) - len(sequences)
)
# test 2: two sequences
question, words, boxes = self.get_question_words_and_boxes()
sequences = tokenizer.encode(question, words, boxes=boxes, add_special_tokens=False)
attached_sequences = tokenizer.encode(question, words, boxes=boxes, add_special_tokens=True)
# Method is implemented (e.g. not GPT-2)
if len(attached_sequences) != 2:
self.assertEqual(
tokenizer.num_special_tokens_to_add(pair=True), len(attached_sequences) - len(sequences)
)
def test_padding(self, max_length=50):
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"):
tokenizer_r = self.get_tokenizer(pretrained_name, **kwargs)
tokenizer_p = self.get_tokenizer(pretrained_name, **kwargs)
self.assertEqual(tokenizer_p.pad_token_id, tokenizer_r.pad_token_id)
pad_token_id = tokenizer_p.pad_token_id
# Encode - Simple input
words, boxes = self.get_words_and_boxes()
input_r = tokenizer_r.encode(words, boxes=boxes, max_length=max_length, padding="max_length")
input_p = tokenizer_p.encode(words, boxes=boxes, max_length=max_length, padding="max_length")
self.assert_padded_input_match(input_r, input_p, max_length, pad_token_id)
input_r = tokenizer_r.encode(words, boxes=boxes, padding="longest")
input_p = tokenizer_p.encode(words, boxes=boxes, padding=True)
self.assert_padded_input_match(input_r, input_p, len(input_r), pad_token_id)
# Encode - Pair input
question, words, boxes = self.get_question_words_and_boxes()
input_r = tokenizer_r.encode(question, words, boxes=boxes, max_length=max_length, padding="max_length")
input_p = tokenizer_p.encode(question, words, boxes=boxes, max_length=max_length, padding="max_length")
self.assert_padded_input_match(input_r, input_p, max_length, pad_token_id)
input_r = tokenizer_r.encode(question, words, boxes=boxes, padding=True)
input_p = tokenizer_p.encode(question, words, boxes=boxes, padding="longest")
self.assert_padded_input_match(input_r, input_p, len(input_r), pad_token_id)
# Encode_plus - Simple input
words, boxes = self.get_words_and_boxes()
input_r = tokenizer_r(words, boxes=boxes, max_length=max_length, padding="max_length")
input_p = tokenizer_p(words, boxes=boxes, max_length=max_length, padding="max_length")
self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id)
self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"])
input_r = tokenizer_r(words, boxes=boxes, padding="longest")
input_p = tokenizer_p(words, boxes=boxes, padding=True)
self.assert_padded_input_match(
input_r["input_ids"], input_p["input_ids"], len(input_r["input_ids"]), pad_token_id
)
self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"])
# Encode_plus - Pair input
question, words, boxes = self.get_question_words_and_boxes()
input_r = tokenizer_r(question, words, boxes=boxes, max_length=max_length, padding="max_length")
input_p = tokenizer_p(question, words, boxes=boxes, max_length=max_length, padding="max_length")
self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id)
self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"])
input_r = tokenizer_r(question, words, boxes=boxes, padding="longest")
input_p = tokenizer_p(question, words, boxes=boxes, padding=True)
self.assert_padded_input_match(
input_r["input_ids"], input_p["input_ids"], len(input_r["input_ids"]), pad_token_id
)
self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"])
# Batch_encode_plus - Simple input
words, boxes = self.get_words_and_boxes_batch()
input_r = tokenizer_r.batch_encode_plus(
words,
boxes=boxes,
max_length=max_length,
padding="max_length",
)
input_p = tokenizer_p.batch_encode_plus(
words,
boxes=boxes,
max_length=max_length,
padding="max_length",
)
self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id)
input_r = tokenizer_r.batch_encode_plus(
words,
boxes=boxes,
max_length=max_length,
padding="longest",
)
input_p = tokenizer_p.batch_encode_plus(
words,
boxes=boxes,
max_length=max_length,
padding=True,
)
self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id)
input_r = tokenizer_r.batch_encode_plus(words, boxes=boxes, padding="longest")
input_p = tokenizer_p.batch_encode_plus(words, boxes=boxes, padding=True)
self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id)
# Batch_encode_plus - Pair input
questions, words, boxes = self.get_question_words_and_boxes_batch()
input_r = tokenizer_r.batch_encode_plus(
list(zip(questions, words)),
is_pair=True,
boxes=boxes,
max_length=max_length,
truncation=True,
padding="max_length",
)
input_p = tokenizer_p.batch_encode_plus(
list(zip(questions, words)),
is_pair=True,
boxes=boxes,
max_length=max_length,
truncation=True,
padding="max_length",
)
self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id)
input_r = tokenizer_r.batch_encode_plus(
list(zip(questions, words)),
is_pair=True,
boxes=boxes,
padding=True,
)
input_p = tokenizer_p.batch_encode_plus(
list(zip(questions, words)),
is_pair=True,
boxes=boxes,
padding="longest",
)
self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id)
# Using pad on single examples after tokenization
words, boxes = self.get_words_and_boxes()
input_r = tokenizer_r(words, boxes=boxes)
input_r = tokenizer_r.pad(input_r)
input_p = tokenizer_r(words, boxes=boxes)
input_p = tokenizer_r.pad(input_p)
self.assert_padded_input_match(
input_r["input_ids"], input_p["input_ids"], len(input_r["input_ids"]), pad_token_id
)
# Using pad on single examples after tokenization
input_r = tokenizer_r(words, boxes=boxes)
input_r = tokenizer_r.pad(input_r, max_length=max_length, padding="max_length")
input_p = tokenizer_r(words, boxes=boxes)
input_p = tokenizer_r.pad(input_p, max_length=max_length, padding="max_length")
self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id)
# Using pad after tokenization
words, boxes = self.get_words_and_boxes_batch()
input_r = tokenizer_r.batch_encode_plus(
words,
boxes=boxes,
)
input_r = tokenizer_r.pad(input_r)
input_p = tokenizer_r.batch_encode_plus(
words,
boxes=boxes,
)
input_p = tokenizer_r.pad(input_p)
self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id)
# Using pad after tokenization
words, boxes = self.get_words_and_boxes_batch()
input_r = tokenizer_r.batch_encode_plus(
words,
boxes=boxes,
)
input_r = tokenizer_r.pad(input_r, max_length=max_length, padding="max_length")
input_p = tokenizer_r.batch_encode_plus(
words,
boxes=boxes,
)
input_p = tokenizer_r.pad(input_p, max_length=max_length, padding="max_length")
self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id)
def test_call(self):
# Tests that all calls wrap to encode_plus and batch_encode_plus
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
# Test not batched
words, boxes = self.get_words_and_boxes()
encoded_sequences_1 = tokenizer(words, boxes=boxes)
encoded_sequences_2 = tokenizer(words, boxes=boxes)
self.assertEqual(encoded_sequences_1, encoded_sequences_2)
# Test not batched pairs
question, words, boxes = self.get_question_words_and_boxes()
encoded_sequences_1 = tokenizer(words, boxes=boxes)
encoded_sequences_2 = tokenizer(words, boxes=boxes)
self.assertEqual(encoded_sequences_1, encoded_sequences_2)
# Test batched
words, boxes = self.get_words_and_boxes_batch()
encoded_sequences_1 = tokenizer.batch_encode_plus(words, is_pair=False, boxes=boxes)
encoded_sequences_2 = tokenizer(words, boxes=boxes)
self.assertEqual(encoded_sequences_1, encoded_sequences_2)
def test_batch_encode_plus_batch_sequence_length(self):
# Tests that all encoded values have the correct size
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes_batch()
encoded_sequences = [
tokenizer(words_example, boxes=boxes_example) for words_example, boxes_example in zip(words, boxes)
]
encoded_sequences_batch = tokenizer.batch_encode_plus(words, is_pair=False, boxes=boxes, padding=False)
self.assertListEqual(
encoded_sequences, self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch)
)
maximum_length = len(
max([encoded_sequence["input_ids"] for encoded_sequence in encoded_sequences], key=len)
)
# check correct behaviour if no pad_token_id exists, and add one if needed
self._check_no_pad_token_padding(tokenizer, words)
encoded_sequences_padded = [
tokenizer(words_example, boxes=boxes_example, max_length=maximum_length, padding="max_length")
for words_example, boxes_example in zip(words, boxes)
]
encoded_sequences_batch_padded = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, padding=True
)
self.assertListEqual(
encoded_sequences_padded,
self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch_padded),
)
# check 'longest' is insensitive to a max length
encoded_sequences_batch_padded_1 = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, padding=True
)
encoded_sequences_batch_padded_2 = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, max_length=maximum_length + 10, padding="longest"
)
for key in encoded_sequences_batch_padded_1:
self.assertListEqual(
encoded_sequences_batch_padded_1[key],
encoded_sequences_batch_padded_2[key],
)
# check 'no_padding' is insensitive to a max length
encoded_sequences_batch_padded_1 = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, padding=False
)
encoded_sequences_batch_padded_2 = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, max_length=maximum_length + 10, padding=False
)
for key in encoded_sequences_batch_padded_1:
self.assertListEqual(
encoded_sequences_batch_padded_1[key],
encoded_sequences_batch_padded_2[key],
)
@unittest.skip(reason="batch_encode_plus does not handle overflowing tokens.")
def test_batch_encode_plus_overflowing_tokens(self):
pass
def test_batch_encode_plus_padding(self):
# Test that padded sequences are equivalent between batch_encode_plus and encode_plus
# Right padding tests
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes_batch()
max_length = 100
# check correct behaviour if no pad_token_id exists, and add one if needed
self._check_no_pad_token_padding(tokenizer, words)
encoded_sequences = [
tokenizer(words_example, boxes=boxes_example, max_length=max_length, padding="max_length")
for words_example, boxes_example in zip(words, boxes)
]
encoded_sequences_batch = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, max_length=max_length, padding="max_length"
)
self.assertListEqual(
encoded_sequences, self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch)
)
# Left padding tests
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
tokenizer.padding_side = "left"
words, boxes = self.get_words_and_boxes_batch()
max_length = 100
# check correct behaviour if no pad_token_id exists, and add one if needed
self._check_no_pad_token_padding(tokenizer, words)
encoded_sequences = [
tokenizer(words_example, boxes=boxes_example, max_length=max_length, padding="max_length")
for words_example, boxes_example in zip(words, boxes)
]
encoded_sequences_batch = tokenizer.batch_encode_plus(
words, is_pair=False, boxes=boxes, max_length=max_length, padding="max_length"
)
self.assertListEqual(
encoded_sequences, self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch)
)
def test_padding_to_multiple_of(self):
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
if tokenizer.pad_token is None:
self.skipTest(reason="No padding token.")
else:
words, boxes = self.get_words_and_boxes()
# empty_tokens = tokenizer([""], [[]], padding=True, pad_to_multiple_of=8)
normal_tokens = tokenizer(words, boxes=boxes, padding=True, pad_to_multiple_of=8)
# for key, value in empty_tokens.items():
# self.assertEqual(len(value) % 8, 0, f"BatchEncoding.{key} is not multiple of 8")
for key, value in normal_tokens.items():
self.assertEqual(len(value) % 8, 0, f"BatchEncoding.{key} is not multiple of 8")
normal_tokens = tokenizer(words, boxes=boxes, pad_to_multiple_of=8)
for key, value in normal_tokens.items():
self.assertNotEqual(len(value) % 8, 0, f"BatchEncoding.{key} is unexpectedly a multiple of 8")
# Should also work with truncation
normal_tokens = tokenizer(words, boxes=boxes, padding=True, truncation=True, pad_to_multiple_of=8)
for key, value in normal_tokens.items():
self.assertEqual(len(value) % 8, 0, f"BatchEncoding.{key} is not multiple of 8")
# truncation to something which is not a multiple of pad_to_multiple_of raises an error
self.assertRaises(
ValueError,
tokenizer.__call__,
words,
boxes=boxes,
padding=True,
truncation=True,
max_length=12,
pad_to_multiple_of=8,
)
def test_special_tokens_mask_input_pairs(self):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
encoded_sequence = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
encoded_sequence_dict = tokenizer(
words,
boxes=boxes,
add_special_tokens=True,
return_special_tokens_mask=True,
# add_prefix_space=False,
)
encoded_sequence_w_special = encoded_sequence_dict["input_ids"]
special_tokens_mask = encoded_sequence_dict["special_tokens_mask"]
self.assertEqual(len(special_tokens_mask), len(encoded_sequence_w_special))
filtered_sequence = [
(x if not special_tokens_mask[i] else None) for i, x in enumerate(encoded_sequence_w_special)
]
filtered_sequence = [x for x in filtered_sequence if x is not None]
self.assertEqual(encoded_sequence, filtered_sequence)
def test_special_tokens_mask(self):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
# Testing single inputs
encoded_sequence = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
encoded_sequence_dict = tokenizer(
words, boxes=boxes, add_special_tokens=True, return_special_tokens_mask=True
)
encoded_sequence_w_special = encoded_sequence_dict["input_ids"]
special_tokens_mask = encoded_sequence_dict["special_tokens_mask"]
self.assertEqual(len(special_tokens_mask), len(encoded_sequence_w_special))
filtered_sequence = [x for i, x in enumerate(encoded_sequence_w_special) if not special_tokens_mask[i]]
self.assertEqual(encoded_sequence, filtered_sequence)
def test_save_and_load_tokenizer(self):
# safety check on max_len default value so we are sure the test works
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
self.assertNotEqual(tokenizer.model_max_length, 42)
# Now let's start the test
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
# Isolate this from the other tests because we save additional tokens/etc
words, boxes = self.get_words_and_boxes()
tmpdirname = tempfile.mkdtemp()
before_tokens = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
before_vocab = tokenizer.get_vocab()
tokenizer.save_pretrained(tmpdirname)
after_tokenizer = tokenizer.__class__.from_pretrained(tmpdirname)
after_tokens = after_tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
after_vocab = after_tokenizer.get_vocab()
self.assertListEqual(before_tokens, after_tokens)
self.assertDictEqual(before_vocab, after_vocab)
shutil.rmtree(tmpdirname)
def test_right_and_left_padding(self):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
sequence = "Sequence"
padding_size = 10
# check correct behaviour if no pad_token_id exists, and add one if needed
self._check_no_pad_token_padding(tokenizer, sequence)
padding_idx = tokenizer.pad_token_id
# RIGHT PADDING - Check that it correctly pads when a maximum length is specified along with the padding flag set to True
tokenizer.padding_side = "right"
encoded_sequence = tokenizer.encode(words, boxes=boxes)
sequence_length = len(encoded_sequence)
padded_sequence = tokenizer.encode(
words, boxes=boxes, max_length=sequence_length + padding_size, padding="max_length"
)
padded_sequence_length = len(padded_sequence)
assert sequence_length + padding_size == padded_sequence_length
assert encoded_sequence + [padding_idx] * padding_size == padded_sequence
# LEFT PADDING - Check that it correctly pads when a maximum length is specified along with the padding flag set to True
tokenizer.padding_side = "left"
encoded_sequence = tokenizer.encode(words, boxes=boxes)
sequence_length = len(encoded_sequence)
padded_sequence = tokenizer.encode(
words, boxes=boxes, max_length=sequence_length + padding_size, padding="max_length"
)
padded_sequence_length = len(padded_sequence)
assert sequence_length + padding_size == padded_sequence_length
assert [padding_idx] * padding_size + encoded_sequence == padded_sequence
# RIGHT & LEFT PADDING - Check that nothing is done for 'longest' and 'no_padding'
encoded_sequence = tokenizer.encode(words, boxes=boxes)
sequence_length = len(encoded_sequence)
tokenizer.padding_side = "right"
padded_sequence_right = tokenizer.encode(words, boxes=boxes, padding=True)
padded_sequence_right_length = len(padded_sequence_right)
assert sequence_length == padded_sequence_right_length
assert encoded_sequence == padded_sequence_right
tokenizer.padding_side = "left"
padded_sequence_left = tokenizer.encode(words, boxes=boxes, padding="longest")
padded_sequence_left_length = len(padded_sequence_left)
assert sequence_length == padded_sequence_left_length
assert encoded_sequence == padded_sequence_left
tokenizer.padding_side = "right"
padded_sequence_right = tokenizer.encode(words, boxes=boxes)
padded_sequence_right_length = len(padded_sequence_right)
assert sequence_length == padded_sequence_right_length
assert encoded_sequence == padded_sequence_right
tokenizer.padding_side = "left"
padded_sequence_left = tokenizer.encode(words, boxes=boxes, padding=False)
padded_sequence_left_length = len(padded_sequence_left)
assert sequence_length == padded_sequence_left_length
assert encoded_sequence == padded_sequence_left
def test_token_type_ids(self):
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
# test 1: single sequence
words, boxes = self.get_words_and_boxes()
output = tokenizer(words, boxes=boxes, return_token_type_ids=True)
# Assert that the token type IDs have the same length as the input IDs
self.assertEqual(len(output["token_type_ids"]), len(output["input_ids"]))
# Assert that the token type IDs have the same length as the attention mask
self.assertEqual(len(output["token_type_ids"]), len(output["attention_mask"]))
self.assertIn(0, output["token_type_ids"])
self.assertNotIn(1, output["token_type_ids"])
# test 2: two sequences (question + words)
question, words, boxes = self.get_question_words_and_boxes()
output = tokenizer(question, words, boxes, return_token_type_ids=True)
# Assert that the token type IDs have the same length as the input IDs
self.assertEqual(len(output["token_type_ids"]), len(output["input_ids"]))
# Assert that the token type IDs have the same length as the attention mask
self.assertEqual(len(output["token_type_ids"]), len(output["attention_mask"]))
self.assertIn(0, output["token_type_ids"])
self.assertIn(1, output["token_type_ids"])
def test_offsets_mapping(self):
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"):
tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs)
text = ["a", "wonderful", "test"]
boxes = [[1, 8, 12, 20] for _ in range(len(text))]
# No pair
tokens_with_offsets = tokenizer_r(
text,
boxes=boxes,
return_special_tokens_mask=True,
return_offsets_mapping=True,
add_special_tokens=True,
)
added_tokens = tokenizer_r.num_special_tokens_to_add(False)
offsets = tokens_with_offsets["offset_mapping"]
# Assert there is the same number of tokens and offsets
self.assertEqual(len(offsets), len(tokens_with_offsets["input_ids"]))
# Assert there are only added_tokens special tokens
self.assertEqual(sum(tokens_with_offsets["special_tokens_mask"]), added_tokens)
# Pairs
text = "what's his name"
pair = ["a", "wonderful", "test"]
boxes = [[1, 8, 12, 20] for _ in range(len(pair))]
tokens_with_offsets = tokenizer_r(
text,
pair,
boxes=boxes,
return_special_tokens_mask=True,
return_offsets_mapping=True,
add_special_tokens=True,
)
added_tokens = tokenizer_r.num_special_tokens_to_add(True)
offsets = tokens_with_offsets["offset_mapping"]
# Assert there is the same number of tokens and offsets
self.assertEqual(len(offsets), len(tokens_with_offsets["input_ids"]))
# Assert there are only added_tokens special tokens
self.assertEqual(sum(tokens_with_offsets["special_tokens_mask"]), added_tokens)
@require_torch
@require_detectron2
@slow
def test_torch_encode_plus_sent_to_model(self):
import torch
from transformers import MODEL_MAPPING, TOKENIZER_MAPPING
MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(MODEL_MAPPING, TOKENIZER_MAPPING)
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING:
self.skipTest(f"{tokenizer.__class__} is not in the MODEL_TOKENIZER_MAPPING")
config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__]
config = config_class()
if config.is_encoder_decoder or config.pad_token_id is None:
self.skipTest(reason="Model is an encoder-decoder or has no pad token id set.")
model = model_class(config)
# Make sure the model contains at least the full vocabulary size in its embedding matrix
is_using_common_embeddings = hasattr(model.get_input_embeddings(), "weight")
if is_using_common_embeddings:
self.assertGreaterEqual(model.get_input_embeddings().weight.shape[0], len(tokenizer))
# Build sequence
words, boxes = self.get_words_and_boxes()
encoded_sequence = tokenizer(words, boxes=boxes, return_tensors="pt")
batch_encoded_sequence = tokenizer.batch_encode_plus(
[words, words], boxes=[boxes, boxes], return_tensors="pt"
)
# We add dummy image keys (as LayoutLMv2 actually also requires a feature extractor
# to prepare the image input)
encoded_sequence["image"] = torch.randn(1, 3, 224, 224)
batch_encoded_sequence["image"] = torch.randn(2, 3, 224, 224)
# This should not fail
with torch.no_grad(): # saves some time
model(**encoded_sequence)
model(**batch_encoded_sequence)
def test_compare_add_special_tokens(self):
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"):
tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs)
simple_num_special_tokens_to_add = tokenizer_r.num_special_tokens_to_add(pair=False)
words, boxes = self.get_words_and_boxes()
# tokenize()
no_special_tokens = tokenizer_r.tokenize(" ".join(words), add_special_tokens=False)
with_special_tokens = tokenizer_r.tokenize(" ".join(words), add_special_tokens=True)
self.assertEqual(len(no_special_tokens), len(with_special_tokens) - simple_num_special_tokens_to_add)
# encode()
no_special_tokens = tokenizer_r.encode(words, boxes=boxes, add_special_tokens=False)
with_special_tokens = tokenizer_r.encode(words, boxes=boxes, add_special_tokens=True)
self.assertEqual(len(no_special_tokens), len(with_special_tokens) - simple_num_special_tokens_to_add)
# encode_plus()
no_special_tokens = tokenizer_r(words, boxes=boxes, add_special_tokens=False)
with_special_tokens = tokenizer_r(words, boxes=boxes, add_special_tokens=True)
for key in no_special_tokens:
self.assertEqual(
len(no_special_tokens[key]),
len(with_special_tokens[key]) - simple_num_special_tokens_to_add,
)
# # batch_encode_plus
words, boxes = self.get_words_and_boxes_batch()
no_special_tokens = tokenizer_r.batch_encode_plus(words, boxes=boxes, add_special_tokens=False)
with_special_tokens = tokenizer_r.batch_encode_plus(words, boxes=boxes, add_special_tokens=True)
for key in no_special_tokens:
for i_no, i_with in zip(no_special_tokens[key], with_special_tokens[key]):
self.assertEqual(len(i_no), len(i_with) - simple_num_special_tokens_to_add)
@slow
def test_layoutlmv2_truncation_integration_test(self):
words, boxes = self.get_words_and_boxes()
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased", model_max_length=512)
for i in range(12, 512):
new_encoded_inputs = tokenizer.encode(words, boxes=boxes, max_length=i, truncation=True)
# Ensure that the input IDs are less than the max length defined.
self.assertLessEqual(len(new_encoded_inputs), i)
tokenizer.model_max_length = 20
new_encoded_inputs = tokenizer.encode(words, boxes=boxes, truncation=True)
dropped_encoded_inputs = tokenizer.encode(words, boxes=boxes, truncation=True)
# Ensure that the input IDs are still truncated when no max_length is specified
self.assertListEqual(new_encoded_inputs, dropped_encoded_inputs)
self.assertLessEqual(len(new_encoded_inputs), 20)
def test_sequence_ids(self):
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
if not tokenizer.is_fast:
continue
with self.subTest(f"{tokenizer.__class__.__name__}"):
seq_0 = "Test this method."
seq_1 = ["With", "these", "inputs."]
boxes = [[1000, 1000, 1000, 1000] for _ in range(len(seq_1))]
# We want the tokens of sequence 0 and sequence 1 to be tagged with
# sequence ids 0 and 1 respectively (regardless of whether the model
# uses token type ids). The QA pipeline, among other places, relies on
# this assumption.
output = tokenizer(seq_0.split(), boxes=boxes)
self.assertIn(0, output.sequence_ids())
output = tokenizer(seq_0, seq_1, boxes=boxes)
self.assertIn(0, output.sequence_ids())
self.assertIn(1, output.sequence_ids())
if tokenizer.num_special_tokens_to_add(pair=True):
self.assertIn(None, output.sequence_ids())
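The assumption checked above, that special tokens map to `None` while first- and second-sequence tokens map to 0 and 1, can be sketched without a real tokenizer. The helper below is a hypothetical illustration of the `sequence_ids()` contract, not part of the test suite:

```python
def sequence_ids_for(layout):
    # Map a token layout such as ["CLS", "A", "A", "SEP", "B", "SEP"] to
    # sequence ids: None for special tokens, 0 for first-sequence tokens,
    # 1 for second-sequence tokens -- mirroring BatchEncoding.sequence_ids().
    mapping = {"A": 0, "B": 1}
    return [mapping.get(token) for token in layout]

ids = sequence_ids_for(["CLS", "A", "A", "SEP", "B", "SEP"])
# Special tokens get None; sequence tokens get their sequence index.
```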
def test_special_tokens_initialization(self):
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"):
added_tokens = [AddedToken("<special>", lstrip=True)]
tokenizer_r = self.rust_tokenizer_class.from_pretrained(
pretrained_name, additional_special_tokens=added_tokens, **kwargs
)
words = "Hey this is a <special> token".split()
boxes = [[1000, 1000, 1000, 1000] for _ in range(len(words))]
r_output = tokenizer_r.encode(words, boxes=boxes)
special_token_id = tokenizer_r.encode(
["<special>"], boxes=[[1000, 1000, 1000, 1000]], add_special_tokens=False
)[0]
self.assertTrue(special_token_id in r_output)
def test_training_new_tokenizer(self):
tokenizer = self.get_tokenizer()
new_tokenizer = tokenizer.train_new_from_iterator(SMALL_TRAINING_CORPUS, 100)
# Test we can use the new tokenizer with something not seen during training
text = [["this", "is", "the"], ["how", "are", "you"]]
boxes = [[[1, 2, 3, 4], [5, 6, 7, 8], [1, 3, 4, 8]], [[5, 6, 7, 8], [4, 5, 6, 7], [3, 9, 2, 7]]]
inputs = new_tokenizer(text, boxes=boxes)
self.assertEqual(len(inputs["input_ids"]), 2)
decoded_input = new_tokenizer.decode(inputs["input_ids"][0], skip_special_tokens=True)
expected_result = "this is the"
if tokenizer.backend_tokenizer.normalizer is not None:
expected_result = tokenizer.backend_tokenizer.normalizer.normalize_str(expected_result)
self.assertEqual(expected_result, decoded_input)
# We check that the parameters of the tokenizer remained the same
# Check we have the same number of added_tokens for both pair and non-pair inputs.
self.assertEqual(tokenizer.num_special_tokens_to_add(False), new_tokenizer.num_special_tokens_to_add(False))
self.assertEqual(tokenizer.num_special_tokens_to_add(True), new_tokenizer.num_special_tokens_to_add(True))
# Check we have the correct max_length for both pair and non-pair inputs.
self.assertEqual(tokenizer.max_len_single_sentence, new_tokenizer.max_len_single_sentence)
self.assertEqual(tokenizer.max_len_sentences_pair, new_tokenizer.max_len_sentences_pair)
# Assert the set of special tokens match as we didn't ask to change them
self.assertSequenceEqual(
tokenizer.all_special_tokens,
new_tokenizer.all_special_tokens,
)
self.assertDictEqual(tokenizer.special_tokens_map, new_tokenizer.special_tokens_map)
def test_training_new_tokenizer_with_special_tokens_change(self):
tokenizer = self.get_tokenizer()
# Test with a special tokens map
class_signature = inspect.signature(tokenizer.__class__)
if "cls_token" in class_signature.parameters:
new_tokenizer = tokenizer.train_new_from_iterator(
SMALL_TRAINING_CORPUS, 100, special_tokens_map={tokenizer.cls_token: "<cls>"}
)
cls_id = new_tokenizer.get_vocab()["<cls>"]
self.assertEqual(new_tokenizer.cls_token, "<cls>")
self.assertEqual(new_tokenizer.cls_token_id, cls_id)
# Create a new mapping from the special tokens defined in the original tokenizer
special_tokens_list = PreTrainedTokenizerBase.SPECIAL_TOKENS_ATTRIBUTES.copy()
special_tokens_map = {}
for token in special_tokens_list:
# Only remap special tokens that are actually set on the tokenizer.
if getattr(tokenizer, token) is not None:
special_token = getattr(tokenizer, token)
special_tokens_map[special_token] = f"{special_token}a"
# Train new tokenizer
new_tokenizer = tokenizer.train_new_from_iterator(
SMALL_TRAINING_CORPUS, 100, special_tokens_map=special_tokens_map
)
# Check the changes
for token in special_tokens_list:
# Skip special tokens that are not set on the tokenizer.
if getattr(tokenizer, token) is None:
continue
special_token = getattr(tokenizer, token)
if special_token in special_tokens_map:
new_special_token = getattr(new_tokenizer, token)
self.assertEqual(special_tokens_map[special_token], new_special_token)
new_id = new_tokenizer.get_vocab()[new_special_token]
self.assertEqual(getattr(new_tokenizer, f"{token}_id"), new_id)
# Check if the AddedToken / string format has been kept
for special_token in tokenizer.all_special_tokens:
if isinstance(special_token, AddedToken) and special_token.content not in special_tokens_map:
# The special token must appear identically in the list of the new tokenizer.
self.assertTrue(
special_token in new_tokenizer.all_special_tokens,
f"'{special_token}' should be in {new_tokenizer.all_special_tokens}",
)
elif isinstance(special_token, AddedToken):
# The special token must appear in the list of the new tokenizer as an object of type AddedToken with
# the same parameters as the old AddedToken except the content that the user has requested to change.
special_token_str = special_token.content
new_special_token_str = special_tokens_map[special_token_str]
find = False
for candidate in new_tokenizer.all_special_tokens:
if (
isinstance(candidate, AddedToken)
and candidate.content == new_special_token_str
and candidate.lstrip == special_token.lstrip
and candidate.rstrip == special_token.rstrip
and candidate.normalized == special_token.normalized
and candidate.single_word == special_token.single_word
):
find = True
break
self.assertTrue(
find,
f"'{new_special_token_str}' doesn't appear in the list "
f"'{new_tokenizer.all_special_tokens}' as an AddedToken with the same parameters as "
f"'{special_token}' in the list {tokenizer.all_special_tokens}",
)
elif special_token not in special_tokens_map:
# The special token must appear identically in the list of the new tokenizer.
self.assertTrue(
special_token in new_tokenizer.all_special_tokens,
f"'{special_token}' should be in {new_tokenizer.all_special_tokens}",
)
else:
# The special token must appear in the list of the new tokenizer as an object of type string.
self.assertTrue(special_tokens_map[special_token] in new_tokenizer.all_special_tokens)
# Test we can use the new tokenizer with something not seen during training
words = [["this", "is"], ["hello", "🤗"]]
boxes = [[[1, 2, 3, 4], [5, 6, 7, 8]], [[1, 2, 3, 4], [5, 6, 7, 8]]]
inputs = new_tokenizer(words, boxes=boxes)
self.assertEqual(len(inputs["input_ids"]), 2)
decoded_input = new_tokenizer.decode(inputs["input_ids"][0], skip_special_tokens=True)
expected_result = "this is"
if tokenizer.backend_tokenizer.normalizer is not None:
expected_result = tokenizer.backend_tokenizer.normalizer.normalize_str(expected_result)
self.assertEqual(expected_result, decoded_input)
def test_prepare_for_model(self):
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
# prepare_for_model is only tested for the fast tokenizer here; skip the slow one
if tokenizer.__class__.__name__ == "LayoutLMv2Tokenizer":
continue
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_words_and_boxes()
prepared_input_dict = tokenizer.prepare_for_model(words, boxes=boxes, add_special_tokens=True)
input_dict = tokenizer(words, boxes=boxes, add_special_tokens=True)
self.assertEqual(input_dict, prepared_input_dict)
def test_batch_encode_dynamic_overflowing(self):
"""
When calling batch_encode with multiple sequences, it can return different number of
overflowing encoding for each sequence:
[
Sequence 1: [Encoding 1, Encoding 2],
Sequence 2: [Encoding 1],
Sequence 3: [Encoding 1, Encoding 2, ... Encoding N]
]
This needs to be padded so that it can represented as a tensor
"""
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
tokenizer = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs)
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name}, {tokenizer.__class__.__name__})"):
returned_tensor = "pt"
# Single example
words, boxes = self.get_words_and_boxes()
tokens = tokenizer(
words,
boxes=boxes,
max_length=6,
padding=True,
truncation=True,
return_tensors=returned_tensor,
return_overflowing_tokens=True,
)
for key in filter(lambda x: "overflow_to_sample_mapping" not in x, tokens.keys()):
if key != "bbox":
self.assertEqual(len(tokens[key].shape), 2)
else:
self.assertEqual(len(tokens[key].shape), 3)
# Batch of examples
# For these 2 examples, 3 training examples will be created
words, boxes = self.get_words_and_boxes_batch()
tokens = tokenizer.batch_encode_plus(
words,
boxes=boxes,
max_length=6,
padding=True,
truncation="only_first",
return_tensors=returned_tensor,
return_overflowing_tokens=True,
)
for key in filter(lambda x: "overflow_to_sample_mapping" not in x, tokens.keys()):
if key != "bbox":
self.assertEqual(len(tokens[key].shape), 2)
self.assertEqual(tokens[key].shape[-1], 6)
else:
self.assertEqual(len(tokens[key].shape), 3)
self.assertEqual(tokens[key].shape[-1], 4)
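The padding behaviour the docstring describes, ragged per-sequence overflow lists flattened and padded into one rectangular batch, can be sketched with plain lists. This is a hypothetical illustration, not the tokenizer's internal code:

```python
def pad_ragged(batches, pad_id=0):
    # Each input sequence may overflow into a different number of encodings;
    # flatten them and pad every encoding to the longest length so the
    # result can be converted into a single 2-D tensor.
    flat = [encoding for sequence in batches for encoding in sequence]
    width = max(len(encoding) for encoding in flat)
    return [encoding + [pad_id] * (width - len(encoding)) for encoding in flat]

# Two sequences overflow into 2 and 1 encodings; all are padded to length 3.
padded = pad_ragged([[[1, 2], [3]], [[4, 5, 6]]])
```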
@unittest.skip(reason="TO DO: overwrite this very extensive test.")
def test_alignment_methods(self):
pass
def get_clean_sequence(self, tokenizer, with_prefix_space=False, max_length=20, min_length=5):
toks = [(i, tokenizer.decode([i], clean_up_tokenization_spaces=False)) for i in range(len(tokenizer))]
toks = list(filter(lambda t: re.match(r"^[ a-zA-Z]+$", t[1]), toks))
toks = list(
filter(
lambda t: [t[0]]
== tokenizer.encode(t[1].split(" "), boxes=len(t[1]) * [[1, 1, 1, 1]], add_special_tokens=False),
toks,
)
)
if max_length is not None and len(toks) > max_length:
toks = toks[:max_length]
if min_length is not None and len(toks) < min_length and len(toks) > 0:
while len(toks) < min_length:
toks = toks + toks
# toks_str = [t[1] for t in toks]
toks_ids = [t[0] for t in toks]
# Ensure consistency
output_txt = tokenizer.decode(toks_ids, clean_up_tokenization_spaces=False)
if " " not in output_txt and len(toks_ids) > 1:
output_txt = (
tokenizer.decode([toks_ids[0]], clean_up_tokenization_spaces=False)
+ " "
+ tokenizer.decode(toks_ids[1:], clean_up_tokenization_spaces=False)
)
if with_prefix_space:
output_txt = " " + output_txt
words = output_txt.split(" ")
boxes = [[i, i, i, i] for i in range(len(words))]
output_ids = tokenizer.encode(words, boxes=boxes, add_special_tokens=False)
return words, boxes, output_ids
# @unittest.skip(reason="LayoutLMv2 tokenizer requires boxes besides sequences.")
def test_maximum_encoding_length_pair_input(self):
tokenizers = self.get_tokenizers(do_lower_case=False, model_max_length=100)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
# Build a sequence from our model's vocabulary
stride = 2
seq_0, boxes_0, ids = self.get_clean_sequence(tokenizer, max_length=20)
question_0 = " ".join(map(str, seq_0))
if len(ids) <= 2 + stride:
# seq_0 is a list of words, so repeat the list rather than concatenating a string
seq_0 = seq_0 * (2 + stride)
boxes_0 = boxes_0 * (2 + stride)
ids = None
seq0_tokens = tokenizer(seq_0, boxes=boxes_0, add_special_tokens=False)
self.assertGreater(len(seq0_tokens["input_ids"]), 2 + stride)
question_1 = "This is another sentence to be encoded."
seq_1 = ["what", "a", "weird", "test", "weirdly", "weird"]
boxes_1 = [[i, i, i, i] for i in range(len(seq_1))]
seq1_tokens = tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)
if abs(len(seq0_tokens["input_ids"]) - len(seq1_tokens["input_ids"])) <= 2:
seq1_tokens_input_ids = seq1_tokens["input_ids"] + seq1_tokens["input_ids"]
seq_1 = tokenizer.decode(seq1_tokens_input_ids, clean_up_tokenization_spaces=False)
seq_1 = seq_1.split(" ")
boxes_1 = [[i, i, i, i] for i in range(len(seq_1))]
seq1_tokens = tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)
self.assertGreater(len(seq1_tokens["input_ids"]), 2 + stride)
smallest = (
seq1_tokens["input_ids"]
if len(seq0_tokens["input_ids"]) > len(seq1_tokens["input_ids"])
else seq0_tokens["input_ids"]
)
# We are not using the special tokens - a bit too hard to test all the tokenizers with this
# TODO try this again later
sequence = tokenizer(
question_0, seq_1, boxes=boxes_1, add_special_tokens=False
) # , add_prefix_space=False)
# Test with max model input length
model_max_length = tokenizer.model_max_length
self.assertEqual(model_max_length, 100)
seq_2 = seq_0 * model_max_length
question_2 = " ".join(map(str, seq_2))
boxes_2 = boxes_0 * model_max_length
self.assertGreater(len(seq_2), model_max_length)
sequence1 = tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)
total_length1 = len(sequence1["input_ids"])
sequence2 = tokenizer(question_2, seq_1, boxes=boxes_1, add_special_tokens=False)
total_length2 = len(sequence2["input_ids"])
self.assertLess(total_length1, model_max_length, "Issue with the testing sequence, please update it.")
self.assertGreater(
total_length2, model_max_length, "Issue with the testing sequence, please update it."
)
# Simple
padding_strategies = (
[False, True, "longest"] if tokenizer.pad_token and tokenizer.pad_token_id >= 0 else [False]
)
for padding_state in padding_strategies:
with self.subTest(f"{tokenizer.__class__.__name__} Padding: {padding_state}"):
for truncation_state in [True, "longest_first", "only_first"]:
with self.subTest(f"{tokenizer.__class__.__name__} Truncation: {truncation_state}"):
output = tokenizer(
question_2,
seq_1,
boxes=boxes_1,
padding=padding_state,
truncation=truncation_state,
)
self.assertEqual(len(output["input_ids"]), model_max_length)
self.assertEqual(len(output["bbox"]), model_max_length)
output = tokenizer(
[question_2],
[seq_1],
boxes=[boxes_1],
padding=padding_state,
truncation=truncation_state,
)
self.assertEqual(len(output["input_ids"][0]), model_max_length)
self.assertEqual(len(output["bbox"][0]), model_max_length)
# Simple
output = tokenizer(
question_1, seq_2, boxes=boxes_2, padding=padding_state, truncation="only_second"
)
self.assertEqual(len(output["input_ids"]), model_max_length)
self.assertEqual(len(output["bbox"]), model_max_length)
output = tokenizer(
[question_1], [seq_2], boxes=[boxes_2], padding=padding_state, truncation="only_second"
)
self.assertEqual(len(output["input_ids"][0]), model_max_length)
self.assertEqual(len(output["bbox"][0]), model_max_length)
# Simple with no truncation
# Reset warnings
tokenizer.deprecation_warnings = {}
with self.assertLogs("transformers", level="WARNING") as cm:
output = tokenizer(
question_1, seq_2, boxes=boxes_2, padding=padding_state, truncation=False
)
self.assertNotEqual(len(output["input_ids"]), model_max_length)
self.assertNotEqual(len(output["bbox"]), model_max_length)
self.assertEqual(len(cm.records), 1)
self.assertTrue(
cm.records[0].message.startswith(
"Token indices sequence length is longer than the specified maximum sequence length"
" for this model"
)
)
tokenizer.deprecation_warnings = {}
with self.assertLogs("transformers", level="WARNING") as cm:
output = tokenizer(
[question_1], [seq_2], boxes=[boxes_2], padding=padding_state, truncation=False
)
self.assertNotEqual(len(output["input_ids"][0]), model_max_length)
self.assertNotEqual(len(output["bbox"][0]), model_max_length)
self.assertEqual(len(cm.records), 1)
self.assertTrue(
cm.records[0].message.startswith(
"Token indices sequence length is longer than the specified maximum sequence length"
" for this model"
)
)
# Check the ordering of input ids, overflowing tokens and bbox sequences under truncation
truncated_first_sequence = (
tokenizer(seq_0, boxes=boxes_0, add_special_tokens=False)["input_ids"][:-2]
+ tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)["input_ids"]
)
truncated_second_sequence = (
tokenizer(seq_0, boxes=boxes_0, add_special_tokens=False)["input_ids"]
+ tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)["input_ids"][:-2]
)
truncated_longest_sequence = (
truncated_first_sequence if len(seq0_tokens) > len(seq1_tokens) else truncated_second_sequence
)
overflow_first_sequence = (
tokenizer(seq_0, boxes=boxes_0, add_special_tokens=False)["input_ids"][-(2 + stride) :]
+ tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)["input_ids"]
)
overflow_second_sequence = (
tokenizer(seq_0, boxes=boxes_0, add_special_tokens=False)["input_ids"]
+ tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)["input_ids"][-(2 + stride) :]
)
overflow_longest_sequence = (
overflow_first_sequence if len(seq0_tokens) > len(seq1_tokens) else overflow_second_sequence
)
bbox_first = [[0, 0, 0, 0]] * (len(seq_0) - 2)
bbox_first_sequence = bbox_first + tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)["bbox"]
overflowing_token_bbox_first_sequence_slow = [[0, 0, 0, 0]] * (2 + stride)
overflowing_token_bbox_first_sequence_fast = [[0, 0, 0, 0]] * (2 + stride) + tokenizer(
seq_1, boxes=boxes_1, add_special_tokens=False
)["bbox"]
bbox_second = [[0, 0, 0, 0]] * len(seq_0)
bbox_second_sequence = (
bbox_second + tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)["bbox"][:-2]
)
overflowing_token_bbox_second_sequence_slow = tokenizer(
seq_1, boxes=boxes_1, add_special_tokens=False
)["bbox"][-(2 + stride) :]
overflowing_token_bbox_second_sequence_fast = [[0, 0, 0, 0]] * len(seq_0) + tokenizer(
seq_1, boxes=boxes_1, add_special_tokens=False
)["bbox"][-(2 + stride) :]
bbox_longest_sequence = (
bbox_first_sequence if len(seq0_tokens) > len(seq1_tokens) else bbox_second_sequence
)
overflowing_token_bbox_longest_sequence_fast = (
overflowing_token_bbox_first_sequence_fast
if len(seq0_tokens) > len(seq1_tokens)
else overflowing_token_bbox_second_sequence_fast
)
# Overflowing tokens are handled quite differently in slow and fast tokenizers
if isinstance(tokenizer, LayoutLMv2Tokenizer):
information = tokenizer(
question_0,
seq_1,
boxes=boxes_1,
max_length=len(sequence["input_ids"]) - 2,
add_special_tokens=False,
stride=stride,
truncation="longest_first",
return_overflowing_tokens=True,
# add_prefix_space=False,
)
truncated_sequence = information["input_ids"][0]
overflowing_tokens = information["input_ids"][1]
bbox = information["bbox"][0]
overflowing_bbox = information["bbox"][1]
self.assertEqual(len(information["input_ids"]), 2)
self.assertEqual(len(truncated_sequence), len(sequence["input_ids"]) - 2)
self.assertEqual(truncated_sequence, truncated_longest_sequence)
self.assertEqual(len(overflowing_tokens), 2 + stride + len(smallest))
self.assertEqual(overflowing_tokens, overflow_longest_sequence)
self.assertEqual(bbox, bbox_longest_sequence)
self.assertEqual(len(overflowing_bbox), 2 + stride + len(smallest))
self.assertEqual(overflowing_bbox, overflowing_token_bbox_longest_sequence_fast)
else:
# Python (slow) tokenizers cannot return overflowing tokens with 'longest_first' truncation
with self.assertRaises(ValueError) as context:
information = tokenizer(
question_0,
seq_1,
boxes=boxes_1,
max_length=len(sequence["input_ids"]) - 2,
add_special_tokens=False,
stride=stride,
truncation="longest_first",
return_overflowing_tokens=True,
# add_prefix_space=False,
)
self.assertTrue(
context.exception.args[0].startswith(
"Not possible to return overflowing tokens for pair of sequences with the "
"`longest_first`. Please select another truncation strategy than `longest_first`, "
"for instance `only_second` or `only_first`."
)
)
# Overflowing tokens are handled quite differently in slow and fast tokenizers
if isinstance(tokenizer, LayoutLMv2Tokenizer):
information = tokenizer(
question_0,
seq_1,
boxes=boxes_1,
max_length=len(sequence["input_ids"]) - 2,
add_special_tokens=False,
stride=stride,
truncation=True,
return_overflowing_tokens=True,
# add_prefix_space=False,
)
truncated_sequence = information["input_ids"][0]
overflowing_tokens = information["input_ids"][1]
bbox = information["bbox"][0]
overflowing_bbox = information["bbox"][1]
self.assertEqual(len(information["input_ids"]), 2)
self.assertEqual(len(truncated_sequence), len(sequence["input_ids"]) - 2)
self.assertEqual(truncated_sequence, truncated_longest_sequence)
self.assertEqual(len(overflowing_tokens), 2 + stride + len(smallest))
self.assertEqual(overflowing_tokens, overflow_longest_sequence)
self.assertEqual(bbox, bbox_longest_sequence)
self.assertEqual(overflowing_bbox, overflowing_token_bbox_longest_sequence_fast)
else:
# Python (slow) tokenizers cannot return overflowing tokens with 'longest_first' truncation
with self.assertRaises(ValueError) as context:
information = tokenizer(
question_0,
seq_1,
boxes=boxes_1,
max_length=len(sequence["input_ids"]) - 2,
add_special_tokens=False,
stride=stride,
truncation=True,
return_overflowing_tokens=True,
# add_prefix_space=False,
)
self.assertTrue(
context.exception.args[0].startswith(
"Not possible to return overflowing tokens for pair of sequences with the "
"`longest_first`. Please select another truncation strategy than `longest_first`, "
"for instance `only_second` or `only_first`."
)
)
information_first_truncated = tokenizer(
question_0,
seq_1,
boxes=boxes_1,
max_length=len(sequence["input_ids"]) - 2,
add_special_tokens=False,
stride=stride,
truncation="only_first",
return_overflowing_tokens=True,
# add_prefix_space=False,
)
# Overflowing tokens are handled quite differently in slow and fast tokenizers
if isinstance(tokenizer, LayoutLMv2Tokenizer):
truncated_sequence = information_first_truncated["input_ids"][0]
overflowing_tokens = information_first_truncated["input_ids"][1]
bbox = information_first_truncated["bbox"][0]
overflowing_bbox = information_first_truncated["bbox"][1]
self.assertEqual(len(information_first_truncated["input_ids"]), 2)
self.assertEqual(len(truncated_sequence), len(sequence["input_ids"]) - 2)
self.assertEqual(truncated_sequence, truncated_first_sequence)
self.assertEqual(len(overflowing_tokens), 2 + stride + len(seq1_tokens["input_ids"]))
self.assertEqual(overflowing_tokens, overflow_first_sequence)
self.assertEqual(bbox, bbox_first_sequence)
self.assertEqual(overflowing_bbox, overflowing_token_bbox_first_sequence_fast)
else:
truncated_sequence = information_first_truncated["input_ids"]
overflowing_tokens = information_first_truncated["overflowing_tokens"]
overflowing_bbox = information_first_truncated["overflowing_token_boxes"]
bbox = information_first_truncated["bbox"]
self.assertEqual(len(truncated_sequence), len(sequence["input_ids"]) - 2)
self.assertEqual(truncated_sequence, truncated_first_sequence)
self.assertEqual(len(overflowing_tokens), 2 + stride)
self.assertEqual(overflowing_tokens, seq0_tokens["input_ids"][-(2 + stride) :])
self.assertEqual(bbox, bbox_first_sequence)
self.assertEqual(overflowing_bbox, overflowing_token_bbox_first_sequence_slow)
information_second_truncated = tokenizer(
question_0,
seq_1,
boxes=boxes_1,
max_length=len(sequence["input_ids"]) - 2,
add_special_tokens=False,
stride=stride,
truncation="only_second",
return_overflowing_tokens=True,
# add_prefix_space=False,
)
# Overflowing tokens are handled quite differently in slow and fast tokenizers
if isinstance(tokenizer, LayoutLMv2Tokenizer):
truncated_sequence = information_second_truncated["input_ids"][0]
overflowing_tokens = information_second_truncated["input_ids"][1]
bbox = information_second_truncated["bbox"][0]
overflowing_bbox = information_second_truncated["bbox"][1]
self.assertEqual(len(information_second_truncated["input_ids"]), 2)
self.assertEqual(len(truncated_sequence), len(sequence["input_ids"]) - 2)
self.assertEqual(truncated_sequence, truncated_second_sequence)
self.assertEqual(len(overflowing_tokens), 2 + stride + len(seq0_tokens["input_ids"]))
self.assertEqual(overflowing_tokens, overflow_second_sequence)
self.assertEqual(bbox, bbox_second_sequence)
self.assertEqual(overflowing_bbox, overflowing_token_bbox_second_sequence_fast)
else:
truncated_sequence = information_second_truncated["input_ids"]
overflowing_tokens = information_second_truncated["overflowing_tokens"]
bbox = information_second_truncated["bbox"]
overflowing_bbox = information_second_truncated["overflowing_token_boxes"]
self.assertEqual(len(truncated_sequence), len(sequence["input_ids"]) - 2)
self.assertEqual(truncated_sequence, truncated_second_sequence)
self.assertEqual(len(overflowing_tokens), 2 + stride)
self.assertEqual(overflowing_tokens, seq1_tokens["input_ids"][-(2 + stride) :])
self.assertEqual(bbox, bbox_second_sequence)
self.assertEqual(overflowing_bbox, overflowing_token_bbox_second_sequence_slow)
# @unittest.skip(reason="LayoutLMv2 tokenizer requires boxes besides sequences.")
def test_maximum_encoding_length_single_input(self):
tokenizers = self.get_tokenizers(do_lower_case=False, model_max_length=100)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
seq_0, boxes_0, ids = self.get_clean_sequence(tokenizer, max_length=20)
sequence = tokenizer(seq_0, boxes=boxes_0, add_special_tokens=False)
total_length = len(sequence["input_ids"])
self.assertGreater(
total_length, 4, "Issue with the testing sequence, please update it, it's too short"
)
# Test with max model input length
model_max_length = tokenizer.model_max_length
self.assertEqual(model_max_length, 100)
seq_1 = seq_0 * model_max_length
boxes_1 = boxes_0 * model_max_length
sequence1 = tokenizer(seq_1, boxes=boxes_1, add_special_tokens=False)
total_length1 = len(sequence1["input_ids"])
self.assertGreater(
total_length1,
model_max_length,
"Issue with the testing sequence, please update it, it's too short",
)
# Simple
padding_strategies = (
[False, True, "longest"] if tokenizer.pad_token and tokenizer.pad_token_id >= 0 else [False]
)
for padding_state in padding_strategies:
with self.subTest(f"Padding: {padding_state}"):
for truncation_state in [True, "longest_first", "only_first"]:
with self.subTest(f"Truncation: {truncation_state}"):
output = tokenizer(
seq_1,
boxes=boxes_1,
padding=padding_state,
truncation=truncation_state,
)
self.assertEqual(len(output["input_ids"]), model_max_length)
self.assertEqual(len(output["bbox"]), model_max_length)
output = tokenizer(
[seq_1],
boxes=[boxes_1],
padding=padding_state,
truncation=truncation_state,
)
self.assertEqual(len(output["input_ids"][0]), model_max_length)
self.assertEqual(len(output["bbox"][0]), model_max_length)
# Simple with no truncation
# Reset warnings
tokenizer.deprecation_warnings = {}
with self.assertLogs("transformers", level="WARNING") as cm:
output = tokenizer(seq_1, boxes=boxes_1, padding=padding_state, truncation=False)
self.assertNotEqual(len(output["input_ids"]), model_max_length)
self.assertNotEqual(len(output["bbox"]), model_max_length)
self.assertEqual(len(cm.records), 1)
self.assertTrue(
cm.records[0].message.startswith(
"Token indices sequence length is longer than the specified maximum sequence length"
" for this model"
)
)
tokenizer.deprecation_warnings = {}
with self.assertLogs("transformers", level="WARNING") as cm:
output = tokenizer([seq_1], boxes=[boxes_1], padding=padding_state, truncation=False)
self.assertNotEqual(len(output["input_ids"][0]), model_max_length)
self.assertNotEqual(len(output["bbox"][0]), model_max_length)
self.assertEqual(len(cm.records), 1)
self.assertTrue(
cm.records[0].message.startswith(
"Token indices sequence length is longer than the specified maximum sequence length"
" for this model"
)
)
# Check the ordering of input ids, overflowing tokens and bbox sequences under truncation
stride = 2
information = tokenizer(
seq_0,
boxes=boxes_0,
max_length=total_length - 2,
add_special_tokens=False,
stride=stride,
truncation=True,
return_overflowing_tokens=True,
# add_prefix_space=False,
)
# Overflowing tokens are handled quite differently in slow and fast tokenizers
if isinstance(tokenizer, LayoutLMv2Tokenizer):
truncated_sequence = information["input_ids"][0]
overflowing_tokens = information["input_ids"][1]
bbox = information["bbox"][0]
overflowing_bbox = information["bbox"][1]
self.assertEqual(len(information["input_ids"]), 2)
self.assertEqual(len(truncated_sequence), total_length - 2)
self.assertEqual(truncated_sequence, sequence["input_ids"][:-2])
self.assertEqual(len(overflowing_tokens), 2 + stride)
self.assertEqual(overflowing_tokens, sequence["input_ids"][-(2 + stride) :])
self.assertEqual(bbox, sequence["bbox"][:-2])
self.assertEqual(overflowing_bbox, sequence["bbox"][-(2 + stride) :])
else:
truncated_sequence = information["input_ids"]
overflowing_tokens = information["overflowing_tokens"]
bbox = information["bbox"]
overflowing_bbox = information["overflowing_token_boxes"]
self.assertEqual(len(truncated_sequence), total_length - 2)
self.assertEqual(truncated_sequence, sequence["input_ids"][:-2])
self.assertEqual(len(overflowing_tokens), 2 + stride)
self.assertEqual(overflowing_tokens, sequence["input_ids"][-(2 + stride) :])
self.assertEqual(bbox, sequence["bbox"][:-2])
self.assertEqual(overflowing_bbox, sequence["bbox"][-(2 + stride) :])
@unittest.skip(reason="LayoutLMv2 tokenizer requires boxes besides sequences.")
def test_pretokenized_inputs(self):
pass
@unittest.skip(reason="LayoutLMv2 tokenizer always expects pretokenized inputs.")
def test_compare_pretokenized_inputs(self):
pass
@unittest.skip(reason="LayoutLMv2 fast tokenizer does not support prepare_for_model")
def test_compare_prepare_for_model(self):
pass
@slow
def test_only_label_first_subword(self):
words = ["hello", "niels"]
boxes = [[1000, 1000, 1000, 1000] for _ in range(len(words))]
word_labels = [0, 1]
# test slow tokenizer
tokenizer_p = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
encoding = tokenizer_p(words, boxes=boxes, word_labels=word_labels)
self.assertListEqual(encoding.labels, [-100, 0, 1, -100, -100])
tokenizer_p = LayoutLMv2Tokenizer.from_pretrained(
"microsoft/layoutlmv2-base-uncased", only_label_first_subword=False
)
encoding = tokenizer_p(words, boxes=boxes, word_labels=word_labels)
self.assertListEqual(encoding.labels, [-100, 0, 1, 1, -100])
# test fast tokenizer
tokenizer_r = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
encoding = tokenizer_r(words, boxes=boxes, word_labels=word_labels)
self.assertListEqual(encoding.labels, [-100, 0, 1, -100, -100])
tokenizer_r = LayoutLMv2Tokenizer.from_pretrained(
"microsoft/layoutlmv2-base-uncased", only_label_first_subword=False
)
encoding = tokenizer_r(words, boxes=boxes, word_labels=word_labels)
self.assertListEqual(encoding.labels, [-100, 0, 1, 1, -100])
@slow
def test_layoutlmv2_integration_test(self):
tokenizer_p = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
tokenizer_r = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
# There are 3 cases:
# CASE 1: document image classification (training + inference), document image token classification (inference),
# in which case only words and normalized bounding boxes are provided to the tokenizer
# CASE 2: document image token classification (training),
# in which case one also provides word labels to the tokenizer
# CASE 3: document image visual question answering (inference),
# in which case one also provides a question to the tokenizer
# We need to test all 3 cases both on batched and non-batched inputs.
# CASE 1: not batched
words, boxes = self.get_words_and_boxes()
expected_results = {'input_ids': [101, 1037, 6881, 2135, 3231, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'bbox': [[0, 0, 0, 0], [423, 237, 440, 251], [427, 272, 441, 287], [427, 272, 441, 287], [419, 115, 437, 129], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]} # fmt: skip
encoding_p = tokenizer_p(words, boxes=boxes, padding="max_length", max_length=20)
encoding_r = tokenizer_r(words, boxes=boxes, padding="max_length", max_length=20)
self.assertDictEqual(dict(encoding_p), expected_results)
self.assertDictEqual(dict(encoding_r), expected_results)
# CASE 1: batched
words, boxes = self.get_words_and_boxes_batch()
expected_results = {'input_ids': [[101, 1037, 6881, 2135, 3231, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 7592, 2026, 2171, 2003, 3960, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'bbox': [[[0, 0, 0, 0], [423, 237, 440, 251], [427, 272, 441, 287], [427, 272, 441, 287], [419, 115, 437, 129], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [961, 885, 992, 912], [256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57], [34, 42, 66, 69], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]} # fmt: skip
encoding_p = tokenizer_p(words, boxes=boxes, padding="max_length", max_length=20)
encoding_r = tokenizer_r(words, boxes=boxes, padding="max_length", max_length=20)
self.assertDictEqual(dict(encoding_p), expected_results)
self.assertDictEqual(dict(encoding_r), expected_results)
# CASE 2: not batched
words, boxes = self.get_words_and_boxes()
word_labels = [1, 2, 3]
expected_results = {'input_ids': [101, 1037, 6881, 2135, 3231, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'bbox': [[0, 0, 0, 0], [423, 237, 440, 251], [427, 272, 441, 287], [427, 272, 441, 287], [419, 115, 437, 129], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 1, 2, -100, 3, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100], 'attention_mask': [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]} # fmt: skip
encoding_p = tokenizer_p(words, boxes=boxes, word_labels=word_labels, padding="max_length", max_length=20)
encoding_r = tokenizer_r(words, boxes=boxes, word_labels=word_labels, padding="max_length", max_length=20)
self.assertDictEqual(dict(encoding_p), expected_results)
self.assertDictEqual(dict(encoding_r), expected_results)
# CASE 2: batched
words, boxes = self.get_words_and_boxes_batch()
word_labels = [[1, 2, 3], [2, 46, 17, 22, 3]]
expected_results = {'input_ids': [[101, 1037, 6881, 2135, 3231, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 7592, 2026, 2171, 2003, 3960, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'bbox': [[[0, 0, 0, 0], [423, 237, 440, 251], [427, 272, 441, 287], [427, 272, 441, 287], [419, 115, 437, 129], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [961, 885, 992, 912], [256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57], [34, 42, 66, 69], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'labels': [[-100, 1, 2, -100, 3, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100], [-100, 2, 46, 17, 22, 3, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]} # fmt: skip
encoding_p = tokenizer_p(words, boxes=boxes, word_labels=word_labels, padding="max_length", max_length=20)
encoding_r = tokenizer_r(words, boxes=boxes, word_labels=word_labels, padding="max_length", max_length=20)
self.assertDictEqual(dict(encoding_p), expected_results)
self.assertDictEqual(dict(encoding_r), expected_results)
# CASE 3: not batched
question, words, boxes = self.get_question_words_and_boxes()
expected_results = {'input_ids': [101, 2054, 1005, 1055, 2010, 2171, 1029, 102, 1037, 6881, 2135, 3231, 102, 0, 0, 0, 0, 0, 0, 0], 'bbox': [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1000, 1000, 1000, 1000], [423, 237, 440, 251], [427, 272, 441, 287], [427, 272, 441, 287], [419, 115, 437, 129], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]} # fmt: skip
encoding_p = tokenizer_p(question, words, boxes, padding="max_length", max_length=20)
encoding_r = tokenizer_r(question, words, boxes, padding="max_length", max_length=20)
self.assertDictEqual(dict(encoding_p), expected_results)
self.assertDictEqual(dict(encoding_r), expected_results)
# CASE 3: batched
questions, words, boxes = self.get_question_words_and_boxes_batch()
expected_results = {'input_ids': [[101, 2054, 1005, 1055, 2010, 2171, 1029, 102, 1037, 6881, 2135, 3231, 102, 0, 0, 0, 0, 0, 0, 0], [101, 2129, 2003, 2002, 2170, 1029, 102, 2054, 1037, 21110, 2546, 3806, 2102, 2078, 102, 0, 0, 0, 0, 0]], 'bbox': [[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1000, 1000, 1000, 1000], [423, 237, 440, 251], [427, 272, 441, 287], [427, 272, 441, 287], [419, 115, 437, 129], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1000, 1000, 1000, 1000], [256, 38, 330, 58], [256, 38, 330, 58], [336, 42, 353, 57], [336, 42, 353, 57], [34, 42, 66, 69], [34, 42, 66, 69], [34, 42, 66, 69], [1000, 1000, 1000, 1000], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]]} # fmt: skip
encoding_p = tokenizer_p(questions, words, boxes, padding="max_length", max_length=20)
encoding_r = tokenizer_r(questions, words, boxes, padding="max_length", max_length=20)
self.assertDictEqual(dict(encoding_p), expected_results)
self.assertDictEqual(dict(encoding_r), expected_results)
@unittest.skip(reason="Doesn't support returning Numpy arrays")
def test_np_encode_plus_sent_to_model(self):
pass
@unittest.skip(reason="Chat is not supported")
def test_chat_template(self):
pass
@unittest.skip("Chat is not supported")
def test_chat_template_return_assistant_tokens_mask(self):
pass
@unittest.skip("Chat is not supported")
def test_chat_template_return_assistant_tokens_mask_truncated(self):
pass
def test_empty_input_string(self):
tokenizer_return_type = []
output_tensor_type = []
if is_torch_available():
import numpy as np
import torch
tokenizer_return_type.append("pt")
output_tensor_type.append(torch.int64)
tokenizer_return_type.append("np")
output_tensor_type.append(np.int64)
if is_mlx_available():
import mlx.core as mx
tokenizer_return_type.append("mlx")
output_tensor_type.append(mx.int32)
if len(tokenizer_return_type) == 0:
self.skipTest(reason="No expected framework from PT or MLX found")
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
words, boxes = self.get_empty_words_and_boxes()
for return_type, target_type in zip(tokenizer_return_type, output_tensor_type):
output = tokenizer(words, boxes=boxes, return_tensors=return_type)
self.assertEqual(output.input_ids.dtype, target_type)
question, words, boxes = self.get_empty_question_words_and_boxes()
for return_type, target_type in zip(tokenizer_return_type, output_tensor_type):
output = tokenizer(question, words, boxes=boxes, return_tensors=return_type)
self.assertEqual(output.input_ids.dtype, target_type)
words, boxes = self.get_empty_words_and_boxes_batch()
for return_type, target_type in zip(tokenizer_return_type, output_tensor_type):
output = tokenizer(words, boxes=boxes, padding=True, return_tensors=return_type)
self.assertEqual(output.input_ids.dtype, target_type)
question, words, boxes = self.get_empty_question_words_and_boxes_batch()
for return_type, target_type in zip(tokenizer_return_type, output_tensor_type):
output = tokenizer(question, words, boxes=boxes, padding=True, return_tensors=return_type)
self.assertEqual(output.input_ids.dtype, target_type)
def test_integration(self):
"""Integration test with hardcoded expectations for LayoutLMv2."""
input_words = ["a", "weirdly", "test", "hello", "my", "name", "is", "bob"]
input_boxes = [
[423, 237, 440, 251],
[427, 272, 441, 287],
[419, 115, 437, 129],
[961, 885, 992, 912],
[256, 38, 330, 58],
[256, 38, 330, 58],
[336, 42, 353, 57],
[34, 42, 66, 69],
]
expected_tokens = [
"a",
"weird",
"##ly",
"test",
"hello",
"my",
"name",
"is",
"bob",
]
expected_ids = [1037, 6881, 2135, 3231, 7592, 2026, 2171, 2003, 3960]
expected_tokens_from_ids = ['a', 'weird', '##ly', 'test', 'hello', 'my', 'name', 'is', 'bob'] # fmt: skip
expected_decoded_text = "a weirdly test hello my name is bob"
tokenizer = self.tokenizer_class.from_pretrained("microsoft/layoutlmv2-base-uncased")
# 1) tokens (flattened per word)
tokens = []
for word in input_words:
tokens.extend(tokenizer.tokenize(word))
self.assertListEqual(tokens, expected_tokens)
# 2) ids from encode on pretokenized words with boxes
ids = tokenizer.encode(input_words, boxes=input_boxes, add_special_tokens=False)
self.assertListEqual(ids, expected_ids)
# 3) tokens from ids
roundtrip_tokens = tokenizer.convert_ids_to_tokens(ids)
self.assertListEqual(roundtrip_tokens, expected_tokens_from_ids)
# 4) decoded text
decoded_text = tokenizer.decode(ids, clean_up_tokenization_spaces=False)
self.assertEqual(decoded_text, expected_decoded_text)
def test_integration_from_extractor(self):
"""Integration test using pretokenized words and boxes as if coming from an extractor."""
input_words = ["a", "weirdly", "test", "hello", "my", "name", "is", "bob"]
input_boxes = [
[423, 237, 440, 251],
[427, 272, 441, 287],
[419, 115, 437, 129],
[961, 885, 992, 912],
[256, 38, 330, 58],
[256, 38, 330, 58],
[336, 42, 353, 57],
[34, 42, 66, 69],
]
expected_tokens = [
"a",
"weird",
"##ly",
"test",
"hello",
"my",
"name",
"is",
"bob",
]
expected_ids = [1037, 6881, 2135, 3231, 7592, 2026, 2171, 2003, 3960]
expected_tokens_from_ids = ['a', 'weird', '##ly', 'test', 'hello', 'my', 'name', 'is', 'bob'] # fmt: skip
expected_decoded_text = "a weirdly test hello my name is bob"
tokenizer = self.tokenizer_class.from_pretrained("microsoft/layoutlmv2-base-uncased")
# As if produced by an image/box extractor upstream
tokens = []
for word in input_words:
tokens.extend(tokenizer.tokenize(word))
self.assertListEqual(tokens, expected_tokens)
ids = tokenizer.encode(input_words, boxes=input_boxes, add_special_tokens=False)
self.assertListEqual(ids, expected_ids)
roundtrip_tokens = tokenizer.convert_ids_to_tokens(ids)
self.assertListEqual(roundtrip_tokens, expected_tokens_from_ids)
decoded_text = tokenizer.decode(ids, clean_up_tokenization_spaces=False)
self.assertEqual(decoded_text, expected_decoded_text)
| LayoutLMv2TokenizationTest |
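The `only_label_first_subword` expectations above (`[-100, 0, 1, -100, -100]` vs `[-100, 0, 1, 1, -100]`) follow a simple alignment rule. This is a minimal standalone sketch of that rule, not the tokenizer's actual implementation; the helper name and per-word subtoken lists are illustrative.

```python
def align_labels(word_subtokens, word_labels, only_first=True):
    # word_subtokens: list of per-word subtoken lists, e.g. [["hello"], ["ni", "##els"]]
    labels = [-100]  # [CLS] gets the ignore index
    for subtokens, label in zip(word_subtokens, word_labels):
        for i, _ in enumerate(subtokens):
            # label every subtoken, or only the first one per word
            labels.append(label if (i == 0 or not only_first) else -100)
    labels.append(-100)  # [SEP] gets the ignore index
    return labels

print(align_labels([["hello"], ["ni", "##els"]], [0, 1]))
# [-100, 0, 1, -100, -100]
print(align_labels([["hello"], ["ni", "##els"]], [0, 1], only_first=False))
# [-100, 0, 1, 1, -100]
```

With `only_first=True` the continuation subtoken of "niels" maps to -100 so the loss ignores it, matching the encodings asserted in `test_only_label_first_subword`.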
python | numpy__numpy | numpy/linalg/tests/test_linalg.py | {
"start": 14939,
"end": 17834
} | class ____(SolveCases):
@pytest.mark.parametrize('dtype', [single, double, csingle, cdouble])
def test_types(self, dtype):
x = np.array([[1, 0.5], [0.5, 1]], dtype=dtype)
assert_equal(linalg.solve(x, x).dtype, dtype)
def test_1_d(self):
class ArraySubclass(np.ndarray):
pass
a = np.arange(8).reshape(2, 2, 2)
b = np.arange(2).view(ArraySubclass)
result = linalg.solve(a, b)
assert result.shape == (2, 2)
# If b is anything other than 1-D it should be treated as a stack of
# matrices
b = np.arange(4).reshape(2, 2).view(ArraySubclass)
result = linalg.solve(a, b)
assert result.shape == (2, 2, 2)
b = np.arange(2).reshape(1, 2).view(ArraySubclass)
assert_raises(ValueError, linalg.solve, a, b)
def test_0_size(self):
class ArraySubclass(np.ndarray):
pass
# Test system of 0x0 matrices
a = np.arange(8).reshape(2, 2, 2)
b = np.arange(6).reshape(1, 2, 3).view(ArraySubclass)
expected = linalg.solve(a, b)[:, 0:0, :]
result = linalg.solve(a[:, 0:0, 0:0], b[:, 0:0, :])
assert_array_equal(result, expected)
assert_(isinstance(result, ArraySubclass))
# Test errors for non-square and only b's dimension being 0
assert_raises(linalg.LinAlgError, linalg.solve, a[:, 0:0, 0:1], b)
assert_raises(ValueError, linalg.solve, a, b[:, 0:0, :])
# Test broadcasting error
b = np.arange(6).reshape(1, 3, 2) # broadcasting error
assert_raises(ValueError, linalg.solve, a, b)
assert_raises(ValueError, linalg.solve, a[0:0], b[0:0])
# Test zero "single equations" with 0x0 matrices.
b = np.arange(2).view(ArraySubclass)
expected = linalg.solve(a, b)[:, 0:0]
result = linalg.solve(a[:, 0:0, 0:0], b[0:0])
assert_array_equal(result, expected)
assert_(isinstance(result, ArraySubclass))
b = np.arange(3).reshape(1, 3)
assert_raises(ValueError, linalg.solve, a, b)
assert_raises(ValueError, linalg.solve, a[0:0], b[0:0])
assert_raises(ValueError, linalg.solve, a[:, 0:0, 0:0], b)
def test_0_size_k(self):
# test zero multiple equation (K=0) case.
class ArraySubclass(np.ndarray):
pass
a = np.arange(4).reshape(1, 2, 2)
b = np.arange(6).reshape(3, 2, 1).view(ArraySubclass)
expected = linalg.solve(a, b)[:, :, 0:0]
result = linalg.solve(a, b[:, :, 0:0])
assert_array_equal(result, expected)
assert_(isinstance(result, ArraySubclass))
# test both zero.
expected = linalg.solve(a, b)[:, 0:0, 0:0]
result = linalg.solve(a[:, 0:0, 0:0], b[:, 0:0, 0:0])
assert_array_equal(result, expected)
assert_(isinstance(result, ArraySubclass))
| TestSolve |
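The `TestSolve` cases above exercise `linalg.solve`'s shape conventions: a 1-D `b` is one right-hand side, while higher-dimensional `a` and `b` are broadcast as stacks of systems. A small sketch of the stacked convention (values here are arbitrary illustrative matrices):

```python
import numpy as np

# one 2x2 system: a @ x = b with a 1-D right-hand side
a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(a, b)
print(np.allclose(a @ x, b))  # True

# a stack of systems: leading batch dimension on both operands
a_stack = np.stack([a, a + np.eye(2)])                    # shape (2, 2, 2)
b_stack = np.stack([[[9.0], [8.0]], [[1.0], [0.0]]])      # shape (2, 2, 1)
x_stack = np.linalg.solve(a_stack, b_stack)
print(x_stack.shape)  # (2, 2, 1)
```

Each 2x2 matrix in the stack is solved against its own right-hand-side column, which is why the zero-size tests can slice `a[:, 0:0, 0:0]` and still get a well-defined (empty) result shape.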
python | ansible__ansible | lib/ansible/modules/user.py | {
"start": 111854,
"end": 116556
} | class ____(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
if self.uid_min is not None:
cmd.append('-K')
cmd.append('UID_MIN=' + str(self.uid_min))
if self.uid_max is not None:
cmd.append('-K')
cmd.append('UID_MAX=' + str(self.uid_max))
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
| BusyBox |
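`create_user()` above builds its `adduser` invocation by appending optional flags only when the corresponding attribute is set. A minimal sketch of that flag-accumulation pattern, with a hypothetical helper covering a few of the flags; it does not shell out anywhere:

```python
def build_adduser_cmd(name, uid=None, shell=None, system=False):
    # BusyBox adduser: -D disables password prompting, like the real module
    cmd = ["adduser", "-D"]
    if uid is not None:
        cmd += ["-u", str(uid)]
    if shell is not None:
        cmd += ["-s", shell]
    if system:
        cmd.append("-S")
    cmd.append(name)  # username always comes last
    return cmd

print(build_adduser_cmd("bob", uid=1001, system=True))
# ['adduser', '-D', '-u', '1001', '-S', 'bob']
```

Keeping the command as a list of arguments (rather than a shell string) is what lets `execute_command` run it without shell quoting concerns.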
python | apache__airflow | providers/google/tests/unit/google/firebase/operators/test_firestore.py | {
"start": 1181,
"end": 1812
} | class ____:
@mock.patch("airflow.providers.google.firebase.operators.firestore.CloudFirestoreHook")
def test_execute(self, mock_firestore_hook):
op = CloudFirestoreExportDatabaseOperator(
task_id="test-task",
body=EXPORT_DOCUMENT_BODY,
gcp_conn_id="google_cloud_default",
project_id=TEST_PROJECT_ID,
)
op.execute(mock.MagicMock())
mock_firestore_hook.return_value.export_documents.assert_called_once_with(
body=EXPORT_DOCUMENT_BODY, database_id="(default)", project_id=TEST_PROJECT_ID
)
| TestCloudFirestoreExportDatabaseOperator |
python | qdrant__qdrant-client | qdrant_client/http/models/models.py | {
"start": 28891,
"end": 29594
} | class ____(BaseModel, extra="forbid"):
"""
This data structure is used in API interface and applied across multiple shards
"""
keys: List[str] = Field(..., description="List of payload keys to remove from payload")
points: Optional[List["ExtendedPointId"]] = Field(
default=None, description="Deletes values from each point in this list"
)
filter: Optional["Filter"] = Field(
default=None, description="Deletes values from points that satisfy this filter condition"
)
shard_key: Optional["ShardKeySelector"] = Field(
default=None, description="This data structure is used in API interface and applied across multiple shards"
)
| DeletePayload |
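The `DeletePayload` model above is a request schema: required `keys` plus optional point/filter selectors. A simplified stand-in using stdlib dataclasses (not the real pydantic model, and without its `extra="forbid"` validation) shows the intended shape of such a request:

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class DeletePayloadSketch:
    keys: List[str]                       # payload keys to remove (required)
    points: Optional[List[int]] = None    # delete from these point ids, or...
    filter: Optional[dict] = None         # ...from points matching a filter
    shard_key: Optional[str] = None

req = DeletePayloadSketch(keys=["color", "price"], points=[1, 2, 3])
print(asdict(req))
# {'keys': ['color', 'price'], 'points': [1, 2, 3], 'filter': None, 'shard_key': None}
```

In the real model the two selectors are alternatives: a request typically sets either `points` or `filter`, and the payload keys are removed from every point the selector matches.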
python | ray-project__ray | python/ray/experimental/tqdm_ray.py | {
"start": 8663,
"end": 13609
} | class ____:
"""Central tqdm manager run on the driver.
This class holds a collection of BarGroups and updates their `pos_offset` as
needed to ensure individual progress bars do not collide in position, kind of
like a virtual memory manager.
"""
def __init__(self):
import ray._private.services as services
self.ip = services.get_node_ip_address()
self.pid = os.getpid()
self.bar_groups = {}
self.in_hidden_state = False
self.num_hides = 0
self.lock = threading.RLock()
# Avoid colorizing Jupyter output, since the tqdm bar is rendered in
# ipywidgets instead of in the console.
self.should_colorize = not ray.widgets.util.in_notebook()
def process_state_update(self, state: ProgressBarState) -> None:
"""Apply the remote progress bar state update.
This creates a new bar locally if it doesn't already exist. When a bar is
created or destroyed, we also recalculate and update the `pos_offset` of each
BarGroup on the screen.
"""
with self.lock:
self._process_state_update_locked(state)
def _process_state_update_locked(self, state: ProgressBarState) -> None:
if not real_tqdm:
if log_once("no_tqdm"):
logger.warning("tqdm is not installed. Progress bars will be disabled.")
return
if state["ip"] == self.ip:
if state["pid"] == self.pid:
prefix = ""
else:
prefix = "(pid={}) ".format(state.get("pid"))
if self.should_colorize:
prefix = "{}{}{}{}".format(
colorama.Style.DIM,
colorama.Fore.CYAN,
prefix,
colorama.Style.RESET_ALL,
)
else:
prefix = "(pid={}, ip={}) ".format(
state.get("pid"),
state.get("ip"),
)
if self.should_colorize:
prefix = "{}{}{}{}".format(
colorama.Style.DIM,
colorama.Fore.CYAN,
prefix,
colorama.Style.RESET_ALL,
)
state["desc"] = prefix + state["desc"]
process = self._get_or_allocate_bar_group(state)
if process.has_bar(state["uuid"]):
# Always call `update_bar` to sync any last remaining updates
# prior to closing. Otherwise, the displayed progress bars
# can be left incomplete, even after execution finishes.
# Fixes https://github.com/ray-project/ray/issues/44983
process.update_bar(state)
if state["closed"]:
process.close_bar(state)
self._update_offsets()
else:
process.allocate_bar(state)
self._update_offsets()
def hide_bars(self) -> None:
"""Temporarily hide visible bars to avoid conflict with other log messages."""
with self.lock:
if not self.in_hidden_state:
self.in_hidden_state = True
self.num_hides += 1
for group in self.bar_groups.values():
group.hide_bars()
def unhide_bars(self) -> None:
"""Opposite of hide_bars()."""
with self.lock:
if self.in_hidden_state:
self.in_hidden_state = False
for group in self.bar_groups.values():
group.unhide_bars()
def _get_or_allocate_bar_group(self, state: ProgressBarState):
ptuple = (state["ip"], state["pid"])
if ptuple not in self.bar_groups:
offset = sum(p.slots_required() for p in self.bar_groups.values())
self.bar_groups[ptuple] = _BarGroup(state["ip"], state["pid"], offset)
return self.bar_groups[ptuple]
def _update_offsets(self):
offset = 0
for proc in self.bar_groups.values():
proc.update_offset(offset)
offset += proc.slots_required()
def instance() -> _BarManager:
"""Get or create a BarManager for this process."""
global _manager
with _mgr_lock:
if _manager is None:
_manager = _BarManager()
if env_bool("RAY_TQDM_PATCH_PRINT", True):
import builtins
builtins.print = safe_print
return _manager
if __name__ == "__main__":
@ray.remote
def processing(delay):
def sleep(x):
print("Intermediate result", x)
time.sleep(delay)
return x
ray.data.range(1000, override_num_blocks=100).map(
sleep, compute=ray.data.ActorPoolStrategy(size=1)
).count()
ray.get(
[
processing.remote(0.03),
processing.remote(0.01),
processing.remote(0.05),
]
)
| _BarManager |
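`_update_offsets()` above packs bar groups into non-overlapping screen rows by running a cumulative sum over each group's slot count. A toy sketch of that bookkeeping, with the group objects reduced to plain slot counts:

```python
def recompute_offsets(slots_per_group):
    # Assign each group a row offset equal to the total slots of the
    # groups before it, so no two groups share a screen row.
    offsets, offset = [], 0
    for slots in slots_per_group:
        offsets.append(offset)
        offset += slots
    return offsets

print(recompute_offsets([3, 1, 2]))  # [0, 3, 4]
```

Whenever a bar is created or closed, the real manager reruns this pass, which is why the docstring compares it to a virtual memory manager: positions are logical and get relocated as groups grow or shrink.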
python | huggingface__transformers | src/transformers/models/efficientnet/modeling_efficientnet.py | {
"start": 15065,
"end": 15647
} | class ____(PreTrainedModel):
config: EfficientNetConfig
base_model_prefix = "efficientnet"
main_input_name = "pixel_values"
input_modalities = ("image",)
_no_split_modules = []
@torch.no_grad()
def _init_weights(self, module: nn.Module):
"""Initialize the weights"""
if isinstance(module, (nn.Linear, nn.Conv2d, nn.BatchNorm2d)):
init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
if module.bias is not None:
init.zeros_(module.bias)
@auto_docstring
| EfficientNetPreTrainedModel |
python | davidhalter__jedi | jedi/inference/value/instance.py | {
"start": 11416,
"end": 15464
} | class ____(_BaseTreeInstance):
def __init__(self, inference_state, parent_context, class_value, arguments):
# I don't think that dynamic append lookups should happen here. That
# sounds more like something that should go to py__iter__.
if class_value.py__name__() in ['list', 'set'] \
and parent_context.get_root_context().is_builtins_module():
# compare the module path with the builtin name.
if settings.dynamic_array_additions:
arguments = get_dynamic_array_instance(self, arguments)
super().__init__(inference_state, parent_context, class_value)
self._arguments = arguments
self.tree_node = class_value.tree_node
# This can recurse, if the initialization of the class includes a reference
# to itself.
@inference_state_method_cache(default=None)
def _get_annotated_class_object(self):
from jedi.inference.gradual.annotation import py__annotations__, \
infer_type_vars_for_execution
args = InstanceArguments(self, self._arguments)
for signature in self.class_value.py__getattribute__('__init__').get_signatures():
# Just take the first result, it should always be one, because we
# control the typeshed code.
funcdef = signature.value.tree_node
if funcdef is None or funcdef.type != 'funcdef' \
or not signature.matches_signature(args):
# First check if the signature even matches, if not we don't
# need to infer anything.
continue
bound_method = BoundMethod(self, self.class_value.as_context(), signature.value)
all_annotations = py__annotations__(funcdef)
type_var_dict = infer_type_vars_for_execution(bound_method, args, all_annotations)
if type_var_dict:
defined, = self.class_value.define_generics(
infer_type_vars_for_execution(signature.value, args, all_annotations),
)
debug.dbg('Inferred instance value as %s', defined, color='BLUE')
return defined
return None
def get_annotated_class_object(self):
return self._get_annotated_class_object() or self.class_value
def get_key_values(self):
values = NO_VALUES
if self.array_type == 'dict':
for i, (key, instance) in enumerate(self._arguments.unpack()):
if key is None and i == 0:
values |= ValueSet.from_sets(
v.get_key_values()
for v in instance.infer()
if v.array_type == 'dict'
)
if key:
values |= ValueSet([compiled.create_simple_object(
self.inference_state,
key,
)])
return values
def py__simple_getitem__(self, index):
if self.array_type == 'dict':
# Logic for dict({'foo': bar}) and dict(foo=bar)
# reversed, because:
# >>> dict({'a': 1}, a=3)
# {'a': 3}
# TODO tuple initializations
# >>> dict([('a', 4)])
# {'a': 4}
for key, lazy_context in reversed(list(self._arguments.unpack())):
if key is None:
values = ValueSet.from_sets(
dct_value.py__simple_getitem__(index)
for dct_value in lazy_context.infer()
if dct_value.array_type == 'dict'
)
if values:
return values
else:
if key == index:
return lazy_context.infer()
return super().py__simple_getitem__(index)
def __repr__(self):
return "<%s of %s(%s)>" % (self.__class__.__name__, self.class_value,
self._arguments)
| TreeInstance |
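The `reversed()` scan in `py__simple_getitem__` above mirrors Python's own `dict(...)` precedence, where later arguments win (`dict({'a': 1}, a=3)['a'] == 3`). A standalone sketch of that lookup order, with arguments reduced to `(key, value)` pairs in call order:

```python
def lookup(args, index):
    # args: list of (key, value) pairs; key None marks a positional dict
    # whose own entries may match, like dict({'a': 1}, a=3).
    for key, value in reversed(args):
        if key is None and isinstance(value, dict) and index in value:
            return value[index]
        if key == index:
            return value
    raise KeyError(index)

print(lookup([(None, {'a': 1}), ('a', 3)], 'a'))  # 3: the keyword wins
print(lookup([(None, {'b': 7})], 'b'))            # 7: falls back to the dict
```

Scanning in reverse means the keyword argument shadows the positional dict entry without any merging, which is exactly the behavior the inline `>>> dict({'a': 1}, a=3)` comment documents.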
python | apache__airflow | providers/fab/tests/unit/fab/auth_manager/api_endpoints/test_user_endpoint.py | {
"start": 20715,
"end": 29213
} | class ____(TestUserEndpoint):
@pytest.mark.usefixtures("autoclean_admin_user")
def test_change(self, autoclean_username, autoclean_user_payload):
autoclean_user_payload["first_name"] = "Changed"
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 200, response.json
# The first name is changed.
data = response.json
assert data["first_name"] == "Changed"
assert data["last_name"] == ""
@pytest.mark.usefixtures("autoclean_admin_user")
def test_change_with_update_mask(self, autoclean_username, autoclean_user_payload):
autoclean_user_payload["first_name"] = "Changed"
autoclean_user_payload["last_name"] = "McTesterson"
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}?update_mask=last_name",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 200, response.json
# The first name is changed, but the last name isn't since we masked it.
data = response.json
assert data["first_name"] == "Tester"
assert data["last_name"] == "McTesterson"
@pytest.mark.parametrize(
("payload", "error_message"),
[
({"username": "another_user"}, "The username `another_user` already exists"),
({"email": "another_user@example.com"}, "The email `another_user@example.com` already exists"),
],
ids=["username", "email"],
)
@pytest.mark.usefixtures("user_different")
@pytest.mark.usefixtures("autoclean_admin_user")
def test_patch_already_exists(
self,
payload,
error_message,
autoclean_user_payload,
autoclean_username,
):
autoclean_user_payload.update(payload)
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 409, response.json
assert response.json["detail"] == error_message
@pytest.mark.parametrize(
"field",
["username", "first_name", "last_name", "email"],
)
@pytest.mark.usefixtures("autoclean_admin_user")
def test_required_fields(
self,
field,
autoclean_user_payload,
autoclean_username,
):
autoclean_user_payload.pop(field)
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 400, response.json
assert response.json["detail"] == f"{{'{field}': ['Missing data for required field.']}}"
@pytest.mark.usefixtures("autoclean_admin_user")
def test_username_can_be_updated(self, autoclean_user_payload, autoclean_username):
testusername = "testusername"
autoclean_user_payload.update({"username": testusername})
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
_delete_user(username=testusername)
assert response.json["username"] == testusername
@pytest.mark.usefixtures("autoclean_admin_user")
@unittest.mock.patch(
"airflow.providers.fab.auth_manager.api_endpoints.user_endpoint.generate_password_hash",
return_value="fake-hashed-pass",
)
def test_password_hashed(
self,
mock_generate_password_hash,
autoclean_username,
autoclean_user_payload,
):
autoclean_user_payload["password"] = "new-pass"
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 200, response.json
assert "password" not in response.json
mock_generate_password_hash.assert_called_once_with("new-pass")
password_in_db = self.session.scalar(select(User.password).where(User.username == autoclean_username))
assert password_in_db == "fake-hashed-pass"
@pytest.mark.usefixtures("autoclean_admin_user")
def test_replace_roles(self, autoclean_username, autoclean_user_payload):
# Patching a user's roles should replace the entire list.
autoclean_user_payload["roles"] = [{"name": "User"}, {"name": "Viewer"}]
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}?update_mask=roles",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 200, response.json
assert {d["name"] for d in response.json["roles"]} == {"User", "Viewer"}
@pytest.mark.usefixtures("autoclean_admin_user")
def test_unchanged(self, autoclean_username, autoclean_user_payload):
# Should allow a PATCH that changes nothing.
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 200, response.json
expected = {k: v for k, v in autoclean_user_payload.items() if k != "password"}
assert {k: response.json[k] for k in expected} == expected
@pytest.mark.usefixtures("autoclean_admin_user")
def test_unauthenticated(self, autoclean_username, autoclean_user_payload):
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
)
assert response.status_code == 401, response.json
@pytest.mark.usefixtures("autoclean_admin_user")
def test_forbidden(self, autoclean_username, autoclean_user_payload):
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test_no_permissions"},
)
assert response.status_code == 403, response.json
def test_not_found(self, autoclean_username, autoclean_user_payload):
# This test does not populate autoclean_admin_user into the database.
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=autoclean_user_payload,
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 404, response.json
@pytest.mark.parametrize(
("payload_converter", "error_message"),
[
pytest.param(
lambda p: {k: v for k, v in p.items() if k != "username"},
"{'username': ['Missing data for required field.']}",
id="missing-required",
),
pytest.param(
lambda p: {"i-am": "a typo", **p},
"{'i-am': ['Unknown field.']}",
id="unknown-user-field",
),
pytest.param(
lambda p: {**p, "roles": [{"also": "a typo", "name": "User"}]},
"{'roles': {0: {'also': ['Unknown field.']}}}",
id="unknown-role-field",
),
pytest.param(
lambda p: {**p, "roles": [{"name": "God"}, {"name": "User"}, {"name": "Overlord"}]},
"Unknown roles: 'God', 'Overlord'",
id="unknown-role",
),
],
)
@pytest.mark.usefixtures("autoclean_admin_user")
def test_invalid_payload(
self,
autoclean_username,
autoclean_user_payload,
payload_converter,
error_message,
):
response = self.client.patch(
f"/fab/v1/users/{autoclean_username}",
json=payload_converter(autoclean_user_payload),
environ_overrides={"REMOTE_USER": "test"},
)
assert response.status_code == 400, response.json
assert response.json == {
"detail": error_message,
"status": 400,
"title": "Bad Request",
"type": EXCEPTIONS_LINK_MAP[400],
}
| TestPatchUser |
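The update_mask semantics these tests pin down can be sketched as a tiny helper. The name and shape below are assumptions for illustration, not the FAB endpoint's actual implementation:

```python
def apply_update_mask(current, patch, update_mask=None):
    """Merge a PATCH payload into the current record (hypothetical helper).

    With no mask, every payload field applies; with a mask, only the named
    fields are taken and everything else keeps its current value.
    """
    if update_mask is None:
        return {**current, **patch}
    return {**current, **{k: v for k, v in patch.items() if k in update_mask}}


user = {"first_name": "Tester", "last_name": ""}
patch = {"first_name": "Changed", "last_name": "McTesterson"}
masked = apply_update_mask(user, patch, update_mask=["last_name"])
# masked == {"first_name": "Tester", "last_name": "McTesterson"}
```

This is exactly the contrast between `test_change` (no mask, first name updated) and `test_change_with_update_mask` (masked, first name preserved).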
python | mkdocs__mkdocs | mkdocs/utils/__init__.py | {
"start": 10672,
"end": 10963
} | class ____:
"""Avoid logging duplicate messages."""
def __init__(self) -> None:
self.msgs: set[str] = set()
def __call__(self, record: logging.LogRecord) -> bool:
rv = record.msg not in self.msgs
self.msgs.add(record.msg)
return rv
| DuplicateFilter |
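Because logging filters may be any callable that takes a LogRecord and returns a bool, an instance of the class above can be attached directly with `Logger.addFilter`. A self-contained usage sketch (the class is restated so the snippet runs on its own):

```python
import logging


class DuplicateFilter:
    """Avoid logging duplicate messages (restated from above)."""

    def __init__(self):
        self.msgs = set()

    def __call__(self, record):
        rv = record.msg not in self.msgs
        self.msgs.add(record.msg)
        return rv


log = logging.getLogger("demo")
log.addFilter(DuplicateFilter())
log.warning("config value deprecated")  # emitted
log.warning("config value deprecated")  # dropped by the filter
```

One caveat of this approach: the set grows for the lifetime of the filter, which is fine for a build tool like mkdocs but would accumulate without bound in a long-running service.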
python | fastapi__sqlmodel | docs_src/tutorial/where/tutorial006.py | {
"start": 100,
"end": 1584
} | class ____(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
secret_name: str
age: Optional[int] = None
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
def create_heroes():
hero_1 = Hero(name="Deadpond", secret_name="Dive Wilson")
hero_2 = Hero(name="Spider-Boy", secret_name="Pedro Parqueador")
hero_3 = Hero(name="Rusty-Man", secret_name="Tommy Sharp", age=48)
hero_4 = Hero(name="Tarantula", secret_name="Natalia Roman-on", age=32)
hero_5 = Hero(name="Black Lion", secret_name="Trevor Challa", age=35)
hero_6 = Hero(name="Dr. Weird", secret_name="Steve Weird", age=36)
hero_7 = Hero(name="Captain North America", secret_name="Esteban Rogelios", age=93)
with Session(engine) as session:
session.add(hero_1)
session.add(hero_2)
session.add(hero_3)
session.add(hero_4)
session.add(hero_5)
session.add(hero_6)
session.add(hero_7)
session.commit()
def select_heroes():
with Session(engine) as session:
statement = select(Hero).where(Hero.age <= 35)
results = session.exec(statement)
for hero in results:
print(hero)
def main():
create_db_and_tables()
create_heroes()
select_heroes()
if __name__ == "__main__":
main()
| Hero |
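One subtlety in `select_heroes` above: `Hero.age <= 35` is rendered as SQL, where comparisons against NULL are unknown rather than true, so heroes whose age is None (Deadpond, Spider-Boy) are excluded from the result. The same semantics demonstrated with the stdlib sqlite3 module:

```python
import sqlite3

# Rows with a NULL age never satisfy `age <= 35`: SQL three-valued logic
# evaluates the comparison to unknown, which the WHERE clause rejects.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hero (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO hero VALUES (?, ?)",
    [("Deadpond", None), ("Tarantula", 32), ("Rusty-Man", 48)],
)
rows = conn.execute("SELECT name FROM hero WHERE age <= 35").fetchall()
# rows == [("Tarantula",)]
```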
python | astropy__astropy | astropy/time/formats.py | {
"start": 50386,
"end": 52210
} | class ____(datetime.tzinfo):
"""
Subclass of the `~datetime.tzinfo` object, used in the
to_datetime method to specify timezones.
It may be safer in most cases to use a timezone database package like
pytz rather than defining your own timezones - this class is mainly
a workaround for users without pytz.
"""
@u.quantity_input(utc_offset=u.day, dst=u.day)
def __init__(self, utc_offset=0 * u.day, dst=0 * u.day, tzname=None):
"""
Parameters
----------
utc_offset : `~astropy.units.Quantity`, optional
Offset from UTC in days. Defaults to zero.
dst : `~astropy.units.Quantity`, optional
Daylight Savings Time offset in days. Defaults to zero
(no daylight savings).
tzname : str or None, optional
Name of timezone
Examples
--------
>>> from datetime import datetime
>>> from astropy.time import TimezoneInfo # Specifies a timezone
>>> import astropy.units as u
>>> utc = TimezoneInfo() # Defaults to UTC
>>> utc_plus_one_hour = TimezoneInfo(utc_offset=1*u.hour) # UTC+1
>>> dt_aware = datetime(2000, 1, 1, 0, 0, 0, tzinfo=utc_plus_one_hour)
>>> print(dt_aware)
2000-01-01 00:00:00+01:00
>>> print(dt_aware.astimezone(utc))
1999-12-31 23:00:00+00:00
"""
if utc_offset == 0 and dst == 0 and tzname is None:
tzname = "UTC"
self._utcoffset = datetime.timedelta(utc_offset.to_value(u.day))
self._tzname = tzname
self._dst = datetime.timedelta(dst.to_value(u.day))
def utcoffset(self, dt):
return self._utcoffset
def tzname(self, dt):
return str(self._tzname)
def dst(self, dt):
return self._dst
| TimezoneInfo |
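For the fixed-offset case that TimezoneInfo covers, the stdlib's `datetime.timezone` is a dependency-free alternative. A comparison sketch (not part of astropy):

```python
from datetime import datetime, timedelta, timezone

# Roughly equivalent to TimezoneInfo(utc_offset=1 * u.hour), without
# astropy.units: a fixed UTC+1 offset.
utc_plus_one = timezone(timedelta(hours=1), "UTC+1")
dt_aware = datetime(2000, 1, 1, tzinfo=utc_plus_one)
as_utc = dt_aware.astimezone(timezone.utc)  # 1999-12-31 23:00:00+00:00
```

Unlike TimezoneInfo, `datetime.timezone` cannot model a DST component; for named, DST-aware zones, the stdlib's `zoneinfo` (Python 3.9+) is the modern replacement for pytz.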
python | astropy__astropy | astropy/units/quantity.py | {
"start": 8019,
"end": 80739
} | class ____(np.ndarray):
"""A `~astropy.units.Quantity` represents a number with some associated unit.
See also: https://docs.astropy.org/en/stable/units/quantity.html
Parameters
----------
value : number, `~numpy.ndarray`, `~astropy.units.Quantity` (sequence), or str
The numerical value of this quantity in the units given by unit. If a
`Quantity` or sequence of them (or any other valid object with a
``unit`` attribute), creates a new `Quantity` object, converting to
`unit` units as needed. If a string, it is converted to a number or
`Quantity`, depending on whether a unit is present.
unit : unit-like
An object that represents the unit associated with the input value.
Must be an `~astropy.units.UnitBase` object or a string parseable by
the :mod:`~astropy.units` package.
dtype : ~numpy.dtype, optional
The dtype of the resulting Numpy array or scalar that will
hold the value. If not provided, it is determined from the input,
except that any integer and (non-Quantity) object inputs are converted
to float by default.
If `None`, the normal `numpy.dtype` introspection is used, e.g.
preventing upcasting of integers.
copy : bool, optional
If `True` (default), then the value is copied. Otherwise, a copy will
only be made if ``__array__`` returns a copy, if value is a nested
sequence, or if a copy is needed to satisfy an explicitly given
``dtype``. (The `False` option is intended mostly for internal use,
to speed up initialization where a copy is known to have been made.
Use with care.)
order : {'C', 'F', 'A'}, optional
Specify the order of the array. As in `~numpy.array`. This parameter
is ignored if the input is a `Quantity` and ``copy=False``.
subok : bool, optional
If `False` (default), the returned array will be forced to be a
`Quantity`. Otherwise, `Quantity` subclasses will be passed through,
or a subclass appropriate for the unit will be used (such as
`~astropy.units.Dex` for ``u.dex(u.AA)``).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting array
should have. Ones will be prepended to the shape as needed to meet
this requirement. This parameter is ignored if the input is a
`Quantity` and ``copy=False``.
Raises
------
TypeError
If the value provided is not a Python numeric type.
TypeError
If the unit provided is not either a :class:`~astropy.units.Unit`
object or a parseable string unit.
Notes
-----
Quantities can also be created by multiplying a number or array with a
:class:`~astropy.units.Unit`. See https://docs.astropy.org/en/latest/units/
Unless the ``dtype`` argument is explicitly specified, integer
or (non-Quantity) object inputs are converted to `float` by default.
"""
# Need to set a class-level default for _equivalencies, or
# Constants can not initialize properly
_equivalencies = []
# Default unit for initialization; can be overridden by subclasses,
# possibly to `None` to indicate there is no default unit.
_default_unit = dimensionless_unscaled
# Ensures views have an undefined unit.
_unit = None
__array_priority__ = 10000
def __class_getitem__(cls, unit_shape_dtype):
"""Quantity Type Hints.
Unit-aware type hints are ``Annotated`` objects that encode the class,
the unit, and possibly shape and dtype information, depending on the
python and :mod:`numpy` versions.
Schematically, ``Annotated[cls[shape, dtype], unit]``
As a classmethod, the type is the class, ie ``Quantity``
produces an ``Annotated[Quantity, ...]`` while a subclass
like :class:`~astropy.coordinates.Angle` returns
``Annotated[Angle, ...]``.
Parameters
----------
unit_shape_dtype : :class:`~astropy.units.UnitBase`, str, `~astropy.units.PhysicalType`, or tuple
Unit specification, can be the physical type (ie str or class).
If tuple, then the first element is the unit specification
and all other elements are for `numpy.ndarray` type annotations.
Whether they are included depends on the python and :mod:`numpy`
versions.
Returns
-------
`typing.Annotated`, `astropy.units.Unit`, or `astropy.units.PhysicalType`
Return type in this preference order:
* `typing.Annotated`
* `astropy.units.Unit` or `astropy.units.PhysicalType`
Raises
------
TypeError
If the unit/physical_type annotation is not Unit-like or
PhysicalType-like.
Examples
--------
Create a unit-aware Quantity type annotation
>>> Quantity[Unit("s")]
Annotated[Quantity, Unit("s")]
See Also
--------
`~astropy.units.quantity_input`
Use annotations for unit checks on function arguments and results.
Notes
-----
|Quantity| types are also static-type compatible.
"""
from typing import Annotated
# process whether [unit] or [unit, shape, ptype]
if isinstance(unit_shape_dtype, tuple): # unit, shape, dtype
target = unit_shape_dtype[0]
shape_dtype = unit_shape_dtype[1:]
else: # just unit
target = unit_shape_dtype
shape_dtype = ()
# Allowed unit/physical types. Errors if neither.
try:
unit = Unit(target)
except (TypeError, ValueError):
from astropy.units.physical import get_physical_type
try:
unit = get_physical_type(target)
except (TypeError, ValueError, KeyError): # KeyError for Enum
raise TypeError(
"unit annotation is not a Unit or PhysicalType"
) from None
# Quantity does not (yet) properly extend the NumPy generics types,
# introduced in numpy v1.22+, instead just including the unit info as
# metadata using Annotated.
# TODO: ensure we do interact with NDArray.__class_getitem__.
return Annotated[cls, unit]
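The `Annotated[cls, unit]` shape returned above is plain typing machinery; a minimal sketch of how such metadata survives and can be recovered at runtime (using a string stand-in for the unit so the snippet avoids importing astropy):

```python
from typing import Annotated, get_type_hints


def speed(v: Annotated[float, "m / s"]) -> float:
    # The annotation metadata is inert at call time; it only matters to
    # tooling that asks for it, such as astropy's quantity_input decorator.
    return v


hints = get_type_hints(speed, include_extras=True)
metadata = hints["v"].__metadata__  # ("m / s",)
```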
def __new__(
cls: type[Self],
value: QuantityLike,
unit=None,
dtype=np.inexact,
copy=True,
order=None,
subok=False,
ndmin=0,
) -> Self:
if unit is not None:
# convert unit first, to avoid multiple string->unit conversions
unit = Unit(unit)
# inexact -> upcast to float dtype
float_default = dtype is np.inexact
if float_default:
dtype = None
# optimize speed for Quantity with no dtype given, copy=COPY_IF_NEEDED
if isinstance(value, Quantity):
if unit is not None and unit is not value.unit:
value = value.to(unit)
# the above already makes a copy (with float dtype)
copy = COPY_IF_NEEDED
if type(value) is not cls and not (subok and isinstance(value, cls)):
value = value.view(cls)
if float_default and value.dtype.kind in "iu":
dtype = float
return np.array(
value, dtype=dtype, copy=copy, order=order, subok=True, ndmin=ndmin
)
# Maybe str, or list/tuple of Quantity? If so, this may set value_unit.
# To ensure array remains fast, we short-circuit it.
value_unit = None
if not isinstance(value, np.ndarray):
if isinstance(value, str):
# The first part of the regex string matches any integer/float;
# the second part adds a possible trailing .+-, which will break
# the float function below and ensure things like 1.2.3deg
# will not work.
pattern = (
r"\s*[+-]?"
r"((\d+\.?\d*)|(\.\d+)|([nN][aA][nN])|"
r"([iI][nN][fF]([iI][nN][iI][tT][yY]){0,1}))"
r"([eE][+-]?\d+)?"
r"[.+-]?"
)
v = re.match(pattern, value)
unit_string = None
try:
value = float(v.group())
except Exception:
raise TypeError(
f'Cannot parse "{value}" as a {cls.__name__}. It does not '
"start with a number."
)
unit_string = v.string[v.end() :].strip()
if unit_string:
value_unit = Unit(unit_string)
if unit is None:
unit = value_unit # signal no conversion needed below.
elif isinstance(value, (list, tuple)) and len(value) > 0:
if all(isinstance(v, Quantity) for v in value):
# If a list/tuple contains only quantities, stack them,
# which also converts them to the same unit.
value = np.stack(value)
copy = False
elif (
dtype is None
and not hasattr(value, "dtype")
and isinstance(unit, StructuredUnit)
):
# Special case for list/tuple of values and a structured unit:
# ``np.array(value, dtype=None)`` would treat tuples as lower
# levels of the array, rather than as elements of a structured
# array, so we use the structure of the unit to help infer the
# structured dtype of the value.
dtype = unit._recursively_get_dtype(value)
using_default_unit = False
if value_unit is None:
# If the value has a `unit` attribute and if not None
# (for Columns with uninitialized unit), treat it like a quantity.
value_unit = getattr(value, "unit", None)
if value_unit is None:
# Default to dimensionless for no (initialized) unit attribute.
if unit is None:
using_default_unit = True
unit = cls._default_unit
value_unit = unit # signal below that no conversion is needed
else:
try:
value_unit = Unit(value_unit)
except Exception as exc:
raise TypeError(
f"The unit attribute {value.unit!r} of the input could "
"not be parsed as an astropy Unit."
) from exc
if unit is None:
unit = value_unit
elif unit is not value_unit:
copy = COPY_IF_NEEDED # copy will be made in conversion at end
value = np.array(
value, dtype=dtype, copy=copy, order=order, subok=True, ndmin=ndmin
)
# For no-user-input unit, make sure the constructed unit matches the
# structure of the data.
if using_default_unit and value.dtype.names is not None:
unit = value_unit = _structured_unit_like_dtype(value_unit, value.dtype)
# check that array contains numbers or long int objects
if value.dtype.kind in "OSU" and not (
value.dtype.kind == "O" and isinstance(value.item(0), numbers.Number)
):
raise TypeError("The value must be a valid Python or Numpy numeric type.")
# by default, cast any integer, boolean, etc., to float
if float_default and value.dtype.kind in "iuO":
value = value.astype(float)
# if we allow subclasses, allow a class from the unit.
if subok:
qcls = getattr(unit, "_quantity_class", cls)
if issubclass(qcls, cls):
cls = qcls
value = value.view(cls)
value._set_unit(value_unit)
if unit is value_unit:
return value
else:
# here we had non-Quantity input that had a "unit" attribute
# with a unit different from the desired one. So, convert.
return value.to(unit)
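The string-parsing branch near the top of `__new__` hinges entirely on its regex: match a leading number (including nan/inf and an exponent), and let a stray trailing `.`/`+`/`-` be swallowed into the match so that `float()` fails on malformed input such as `"1.2.3deg"`. The pattern, reproduced standalone:

```python
import re

# Copied from __new__ above: optional sign, then int/float/nan/inf(inity),
# an optional exponent, and an optional trailing .+- that poisons float().
pattern = (
    r"\s*[+-]?"
    r"((\d+\.?\d*)|(\.\d+)|([nN][aA][nN])|"
    r"([iI][nN][fF]([iI][nN][iI][tT][yY]){0,1}))"
    r"([eE][+-]?\d+)?"
    r"[.+-]?"
)
m = re.match(pattern, "1.2e3 m / s")
value = float(m.group())                       # 1200.0
unit_string = "1.2e3 m / s"[m.end():].strip()  # "m / s"

bad = re.match(pattern, "1.2.3deg")  # matches "1.2.", so float() raises
```

In `__new__`, that `ValueError` from `float()` is caught and re-raised as the `TypeError` about the string not starting with a number.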
def __array_finalize__(self, obj):
super().__array_finalize__(obj)
# If we're a new object or viewing an ndarray, nothing has to be done.
if obj is None or obj.__class__ is np.ndarray:
return
# Copy over the unit and possibly info. Note that the only way the
# unit can already be set is if one enters via _new_view(), where the
# unit is often different from that of self, and where propagation of
# info is not always desirable.
if self._unit is None:
unit = getattr(obj, "_unit", None)
if unit is not None:
self._set_unit(unit)
# Copy info if the original had `info` defined. Because of the way the
# DataInfo works, `'info' in obj.__dict__` is False until the
# `info` attribute is accessed or set.
if "info" in obj.__dict__:
self.info = obj.info
def __array_wrap__(self, obj, context=None, return_scalar=False):
if context is None:
# Methods like .squeeze() created a new `ndarray` and then call
# __array_wrap__ to turn the array into self's subclass.
return self._new_view(obj)
raise NotImplementedError(
"__array_wrap__ should not be used with a context any more since all "
"use should go through array_function. Please raise an issue on "
"https://github.com/astropy/astropy"
)
def __array_ufunc__(self, function, method, *inputs, **kwargs):
"""Wrap numpy ufuncs, taking care of units.
Parameters
----------
function : callable
ufunc to wrap.
method : str
Ufunc method: ``__call__``, ``at``, ``reduce``, etc.
inputs : tuple
Input arrays.
kwargs : keyword arguments
As passed on, with ``out`` containing possible quantity output.
Returns
-------
result : `~astropy.units.Quantity` or `NotImplemented`
Results of the ufunc, with the unit set properly.
"""
# Determine required conversion functions -- to bring the unit of the
# input to that expected (e.g., radian for np.sin), or to get
# consistent units between two inputs (e.g., in np.add) --
# and the unit of the result (or tuple of units for nout > 1).
try:
converters, unit = converters_and_unit(function, method, *inputs)
out = kwargs.get("out")
# Avoid loop back by turning any Quantity output into array views.
if out is not None:
# If pre-allocated output is used, check it is suitable.
# This also returns array view, to ensure we don't loop back.
if function.nout == 1:
out = out[0]
out_array = check_output(out, unit, inputs, function=function)
# Ensure output argument remains a tuple.
kwargs["out"] = (out_array,) if function.nout == 1 else out_array
if method == "reduce" and "initial" in kwargs and unit is not None:
# Special-case for initial argument for reductions like
# np.add.reduce. This should be converted to the output unit as
# well, which is typically the same as the input unit (but can
# in principle be different: unitless for np.equal, radian
# for np.arctan2, though those are not necessarily useful!)
kwargs["initial"] = self._to_own_unit(
kwargs["initial"], check_precision=False, unit=unit
)
# Same for inputs, but here also convert if necessary.
arrays = []
for input_, converter in zip(inputs, converters):
input_ = getattr(input_, "value", input_)
arrays.append(converter(input_) if converter else input_)
# Call our superclass's __array_ufunc__
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
# If unit is None, a plain array is expected (e.g., comparisons), which
# means we're done.
# We're also done if the result was None (for method 'at') or
# NotImplemented, which can happen if other inputs/outputs override
# __array_ufunc__; hopefully, they can then deal with us.
if unit is None or result is None or result is NotImplemented:
return result
return self._result_as_quantity(result, unit, out)
except (TypeError, ValueError, AttributeError) as e:
out_normalized = kwargs.get("out", ())
inputs_and_outputs = inputs + out_normalized
ignored_ufunc = (
None,
np.ndarray.__array_ufunc__,
type(self).__array_ufunc__,
)
if not all(
getattr(type(io), "__array_ufunc__", None) in ignored_ufunc
for io in inputs_and_outputs
):
return NotImplemented
else:
raise e
def _result_as_quantity(self, result, unit, out):
"""Turn result into a quantity with the given unit.
If no output is given, it will take a view of the array as a quantity,
and set the unit. If output is given, those should be quantity views
of the result arrays, and the function will just set the unit.
Parameters
----------
result : ndarray or tuple thereof
Array(s) which need to be turned into quantity.
unit : `~astropy.units.Unit`
Unit for the quantities to be returned (or `None` if the result
should not be a quantity). Should be tuple if result is a tuple.
out : `~astropy.units.Quantity` or None
Possible output quantity. Should be `None` or a tuple if result
is a tuple.
Returns
-------
out : `~astropy.units.Quantity`
With units set.
"""
if isinstance(result, (tuple, list)):
if out is None:
out = (None,) * len(result)
# Some np.linalg functions return namedtuple, which is handy to access
# elements by name, but cannot be directly initialized with an iterator.
result_cls = getattr(result, "_make", result.__class__)
return result_cls(
self._result_as_quantity(result_, unit_, out_)
for (result_, unit_, out_) in zip(result, unit, out)
)
if out is None:
# View the result array as a Quantity with the proper unit.
return (
result
if unit is None
else self._new_view(result, unit, propagate_info=False)
)
elif isinstance(out, Quantity):
# For given Quantity output, just set the unit. We know the unit
# is not None and the output is of the correct Quantity subclass,
# as it was passed through check_output.
# (We cannot do this unconditionally, though, since it is possible
# for out to be ndarray and the unit to be dimensionless.)
out._set_unit(unit)
return out
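The `getattr(result, "_make", result.__class__)` line above handles numpy.linalg's namedtuple returns: a namedtuple constructor cannot be called with a single iterator, but its `_make` classmethod can, while plain tuples and lists fall back to their own class. The trick in isolation:

```python
from collections import namedtuple

Eig = namedtuple("Eig", ["values", "vectors"])


def rebuild(result, transform):
    # Namedtuples are rebuilt via _make so their field names survive;
    # plain sequences are rebuilt through their own constructor.
    result_cls = getattr(result, "_make", result.__class__)
    return result_cls(transform(item) for item in result)


named = rebuild(Eig(1, 2), lambda x: x * 10)  # Eig(values=10, vectors=20)
plain = rebuild((1, 2), lambda x: x * 10)     # (10, 20)
```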
def __quantity_subclass__(self, unit):
"""
Overridden by subclasses to change what kind of view is
created based on the output unit of an operation.
Parameters
----------
unit : UnitBase
The unit for which the appropriate class should be returned
Returns
-------
tuple :
- `~astropy.units.Quantity` subclass
- bool: True if subclasses of the given class are ok
"""
return Quantity, True
def _new_view(self, obj=None, unit=None, propagate_info=True):
"""Create a Quantity view of some array-like input, and set the unit.
By default, return a view of ``obj`` of the same class as ``self`` and
with the same unit. Subclasses can override the type of class for a
given unit using ``__quantity_subclass__``, and can ensure properties
other than the unit are copied using ``__array_finalize__``.
If the given unit defines a ``_quantity_class`` of which ``self``
is not an instance, a view using this class is taken.
Parameters
----------
obj : ndarray or scalar, optional
The array to create a view of. If obj is a numpy or python scalar,
it will be converted to an array scalar. By default, ``self``
is converted.
unit : unit-like, optional
The unit of the resulting object. It is used to select a
subclass, and explicitly assigned to the view if given.
If not given, the subclass and unit will be that of ``self``.
propagate_info : bool, optional
Whether to transfer ``info`` if present. Default: `True`, as
appropriate for, e.g., unit conversions or slicing, where the
nature of the object does not change.
Returns
-------
view : `~astropy.units.Quantity` subclass
"""
# Determine the unit and quantity subclass that we need for the view.
if unit is None:
unit = self.unit
quantity_subclass = self.__class__
elif unit is self.unit and self.__class__ is Quantity:
# The second part is because we should not presume what other
# classes want to do for the same unit. E.g., Constant will
# always want to fall back to Quantity, and relies on going
# through `__quantity_subclass__`.
quantity_subclass = Quantity
else:
unit = Unit(unit)
quantity_subclass = getattr(unit, "_quantity_class", Quantity)
if isinstance(self, quantity_subclass):
quantity_subclass, subok = self.__quantity_subclass__(unit)
if subok:
quantity_subclass = self.__class__
# We only want to propagate information from ``self`` to our new view,
# so obj should be a regular array. By using ``np.array``, we also
# convert python and numpy scalars, which cannot be viewed as arrays
# and thus not as Quantity either, to zero-dimensional arrays.
# (These are turned back into scalar in `.value`)
# Note that for an ndarray input, the np.array call takes only about
# double the time of an ``obj.__class__ is np.ndarray`` check, so it is
# not worth special-casing.
if obj is None:
obj = self.view(np.ndarray)
else:
obj = np.array(obj, copy=COPY_IF_NEEDED, subok=True)
# Take the view, set the unit, and update possible other properties
# such as ``info``, ``wrap_angle`` in `Longitude`, etc.
view = obj.view(quantity_subclass)
view._set_unit(unit)
view.__array_finalize__(self)
if propagate_info and "info" in self.__dict__:
view.info = self.info
return view
def _set_unit(self, unit):
"""Set the unit.
This is used anywhere the unit is set or modified, i.e., in the
initializer, in ``__imul__`` and ``__itruediv__`` for in-place
multiplication and division by another unit, as well as in
``__array_finalize__`` for wrapping up views. For Quantity, it just
sets the unit, but subclasses can override it to check that, e.g.,
a unit is consistent.
"""
if not isinstance(unit, UnitBase):
if isinstance(self._unit, StructuredUnit) or isinstance(
unit, StructuredUnit
):
unit = StructuredUnit(unit, self.dtype)
else:
# Trying to go through a string ensures that, e.g., Magnitudes with
# dimensionless physical unit become Quantity with units of mag.
unit = Unit(str(unit), parse_strict="silent")
if not isinstance(unit, (UnitBase, StructuredUnit)):
raise UnitTypeError(
f"{self.__class__.__name__} instances require normal units, "
f"not {unit.__class__} instances."
)
self._unit = unit
def __deepcopy__(self, memo):
# If we don't define this, ``copy.deepcopy(quantity)`` will
# return a bare Numpy array.
return self.copy()
def __reduce__(self):
# patch to pickle Quantity objects (ndarray subclasses), see
# http://www.mail-archive.com/numpy-discussion@scipy.org/msg02446.html
object_state = list(super().__reduce__())
object_state[2] = (object_state[2], self.__dict__)
return tuple(object_state)
def __setstate__(self, state):
# patch to unpickle Quantity objects (ndarray subclasses), see
# http://www.mail-archive.com/numpy-discussion@scipy.org/msg02446.html
nd_state, own_state = state
super().__setstate__(nd_state)
self.__dict__.update(own_state)
info = QuantityInfo()
def _to_value(self, unit, equivalencies=[]):
"""Helper method for to and to_value."""
if equivalencies == []:
equivalencies = self._equivalencies
if not self.dtype.names or isinstance(self.unit, StructuredUnit):
# Standard path, let unit to do work.
return self.unit.to(
unit, self.view(np.ndarray), equivalencies=equivalencies
)
else:
# The .to() method of a simple unit cannot convert a structured
# dtype, so we work around it, by recursing.
# TODO: deprecate this?
# Convert simple to Structured on initialization?
result = np.empty_like(self.view(np.ndarray))
for name in self.dtype.names:
result[name] = self[name]._to_value(unit, equivalencies)
return result
def to(self, unit, equivalencies=[], copy=True):
"""
Return a new `~astropy.units.Quantity` object with the specified unit.
Parameters
----------
unit : unit-like
An object that represents the unit to convert to. Must be
an `~astropy.units.UnitBase` object or a string parseable
by the `~astropy.units` package.
equivalencies : list of tuple
A list of equivalence pairs to try if the units are not
directly convertible. See :ref:`astropy:unit_equivalencies`.
If not provided or ``[]``, class default equivalencies will be used
(none for `~astropy.units.Quantity`, but may be set for subclasses)
If `None`, no equivalencies will be applied at all, not even any
set globally or within a context.
copy : bool, optional
If `True` (default), then the value is copied. Otherwise, a copy
will only be made if necessary.
See Also
--------
to_value : get the numerical value in a given unit.
"""
unit = Unit(unit)
if copy:
# Avoid using to_value to ensure that we make a copy. We also
# don't want to slow down this method (esp. the scalar case).
value = self._to_value(unit, equivalencies)
else:
# to_value only copies if necessary
value = self.to_value(unit, equivalencies)
return self._new_view(value, unit)
def to_value(self, unit=None, equivalencies=[]):
"""
The numerical value, possibly in a different unit.
Parameters
----------
unit : unit-like, optional
The unit in which the value should be given. If not given or `None`,
use the current unit.
equivalencies : list of tuple, optional
A list of equivalence pairs to try if the units are not directly
convertible (see :ref:`astropy:unit_equivalencies`). If not provided
or ``[]``, class default equivalencies will be used (none for
`~astropy.units.Quantity`, but may be set for subclasses).
If `None`, no equivalencies will be applied at all, not even any
set globally or within a context.
Returns
-------
value : ndarray or scalar
The value in the units specified. For arrays, this will be a view
of the data if no unit conversion was necessary.
See Also
--------
to : Get a new instance in a different unit.
"""
if unit is None or unit is self.unit:
value = self.view(np.ndarray)
elif not self.dtype.names:
# For non-structured, we attempt a short-cut, where we just get
# the scale. If that is 1, we do not have to do anything.
unit = Unit(unit)
# We want a view if the unit does not change. One could check
# with "==", but that calculates the scale that we need anyway.
# TODO: would be better for `unit.to` to have an in-place flag.
try:
scale = self.unit._to(unit)
except Exception:
# Short-cut failed; try default (maybe equivalencies help).
value = self._to_value(unit, equivalencies)
else:
value = self.view(np.ndarray)
if not is_effectively_unity(scale):
# not in-place!
value = value * scale
else:
# For structured arrays, we go the default route.
value = self._to_value(unit, equivalencies)
        # Index with empty tuple to decay array scalars into numpy scalars.
return value if value.shape else value[()]
value = property(
to_value,
doc="""The numerical value of this instance.
        See Also
--------
to_value : Get the numerical value in a given unit.
""",
)
@property
def unit(self):
"""
A `~astropy.units.UnitBase` object representing the unit of this
quantity.
"""
return self._unit
@property
def equivalencies(self):
"""
A list of equivalencies that will be applied by default during
unit conversions.
"""
return self._equivalencies
def _recursively_apply(self, func):
"""Apply function recursively to every field.
Returns a copy with the result.
"""
result = np.empty_like(self)
result_value = result.view(np.ndarray)
result_unit = ()
for name in self.dtype.names:
part = func(self[name])
result_value[name] = part.value
result_unit += (part.unit,)
result._set_unit(result_unit)
return result
@property
def si(self):
"""
Returns a copy of the current `Quantity` instance with SI units. The
value of the resulting object will be scaled.
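        For example (illustrative, not from the original source),
        ``(1. * u.km).si`` evaluates to ``<Quantity 1000.0 m>``.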
"""
if self.dtype.names:
return self._recursively_apply(operator.attrgetter("si"))
si_unit = self.unit.si
return self._new_view(self.value * si_unit.scale, si_unit / si_unit.scale)
@property
def cgs(self):
"""
Returns a copy of the current `Quantity` instance with CGS units. The
value of the resulting object will be scaled.
"""
if self.dtype.names:
return self._recursively_apply(operator.attrgetter("cgs"))
cgs_unit = self.unit.cgs
return self._new_view(self.value * cgs_unit.scale, cgs_unit / cgs_unit.scale)
@property
def isscalar(self):
"""
True if the `value` of this quantity is a scalar, or False if it
is an array-like object.
.. note::
This is subtly different from `numpy.isscalar` in that
`numpy.isscalar` returns False for a zero-dimensional array
(e.g. ``np.array(1)``), while this is True for quantities,
since quantities cannot represent true numpy scalars.
"""
return not self.shape
    # This flag controls whether convenience conversion members, such
    # as `q.m` (equivalent to `q.to_value(u.m)`), are available. This is
    # not enabled on Quantity itself, but is on some subclasses of
    # Quantity, such as `astropy.coordinates.Angle`.
_include_easy_conversion_members = False
def __dir__(self):
"""
Quantities are able to directly convert to other units that
have the same physical type. This function is implemented in
order to make autocompletion still work correctly in IPython.
"""
if not self._include_easy_conversion_members:
return super().__dir__()
dir_values = set(super().__dir__())
equivalencies = Unit._normalize_equivalencies(self.equivalencies)
for equivalent in self.unit._get_units_with_same_physical_type(equivalencies):
dir_values.update(equivalent.names)
return sorted(dir_values)
def __getattr__(self, attr):
"""
Quantities are able to directly convert to other units that
have the same physical type.
"""
if not self._include_easy_conversion_members:
raise AttributeError(
f"'{self.__class__.__name__}' object has no '{attr}' member"
)
def get_virtual_unit_attribute():
registry = get_current_unit_registry().registry
to_unit = registry.get(attr, None)
if to_unit is None:
return None
try:
return self.unit.to(
to_unit, self.value, equivalencies=self.equivalencies
)
except UnitsError:
return None
value = get_virtual_unit_attribute()
if value is None:
raise AttributeError(
f"{self.__class__.__name__} instance has no attribute '{attr}'"
)
else:
return value
# Equality needs to be handled explicitly as ndarray.__eq__ gives
# DeprecationWarnings on any error, which is distracting, and does not
# deal well with structured arrays (nor does the ufunc).
def __eq__(self, other):
try:
other_value = self._to_own_unit(other)
except UnitsError:
return False
except Exception:
return NotImplemented
return self.value.__eq__(other_value)
def __ne__(self, other):
try:
other_value = self._to_own_unit(other)
except UnitsError:
return True
except Exception:
return NotImplemented
return self.value.__ne__(other_value)
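    # Illustration of the comparison semantics (added; not from the original
    # source): ``(1. * u.m) == (100. * u.cm)`` evaluates to ``True`` after
    # unit conversion, while a comparison across incompatible units such as
    # ``(1. * u.m) == (1. * u.s)`` is simply ``False`` rather than an error.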
# Unit conversion operator (<<).
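    # Illustration (added; not from the original source): with
    # ``q = 5. * u.km``, the in-place form ``q <<= u.m`` rescales the
    # underlying data without a copy, leaving ``q`` as ``<Quantity 5000.0 m>``.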
def __lshift__(self, other):
try:
other = Unit(other, parse_strict="silent")
except UnitTypeError:
return NotImplemented
return self.__class__(self, other, copy=False, subok=True)
def __ilshift__(self, other):
try:
other = Unit(other, parse_strict="silent")
except UnitTypeError:
return NotImplemented # try other.__rlshift__(self)
try:
factor = self.unit._to(other)
except UnitConversionError: # incompatible, or requires an Equivalency
return NotImplemented
except AttributeError: # StructuredUnit does not have `_to`
# In principle, in-place might be possible.
return NotImplemented
view = self.view(np.ndarray)
try:
view *= factor # operates on view
except TypeError:
# The error is `numpy.core._exceptions._UFuncOutputCastingError`,
# which inherits from `TypeError`.
return NotImplemented
self._set_unit(other)
return self
def __rlshift__(self, other):
if not self.isscalar:
return NotImplemented
return Unit(self).__rlshift__(other)
# Give warning for other >> self, since probably other << self was meant.
def __rrshift__(self, other):
warnings.warn(
">> is not implemented. Did you mean to convert "
"something to this quantity as a unit using '<<'?",
AstropyWarning,
)
return NotImplemented
# Also define __rshift__ and __irshift__ so we override default ndarray
# behaviour, but instead of emitting a warning here, let it be done by
# other (which likely is a unit if this was a mistake).
def __rshift__(self, other):
return NotImplemented
def __irshift__(self, other):
return NotImplemented
# Arithmetic operations
def __mul__(self, other):
if isinstance(other, (UnitBase, str)):
try:
return self._new_view(
self.value.copy(), other * self.unit, propagate_info=False
)
except UnitsError: # let other try to deal with it
return NotImplemented
return super().__mul__(other)
def __imul__(self, other):
if isinstance(other, (UnitBase, str)):
self._set_unit(other * self.unit)
return self
return super().__imul__(other)
def __rmul__(self, other):
return self.__mul__(other)
def __truediv__(self, other):
if isinstance(other, (UnitBase, str)):
try:
return self._new_view(
self.value.copy(), self.unit / other, propagate_info=False
)
except UnitsError: # let other try to deal with it
return NotImplemented
return super().__truediv__(other)
def __itruediv__(self, other):
if isinstance(other, (UnitBase, str)):
self._set_unit(self.unit / other)
return self
return super().__itruediv__(other)
def __rtruediv__(self, other):
if isinstance(other, (UnitBase, str)):
return self._new_view(
1.0 / self.value, other / self.unit, propagate_info=False
)
return super().__rtruediv__(other)
def __pow__(self, other):
if isinstance(other, Fraction):
# Avoid getting object arrays by raising the value to a Fraction.
return self._new_view(
self.value ** float(other), self.unit**other, propagate_info=False
)
return super().__pow__(other)
# other overrides of special functions
def __hash__(self):
return hash(self.value) ^ hash(self.unit)
def __iter__(self):
if self.isscalar:
raise TypeError(
f"'{self.__class__.__name__}' object with a scalar value is not"
" iterable"
)
return map(self._new_view, self.value)
def __getitem__(self, key):
if isinstance(key, str) and isinstance(self.unit, StructuredUnit):
return self._new_view(
self.view(np.ndarray)[key], self.unit[key], propagate_info=False
)
try:
out = super().__getitem__(key)
except IndexError:
# We want zero-dimensional Quantity objects to behave like scalars,
# so they should raise a TypeError rather than an IndexError.
if self.isscalar:
raise TypeError(
f"'{self.__class__.__name__}' object with a scalar value "
"does not support indexing"
)
else:
raise
# For single elements, ndarray.__getitem__ returns scalars; these
# need a new view as a Quantity.
if not isinstance(out, np.ndarray):
out = self._new_view(out)
return out
def __setitem__(self, i, value):
if isinstance(i, str):
# Indexing will cause a different unit, so by doing this in
# two steps we effectively try with the right unit.
self[i][...] = value
return
# update indices in info if the info property has been accessed
# (in which case 'info' in self.__dict__ is True; this is guaranteed
# to be the case if we're part of a table).
if not self.isscalar and "info" in self.__dict__:
self.info.adjust_indices(i, value, len(self))
self.view(np.ndarray).__setitem__(i, self._to_own_unit(value))
# __contains__ is OK
def __bool__(self):
"""This method raises ValueError, since truthiness of quantities is ambiguous,
especially for logarithmic units and temperatures. Use explicit comparisons.
"""
raise ValueError(
f"{type(self).__name__} truthiness is ambiguous, especially for logarithmic units"
" and temperatures. Use explicit comparisons."
)
def __len__(self):
if self.isscalar:
raise TypeError(
f"'{self.__class__.__name__}' object with a scalar value has no len()"
)
else:
return len(self.value)
# Numerical types
def __float__(self):
try:
return float(self.to_value(dimensionless_unscaled))
except (UnitsError, TypeError):
raise TypeError(
"only dimensionless scalar quantities can be "
"converted to Python scalars"
)
def __int__(self):
try:
return int(self.to_value(dimensionless_unscaled))
except (UnitsError, TypeError):
raise TypeError(
"only dimensionless scalar quantities can be "
"converted to Python scalars"
)
def __round__(self, ndigits=0):
return self.round(decimals=ndigits)
def __index__(self):
# for indices, we do not want to mess around with scaling at all,
# so unlike for float, int, we insist here on unscaled dimensionless
if self.unit.is_unity():
try:
return self.value.__index__()
except AttributeError:
pass
raise TypeError(
"only integer dimensionless scalar quantities "
"can be converted to a Python index"
)
# TODO: we may want to add a hook for dimensionless quantities?
@property
def _unitstr(self):
if self.unit is None:
unitstr = _UNIT_NOT_INITIALISED
else:
unitstr = str(self.unit)
if unitstr:
unitstr = " " + unitstr
return unitstr
def to_string(
self, unit=None, precision=None, format=None, subfmt=None, *, formatter=None
):
"""
Generate a string representation of the quantity and its unit.
The behavior of this function can be altered via the
`numpy.set_printoptions` function and its various keywords. The
exception to this is the ``threshold`` keyword, which is controlled via
the ``[units.quantity]`` configuration item ``latex_array_threshold``.
This is treated separately because the numpy default of 1000 is too big
for most browsers to handle.
Parameters
----------
unit : unit-like, optional
Specifies the unit. If not provided,
the unit used to initialize the quantity will be used.
precision : number, optional
The level of decimal precision. If `None`, or not provided,
it will be determined from NumPy print options.
format : str, optional
The format of the result. If not provided, an unadorned
string is returned. Supported values are:
- 'latex': Return a LaTeX-formatted string
- 'latex_inline': Return a LaTeX-formatted string that uses
negative exponents instead of fractions
formatter : str, callable, dict, optional
The formatter to use for the value. If a string, it should be a
valid format specifier using Python's mini-language. If a callable,
it will be treated as the default formatter for all values and will
overwrite default Latex formatting for exponential notation and complex
numbers. If a dict, it should map a specific type to a callable to be
directly passed into `numpy.array2string`. If not provided, the default
formatter will be used.
subfmt : str, optional
Subformat of the result. For the moment, only used for
``format='latex'`` and ``format='latex_inline'``. Supported
values are:
- 'inline': Use ``$ ... $`` as delimiters.
- 'display': Use ``$\\displaystyle ... $`` as delimiters.
Returns
-------
str
A string with the contents of this Quantity
"""
if unit is not None and unit != self.unit:
return self.to(unit).to_string(
unit=None,
precision=precision,
format=format,
subfmt=subfmt,
formatter=formatter,
)
if format is None and formatter is None and precision is None:
# Use default formatting settings
return f"{self.value}{self._unitstr:s}"
formats = {
None: None,
"latex": {
None: ("$", "$"),
"inline": ("$", "$"),
"display": (r"$\displaystyle ", r"$"),
},
}
formats["latex_inline"] = formats["latex"]
if format not in formats:
raise ValueError(f"Unknown format '{format}'")
format_spec = formatter if isinstance(formatter, str) else None
if format is None:
if format_spec is not None:
def formatter(value):
return builtins.format(value, format_spec)
if callable(formatter):
formatter = {"all": formatter}
return (
np.array2string(
self.value,
precision=precision,
floatmode="fixed",
formatter=formatter,
)
+ self._unitstr
)
# else, for the moment we assume format="latex" or "latex_inline".
# Set the precision if set, otherwise use numpy default
pops = np.get_printoptions()
if format_spec is None:
format_spec = (
f".{precision if precision is not None else pops['precision']}g"
)
# Use default formatters
if formatter is None or isinstance(formatter, str):
# Filter width and alignment operations for latex
# [[fill]align][sign]["z"]["#"]["0"][width][grouping_option]["." precision][type]
format_spec = re.sub(
r"(.*?)([+\- ]?)(\d+)?(,)?(\.\d+)?([a-zA-Z%]+)?$",
r"\2\5\6",
format_spec,
)
if self.dtype.kind == "c": # Complex default latex formatter
# Disallow sign operations for the imaginary part
imag_format_spec = re.sub(r"[+\- ]", "", format_spec)
def formatter(value):
return "({}{}i)".format(
Latex.format_exponential_notation(
value.real, format_spec=format_spec
),
Latex.format_exponential_notation(
value.imag, format_spec="+" + imag_format_spec
),
)
else: # Float default latex formatter
def formatter(value):
return Latex.format_exponential_notation(
value, format_spec=format_spec
)
if callable(formatter):
formatter = {"all": formatter}
# The view is needed for the scalar case - self.value might be float.
latex_value = np.array2string(
self.view(np.ndarray),
threshold=(
conf.latex_array_threshold
if conf.latex_array_threshold > -1
else pops["threshold"]
),
formatter=formatter,
max_line_width=np.inf,
separator=",~",
)
latex_value = latex_value.replace("...", r"\dots")
# Format unit
# [1:-1] strips the '$' on either side needed for math mode
if self.unit is None:
latex_unit = _UNIT_NOT_INITIALISED
elif format == "latex":
latex_unit = self.unit._repr_latex_()[1:-1] # note this is unicode
elif format == "latex_inline":
latex_unit = self.unit.to_string(format="latex_inline")[1:-1]
delimiter_left, delimiter_right = formats[format][subfmt]
# Add a space in front except for super-script units like degrees.
if not latex_unit.removeprefix("\\mathrm{").startswith("{}^"):
latex_unit = rf" \; {latex_unit}"
return rf"{delimiter_left}{latex_value}{latex_unit}{delimiter_right}"
def __str__(self):
return self.to_string()
def __repr__(self):
prefixstr = "<" + self.__class__.__name__ + " "
arrstr = np.array2string(
self.view(np.ndarray), separator=", ", prefix=prefixstr
)
return f"{prefixstr}{arrstr}{self._unitstr:s}>"
def _repr_latex_(self):
"""
Generate a latex representation of the quantity and its unit.
Returns
-------
lstr
A LaTeX string with the contents of this Quantity
"""
# NOTE: This should change to display format in a future release
return self.to_string(format="latex", subfmt="inline")
def __format__(self, format_spec):
try:
return self.to_string(format=format_spec)
except ValueError:
# We might have a unit format not implemented in `to_string()`.
if format_spec in Base.registry:
if self.unit is dimensionless_unscaled:
return f"{self.value}"
else:
return f"{self.value} {format(self.unit, format_spec)}"
# Can the value be formatted on its own?
try:
return f"{format(self.value, format_spec)}{self._unitstr:s}"
except ValueError:
# Format the whole thing as a single string.
return format(f"{self.value}{self._unitstr:s}", format_spec)
def decompose(self, bases: Collection[UnitBase] = ()) -> Self:
"""
Generates a new `Quantity` with the units
decomposed. Decomposed units have only irreducible units in
them (see `astropy.units.UnitBase.decompose`).
Parameters
----------
bases : sequence of `~astropy.units.UnitBase`, optional
The bases to decompose into. When not provided,
decomposes down to any irreducible units. When provided,
the decomposed result will only contain the given units.
            This will raise a `~astropy.units.UnitsError` if it is not possible
to do so.
Returns
-------
newq : `~astropy.units.Quantity`
A new object equal to this quantity with units decomposed.
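        Examples
        --------
        A minimal usage sketch (values illustrative, not from the original
        source):

        >>> import astropy.units as u
        >>> (3. * u.km / u.s).decompose()
        <Quantity 3000.0 m / s>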
"""
return self._decompose(False, bases=bases)
def _decompose(
self, allowscaledunits: bool = False, bases: Collection[UnitBase] = ()
) -> Self:
"""
Generates a new `Quantity` with the units decomposed. Decomposed
units have only irreducible units in them (see
`astropy.units.UnitBase.decompose`).
Parameters
----------
allowscaledunits : bool
If True, the resulting `Quantity` may have a scale factor
associated with it. If False, any scaling in the unit will
be subsumed into the value of the resulting `Quantity`
bases : sequence of UnitBase, optional
The bases to decompose into. When not provided,
decomposes down to any irreducible units. When provided,
the decomposed result will only contain the given units.
            This will raise a `~astropy.units.UnitsError` if it is not possible
to do so.
Returns
-------
newq : `~astropy.units.Quantity`
A new object equal to this quantity with units decomposed.
"""
new_unit = self.unit.decompose(bases=bases)
# Be careful here because self.value usually is a view of self;
# be sure that the original value is not being modified.
if not allowscaledunits and hasattr(new_unit, "scale"):
new_value = self.value * new_unit.scale
new_unit = new_unit / new_unit.scale
return self._new_view(new_value, new_unit)
else:
return self._new_view(self.copy(), new_unit)
# These functions need to be overridden to take into account the units
# Array conversion
# https://numpy.org/doc/stable/reference/arrays.ndarray.html#array-conversion
def item(self, *args):
"""Copy an element of an array to a scalar Quantity and return it.
Like :meth:`~numpy.ndarray.item` except that it always
returns a `Quantity`, not a Python scalar.
"""
return self._new_view(super().item(*args))
def tolist(self):
raise NotImplementedError(
"cannot make a list of Quantities. Get list of values with"
" q.value.tolist()."
)
def _to_own_unit(self, value, check_precision=True, *, unit=None):
"""Convert value to one's own unit (or that given).
Here, non-quantities are treated as dimensionless, and care is taken
for values of 0, infinity or nan, which are allowed to have any unit.
Parameters
----------
value : anything convertible to `~astropy.units.Quantity`
The value to be converted to the requested unit.
check_precision : bool
Whether to forbid conversion of float to integer if that changes
the input number. Default: `True`.
unit : `~astropy.units.Unit` or None
The unit to convert to. By default, the unit of ``self``.
Returns
-------
value : number or `~numpy.ndarray`
In the requested units.
"""
if unit is None:
unit = self.unit
try:
_value = value.to_value(unit)
except AttributeError:
# We're not a Quantity.
# First remove two special cases (with a fast test):
# 1) Maybe masked printing? MaskedArray with quantities does not
# work very well, but no reason to break even repr and str.
# 2) np.ma.masked? useful if we're a MaskedQuantity.
if value is np.ma.masked or (
value is np.ma.masked_print_option and self.dtype.kind == "O"
):
return value
# Now, let's try a more general conversion.
# Plain arrays will be converted to dimensionless in the process,
# but anything with a unit attribute will use that.
try:
as_quantity = Quantity(value)
_value = as_quantity.to_value(unit)
except UnitsError:
# last chance: if this was not something with a unit
# and is all 0, inf, or nan, we treat it as arbitrary unit.
if not hasattr(value, "unit") and can_have_arbitrary_unit(
as_quantity.value
):
_value = as_quantity.value
else:
raise
if self.dtype.kind == "i" and check_precision:
# If, e.g., we are casting float to int, we want to fail if
# precision is lost, but let things pass if it works.
_value = np.array(_value, copy=COPY_IF_NEEDED, subok=True)
if not np.can_cast(_value.dtype, self.dtype):
self_dtype_array = np.array(_value, self.dtype, subok=True)
if not np.all((self_dtype_array == _value) | np.isnan(_value)):
raise TypeError(
"cannot convert value type to array type without precision loss"
)
# Setting names to ensure things like equality work (note that
# above will have failed already if units did not match).
# TODO: is this the best place to do this?
if _value.dtype.names is not None:
_value = _value.astype(self.dtype, copy=False)
return _value
if NUMPY_LT_2_0:
def itemset(self, *args):
if len(args) == 0:
raise ValueError("itemset must have at least one argument")
self.view(np.ndarray).itemset(*(args[:-1] + (self._to_own_unit(args[-1]),)))
def tostring(self, order="C"):
"""Not implemented, use ``.value.tostring()`` instead."""
raise NotImplementedError(
"cannot write Quantities to string. Write array with"
" q.value.tostring(...)."
)
def tobytes(self, order="C"):
"""Not implemented, use ``.value.tobytes()`` instead."""
raise NotImplementedError(
"cannot write Quantities to bytes. Write array with q.value.tobytes(...)."
)
def tofile(self, fid, sep="", format="%s"):
"""Not implemented, use ``.value.tofile()`` instead."""
raise NotImplementedError(
"cannot write Quantities to file. Write array with q.value.tofile(...)"
)
def dump(self, file):
"""Not implemented, use ``.value.dump()`` instead."""
raise NotImplementedError(
"cannot dump Quantities to file. Write array with q.value.dump()"
)
def dumps(self):
"""Not implemented, use ``.value.dumps()`` instead."""
raise NotImplementedError(
"cannot dump Quantities to string. Write array with q.value.dumps()"
)
# astype, byteswap, copy, view, getfield, setflags OK as is
def fill(self, value):
self.view(np.ndarray).fill(self._to_own_unit(value))
# Shape manipulation: resize cannot be done (does not own data), but
# shape, transpose, swapaxes, flatten, ravel, squeeze all OK. Only
# the flat iterator needs to be overwritten, otherwise single items are
# returned as numbers.
@property
def flat(self):
"""A 1-D iterator over the Quantity array.
This returns a ``QuantityIterator`` instance, which behaves the same
as the `~numpy.flatiter` instance returned by `~numpy.ndarray.flat`,
and is similar to, but not a subclass of, Python's built-in iterator
object.
"""
return QuantityIterator(self)
@flat.setter
def flat(self, value):
y = self.ravel()
y[:] = value
# Item selection and manipulation
# repeat, sort, compress, diagonal OK
def take(self, indices, axis=None, out=None, mode="raise"):
out = super().take(indices, axis=axis, out=out, mode=mode)
# For single elements, ndarray.take returns scalars; these
# need a new view as a Quantity.
if type(out) is not type(self):
out = self._new_view(out)
return out
def put(self, indices, values, mode="raise"):
self.view(np.ndarray).put(indices, self._to_own_unit(values), mode)
def choose(self, choices, out=None, mode="raise"):
raise NotImplementedError(
"cannot choose based on quantity. Choose using array with"
" q.value.choose(...)"
)
# ensure we do not return indices as quantities
if NUMPY_LT_2_0:
def argsort(self, axis=-1, kind=None, order=None):
return self.view(np.ndarray).argsort(axis=axis, kind=kind, order=order)
else:
def argsort(self, axis=-1, kind=None, order=None, *, stable=None):
return self.view(np.ndarray).argsort(
axis=axis, kind=kind, order=order, stable=stable
)
def searchsorted(self, v, *args, **kwargs):
return np.searchsorted(
np.array(self), self._to_own_unit(v, check_precision=False), *args, **kwargs
) # avoid numpy 1.6 problem
def argmax(self, axis=None, out=None, *, keepdims=False):
return self.view(np.ndarray).argmax(axis=axis, out=out, keepdims=keepdims)
def argmin(self, axis=None, out=None, *, keepdims=False):
return self.view(np.ndarray).argmin(axis=axis, out=out, keepdims=keepdims)
def __array_function__(self, function, types, args, kwargs):
"""Wrap numpy functions, taking care of units.
Parameters
----------
function : callable
Numpy function to wrap
types : iterable of classes
Classes that provide an ``__array_function__`` override. Can
in principle be used to interact with other classes. Below,
mostly passed on to `~numpy.ndarray`, which can only interact
with subclasses.
args : tuple
Positional arguments provided in the function call.
kwargs : dict
Keyword arguments provided in the function call.
Returns
-------
result: `~astropy.units.Quantity`, `~numpy.ndarray`
As appropriate for the function. If the function is not
supported, `NotImplemented` is returned, which will lead to
a `TypeError` unless another argument overrode the function.
Raises
------
~astropy.units.UnitsError
If operands have incompatible units.
"""
# A function should be in one of the following sets or dicts:
# 1. SUBCLASS_SAFE_FUNCTIONS (set), if the numpy implementation
# supports Quantity; we pass on to ndarray.__array_function__.
# 2. FUNCTION_HELPERS (dict), if the numpy implementation is usable
# after converting quantities to arrays with suitable units,
# and possibly setting units on the result.
# 3. DISPATCHED_FUNCTIONS (dict), if the function makes sense but
# requires a Quantity-specific implementation.
# 4. UNSUPPORTED_FUNCTIONS (set), if the function does not make sense.
# For now, since we may not yet have complete coverage, if a
# function is in none of the above, we simply call the numpy
# implementation.
if function in SUBCLASS_SAFE_FUNCTIONS:
return super().__array_function__(function, types, args, kwargs)
elif function in FUNCTION_HELPERS:
function_helper = FUNCTION_HELPERS[function]
try:
args, kwargs, unit, out = function_helper(*args, **kwargs)
except NotImplementedError:
return self._not_implemented_or_raise(function, types)
try:
result = super().__array_function__(function, types, args, kwargs)
except AttributeError as e:
# this exception handling becomes unneeded in numpy 2.2 (not NUMPY_LT_2_2)
# see https://github.com/numpy/numpy/issues/27500
if "_implementation" not in str(e):
raise
result = function(*args, **kwargs)
# Fall through to return section
elif function in DISPATCHED_FUNCTIONS:
dispatched_function = DISPATCHED_FUNCTIONS[function]
try:
result, unit, out = dispatched_function(*args, **kwargs)
except NotImplementedError:
return self._not_implemented_or_raise(function, types)
# Fall through to return section
elif function in UNSUPPORTED_FUNCTIONS:
return NotImplemented
else:
warnings.warn(
f"function '{function.__name__}' is not known to astropy's Quantity."
" Will run it anyway, hoping it will treat ndarray subclasses"
" correctly. Please raise an issue at"
" https://github.com/astropy/astropy/issues.",
AstropyWarning,
)
return super().__array_function__(function, types, args, kwargs)
if unit is UNIT_FROM_LIKE_ARG:
# fallback mechanism for NEP 35 functions that dispatch on the 'like'
# argument (i.e. self, in this context), in cases where no other
# argument provides a unit
unit = self.unit
# If unit is None, a plain array is expected (e.g., boolean), which
# means we're done.
# We're also done if the result was NotImplemented, which can happen
# if other inputs/outputs override __array_function__;
# hopefully, they can then deal with us.
if unit is None or result is NotImplemented:
return result
return self._result_as_quantity(result, unit, out=out)
def _not_implemented_or_raise(self, function, types):
# Our function helper or dispatcher found that the function does not
# work with Quantity. In principle, there may be another class that
# knows what to do with us, for which we should return NotImplemented.
# But if there is ndarray (or a non-Quantity subclass of it) around,
# it quite likely coerces, so we should just break.
if any(
issubclass(t, np.ndarray) and not issubclass(t, Quantity) for t in types
):
raise TypeError(
f"the Quantity implementation cannot handle {function} "
"with the given arguments."
) from None
else:
return NotImplemented
# Calculation -- override ndarray methods to take into account units.
# We use the corresponding numpy functions to evaluate the results, since
# the methods do not always allow calling with keyword arguments.
# For instance, np.array([0.,2.]).clip(a_min=0., a_max=1.) gives
# TypeError: 'a_max' is an invalid keyword argument for this function.
def _wrap_function(self, function, *args, unit=None, out=None, **kwargs):
"""Wrap a numpy function that processes self, returning a Quantity.
Parameters
----------
function : callable
Numpy function to wrap.
args : positional arguments
Any positional arguments to the function beyond the first argument
(which will be set to ``self``).
kwargs : keyword arguments
Keyword arguments to the function.
If present, the following arguments are treated specially:
unit : `~astropy.units.Unit`
Unit of the output result. If not given, the unit of ``self``.
out : `~astropy.units.Quantity`
A Quantity instance in which to store the output.
Notes
-----
Output should always be assigned via a keyword argument, otherwise
no proper account of the unit is taken.
Returns
-------
out : `~astropy.units.Quantity`
Result of the function call, with the unit set properly.
"""
if unit is None:
unit = self.unit
# Ensure we don't loop back by turning any Quantity into array views.
args = (self.value,) + tuple(
(arg.value if isinstance(arg, Quantity) else arg) for arg in args
)
if out is not None:
# If pre-allocated output is used, check it is suitable.
# This also returns array view, to ensure we don't loop back.
arrays = tuple(arg for arg in args if isinstance(arg, np.ndarray))
kwargs["out"] = check_output(out, unit, arrays, function=function)
# Apply the function and turn it back into a Quantity.
result = function(*args, **kwargs)
return self._result_as_quantity(result, unit, out)
def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None):
return self._wrap_function(np.trace, offset, axis1, axis2, dtype, out=out)
def var(
self, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=True
):
return self._wrap_function(
np.var,
axis,
dtype,
out=out,
ddof=ddof,
keepdims=keepdims,
where=where,
unit=self.unit**2,
)
def std(
self, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=True
):
return self._wrap_function(
np.std, axis, dtype, out=out, ddof=ddof, keepdims=keepdims, where=where
)
def mean(self, axis=None, dtype=None, out=None, keepdims=False, *, where=True):
return self._wrap_function(
np.mean, axis, dtype, out=out, keepdims=keepdims, where=where
)
def round(self, decimals=0, out=None):
return self._wrap_function(np.round, decimals, out=out)
def dot(self, b, out=None):
result_unit = self.unit * getattr(b, "unit", dimensionless_unscaled)
return self._wrap_function(np.dot, b, out=out, unit=result_unit)
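    # Illustration (added; not from the original source): the result unit is
    # the product of both operands' units, e.g.
    #     ([1., 2.] * u.m).dot([3., 4.] * u.s)  ->  <Quantity 11.0 m s>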
# Calculation: override methods that do not make sense.
def all(self, axis=None, out=None):
raise TypeError(
"cannot evaluate truth value of quantities. "
"Evaluate array with q.value.all(...)"
)
def any(self, axis=None, out=None):
raise TypeError(
"cannot evaluate truth value of quantities. "
"Evaluate array with q.value.any(...)"
)
# Calculation: numpy functions that can be overridden with methods.
def diff(self, n=1, axis=-1):
return self._wrap_function(np.diff, n, axis)
def ediff1d(self, to_end=None, to_begin=None):
return self._wrap_function(np.ediff1d, to_end, to_begin)
def insert(self, obj, values, axis=None):
"""
Insert values along the given axis before the given indices and return
a new `~astropy.units.Quantity` object.
This is a thin wrapper around the `numpy.insert` function.
Parameters
----------
obj : int, slice or sequence of int
Object that defines the index or indices before which ``values`` is
inserted.
values : array-like
Values to insert. If the type of ``values`` is different
from that of quantity, ``values`` is converted to the matching type.
``values`` should be shaped so that it can be broadcast appropriately.
The unit of ``values`` must be consistent with this quantity.
axis : int, optional
Axis along which to insert ``values``. If ``axis`` is None then
the quantity array is flattened before insertion.
Returns
-------
out : `~astropy.units.Quantity`
A copy of quantity with ``values`` inserted. Note that the
insertion does not occur in-place: a new quantity array is returned.
Examples
--------
>>> import astropy.units as u
>>> q = [1, 2] * u.m
>>> q.insert(0, 50 * u.cm)
<Quantity [ 0.5, 1., 2.] m>
>>> q = [[1, 2], [3, 4]] * u.m
>>> q.insert(1, [10, 20] * u.m, axis=0)
<Quantity [[ 1., 2.],
[ 10., 20.],
[ 3., 4.]] m>
>>> q.insert(1, 10 * u.m, axis=1)
<Quantity [[ 1., 10., 2.],
[ 3., 10., 4.]] m>
"""
out_array = np.insert(self.value, obj, self._to_own_unit(values), axis)
return self._new_view(out_array)
| Quantity |
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/typeVarDefault2.py | {
"start": 2416,
"end": 2521
} | class ____[**P = 3]: ...
# This should generate an error because ParamSpec must be a list of types.
| ClassP7 |
python | ray-project__ray | python/ray/tests/test_basic.py | {
"start": 17455,
"end": 18059
} | class ____:
def check(self):
import os
assert "CUDA_VISIBLE_DEVICES" not in os.environ
print("task check", ray.get(check.remote()))
print("actor check", ray.get(Actor.options(num_gpus=0).remote().check.remote()))
"""
run_string_as_driver(
not_override_check_script,
dict(
os.environ,
**{"RAY_ACCEL_ENV_VAR_OVERRIDE_ON_ZERO": "0"},
),
)
override_check_script = """
import ray
ray.init()
@ray.remote(num_gpus=0)
def check():
import os
assert os.environ.get("CUDA_VISIBLE_DEVICES") == ""
@ray.remote(num_gpus=0)
| Actor |
python | kubernetes-client__python | kubernetes/client/models/v1alpha1_json_patch.py | {
"start": 383,
"end": 10024
} | class ____(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'expression': 'str'
}
attribute_map = {
'expression': 'expression'
}
def __init__(self, expression=None, local_vars_configuration=None): # noqa: E501
"""V1alpha1JSONPatch - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._expression = None
self.discriminator = None
if expression is not None:
self.expression = expression
@property
def expression(self):
"""Gets the expression of this V1alpha1JSONPatch. # noqa: E501
expression will be evaluated by CEL to create a [JSON patch](https://jsonpatch.com/). ref: https://github.com/google/cel-spec expression must return an array of JSONPatch values. For example, this CEL expression returns a JSON patch to conditionally modify a value: [ JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"}, JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"} ] To define an object for the patch value, use Object types. For example: [ JSONPatch{ op: \"add\", path: \"/spec/selector\", value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}} } ] To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example: [ JSONPatch{ op: \"add\", path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"), value: \"test\" }, ] CEL expressions have access to the types needed to create JSON patches and objects: - 'JSONPatch' - CEL type of JSON Patch operations. JSONPatch has the fields 'op', 'from', 'path' and 'value'. See [JSON patch](https://jsonpatch.com/) for more details. The 'value' field may be set to any of: string, integer, array, map or object. If set, the 'path' and 'from' fields must be set to a [JSON pointer](https://datatracker.ietf.org/doc/html/rfc6901/) string, where the 'jsonpatch.escapeKey()' CEL function may be used to escape path keys containing '/' and '~'. - 'Object' - CEL type of the resource object. - 'Object.<fieldName>' - CEL type of object field (such as 'Object.spec') - 'Object.<fieldName1>.<fieldName2>...<fieldNameN>` - CEL type of nested field (such as 'Object.spec.containers') CEL expressions have access to the contents of the API request, organized into CEL variables as well as some other useful variables: - 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. 
- 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value. For example, a variable named 'foo' can be accessed as 'variables.foo'. - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. CEL expressions have access to [Kubernetes CEL function libraries](https://kubernetes.io/docs/reference/using-api/cel/#cel-options-language-features-and-libraries) as well as: - 'jsonpatch.escapeKey' - Performs JSONPatch key escaping. '~' and '/' are escaped as '~0' and `~1' respectively). Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. Required. # noqa: E501
:return: The expression of this V1alpha1JSONPatch. # noqa: E501
:rtype: str
"""
return self._expression
@expression.setter
def expression(self, expression):
"""Sets the expression of this V1alpha1JSONPatch.
expression will be evaluated by CEL to create a [JSON patch](https://jsonpatch.com/). ref: https://github.com/google/cel-spec expression must return an array of JSONPatch values. For example, this CEL expression returns a JSON patch to conditionally modify a value: [ JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"}, JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"} ] To define an object for the patch value, use Object types. For example: [ JSONPatch{ op: \"add\", path: \"/spec/selector\", value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}} } ] To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example: [ JSONPatch{ op: \"add\", path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"), value: \"test\" }, ] CEL expressions have access to the types needed to create JSON patches and objects: - 'JSONPatch' - CEL type of JSON Patch operations. JSONPatch has the fields 'op', 'from', 'path' and 'value'. See [JSON patch](https://jsonpatch.com/) for more details. The 'value' field may be set to any of: string, integer, array, map or object. If set, the 'path' and 'from' fields must be set to a [JSON pointer](https://datatracker.ietf.org/doc/html/rfc6901/) string, where the 'jsonpatch.escapeKey()' CEL function may be used to escape path keys containing '/' and '~'. - 'Object' - CEL type of the resource object. - 'Object.<fieldName>' - CEL type of object field (such as 'Object.spec') - 'Object.<fieldName1>.<fieldName2>...<fieldNameN>` - CEL type of nested field (such as 'Object.spec.containers') CEL expressions have access to the contents of the API request, organized into CEL variables as well as some other useful variables: - 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. 
- 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value. For example, a variable named 'foo' can be accessed as 'variables.foo'. - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. CEL expressions have access to [Kubernetes CEL function libraries](https://kubernetes.io/docs/reference/using-api/cel/#cel-options-language-features-and-libraries) as well as: - 'jsonpatch.escapeKey' - Performs JSONPatch key escaping. '~' and '/' are escaped as '~0' and `~1' respectively). Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. Required. # noqa: E501
:param expression: The expression of this V1alpha1JSONPatch. # noqa: E501
:type: str
"""
self._expression = expression
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, V1alpha1JSONPatch):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, V1alpha1JSONPatch):
return True
return self.to_dict() != other.to_dict()
| V1alpha1JSONPatch |
python | huggingface__transformers | tests/models/bark/test_modeling_bark.py | {
"start": 19126,
"end": 22716
} | class ____(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
all_model_classes = (BarkSemanticModel,) if is_torch_available() else ()
# `BarkSemanticModel` inherits from `BarkCausalModel`, but requires an advanced generation config.
# `BarkCausalModel` does not, so we run generation tests there.
all_generative_model_classes = (BarkCausalModel,) if is_torch_available() else ()
is_encoder_decoder = False
test_missing_keys = False
test_resize_embeddings = True
def setUp(self):
self.model_tester = BarkSemanticModelTester(self)
self.config_tester = ConfigTester(self, config_class=BarkSemanticConfig, n_embd=37)
def test_config(self):
self.config_tester.run_common_tests()
def test_save_load_strict(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs()
for model_class in self.all_model_classes:
model = model_class(config)
with tempfile.TemporaryDirectory() as tmpdirname:
model.save_pretrained(tmpdirname)
model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True)
self.assertEqual(info["missing_keys"], set())
def test_decoder_model_past_with_large_inputs(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_decoder_model_past_large_inputs(*config_and_inputs)
def test_inputs_embeds(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
model = model_class(config)
model.to(torch_device)
model.eval()
inputs = copy.deepcopy(self._prepare_for_class(inputs_dict, model_class))
input_ids = inputs["input_ids"]
del inputs["input_ids"]
wte = model.get_input_embeddings()
inputs["input_embeds"] = wte(input_ids)
with torch.no_grad():
model(**inputs)[0]
# override as the input arg is called "input_embeds", not "inputs_embeds"
def test_inputs_embeds_matches_input_ids(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
model = model_class(config)
model.to(torch_device)
model.eval()
inputs = copy.deepcopy(self._prepare_for_class(inputs_dict, model_class))
with torch.no_grad():
out_ids = model(**inputs)[0]
input_ids = inputs["input_ids"]
del inputs["input_ids"]
wte = model.get_input_embeddings()
inputs["input_embeds"] = wte(input_ids)
with torch.no_grad():
out_embeds = model(**inputs)[0]
torch.testing.assert_close(out_embeds, out_ids)
@require_torch_fp16
def test_generate_fp16(self):
config, input_dict = self.model_tester.prepare_config_and_inputs()
input_ids = input_dict["input_ids"]
attention_mask = input_ids.ne(1).to(torch_device)
model = self.all_generative_model_classes[0](config).eval().to(torch_device)
model.half()
model.generate(input_ids, attention_mask=attention_mask)
model.generate(num_beams=4, do_sample=True, early_stopping=False, num_return_sequences=3)
@unittest.skip("Bark has no base model due to special architecture")
def test_model_base_model_prefix(self):
pass
@require_torch
| BarkSemanticModelTest |
python | pytorch__pytorch | torch/testing/_internal/common_quantization.py | {
"start": 68423,
"end": 68937
} | class ____(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.fc1 = torch.nn.Linear(5, 8).to(dtype=torch.float)
self.fc2 = QuantWrapper(torch.nn.Linear(8, 5).to(dtype=torch.float))
self.fc2.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
def get_example_inputs(self) -> tuple[Any, ...]:
return (torch.rand(1, 5),)
| AnnotatedTwoLayerLinearModel |
python | pytorch__pytorch | torch/_functorch/_aot_autograd/descriptors.py | {
"start": 23523,
"end": 23976
} | class ____(DifferentiableAOTOutput):
"""An output representing the computed gradient for a differentiable input, in the joint graph"""
grad_of: DifferentiableAOTInput
def __post_init__(self) -> None:
assert isinstance(self.grad_of, DifferentiableAOTInput)
def expr(self) -> str:
return f"__grad({self.grad_of.expr()})"
def is_grad(self) -> bool:
return True
@dataclasses.dataclass(frozen=True)
| GradAOTOutput |
python | huggingface__transformers | src/transformers/models/audioflamingo3/configuration_audioflamingo3.py | {
"start": 5553,
"end": 8998
} | class ____(PretrainedConfig):
r"""
This is the configuration class to store the configuration of an [`AudioFlamingo3ForConditionalGeneration`]. It is used to instantiate an
AudioFlamingo3 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the AudioFlamingo3.
e.g. [nvidia/audio-flamingo-3-hf](https://huggingface.co/nvidia/audio-flamingo-3-hf)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
audio_config (`Union[AudioFlamingo3EncoderConfig, dict]`, *optional*, defaults to `AudioFlamingo3EncoderConfig`):
The config object or dictionary of the audio backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `Qwen2Config`):
The config object or dictionary of the text backbone.
audio_token_id (`int`, *optional*, defaults to 151669):
The audio token index to encode the audio prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
Activation function used in the projector.
projector_bias (`bool`, *optional*, defaults to `True`):
Whether to include bias terms in the projector.
Example:
```python
>>> from transformers import AudioFlamingo3ForConditionalGeneration, AudioFlamingo3Config, AudioFlamingo3EncoderConfig, Qwen2Config
>>> # Initializing an AudioFlamingo3Encoder config
>>> audio_config = AudioFlamingo3EncoderConfig()
>>> # Initializing a Qwen2 config
>>> text_config = Qwen2Config()
>>> # Initializing an AudioFlamingo3 configuration
>>> configuration = AudioFlamingo3Config(audio_config, text_config)
>>> # Initializing a model from the audioflamingo3 style configuration
>>> model = AudioFlamingo3ForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "audioflamingo3"
sub_configs = {
"audio_config": AudioFlamingo3EncoderConfig,
"text_config": AutoConfig,
}
def __init__(
self,
audio_config=None,
text_config=None,
audio_token_id=151669,
projector_hidden_act="gelu",
projector_bias=True,
**kwargs,
):
self.audio_token_id = audio_token_id
if isinstance(audio_config, dict):
audio_config["model_type"] = audio_config.get("model_type", "audioflamingo3_encoder")
audio_config = CONFIG_MAPPING[audio_config["model_type"]](**audio_config)
elif audio_config is None:
audio_config = CONFIG_MAPPING["audioflamingo3_encoder"]()
self.audio_config = audio_config
if isinstance(text_config, dict):
text_config["model_type"] = text_config.get("model_type", "qwen2")
text_config = CONFIG_MAPPING[text_config["model_type"]](**text_config)
elif text_config is None:
text_config = CONFIG_MAPPING["qwen2"]()
self.text_config = text_config
self.projector_hidden_act = projector_hidden_act
self.projector_bias = projector_bias
super().__init__(**kwargs)
__all__ = ["AudioFlamingo3Config", "AudioFlamingo3EncoderConfig"]
| AudioFlamingo3Config |
python | streamlit__streamlit | e2e_playwright/shared/data_mocks.py | {
"start": 7168,
"end": 14587
} | class ____:
def __str__(self) -> str:
return "TestObject"
BASE_TYPES_DF = pd.DataFrame(
{
"string": [
"a",
"this is a very long sentence that does not contain any reasonable content.",
"c",
"d",
"",
None,
],
"bool": [True, False, True, False, True, None],
"int64": [-5, 0, 1, 2, 3, None],
"float64": [-0.1, 0, 0.1, 0.001, 1.1, None],
"datetime": [
datetime(2020, 1, 1, 0, 0, 0),
datetime(2020, 1, 1, 0, 0, 1),
datetime(2020, 1, 1, 0, 0, 2),
datetime(2020, 1, 1, 0, 0, 3),
datetime(2020, 1, 1, 0, 0, 4),
None,
],
"date": [
date(2020, 1, 1),
date(2020, 1, 2),
date(2020, 1, 3),
date(2020, 1, 4),
date(2020, 1, 5),
None,
],
"time": [
time(0, 0, 0),
time(0, 0, 1),
time(0, 0, 2),
time(0, 0, 3),
time(0, 0, 4),
None,
],
"empty": [None, np.nan, pd.NA, pd.NaT, None, None],
}
)
NUMBER_TYPES_DF = pd.DataFrame(
{
"int64": pd.array([-5, 1, 2, 3, 4, None], dtype="Int64"),
"int32": pd.array([-5, 1, 2, 3, 4, None], dtype="Int32"),
"int16": pd.array([-5, 1, 2, 3, 4, None], dtype="Int16"),
"int8": pd.array([-5, 1, 2, 3, 4, None], dtype="Int8"),
"uint64": pd.array([1, 2, 3, 4, 5, None], dtype="UInt64"),
"uint32": pd.array([1, 2, 3, 4, 5, None], dtype="UInt32"),
"uint16": pd.array([1, 2, 3, 4, 5, None], dtype="UInt16"),
"uint8": pd.array([1, 2, 3, 4, 5, None], dtype="UInt8"),
"float64": pd.array([-0.1, 0, 0.1, 0.001, 1.1, None], dtype="float64"),
"float32": pd.array([-0.1, 0, 0.1, 0.001, 1.1, None], dtype="float32"),
"float16": pd.array([-0.1, 0, 0.1, 0.001, 1.1, None], dtype="float16"),
"mixed": pd.array([1, -2, 3.1, 4, 5.0, None]),
}
)
DATETIME_TYPES_DF = pd.DataFrame(
{
"datetime": [random_date() for _ in range(8)] + [None],
"time": [random_date().time() for _ in range(8)] + [None],
"date": [random_date().date() for _ in range(8)] + [None],
"mixed_datetime": [
random.choice(
[
pd.Timestamp(random_date()),
np.datetime64("2022-03-11T17:13:00")
- np.random.randint(400000, 1500000),
pd.to_datetime(10, unit="s"),
]
)
for _ in range(8)
]
+ [None],
"pd_datetime_TZ": [
(pd.to_datetime("2022-03-11 17:41:00-05:00")) for _ in range(8)
]
+ [None],
"datetime_UTC_TZ": [
random_date().replace(tzinfo=timezone.utc) for _ in range(8)
]
+ [None],
# TODO: Mixed timezones within a column will force the column to be of type object
# It also seems to not work correctly.
"mixed_timezones": [
random.choice(
[
random_date().replace(tzinfo=timezone.utc),
pd.to_datetime("2022-03-11 17:41:00-05:00"),
random_date(),
]
)
for _ in range(8)
]
+ [None],
}
)
LIST_TYPES_DF = pd.DataFrame(
{
"string_list": pd.Series(
[["a", "b", "c"], ["foo", "bar"], ["lorem"], [], None]
),
"number_set": pd.Series([{1, 2, 3}, {2, 3}, {4}, set(), None]),
"boolean_tuple": [
(True, False),
(False, True, True),
(True, True),
(),
None,
],
"dict_list": [
[{"foo": random.randint(0, 1000), "bar": "blub"} for _ in range(2)]
for _ in range(4)
]
+ [None],
"datetime_list": [[random_date() for _ in range(2)] for _ in range(4)] + [None],
}
)
INTERVAL_TYPES_DF = pd.DataFrame(
{
"int64_both": [
pd.Interval(left=i, right=i + 1, closed="both") for i in range(5)
]
+ [None],
"int64_right": [
pd.Interval(left=i, right=i + 1, closed="right") for i in range(5)
]
+ [None],
"int64_left": [
pd.Interval(left=i, right=i + 1, closed="left") for i in range(5)
]
+ [None],
"int64_neither": [
pd.Interval(left=i, right=i + 1, closed="neither") for i in range(5)
]
+ [None],
"timestamp_right_default": [
pd.Interval(
left=pd.Timestamp(2022, 3, 14, i),
right=pd.Timestamp(2022, 3, 14, i + 1),
)
for i in range(5)
]
+ [None],
"float64": [
pd.Interval(np.random.random(), np.random.random() + 1) for _ in range(5)
]
+ [None],
}
)
_offset_types = ["ms", "s", "min", "h", "D", "M", "Y", "W", "W-FRI", "Q"]
PERIOD_TYPES_DF = pd.DataFrame(
{
offset_type: (
[pd.Period(date, freq=offset_type) for date in ["1970-01-01", "2012-02-14"]]
+ [None]
)
for offset_type in _offset_types
}
)
SPECIAL_TYPES_DF = pd.DataFrame(
{
"categorical": pd.Series(["a", "b", "c", "a", None]).astype("category"),
"decimal": pd.Series(
[
Decimal("1.1"),
Decimal("-0.03864734299516908213"),
Decimal(1000),
Decimal("2.212"),
None,
]
),
"bytes": pd.Series(
[
b"a",
b"b",
b"foo",
b"bar",
None,
]
),
"emojis 🌈": pd.Series(["Black ⚫", "Red 🔴", "White ⚪", "Red 🔴", None]),
"timedelta": pd.Series(
[
pd.Timedelta("1 days"),
np.timedelta64(100, "D"),
pd.Timedelta("2 hours"),
timedelta(seconds=5),
None,
]
),
}
)
UNSUPPORTED_TYPES_DF = pd.DataFrame(
{
"period[us]": pd.Series(
[
pd.Period("1970-01-01", freq="us"),
pd.Period("2012-02-14", freq="us"),
pd.Period("2012-02-20T00:12:34.56780", freq="us"),
None,
]
),
"complex": pd.Series([1 + 2j, 3 + 4j, 5 + 6 * 1j, None]),
"mixed_integer": pd.Series([1, 2, "3", None]),
"mixed_types": pd.Series([2.1, "3", True, None]),
"frozenset": pd.Series(
[frozenset([1, 2]), frozenset([3, 4]), frozenset([5, 6]), None]
),
"dicts": pd.Series([{"a": 1}, {"b": 2}, {"c": 2}, None]),
"objects": pd.Series([TestObject(), TestObject(), TestObject(), None]),
# TODO(lukasmasuch): Not supported, but currently leads to error
# > "mixed_types_list": pd.Series(
# > [random.choice([1, 1.0, None, "foo"]) for _ in range(10)]
# > for _ in range(n_rows)
# > ),
# TODO(lukasmasuch): Sparse array is supported, but currently leads to error
# > "sparse-array": pd.Series(
# > pd.arrays.SparseArray([random.choice([0, 1, 2]) for _ in range(n_rows)])
# > ),
}
)
| TestObject |
python | vyperlang__vyper | vyper/venom/analysis/dominators.py | {
"start": 258,
"end": 6217
} | class ____(IRAnalysis):
"""
Dominator tree implementation. This class computes the dominator tree of a
function and provides methods to query the tree. The tree is computed using
the Lengauer-Tarjan algorithm.
"""
fn: IRFunction
entry_block: IRBasicBlock
dominators: dict[IRBasicBlock, OrderedSet[IRBasicBlock]]
immediate_dominators: dict[IRBasicBlock, IRBasicBlock]
dominated: dict[IRBasicBlock, OrderedSet[IRBasicBlock]]
dominator_frontiers: dict[IRBasicBlock, OrderedSet[IRBasicBlock]]
cfg: CFGAnalysis
def analyze(self):
"""
Compute the dominator tree.
"""
self.fn = self.function
self.entry_block = self.fn.entry
self.dominators = {}
self.immediate_dominators = {}
self.dominated = {}
self.dominator_frontiers = {}
self.cfg = self.analyses_cache.request_analysis(CFGAnalysis)
self.cfg_post_walk = list(self.cfg.dfs_post_walk)
self.cfg_post_order = {bb: idx for idx, bb in enumerate(self.cfg_post_walk)}
self._compute_dominators()
self._compute_idoms()
self._compute_df()
def get_all_dominated_blocks(self, bb: IRBasicBlock) -> OrderedSet[IRBasicBlock]:
result: OrderedSet[IRBasicBlock] = OrderedSet()
def visit(block):
for dominated_block in self.dominated.get(block, OrderedSet()):
if dominated_block not in result:
result.add(dominated_block)
visit(dominated_block)
visit(bb)
return result
def dominates(self, dom, sub):
"""
Check if `dom` dominates `sub`.
"""
return dom in self.dominators[sub]
def immediate_dominator(self, bb):
"""
Return the immediate dominator of a basic block.
"""
return self.immediate_dominators.get(bb)
def _compute_dominators(self):
"""
Compute dominators
"""
basic_blocks = self.cfg_post_walk
self.dominators = {bb: OrderedSet(basic_blocks) for bb in basic_blocks}
self.dominators[self.entry_block] = OrderedSet([self.entry_block])
changed = True
count = len(basic_blocks) ** 2 # TODO: find a proper bound for this
while changed:
count -= 1
if count < 0:
raise CompilerPanic("Dominators computation failed to converge")
changed = False
for bb in basic_blocks:
if bb == self.entry_block:
continue
preds = self.cfg.cfg_in(bb)
if len(preds) == 0:
continue
new_dominators = OrderedSet.intersection(*[self.dominators[pred] for pred in preds])
new_dominators.add(bb)
if new_dominators != self.dominators[bb]:
self.dominators[bb] = new_dominators
changed = True
def _compute_idoms(self):
"""
Compute immediate dominators
"""
self.immediate_dominators = {bb: None for bb in self.cfg_post_walk}
self.immediate_dominators[self.entry_block] = self.entry_block
for bb in self.cfg_post_walk:
if bb == self.entry_block:
continue
doms = sorted(self.dominators[bb], key=lambda x: self.cfg_post_order[x])
self.immediate_dominators[bb] = doms[1]
self.dominated = {bb: OrderedSet() for bb in self.cfg_post_walk}
for dom, target in self.immediate_dominators.items():
self.dominated[target].add(dom)
def _compute_df(self):
"""
Compute dominance frontier
"""
basic_blocks = self.cfg_post_walk
self.dominator_frontiers = {bb: OrderedSet() for bb in basic_blocks}
for bb in self.cfg_post_walk:
if len(in_bbs := self.cfg.cfg_in(bb)) > 1:
for pred in in_bbs:
runner = pred
while runner != self.immediate_dominators[bb]:
self.dominator_frontiers[runner].add(bb)
runner = self.immediate_dominators[runner]
def dominance_frontier(self, basic_blocks: list[IRBasicBlock]) -> OrderedSet[IRBasicBlock]:
"""
Compute dominance frontier of a set of basic blocks.
"""
df = OrderedSet[IRBasicBlock]()
for bb in basic_blocks:
df.update(self.dominator_frontiers[bb])
return df
def _intersect(self, bb1, bb2):
"""
Find the nearest common dominator of two basic blocks.
"""
dfs_order = self.cfg_post_order
while bb1 != bb2:
while dfs_order[bb1] < dfs_order[bb2]:
bb1 = self.immediate_dominators[bb1]
while dfs_order[bb1] > dfs_order[bb2]:
bb2 = self.immediate_dominators[bb2]
return bb1
@property
def dom_post_order(self) -> Iterator[IRBasicBlock]:
"""
Compute post-order traversal of the dominator tree.
"""
visited = set()
def visit(bb: IRBasicBlock) -> Iterator[IRBasicBlock]:
if bb in visited:
return
visited.add(bb)
for dominated_bb in self.dominated.get(bb, OrderedSet()):
yield from visit(dominated_bb)
yield bb
return visit(self.entry_block)
def as_graph(self) -> str:
"""
Generate a graphviz representation of the dominator tree.
"""
lines = ["digraph dominator_tree {"]
for bb in self.fn.get_basic_blocks():
if bb == self.entry_block:
continue
idom = self.immediate_dominator(bb)
if idom is None:
continue
lines.append(f'  "{idom.label}" -> "{bb.label}"')
lines.append("}")
return "\n".join(lines)
| DominatorTreeAnalysis |
python | geekcomputers__Python | XORcipher/XOR_cipher.py | {
"start": 370,
"end": 5089
} | class ____(object):
def __init__(self, key=0):
"""
simple constructor that receives a key or uses
default key = 0
"""
# private field
self.__key = key
def encrypt(self, content, key):
"""
input: 'content' of type string and 'key' of type int
output: encrypted string 'content' as a list of chars
if 'key' is falsy, the key set by the constructor is used;
if that is also falsy, key = 1
"""
# precondition
assert isinstance(key, int) and isinstance(content, str)
key = key or self.__key or 1
# make sure key can be any size
while key > 255:
key -= 255
# This will be returned
ans = []
for ch in content:
ans.append(chr(ord(ch) ^ key))
return ans
def decrypt(self, content, key):
"""
input: 'content' of type list and 'key' of type int
output: decrypted string 'content' as a list of chars
if 'key' is falsy, the key set by the constructor is used;
if that is also falsy, key = 1
"""
# precondition
assert isinstance(key, int) and isinstance(content, list)
key = key or self.__key or 1
# make sure key can be any size
while key > 255:
key -= 255
# This will be returned
ans = []
for ch in content:
ans.append(chr(ord(ch) ^ key))
return ans
def encrypt_string(self, content, key=0):
"""
input: 'content' of type string and 'key' of type int
output: encrypted string 'content'
if 'key' is falsy, the key set by the constructor is used;
if that is also falsy, key = 1
"""
# precondition
assert isinstance(key, int) and isinstance(content, str)
key = key or self.__key or 1
# make sure key can be any size
while key > 255:
key -= 255
# This will be returned
ans = ""
for ch in content:
ans += chr(ord(ch) ^ key)
return ans
def decrypt_string(self, content, key=0):
"""
input: 'content' of type string and 'key' of type int
output: decrypted string 'content'
if 'key' is falsy, the key set by the constructor is used;
if that is also falsy, key = 1
"""
# precondition
assert isinstance(key, int) and isinstance(content, str)
key = key or self.__key or 1
# make sure key can be any size
while key > 255:
key -= 255
# This will be returned
ans = ""
for ch in content:
ans += chr(ord(ch) ^ key)
return ans
def encrypt_file(self, file, key=0):
"""
input: filename (str) and a key (int)
output: returns true if encrypt process was
successful otherwise false
if 'key' is falsy, the key set by the constructor is used;
if that is also falsy, key = 1
"""
# precondition
assert isinstance(file, str) and isinstance(key, int)
try:
with open(file, "r") as fin:
with open("encrypt.out", "w+") as fout:
# actual encrypt-process
for line in fin:
fout.write(self.encrypt_string(line, key))
except OSError:
return False
return True
def decrypt_file(self, file, key):
"""
input: filename (str) and a key (int)
output: returns true if decrypt process was
successful otherwise false
if 'key' is falsy, the key set by the constructor is used;
if that is also falsy, key = 1
"""
# precondition
assert isinstance(file, str) and isinstance(key, int)
try:
with open(file, "r") as fin:
with open("decrypt.out", "w+") as fout:
# actual encrypt-process
for line in fin:
fout.write(self.decrypt_string(line, key))
except OSError:
return False
return True
# Tests
# crypt = XORCipher()
# key = 67
# # test enrcypt
# print(crypt.encrypt("hallo welt", key))
# # test decrypt
# print(crypt.decrypt(crypt.encrypt("hallo welt", key), key))
# # test encrypt_string
# print(crypt.encrypt_string("hallo welt", key))
# # test decrypt_string
# print(crypt.decrypt_string(crypt.encrypt_string("hallo welt", key), key))
# if crypt.encrypt_file("test.txt", key):
# print("encrypt successful")
# else:
# print("encrypt unsuccessful")
# if crypt.decrypt_file("encrypt.out", key):
# print("decrypt successful")
# else:
# print("decrypt unsuccessful")
| XORCipher |
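The XORCipher record above relies on XOR being its own inverse: applying the same key twice restores the original text. A minimal standalone sketch of that property (the function name is illustrative, and the key is reduced with `% 256` rather than the snippet's subtract-255 loop):

```python
def xor_string(content: str, key: int) -> str:
    # XOR each character's code point with the key; running the
    # function twice with the same key restores the input.
    key %= 256  # keep the key within a single byte
    return "".join(chr(ord(ch) ^ key) for ch in content)

plain = "hallo welt"
cipher = xor_string(plain, 67)
assert cipher != plain
assert xor_string(cipher, 67) == plain  # encrypt -> decrypt round-trip
```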
python | lazyprogrammer__machine_learning_examples | ab_testing/bayesian_normal.py | {
"start": 478,
"end": 2122
} | class ____:
def __init__(self, true_mean):
self.true_mean = true_mean
# parameters for mu - prior is N(0,1)
self.m = 0
self.lambda_ = 1
self.tau = 1
self.N = 0
def pull(self):
return np.random.randn() / np.sqrt(self.tau) + self.true_mean
def sample(self):
return np.random.randn() / np.sqrt(self.lambda_) + self.m
def update(self, x):
self.m = (self.tau * x + self.lambda_ * self.m) / (self.tau + self.lambda_)
self.lambda_ += self.tau
self.N += 1
def plot(bandits, trial):
x = np.linspace(-3, 6, 200)
for b in bandits:
y = norm.pdf(x, b.m, np.sqrt(1. / b.lambda_))
plt.plot(x, y, label=f"real mean: {b.true_mean:.4f}, num plays: {b.N}")
plt.title(f"Bandit distributions after {trial} trials")
plt.legend()
plt.show()
def run_experiment():
bandits = [Bandit(m) for m in BANDIT_MEANS]
sample_points = [5,10,20,50,100,200,500,1000,1500,1999]
rewards = np.empty(NUM_TRIALS)
for i in range(NUM_TRIALS):
# Thompson sampling
j = np.argmax([b.sample() for b in bandits])
# plot the posteriors
if i in sample_points:
plot(bandits, i)
# pull the arm for the bandit with the largest sample
x = bandits[j].pull()
# update the distribution for the bandit whose arm we just pulled
bandits[j].update(x)
# update rewards
rewards[i] = x
cumulative_average = np.cumsum(rewards) / (np.arange(NUM_TRIALS) + 1)
# plot moving average ctr
plt.plot(cumulative_average)
for m in BANDIT_MEANS:
plt.plot(np.ones(NUM_TRIALS)*m)
plt.show()
return cumulative_average
if __name__ == '__main__':
run_experiment()
| Bandit |
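The `Bandit.update` method in the record above is the standard conjugate update for the mean of a Gaussian with known observation precision `tau`: the new posterior mean is a precision-weighted average of the sample and the old mean, and the posterior precision grows by `tau`. A stripped-down sketch of just that update (class name illustrative), showing the mean being pulled toward the data:

```python
class NormalPosterior:
    """Conjugate update for the mean of a Gaussian with known precision tau."""

    def __init__(self, tau: float = 1.0):
        self.m = 0.0        # posterior mean, prior N(0, 1)
        self.lambda_ = 1.0  # posterior precision
        self.tau = tau      # known observation precision

    def update(self, x: float) -> None:
        # precision-weighted average of the new sample and the current mean
        self.m = (self.tau * x + self.lambda_ * self.m) / (self.tau + self.lambda_)
        self.lambda_ += self.tau

p = NormalPosterior()
for _ in range(1000):
    p.update(3.0)
assert abs(p.m - 3.0) < 0.01  # posterior mean converges toward the observed value
```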
python | kamyu104__LeetCode-Solutions | Python/maximum-number-of-achievable-transfer-requests.py | {
"start": 88,
"end": 739
} | class ____(object):
def maximumRequests(self, n, requests):
"""
:type n: int
:type requests: List[List[int]]
:rtype: int
"""
for k in reversed(xrange(1, len(requests)+1)):
for c in itertools.combinations(xrange(len(requests)), k):
change = [0]*n
for i in c:
change[requests[i][0]] -= 1
change[requests[i][1]] += 1
if all(c == 0 for c in change):
return k # early return
return 0
# Time: O((n + r) * 2^r)
# Space: O(n + r)
# full search solution (much slower)
| Solution |
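The brute force in the record above can be exercised outside the LeetCode harness. A Python 3 rendering of the same idea (`xrange` replaced by `range`; the test data is the problem's first example, where all requests except `[3,4]` cancel out):

```python
from itertools import combinations

def max_achievable(n, requests):
    # Try subsets from largest to smallest; a subset is achievable when
    # every building's (arrivals - departures) nets to zero.
    for k in range(len(requests), 0, -1):
        for subset in combinations(range(len(requests)), k):
            change = [0] * n
            for i in subset:
                frm, to = requests[i]
                change[frm] -= 1
                change[to] += 1
            if all(c == 0 for c in change):
                return k  # early return: largest achievable subset found
    return 0

assert max_achievable(5, [[0, 1], [1, 0], [0, 1], [1, 2], [2, 0], [3, 4]]) == 5
```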
python | spyder-ide__spyder | spyder/plugins/updatemanager/container.py | {
"start": 727,
"end": 816
} | class ____:
SpyderCheckUpdateAction = "spyder_check_update_action"
| UpdateManagerActions |
python | spack__spack | lib/spack/spack/vendor/ruamel/yaml/scalarint.py | {
"start": 3117,
"end": 3454
} | class ____(ScalarInt):
def __new__(cls, value, width=None, underscore=None, anchor=None):
# type: (Any, Any, Any, Any) -> Any
return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
# mixed casing of A-F is not supported, when loading the first non digit
# determines the case
| OctalInt |
python | jmcnamara__XlsxWriter | xlsxwriter/test/workbook/test_write_defined_name.py | {
"start": 299,
"end": 964
} | class ____(unittest.TestCase):
"""
Test the Workbook _write_defined_name() method.
"""
def setUp(self):
self.fh = StringIO()
self.workbook = Workbook()
self.workbook._set_filehandle(self.fh)
def test_write_defined_name(self):
"""Test the _write_defined_name() method"""
self.workbook._write_defined_name(["_xlnm.Print_Titles", 0, "Sheet1!$1:$1", 0])
exp = """<definedName name="_xlnm.Print_Titles" localSheetId="0">Sheet1!$1:$1</definedName>"""
got = self.fh.getvalue()
self.assertEqual(exp, got)
def tearDown(self):
self.workbook.fileclosed = 1
| TestWriteDefinedName |
python | google__pytype | pytype/tests/test_options.py | {
"start": 97,
"end": 3103
} | class ____(test_base.BaseTest):
"""Tests for VM options."""
def test_no_max_depth(self):
ty = self.Infer(
"""
def f1(x):
return f2(x)
def f2(x):
return f3(x)
def f3(x):
return 1
""",
maximum_depth=None,
)
self.assertTypesMatchPytd(
ty,
"""
def f1(x) -> int: ...
def f2(x) -> int: ...
def f3(x) -> int: ...
""",
)
def test_max_depth0(self):
ty = self.Infer(
"""
def f1(x):
return f2(x)
def f2(x):
return f3(x)
def f3(x):
return 1
""",
maximum_depth=0,
)
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def f1(x) -> Any: ...
def f2(x) -> Any: ...
def f3(x) -> Any: ...
""",
)
def test_max_depth1(self):
ty = self.Infer(
"""
def f1(x):
return f2(x)
def f2(x):
return f3(x)
def f3(x):
return 1
""",
maximum_depth=1,
)
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def f1(x) -> Any: ...
def f2(x) -> Any: ...
def f3(x) -> int: ...
""",
)
def test_max_depth2(self):
ty = self.Infer(
"""
def f1(x):
return f2(x)
def f2(x):
return f3(x)
def f3(x):
return 1
""",
maximum_depth=2,
)
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def f1(x) -> Any: ...
def f2(x) -> int: ...
def f3(x) -> int: ...
""",
)
def test_init_max_depth(self):
ty = self.Infer(
"""
def f1(x):
return f2(x)
def f2(x):
return 1
def g1(x):
return g2(x)
def g2(x):
return g3(x)
def g3(x):
return 1
x1 = f1(__any_object__)
x2 = g1(__any_object__)
""",
init_maximum_depth=2,
)
self.assertTypesMatchPytd(
ty,
"""
from typing import Any
def f1(x) -> int: ...
def f2(x) -> int: ...
def g1(x) -> int: ...
def g2(x) -> int: ...
def g3(x) -> int: ...
x1: int
x2: Any # exceeded max depth
""",
)
def test_max_depth_for_init(self):
# This test will fail if we don't whitelist "__init__" methods from
# maxdepth, because that would prevent the constructor of Foo from being
# executed.
_ = self.Infer(
"""
class Foo:
def __init__(self):
self.bar = 0.0
def get_bar(self):
return self.bar
def f1(my_set):
my_set.add(Foo())
def f2(my_set):
f1(my_set)
def f3(my_set):
f2(my_set)
def f4(my_set):
f3(my_set)
my_set = set()
f4(my_set)
for foo in my_set:
foo.get_bar()
""",
maximum_depth=3,
init_maximum_depth=4,
)
if __name__ == "__main__":
test_base.main()
| OptionsTest |
python | PyCQA__pylint | tests/functional/u/unused/unused_private_member.py | {
"start": 5619,
"end": 6033
} | class ____:
def __init__(self):
self.context = Crash4755Context()
def method(self):
self.context.__messages = []
for message in self.context.__messages:
print(message)
# https://github.com/pylint-dev/pylint/issues/4681
# Accessing attributes of the class using the class name should not result in a false positive
# as long as it is used within the class
| Crash4755Command |
python | python-openxml__python-docx | tests/oxml/parts/unitdata/document.py | {
"start": 80,
"end": 183
} | class ____(BaseBuilder):
__tag__ = "w:body"
__nspfxs__ = ("w",)
__attrs__ = ()
| CT_BodyBuilder |
python | walkccc__LeetCode | solutions/1372. Longest ZigZag Path in a Binary Tree/1372.py | {
"start": 120,
"end": 606
} | class ____:
def longestZigZag(self, root: TreeNode | None) -> int:
def dfs(root: TreeNode | None) -> T:
if not root:
return T(-1, -1, -1)
left = dfs(root.left)
right = dfs(root.right)
leftZigZag = left.rightMax + 1
rightZigZag = right.leftMax + 1
subtreeMax = max(leftZigZag, rightZigZag,
left.subtreeMax, right.subtreeMax)
return T(leftZigZag, rightZigZag, subtreeMax)
return dfs(root).subtreeMax
| Solution |
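The `T` returned by `dfs` in the record above is a tuple of `(leftMax, rightMax, subtreeMax)` edge counts, where `leftMax` is the longest zigzag starting at the node by first going left. A self-contained version with a minimal `TreeNode` (both helpers are defined here because the record omits them):

```python
from collections import namedtuple

T = namedtuple("T", ("leftMax", "rightMax", "subtreeMax"))

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def longest_zigzag(root):
    def dfs(node):
        if not node:
            return T(-1, -1, -1)
        left, right = dfs(node.left), dfs(node.right)
        # going left here forces the previous step to have gone right,
        # so each direction extends the child's opposite-direction count
        left_zigzag = left.rightMax + 1
        right_zigzag = right.leftMax + 1
        subtree_max = max(left_zigzag, right_zigzag,
                          left.subtreeMax, right.subtreeMax)
        return T(left_zigzag, right_zigzag, subtree_max)
    return dfs(root).subtreeMax

# root -> right child -> left grandchild alternates direction: length 2
tree = TreeNode(1, right=TreeNode(2, left=TreeNode(3)))
assert longest_zigzag(tree) == 2
```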
python | django__django | tests/forms_tests/tests/test_error_messages.py | {
"start": 680,
"end": 939
} | class ____:
def assertFormErrors(self, expected, the_callable, *args, **kwargs):
with self.assertRaises(ValidationError) as cm:
the_callable(*args, **kwargs)
self.assertEqual(cm.exception.messages, expected)
| AssertFormErrorsMixin |
python | spack__spack | var/spack/test_repos/spack_repo/builtin_mock/packages/netlib_scalapack/package.py | {
"start": 218,
"end": 520
} | class ____(Package):
homepage = "http://www.netlib.org/scalapack/"
url = "http://www.netlib.org/scalapack/scalapack-2.1.0.tgz"
version("2.1.0", "b1d3e3e425b2e44a06760ff173104bdf")
provides("scalapack")
depends_on("mpi")
depends_on("lapack")
depends_on("blas")
| NetlibScalapack |
python | run-llama__llama_index | llama-index-integrations/embeddings/llama-index-embeddings-nvidia/llama_index/embeddings/nvidia/base.py | {
"start": 581,
"end": 10499
} | class ____(BaseEmbedding):
"""NVIDIA embeddings."""
base_url: str = Field(
default_factory=lambda: os.getenv("NVIDIA_BASE_URL", BASE_URL),
description="Base url for model listing an invocation",
)
model: Optional[str] = Field(
description="Name of the NVIDIA embedding model to use.\n"
)
truncate: Literal["NONE", "START", "END"] = Field(
default="NONE",
description=(
"Truncate input text if it exceeds the model's maximum token length. "
"Default is 'NONE', which raises an error if an input is too long."
),
)
timeout: float = Field(
default=120, description="The timeout for the API request in seconds.", ge=0
)
max_retries: int = Field(
default=5,
description="The maximum number of retries for the API request.",
ge=0,
)
dimensions: Optional[int] = Field(
default=None,
description=(
"The number of dimensions for the embeddings. This parameter is not "
"supported by all models."
),
)
_client: Any = PrivateAttr()
_aclient: Any = PrivateAttr()
_is_hosted: bool = PrivateAttr(True)
def __init__(
self,
model: Optional[str] = None,
timeout: Optional[float] = 120,
max_retries: Optional[int] = 5,
dimensions: Optional[int] = 0,
nvidia_api_key: Optional[str] = None,
api_key: Optional[str] = None,
embed_batch_size: int = DEFAULT_EMBED_BATCH_SIZE, # This could default to 50
callback_manager: Optional[CallbackManager] = None,
**kwargs: Any,
):
"""
Construct an Embedding interface for NVIDIA NIM.
This constructor initializes an instance of the NVIDIAEmbedding class, which provides
an interface for embedding text using NVIDIA's NIM service.
Parameters
----------
- model (str, optional): The name of the model to use for embeddings.
- timeout (float, optional): The timeout for requests to the NIM service, in seconds. Defaults to 120.
- max_retries (int, optional): The maximum number of retries for requests to the NIM service. Defaults to 5.
- dimensions (int, optional): The number of dimensions for the embeddings. This
parameter is not supported by all models.
- nvidia_api_key (str, optional): The API key for the NIM service. This is required if using a hosted NIM.
- api_key (str, optional): An alternative parameter for providing the API key.
- base_url (str, optional): The base URL for the NIM service. If not provided, the service will default to a hosted NIM.
- **kwargs: Additional keyword arguments.
API Keys:
- The recommended way to provide the API key is through the `NVIDIA_API_KEY` environment variable.
Note:
- Switch from a hosted NIM (default) to an on-premises NIM using the `base_url` parameter. An API key is required for hosted NIM.
"""
super().__init__(
model=model,
embed_batch_size=embed_batch_size,
callback_manager=callback_manager,
dimensions=dimensions,
**kwargs,
)
self.dimensions = dimensions
if embed_batch_size > 259:
raise ValueError("The batch size should not be larger than 259.")
api_key = get_from_param_or_env(
"api_key",
nvidia_api_key or api_key,
"NVIDIA_API_KEY",
"NO_API_KEY_PROVIDED",
)
self._is_hosted = self.base_url in KNOWN_URLS
if self._is_hosted: # hosted on API Catalog (build.nvidia.com)
if api_key == "NO_API_KEY_PROVIDED":
raise ValueError("An API key is required for hosted NIM.")
self._client = OpenAI(
api_key=api_key,
base_url=self.base_url,
timeout=timeout,
max_retries=max_retries,
)
self._client._custom_headers = {"User-Agent": "llama-index-embeddings-nvidia"}
self._aclient = AsyncOpenAI(
api_key=api_key,
base_url=self.base_url,
timeout=timeout,
max_retries=max_retries,
)
self._aclient._custom_headers = {"User-Agent": "llama-index-embeddings-nvidia"}
self.model = model
if not self.model:
if self._is_hosted:
self.model = DEFAULT_MODEL
else:
self.__get_default_model()
if not self.model.startswith("nvdev/"):
self._validate_model(self.model) ## validate model
def __get_default_model(self) -> None:
"""Set default model."""
if not self._is_hosted:
valid_models = [
model.id
for model in self.available_models
if not model.base_model or model.base_model == model.id
]
self.model = next(iter(valid_models), None)
if self.model:
warnings.warn(
f"Default model is set as: {self.model}. \n"
"Set model using model parameter. \n"
"To get available models use available_models property.",
UserWarning,
)
else:
raise ValueError("No locally hosted model was found.")
else:
self.model = self.model or DEFAULT_MODEL
def _validate_model(self, model_name: str) -> None:
"""
Validates compatibility of the hosted model with the client.
Skipping the client validation for non-catalogue requests.
Args:
model_name (str): The name of the model.
Raises:
ValueError: If the model is incompatible with the client.
"""
model = determine_model(model_name)
if self._is_hosted:
if not model:
warnings.warn(f"Unable to determine validity of {model_name}")
if model and model.endpoint:
self.base_url = model.endpoint
# Update client base_url for custom endpoints
self._client.base_url = self.base_url
self._aclient.base_url = self.base_url
# TODO: handle locally hosted models
@property
def available_models(self) -> List[str]:
"""Get available models."""
# TODO: hosted now has a model listing, need to merge known and listed models
if not self._is_hosted:
return [
Model(
id=model.id,
base_model=getattr(model, "params", {}).get("root", None),
)
for model in self._client.models.list()
]
else:
return [Model(id=id) for id in EMBEDDING_MODEL_TABLE]
@classmethod
def class_name(cls) -> str:
return "NVIDIAEmbedding"
def _get_query_embedding(self, query: str) -> List[float]:
"""Get query embedding."""
extra_body = {"input_type": "passage", "truncate": self.truncate}
if self.dimensions:
extra_body["dimensions"] = self.dimensions
return (
self._client.embeddings.create(
input=[query],
model=self.model,
extra_body=extra_body,
)
.data[0]
.embedding
)
def _get_text_embedding(self, text: str) -> List[float]:
"""Get text embedding."""
extra_body = {"input_type": "passage", "truncate": self.truncate}
if self.dimensions:
extra_body["dimensions"] = self.dimensions
return (
self._client.embeddings.create(
input=[text],
model=self.model,
extra_body=extra_body,
)
.data[0]
.embedding
)
def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
"""Get text embeddings."""
assert len(texts) <= 259, "The batch size should not be larger than 259."
extra_body = {"input_type": "passage", "truncate": self.truncate}
if self.dimensions:
extra_body["dimensions"] = self.dimensions
data = self._client.embeddings.create(
input=texts,
model=self.model,
extra_body=extra_body,
).data
return [d.embedding for d in data]
async def _aget_query_embedding(self, query: str) -> List[float]:
"""Asynchronously get query embedding."""
return (
(
await self._aclient.embeddings.create(
input=[query],
model=self.model,
extra_body={"input_type": "query", "truncate": self.truncate},
)
)
.data[0]
.embedding
)
async def _aget_text_embedding(self, text: str) -> List[float]:
"""Asynchronously get text embedding."""
return (
(
await self._aclient.embeddings.create(
input=[text],
model=self.model,
extra_body={"input_type": "passage", "truncate": self.truncate},
)
)
.data[0]
.embedding
)
async def _aget_text_embeddings(self, texts: List[str]) -> List[List[float]]:
"""Asynchronously get text embeddings."""
assert len(texts) <= 259, "The batch size should not be larger than 259."
data = (
await self._aclient.embeddings.create(
input=texts,
model=self.model,
extra_body={"input_type": "passage", "truncate": self.truncate},
)
).data
return [d.embedding for d in data]
| NVIDIAEmbedding |
python | Unity-Technologies__ml-agents | ml-agents/mlagents/trainers/torch_entities/model_serialization.py | {
"start": 2848,
"end": 6267
} | class ____:
def __init__(self, policy):
# ONNX only support input in NCHW (channel first) format.
# Sentis also expect to get data in NCHW.
# Any multi-dimentional input should follow that otherwise will
# cause problem for Sentis import.
self.policy = policy
observation_specs = self.policy.behavior_spec.observation_specs
batch_dim = [1]
seq_len_dim = [1]
num_obs = len(observation_specs)
dummy_obs = [
torch.zeros(
batch_dim + list(ModelSerializer._get_onnx_shape(obs_spec.shape))
)
for obs_spec in observation_specs
]
dummy_masks = torch.ones(
batch_dim + [sum(self.policy.behavior_spec.action_spec.discrete_branches)]
)
dummy_memories = torch.zeros(
batch_dim + seq_len_dim + [self.policy.export_memory_size]
)
self.dummy_input = (dummy_obs, dummy_masks, dummy_memories)
self.input_names = [TensorNames.get_observation_name(i) for i in range(num_obs)]
self.input_names += [
TensorNames.action_mask_placeholder,
TensorNames.recurrent_in_placeholder,
]
self.dynamic_axes = {name: {0: "batch"} for name in self.input_names}
self.output_names = [TensorNames.version_number, TensorNames.memory_size]
if self.policy.behavior_spec.action_spec.continuous_size > 0:
self.output_names += [
TensorNames.continuous_action_output,
TensorNames.continuous_action_output_shape,
TensorNames.deterministic_continuous_action_output,
]
self.dynamic_axes.update(
{TensorNames.continuous_action_output: {0: "batch"}}
)
if self.policy.behavior_spec.action_spec.discrete_size > 0:
self.output_names += [
TensorNames.discrete_action_output,
TensorNames.discrete_action_output_shape,
TensorNames.deterministic_discrete_action_output,
]
self.dynamic_axes.update({TensorNames.discrete_action_output: {0: "batch"}})
if self.policy.export_memory_size > 0:
self.output_names += [TensorNames.recurrent_output]
@staticmethod
def _get_onnx_shape(shape: Tuple[int, ...]) -> Tuple[int, ...]:
"""
Converts the shape of an observation to be compatible with the NCHW format
of ONNX
"""
if len(shape) == 3:
return shape[0], shape[1], shape[2]
return shape
def export_policy_model(self, output_filepath: str) -> None:
"""
Exports a Torch model for a Policy to .onnx format for Unity embedding.
:param output_filepath: file path to output the model (without file suffix)
"""
onnx_output_path = f"{output_filepath}.onnx"
logger.debug(f"Converting to {onnx_output_path}")
with exporting_to_onnx():
torch.onnx.export(
self.policy.actor,
self.dummy_input,
onnx_output_path,
opset_version=SerializationSettings.onnx_opset,
input_names=self.input_names,
output_names=self.output_names,
dynamic_axes=self.dynamic_axes,
)
logger.info(f"Exported {onnx_output_path}")
| ModelSerializer |
python | openai__openai-python | src/openai/types/chat/chat_completion_message_custom_tool_call.py | {
"start": 222,
"end": 395
} | class ____(BaseModel):
input: str
"""The input for the custom tool call generated by the model."""
name: str
"""The name of the custom tool to call."""
| Custom |
python | numba__numba | numba/core/typing/builtins.py | {
"start": 14652,
"end": 14719
} | class ____(TupleCompare):
pass
@infer_global(operator.gt)
| TupleGe |
python | weaviate__weaviate-python-client | weaviate/exceptions.py | {
"start": 5389,
"end": 5757
} | class ____(WeaviateBaseError):
"""Is raised when the user provides a generic that is not supported."""
def __init__(self, type_: str) -> None:
msg = f"""{type_} can only be a dict type, e.g. Dict[str, Any], or a class that inherits from TypedDict"""
super().__init__(msg)
InvalidDataModelException = InvalidDataModelError
| InvalidDataModelError |
python | pytest-dev__pytest | testing/test_collection.py | {
"start": 18331,
"end": 26015
} | class ____:
def test_collect_topdir(self, pytester: Pytester) -> None:
p = pytester.makepyfile("def test_func(): pass")
id = "::".join([p.name, "test_func"])
# XXX migrate to collectonly? (see below)
config = pytester.parseconfig(id)
topdir = pytester.path
rcol = Session.from_config(config)
assert topdir == rcol.path
# rootid = rcol.nodeid
# root2 = rcol.perform_collect([rcol.nodeid], genitems=False)[0]
# assert root2 == rcol, rootid
colitems = rcol.perform_collect([rcol.nodeid], genitems=False)
assert len(colitems) == 1
assert colitems[0].path == topdir
def get_reported_items(self, hookrec: HookRecorder) -> list[Item]:
"""Return pytest.Item instances reported by the pytest_collectreport hook"""
calls = hookrec.getcalls("pytest_collectreport")
return [
x
for call in calls
for x in call.report.result
if isinstance(x, pytest.Item)
]
def test_collect_protocol_single_function(self, pytester: Pytester) -> None:
p = pytester.makepyfile("def test_func(): pass")
id = "::".join([p.name, "test_func"])
items, hookrec = pytester.inline_genitems(id)
(item,) = items
assert item.name == "test_func"
newid = item.nodeid
assert newid == id
pprint.pprint(hookrec.calls)
topdir = pytester.path # noqa: F841
hookrec.assert_contains(
[
("pytest_collectstart", "collector.path == topdir"),
("pytest_make_collect_report", "collector.path == topdir"),
("pytest_collectstart", "collector.path == p"),
("pytest_make_collect_report", "collector.path == p"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.result[0].name == 'test_func'"),
]
)
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ["test_func"]
def test_collect_protocol_method(self, pytester: Pytester) -> None:
p = pytester.makepyfile(
"""
class TestClass(object):
def test_method(self):
pass
"""
)
normid = p.name + "::TestClass::test_method"
for id in [p.name, p.name + "::TestClass", normid]:
items, hookrec = pytester.inline_genitems(id)
assert len(items) == 1
assert items[0].name == "test_method"
newid = items[0].nodeid
assert newid == normid
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ["test_method"]
def test_collect_custom_nodes_multi_id(self, pytester: Pytester) -> None:
p = pytester.makepyfile("def test_func(): pass")
pytester.makeconftest(
f"""
import pytest
class SpecialItem(pytest.Item):
def runtest(self):
return # ok
class SpecialFile(pytest.File):
def collect(self):
return [SpecialItem.from_parent(name="check", parent=self)]
def pytest_collect_file(file_path, parent):
if file_path.name == {p.name!r}:
return SpecialFile.from_parent(path=file_path, parent=parent)
"""
)
id = p.name
items, hookrec = pytester.inline_genitems(id)
pprint.pprint(hookrec.calls)
assert len(items) == 2
hookrec.assert_contains(
[
("pytest_collectstart", "collector.path == collector.session.path"),
(
"pytest_collectstart",
"collector.__class__.__name__ == 'SpecialFile'",
),
("pytest_collectstart", "collector.__class__.__name__ == 'Module'"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid.startswith(p.name)"),
]
)
assert len(self.get_reported_items(hookrec)) == 2
def test_collect_subdir_event_ordering(self, pytester: Pytester) -> None:
p = pytester.makepyfile("def test_func(): pass")
aaa = pytester.mkpydir("aaa")
test_aaa = aaa.joinpath("test_aaa.py")
p.replace(test_aaa)
items, hookrec = pytester.inline_genitems()
assert len(items) == 1
pprint.pprint(hookrec.calls)
hookrec.assert_contains(
[
("pytest_collectstart", "collector.path == test_aaa"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid.startswith('aaa/test_aaa.py')"),
]
)
def test_collect_two_commandline_args(self, pytester: Pytester) -> None:
p = pytester.makepyfile("def test_func(): pass")
aaa = pytester.mkpydir("aaa")
bbb = pytester.mkpydir("bbb")
test_aaa = aaa.joinpath("test_aaa.py")
shutil.copy(p, test_aaa)
test_bbb = bbb.joinpath("test_bbb.py")
p.replace(test_bbb)
id = "."
items, hookrec = pytester.inline_genitems(id)
assert len(items) == 2
pprint.pprint(hookrec.calls)
hookrec.assert_contains(
[
("pytest_collectstart", "collector.path == test_aaa"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid == 'aaa/test_aaa.py'"),
("pytest_collectstart", "collector.path == test_bbb"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid == 'bbb/test_bbb.py'"),
]
)
def test_serialization_byid(self, pytester: Pytester) -> None:
pytester.makepyfile("def test_func(): pass")
items, _hookrec = pytester.inline_genitems()
assert len(items) == 1
(item,) = items
items2, _hookrec = pytester.inline_genitems(item.nodeid)
(item2,) = items2
assert item2.name == item.name
assert item2.path == item.path
def test_find_byid_without_instance_parents(self, pytester: Pytester) -> None:
p = pytester.makepyfile(
"""
class TestClass(object):
def test_method(self):
pass
"""
)
arg = p.name + "::TestClass::test_method"
items, hookrec = pytester.inline_genitems(arg)
assert len(items) == 1
(item,) = items
assert item.nodeid.endswith("TestClass::test_method")
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ["test_method"]
def test_collect_parametrized_order(self, pytester: Pytester) -> None:
p = pytester.makepyfile(
"""
import pytest
@pytest.mark.parametrize('i', [0, 1, 2])
def test_param(i): ...
"""
)
items, _hookrec = pytester.inline_genitems(f"{p}::test_param")
assert len(items) == 3
assert [item.nodeid for item in items] == [
"test_collect_parametrized_order.py::test_param[0]",
"test_collect_parametrized_order.py::test_param[1]",
"test_collect_parametrized_order.py::test_param[2]",
]
| TestSession |
python | ray-project__ray | python/ray/serve/_private/common.py | {
"start": 29660,
"end": 29765
} | class ____:
accepted: bool
num_ongoing_requests: int
@dataclass(frozen=True)
| ReplicaQueueLengthInfo |
python | django__django | django/db/models/functions/text.py | {
"start": 9331,
"end": 9580
} | class ____(MySQLSHA2Mixin, PostgreSQLSHAMixin, Transform):
function = "SHA224"
lookup_name = "sha224"
def as_oracle(self, compiler, connection, **extra_context):
raise NotSupportedError("SHA224 is not supported on Oracle.")
| SHA224 |
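Django's `SHA224` transform above delegates hashing to the database's own function (hence the Oracle `NotSupportedError`). For reference, the Python-side equivalent of the digest it computes is `hashlib.sha224`; this sketch only illustrates the digest size, not Django's generated SQL:

```python
import hashlib

# SHA-224 yields a 224-bit digest: 28 bytes, i.e. 56 hex characters.
digest = hashlib.sha224(b"hello").hexdigest()
assert len(digest) == 56
```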
python | great-expectations__great_expectations | great_expectations/core/expectation_diagnostics/expectation_test_data_cases.py | {
"start": 1946,
"end": 3105
} | class ____(ExpectationTestCase):
"""This class provides an adapter between the test cases developed prior to Great Expectations' 0.14 release and the newer ExpectationTestCase dataclass
Notes:
* Legacy test cases used "in" (a python reserved word). This has been changed to "input".
* To maintain parallelism, we've also made the corresponding change from "out" to "output".
* To avoid any ambiguity, ExpectationLegacyTestCaseAdapter only accepts keyword arguments. Positional arguments are not allowed.
""" # noqa: E501 # FIXME CoP
def __init__(
self,
*,
title,
exact_match_out,
out,
suppress_test_for=None,
**kwargs,
) -> None:
if not suppress_test_for:
suppress_test_for = []
super().__init__(
title=title,
input=kwargs["in"],
output=out,
exact_match_out=exact_match_out,
suppress_test_for=suppress_test_for,
only_for=kwargs.get("only_for"),
include_in_gallery=kwargs.get("include_in_gallery", False),
)
@dataclass
| ExpectationLegacyTestCaseAdapter |
python | sqlalchemy__sqlalchemy | test/orm/test_froms.py | {
"start": 129475,
"end": 133972
} | class ____(fixtures.TestBase, testing.AssertsCompiledSQL):
__dialect__ = "default"
@testing.fixture
def mapping(self):
Base = declarative_base()
def go(include_property, correlate_style, include_from):
class Address(Base):
__tablename__ = "addresses"
id = Column(Integer, primary_key=True)
user_id = Column(
Integer, ForeignKey("users.id"), nullable=False
)
city = Column(Text)
class User(Base):
__tablename__ = "users"
id = Column(Integer, primary_key=True)
name = Column(Text)
stmt = select(func.count(Address.id)).where(
Address.user_id == User.id
)
if include_from:
stmt = stmt.select_from(Address)
if include_property:
if correlate_style == "correlate":
User.total_addresses = column_property(
stmt.correlate(User).scalar_subquery()
)
elif correlate_style == "correlate_except":
User.total_addresses = column_property(
stmt.correlate_except(Address).scalar_subquery()
)
elif correlate_style is None:
User.total_addresses = column_property(
stmt.scalar_subquery()
)
total_addresses = None
else:
def total_addresses(cls):
stmt = select(func.count(Address.id)).where(
Address.user_id == cls.id
)
if correlate_style == "correlate":
stmt = stmt.correlate(cls)
elif correlate_style == "correlate_except":
stmt = stmt.correlate_except(Address)
stmt = stmt.scalar_subquery()
return stmt
return User, Address, total_addresses
yield go
Base.registry.dispose()
def _combinations(fn):
return testing.combinations(
(True,), (False,), argnames="include_property"
)(
testing.combinations(
("correlate",),
("correlate_except",),
(None,),
argnames="correlate_style",
)(
testing.combinations(
(True,), (False), argnames="include_from"
)(fn)
)
)
@_combinations
def test_correlate_to_cte_legacy(
self, mapping, include_property, correlate_style, include_from
):
User, Address, total_addresses = mapping(
include_property, correlate_style, include_from
)
session = fixture_session()
filtered_users = (
session.query(User.id, User.name)
.join(Address)
.filter(Address.city == "somewhere")
.cte("filtered_users")
)
filtered_users_alias = aliased(User, filtered_users)
paginated_users = (
session.query(filtered_users_alias.id, filtered_users_alias.name)
.order_by(func.lower(filtered_users_alias.name).asc())
.limit(25)
.cte("paginated_users")
)
paginated_users_alias = aliased(User, paginated_users)
if total_addresses:
q = session.query(
paginated_users_alias, total_addresses(paginated_users_alias)
)
else:
q = session.query(paginated_users_alias)
self.assert_compile(
q,
"WITH filtered_users AS "
"(SELECT users.id AS id, users.name AS name "
"FROM users JOIN addresses ON users.id = addresses.user_id "
"WHERE addresses.city = :city_1), "
"paginated_users AS (SELECT filtered_users.id AS id, "
"filtered_users.name AS name FROM filtered_users "
"ORDER BY lower(filtered_users.name) ASC LIMIT :param_1) "
"SELECT "
"paginated_users.id AS paginated_users_id, "
"paginated_users.name AS paginated_users_name, "
"(SELECT count(addresses.id) AS count_1 FROM addresses "
"WHERE addresses.user_id = paginated_users.id) AS anon_1 "
"FROM paginated_users",
)
| CorrelateORMTest |
python | django-debug-toolbar__django-debug-toolbar | tests/test_middleware.py | {
"start": 695,
"end": 3230
} | class ____(TestCase):
def setUp(self):
self.factory = RequestFactory()
self.async_factory = AsyncRequestFactory()
@override_settings(DEBUG=True)
def test_sync_mode(self):
"""
test middleware switches to sync (__call__) based on get_response type
"""
request = self.factory.get("/")
middleware = DebugToolbarMiddleware(
lambda x: HttpResponse("<html><body>Test app</body></html>")
)
self.assertFalse(iscoroutinefunction(middleware))
response = middleware(request)
self.assertEqual(response.status_code, 200)
self.assertIn(b"djdt", response.content)
@override_settings(DEBUG=True)
async def test_async_mode(self):
"""
test middleware switches to async (__acall__) based on get_response type
and returns a coroutine
"""
async def get_response(request):
return HttpResponse("<html><body>Test app</body></html>")
middleware = DebugToolbarMiddleware(get_response)
request = self.async_factory.get("/")
self.assertTrue(iscoroutinefunction(middleware))
response = await middleware(request)
self.assertEqual(response.status_code, 200)
self.assertIn(b"djdt", response.content)
@override_settings(DEBUG=True)
@patch(
"debug_toolbar.middleware.show_toolbar_func_or_path",
return_value=ashow_toolbar_if_staff,
)
def test_async_show_toolbar_callback_sync_middleware(self, mocked_show):
def get_response(request):
return HttpResponse("<html><body>Hello world</body></html>")
middleware = DebugToolbarMiddleware(get_response)
request = self.factory.get("/")
response = middleware(request)
self.assertEqual(response.status_code, 200)
self.assertIn(b"djdt", response.content)
@override_settings(DEBUG=True)
@patch(
"debug_toolbar.middleware.show_toolbar_func_or_path",
return_value=show_toolbar_if_staff,
)
async def test_sync_show_toolbar_callback_async_middleware(self, mocked_show):
async def get_response(request):
return HttpResponse("<html><body>Hello world</body></html>")
middleware = DebugToolbarMiddleware(get_response)
request = self.async_factory.get("/")
response = await middleware(request)
self.assertEqual(response.status_code, 200)
self.assertIn(b"djdt", response.content)
| MiddlewareSyncAsyncCompatibilityTestCase |
python | sqlalchemy__sqlalchemy | test/orm/dml/test_bulk_statements.py | {
"start": 22909,
"end": 38790
} | class ____(testing.AssertsExecutionResults, fixtures.TestBase):
__sparse_driver_backend__ = True
@testing.variation(
"use_onupdate",
[
"none",
"server",
"callable",
"clientsql",
("computed", testing.requires.computed_columns),
],
)
def test_bulk_update_onupdates(self, decl_base, use_onupdate):
"""assert that for now, bulk ORM update by primary key does not
expire or refresh onupdates."""
class Employee(ComparableEntity, decl_base):
__tablename__ = "employee"
uuid: Mapped[uuid.UUID] = mapped_column(primary_key=True)
user_name: Mapped[str] = mapped_column(String(200), nullable=False)
if use_onupdate.server:
some_server_value: Mapped[str] = mapped_column(
server_onupdate=FetchedValue()
)
elif use_onupdate.callable:
some_server_value: Mapped[str] = mapped_column(
onupdate=lambda: "value 2"
)
elif use_onupdate.clientsql:
some_server_value: Mapped[str] = mapped_column(
onupdate=literal("value 2")
)
elif use_onupdate.computed:
some_server_value: Mapped[str] = mapped_column(
String(255),
Computed(user_name + " computed value"),
nullable=True,
)
else:
some_server_value: Mapped[str]
decl_base.metadata.create_all(testing.db)
s = fixture_session()
uuid1 = uuid.uuid4()
if use_onupdate.computed:
server_old_value, server_new_value = (
"e1 old name computed value",
"e1 new name computed value",
)
e1 = Employee(uuid=uuid1, user_name="e1 old name")
else:
server_old_value, server_new_value = ("value 1", "value 2")
e1 = Employee(
uuid=uuid1,
user_name="e1 old name",
some_server_value="value 1",
)
s.add(e1)
s.flush()
# for computed col, make sure e1.some_server_value is loaded.
# this will already be the case for all RETURNING backends, so this
# suits just MySQL.
if use_onupdate.computed:
e1.some_server_value
stmt = update(Employee)
# perform out of band UPDATE on server value to simulate
# a computed col
if use_onupdate.none or use_onupdate.server:
s.connection().execute(
update(Employee.__table__).values(some_server_value="value 2")
)
execution_options = {}
s.execute(
stmt,
execution_options=execution_options,
params=[{"uuid": uuid1, "user_name": "e1 new name"}],
)
assert "some_server_value" in e1.__dict__
eq_(e1.some_server_value, server_old_value)
# do a full expire, now the new value is definitely there
s.commit()
s.expire_all()
eq_(e1.some_server_value, server_new_value)
@testing.variation(
"returning_executemany",
[
("returning", testing.requires.update_returning),
"executemany",
"plain",
],
)
@testing.variation("bindparam_in_expression", [True, False])
# TODO: setting "bulk" here is all over the place as well, UPDATE is not
# too settled
@testing.combinations("auto", "orm", argnames="dml_strategy")
@testing.combinations(
"evaluate", "fetch", None, argnames="synchronize_strategy"
)
def test_alt_bindparam_names(
self,
decl_base,
returning_executemany,
dml_strategy,
bindparam_in_expression,
synchronize_strategy,
):
class A(decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
x: Mapped[int]
y: Mapped[int]
decl_base.metadata.create_all(testing.db)
s = fixture_session()
s.add_all(
[A(id=1, x=1, y=1), A(id=2, x=2, y=2), A(id=3, x=3, y=3)],
)
s.commit()
if bindparam_in_expression:
stmt = (
update(A)
.values(y=literal(3) * (bindparam("q") + 15))
.where(A.id == bindparam("b_id"))
)
else:
stmt = (
update(A)
.values(y=bindparam("q"))
.where(A.id == bindparam("b_id"))
)
if dml_strategy != "auto":
# it really should work with any strategy
stmt = stmt.execution_options(dml_strategy=dml_strategy)
if returning_executemany.returning:
stmt = stmt.returning(A.x, A.y)
if synchronize_strategy in (None, "evaluate", "fetch"):
stmt = stmt.execution_options(
synchronize_session=synchronize_strategy
)
if returning_executemany.executemany:
if bindparam_in_expression:
expected_qs = [60, 69, 81]
else:
expected_qs = [5, 8, 12]
if dml_strategy != "orm":
params = [
{"id": 1, "b_id": 1, "q": 5, "x": 10},
{"id": 2, "b_id": 2, "q": 8, "x": 11},
{"id": 3, "b_id": 3, "q": 12, "x": 12},
]
else:
params = [
{"b_id": 1, "q": 5, "x": 10},
{"b_id": 2, "q": 8, "x": 11},
{"b_id": 3, "q": 12, "x": 12},
]
_expect_raises = None
if synchronize_strategy == "fetch":
if dml_strategy != "orm":
_expect_raises = expect_raises_message(
exc.InvalidRequestError,
r"The 'fetch' synchronization strategy is not "
r"available for 'bulk' ORM updates "
r"\(i.e. multiple parameter sets\)",
)
elif not testing.db.dialect.update_executemany_returning:
# no backend supports this except Oracle
_expect_raises = expect_raises_message(
exc.InvalidRequestError,
r"For synchronize_session='fetch', can't use multiple "
r"parameter sets in ORM mode, which this backend does "
r"not support with RETURNING",
)
elif synchronize_strategy == "evaluate" and dml_strategy != "orm":
_expect_raises = expect_raises_message(
exc.InvalidRequestError,
"bulk synchronize of persistent objects not supported",
)
if _expect_raises:
with _expect_raises:
result = s.execute(stmt, params)
return
result = s.execute(stmt, params)
else:
if bindparam_in_expression:
expected_qs = [60]
else:
expected_qs = [5]
result = s.execute(stmt, {"b_id": 1, "q": 5, "x": 10})
if returning_executemany.returning:
eq_(result.first(), (10, expected_qs[0]))
elif returning_executemany.executemany:
eq_(
s.execute(select(A.x, A.y).order_by(A.id)).all(),
[
(10, expected_qs[0]),
(11, expected_qs[1]),
(12, expected_qs[2]),
],
)
@testing.variation("add_where", [True, False])
@testing.variation("multi_row", ["multirow", "singlerow", "listwsingle"])
def test_bulk_update_no_pk(self, decl_base, add_where, multi_row):
"""test #9917"""
class A(decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
x: Mapped[int]
y: Mapped[int]
decl_base.metadata.create_all(testing.db)
s = fixture_session()
s.add_all(
[A(id=1, x=1, y=1), A(id=2, x=2, y=2), A(id=3, x=3, y=3)],
)
s.commit()
stmt = update(A)
if add_where:
stmt = stmt.where(A.x > 1)
if multi_row.multirow:
data = [
{"x": 3, "y": 8},
{"x": 5, "y": 9},
{"x": 12, "y": 15},
]
stmt = stmt.execution_options(synchronize_session=None)
elif multi_row.listwsingle:
data = [
{"x": 5, "y": 9},
]
stmt = stmt.execution_options(synchronize_session=None)
elif multi_row.singlerow:
data = {"x": 5, "y": 9}
else:
multi_row.fail()
if multi_row.multirow or multi_row.listwsingle:
with expect_raises_message(
exc.InvalidRequestError,
r"No primary key value supplied for column\(s\) a.id; per-row "
"ORM Bulk UPDATE by Primary Key requires that records contain "
"primary key values",
):
s.execute(stmt, data)
else:
with self.sql_execution_asserter() as asserter:
s.execute(stmt, data)
if add_where:
asserter.assert_(
CompiledSQL(
"UPDATE a SET x=:x, y=:y WHERE a.x > :x_1",
[{"x": 5, "y": 9, "x_1": 1}],
),
)
else:
asserter.assert_(
CompiledSQL("UPDATE a SET x=:x, y=:y", [{"x": 5, "y": 9}]),
)
@testing.variation("multi_row", ["multirow", "singlerow", "listwsingle"])
@testing.requires.update_returning
@testing.requires.returning_star
def test_bulk_update_returning_star(self, decl_base, multi_row):
class A(decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
x: Mapped[int]
y: Mapped[int]
decl_base.metadata.create_all(testing.db)
s = fixture_session()
s.add_all(
[A(id=1, x=1, y=1), A(id=2, x=2, y=2), A(id=3, x=3, y=3)],
)
s.commit()
stmt = update(A).returning(literal_column("*"))
if multi_row.multirow:
data = [
{"x": 3, "y": 8},
{"x": 5, "y": 9},
{"x": 12, "y": 15},
]
stmt = stmt.execution_options(synchronize_session=None)
elif multi_row.listwsingle:
data = [
{"x": 5, "y": 9},
]
stmt = stmt.execution_options(synchronize_session=None)
elif multi_row.singlerow:
data = {"x": 5, "y": 9}
else:
multi_row.fail()
if multi_row.multirow or multi_row.listwsingle:
with expect_raises_message(
exc.InvalidRequestError, "No primary key value supplied"
):
s.execute(stmt, data)
return
else:
result = s.execute(stmt, data)
eq_(result.all(), [(1, 5, 9), (2, 5, 9), (3, 5, 9)])
@testing.requires.update_returning
def test_bulk_update_returning_bundle(self, decl_base):
class A(decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
x: Mapped[int]
y: Mapped[int]
decl_base.metadata.create_all(testing.db)
s = fixture_session()
s.add_all(
[A(id=1, x=1, y=1), A(id=2, x=2, y=2), A(id=3, x=3, y=3)],
)
s.commit()
stmt = update(A).returning(Bundle("mybundle", A.id, A.x), A.y)
data = {"x": 5, "y": 9}
result = s.execute(stmt, data)
eq_(result.all(), [((1, 5), 9), ((2, 5), 9), ((3, 5), 9)])
def test_bulk_update_w_where_one(self, decl_base):
"""test use case in #9595"""
class A(decl_base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
x: Mapped[int]
y: Mapped[int]
decl_base.metadata.create_all(testing.db)
s = fixture_session()
s.add_all(
[A(id=1, x=1, y=1), A(id=2, x=2, y=2), A(id=3, x=3, y=3)],
)
s.commit()
stmt = (
update(A)
.where(A.x > 1)
.execution_options(synchronize_session=None)
)
s.execute(
stmt,
[
{"id": 1, "x": 3, "y": 8},
{"id": 2, "x": 5, "y": 9},
{"id": 3, "x": 12, "y": 15},
],
)
eq_(
s.execute(select(A.id, A.x, A.y).order_by(A.id)).all(),
[(1, 1, 1), (2, 5, 9), (3, 12, 15)],
)
def test_bulk_update_w_where_two(self, decl_base):
class User(decl_base):
__tablename__ = "user"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
name: Mapped[str]
age: Mapped[int]
decl_base.metadata.create_all(testing.db)
sess = fixture_session()
sess.execute(
insert(User),
[
dict(id=1, name="john", age=25),
dict(id=2, name="jack", age=47),
dict(id=3, name="jill", age=29),
dict(id=4, name="jane", age=37),
],
)
sess.execute(
update(User)
.where(User.age > bindparam("gtage"))
.values(age=bindparam("dest_age"))
.execution_options(synchronize_session=None),
[
{"id": 1, "gtage": 28, "dest_age": 40},
{"id": 2, "gtage": 20, "dest_age": 45},
],
)
eq_(
sess.execute(
select(User.id, User.name, User.age).order_by(User.id)
).all(),
[
(1, "john", 25),
(2, "jack", 45),
(3, "jill", 29),
(4, "jane", 37),
],
)
@testing.requires.update_returning
def test_returning_col_property(self, decl_base):
"""test #12326"""
class User(ComparableEntity, decl_base):
__tablename__ = "user"
id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=False
)
name: Mapped[str]
age: Mapped[int]
decl_base.metadata.create_all(testing.db)
a_alias = aliased(User)
User.colprop = column_property(
select(func.max(a_alias.age))
.where(a_alias.id != User.id)
.scalar_subquery()
)
sess = fixture_session()
sess.execute(
insert(User),
[
dict(id=1, name="john", age=25),
dict(id=2, name="jack", age=47),
dict(id=3, name="jill", age=29),
dict(id=4, name="jane", age=37),
],
)
stmt = (
update(User).values(age=30).where(User.age == 29).returning(User)
)
row = sess.execute(stmt).one()
eq_(row[0], User(id=3, name="jill", age=30, colprop=47))
| UpdateStmtTest |
python | ray-project__ray | python/ray/util/iter_metrics.py | {
"start": 1509,
"end": 2227
} | class ____:
"""Holds an indirect reference to a (shared) metrics context.
This is used by LocalIterator.union() to point the metrics contexts of
entirely separate iterator chains to the same underlying context."""
def __init__(
self, metrics: MetricsContext = None, parents: List["SharedMetrics"] = None
):
self.metrics = metrics or MetricsContext()
self.parents = parents or []
self.set(self.metrics)
def set(self, metrics):
"""Recursively set self and parents to point to the same metrics."""
self.metrics = metrics
for parent in self.parents:
parent.set(metrics)
def get(self):
return self.metrics
| SharedMetrics |
python | django__django | tests/migrations/test_migrations_no_changes/0001_initial.py | {
"start": 43,
"end": 726
} | class ____(migrations.Migration):
operations = [
migrations.CreateModel(
"Author",
[
("id", models.BigAutoField(primary_key=True)),
("name", models.CharField(max_length=255)),
("slug", models.SlugField(null=True)),
("age", models.IntegerField(default=0)),
("silly_field", models.BooleanField(default=False)),
],
),
migrations.CreateModel(
"Tribble",
[
("id", models.BigAutoField(primary_key=True)),
("fluffy", models.BooleanField(default=True)),
],
),
]
| Migration |
python | apache__airflow | providers/google/tests/unit/google/cloud/operators/test_cloud_storage_transfer_service.py | {
"start": 18351,
"end": 20380
} | class ____:
@mock.patch(
"airflow.providers.google.cloud.operators.cloud_storage_transfer_service.CloudDataTransferServiceHook"
)
def test_job_delete(self, mock_hook):
op = CloudDataTransferServiceDeleteJobOperator(
job_name=JOB_NAME,
project_id=GCP_PROJECT_ID,
task_id="task-id",
google_impersonation_chain=IMPERSONATION_CHAIN,
)
op.execute(None)
mock_hook.assert_called_once_with(
api_version="v1",
gcp_conn_id="google_cloud_default",
impersonation_chain=IMPERSONATION_CHAIN,
)
mock_hook.return_value.delete_transfer_job.assert_called_once_with(
job_name=JOB_NAME, project_id=GCP_PROJECT_ID
)
# Setting all the operator's input parameters as templated dag_ids
# (could be anything else) just to test if the templating works for all
# fields
@pytest.mark.db_test
@mock.patch(
"airflow.providers.google.cloud.operators.cloud_storage_transfer_service.CloudDataTransferServiceHook"
)
def test_job_delete_with_templates(self, _, create_task_instance_of_operator, session):
dag_id = "test_job_delete_with_templates"
ti = create_task_instance_of_operator(
CloudDataTransferServiceDeleteJobOperator,
dag_id=dag_id,
job_name="{{ dag.dag_id }}",
gcp_conn_id="{{ dag.dag_id }}",
api_version="{{ dag.dag_id }}",
task_id=TASK_ID,
)
session.add(ti)
session.commit()
ti.render_templates()
assert dag_id == ti.task.job_name
assert dag_id == ti.task.gcp_conn_id
assert dag_id == ti.task.api_version
def test_job_delete_should_throw_ex_when_name_none(self):
with pytest.raises(AirflowException, match="The required parameter 'job_name' is empty or None"):
CloudDataTransferServiceDeleteJobOperator(job_name="", task_id="task-id")
| TestGcpStorageTransferJobDeleteOperator |
python | ray-project__ray | python/ray/tune/tests/test_searchers.py | {
"start": 16523,
"end": 22939
} | class ____(unittest.TestCase):
"""
Test searcher save and restore functionality.
"""
def setUp(self):
self.tempdir = tempfile.mkdtemp()
self.checkpoint_path = os.path.join(self.tempdir, "checkpoint.pkl")
self.metric_name = "metric"
self.config = {"a": tune.uniform(0.0, 5.0)}
def tearDown(self):
shutil.rmtree(self.tempdir)
@classmethod
def setUpClass(cls):
ray.init(num_cpus=4, num_gpus=0, include_dashboard=False)
@classmethod
def tearDownClass(cls):
ray.shutdown()
def _on_trial_callbacks(self, searcher, trial_id):
result = {
TRAINING_ITERATION: 1,
self.metric_name: 1,
"config/a": 1.0,
"time_total_s": 1,
}
searcher.on_trial_result(trial_id, result)
searcher.on_trial_complete(trial_id, result)
def _save(self, searcher):
searcher.set_search_properties(
metric=self.metric_name, mode="max", config=self.config
)
searcher.suggest("1")
searcher.suggest("2")
searcher.suggest("not_completed")
self._on_trial_callbacks(searcher, "1")
searcher.save(self.checkpoint_path)
def _restore(self, searcher):
# Restoration shouldn't require another call to `searcher.set_search_properties`
searcher.restore(self.checkpoint_path)
self._on_trial_callbacks(searcher, "2")
searcher.suggest("3")
self._on_trial_callbacks(searcher, "3")
# NOTE: Trial "not_completed" that was suggested before saving never completes
# We expect that it should still be tracked in the searcher state,
# which is usually done in the searcher's `_live_trial_mapping`.
# See individual searcher tests below for the special cases (e.g. Optuna, BOHB).
if hasattr(searcher, "_live_trial_mapping"):
assert "not_completed" in searcher._live_trial_mapping
def testAx(self):
from ax.service.ax_client import AxClient
from ray.tune.search.ax import AxSearch
converted_config = AxSearch.convert_search_space(self.config)
client = AxClient()
client.create_experiment(
parameters=converted_config, objective_name=self.metric_name, minimize=False
)
searcher = AxSearch(ax_client=client)
self._save(searcher)
client = AxClient()
client.create_experiment(
parameters=converted_config, objective_name=self.metric_name, minimize=False
)
searcher = AxSearch(ax_client=client)
self._restore(searcher)
def testBayesOpt(self):
from ray.tune.search.bayesopt import BayesOptSearch
searcher = BayesOptSearch(
space=self.config, metric=self.metric_name, mode="max"
)
self._save(searcher)
searcher = BayesOptSearch()
self._restore(searcher)
@pytest.mark.skipif(
sys.version_info >= (3, 12),
reason="BOHB not yet supported for python 3.12+",
)
def testBOHB(self):
from ray.tune.search.bohb import TuneBOHB
searcher = TuneBOHB(space=self.config, metric=self.metric_name, mode="max")
self._save(searcher)
searcher = TuneBOHB()
self._restore(searcher)
assert "not_completed" in searcher.trial_to_params
@pytest.mark.skipif(
sys.version_info >= (3, 12), reason="HEBO doesn't support py312"
)
def testHEBO(self):
if Version(pandas.__version__) >= Version("2.0.0"):
pytest.skip("HEBO does not support pandas>=2.0.0")
from ray.tune.search.hebo import HEBOSearch
searcher = HEBOSearch(
space=self.config,
metric=self.metric_name,
mode="max",
random_state_seed=1234,
)
self._save(searcher)
searcher = HEBOSearch()
self._restore(searcher)
def testHyperopt(self):
from ray.tune.search.hyperopt import HyperOptSearch
searcher = HyperOptSearch(
space=self.config,
metric=self.metric_name,
mode="max",
)
self._save(searcher)
searcher = HyperOptSearch()
self._restore(searcher)
def testNevergrad(self):
import nevergrad as ng
from ray.tune.search.nevergrad import NevergradSearch
searcher = NevergradSearch(
space=self.config,
metric=self.metric_name,
mode="max",
optimizer=ng.optimizers.RandomSearch,
)
self._save(searcher)
# `optimizer` is the only required argument
searcher = NevergradSearch(optimizer=ng.optimizers.RandomSearch)
self._restore(searcher)
def testOptuna(self):
from ray.tune.search.optuna import OptunaSearch
searcher = OptunaSearch(
space=self.config,
storage=None,
metric=self.metric_name,
mode="max",
)
self._save(searcher)
searcher = OptunaSearch()
self._restore(searcher)
assert "not_completed" in searcher._ot_trials
def testOptunaWithStorage(self):
from optuna.storages import JournalStorage
from optuna.storages.journal import JournalFileBackend
from ray.tune.search.optuna import OptunaSearch
storage_file_path = "/tmp/my_test_study.log"
searcher = OptunaSearch(
space=self.config,
study_name="my_test_study",
storage=JournalStorage(JournalFileBackend(file_path=storage_file_path)),
metric=self.metric_name,
mode="max",
)
self._save(searcher)
searcher = OptunaSearch()
self._restore(searcher)
assert "not_completed" in searcher._ot_trials
self.assertTrue(os.path.exists(storage_file_path))
def testZOOpt(self):
from ray.tune.search.zoopt import ZOOptSearch
searcher = ZOOptSearch(
space=self.config,
metric=self.metric_name,
mode="max",
budget=100,
parallel_num=4,
)
self._save(searcher)
# `budget` is the only required argument - will get replaced on restore
searcher = ZOOptSearch(budget=0)
self._restore(searcher)
assert searcher._budget == 100
| SaveRestoreCheckpointTest |
python | sympy__sympy | sympy/stats/crv_types.py | {
"start": 59219,
"end": 61763
} | class ____(SingleContinuousDistribution):
_argnames = ('mu', 'b')
set = Interval(-oo, oo)
@staticmethod
def check(mu, b):
_value_check(b > 0, "Scale parameter b must be positive.")
_value_check(mu.is_real, "Location parameter mu should be real")
def pdf(self, x):
mu, b = self.mu, self.b
return 1/(2*b)*exp(-Abs(x - mu)/b)
def _cdf(self, x):
mu, b = self.mu, self.b
return Piecewise(
(S.Half*exp((x - mu)/b), x < mu),
(S.One - S.Half*exp(-(x - mu)/b), x >= mu)
)
def _characteristic_function(self, t):
return exp(self.mu*I*t) / (1 + self.b**2*t**2)
def _moment_generating_function(self, t):
return exp(self.mu*t) / (1 - self.b**2*t**2)
def Laplace(name, mu, b):
r"""
Create a continuous random variable with a Laplace distribution.
Explanation
===========
The density of the Laplace distribution is given by
.. math::
f(x) := \frac{1}{2 b} \exp \left(-\frac{|x-\mu|}b \right)
Parameters
==========
mu : Real number or a list/matrix, the location (mean) or the
location vector
b : Real number or a positive definite matrix, representing a scale
or the covariance matrix.
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Laplace, density, cdf
>>> from sympy import Symbol, pprint
>>> mu = Symbol("mu")
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Laplace("x", mu, b)
>>> density(X)(z)
exp(-Abs(mu - z)/b)/(2*b)
>>> cdf(X)(z)
Piecewise((exp((-mu + z)/b)/2, mu > z), (1 - exp((mu - z)/b)/2, True))
>>> L = Laplace('L', [1, 2], [[1, 0], [0, 1]])
>>> pprint(density(L)(1, 2), use_unicode=False)
     5        /     ____\
    e *besselk\0, \/ 35 /
    ---------------------
             pi
References
==========
.. [1] https://en.wikipedia.org/wiki/Laplace_distribution
.. [2] https://mathworld.wolfram.com/LaplaceDistribution.html
"""
if isinstance(mu, (list, MatrixBase)) and\
isinstance(b, (list, MatrixBase)):
from sympy.stats.joint_rv_types import MultivariateLaplace
return MultivariateLaplace(name, mu, b)
return rv(name, LaplaceDistribution, (mu, b))
#-------------------------------------------------------------------------------
# Levy distribution ---------------------------------------------------------
| LaplaceDistribution |
python | apache__airflow | providers/slack/tests/unit/slack/transfers/test_sql_to_slack.py | {
"start": 1273,
"end": 8526
} | class ____:
def setup_method(self):
self.default_op_kwargs = {
"sql": "SELECT 1",
"sql_conn_id": "test-sql-conn-id",
"slack_conn_id": "test-slack-conn-id",
"sql_hook_params": None,
"parameters": None,
}
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.BaseSqlToSlackOperator._get_query_results")
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.SlackHook")
@pytest.mark.parametrize(
("filename", "df_method"),
[
("awesome.json", "to_json"),
("awesome.json.zip", "to_json"),
("awesome.csv", "to_csv"),
("awesome.csv.xz", "to_csv"),
("awesome.html", "to_html"),
],
)
@pytest.mark.parametrize("df_kwargs", [None, {}, {"foo": "bar"}])
@pytest.mark.parametrize("channels", ["#random", "#random,#general", None])
@pytest.mark.parametrize("initial_comment", [None, "Test Comment"])
@pytest.mark.parametrize("title", [None, "Test File Title"])
@pytest.mark.parametrize(
("slack_op_kwargs", "hook_extra_kwargs"),
[
pytest.param(
{},
{"base_url": None, "timeout": None, "proxy": None, "retry_handlers": None},
id="default-hook-parameters",
),
pytest.param(
{
"slack_base_url": "https://foo.bar",
"slack_timeout": 42,
"slack_proxy": "http://spam.egg",
"slack_retry_handlers": [],
},
{
"base_url": "https://foo.bar",
"timeout": 42,
"proxy": "http://spam.egg",
"retry_handlers": [],
},
id="with-extra-hook-parameters",
),
],
)
def test_send_file(
self,
mock_slack_hook_cls,
mock_get_query_results,
filename,
df_method,
df_kwargs,
channels,
initial_comment,
title,
slack_op_kwargs: dict,
hook_extra_kwargs: dict,
):
# Mock Hook
mock_send_file = mock.MagicMock()
setattr(mock_slack_hook_cls.return_value, "send_file_v1_to_v2", mock_send_file)
# Mock returns pandas.DataFrame and expected method
mock_df = mock.MagicMock()
mock_df_output_method = mock.MagicMock()
setattr(mock_df, df_method, mock_df_output_method)
mock_get_query_results.return_value = mock_df
op_kwargs = {
**self.default_op_kwargs,
"slack_conn_id": "expected-test-slack-conn-id",
"slack_filename": filename,
"slack_channels": channels,
"slack_initial_comment": initial_comment,
"slack_title": title,
"df_kwargs": df_kwargs,
**slack_op_kwargs,
}
op = SqlToSlackApiFileOperator(task_id="test_send_file", **op_kwargs)
mock.patch("airflow.providers.slack.transfers.sql_to_slack.SlackHook")
op.execute(mock.MagicMock())
mock_slack_hook_cls.assert_called_once_with(
slack_conn_id="expected-test-slack-conn-id", **hook_extra_kwargs
)
mock_get_query_results.assert_called_once_with()
mock_df_output_method.assert_called_once_with(mock.ANY, **(df_kwargs or {}))
mock_send_file.assert_called_once_with(
channels=channels,
filename=filename,
initial_comment=initial_comment,
title=title,
file=mock.ANY,
)
@pytest.mark.parametrize(
"filename",
[
"foo.parquet",
"bat.parquet.snappy",
"spam.xml",
"egg.xlsx",
],
)
def test_unsupported_format(self, filename):
op = SqlToSlackApiFileOperator(
task_id="test_send_file", slack_filename=filename, **self.default_op_kwargs
)
with pytest.raises(ValueError, match="Unsupported file format"):
op.execute(mock.MagicMock())
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.SlackHook")
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.BaseSqlToSlackOperator._get_query_results")
def test_null_output_sending_empty_file_by_default(self, mock_get_query_results, mock_slack_hook_cls):
op_kwargs = {
**self.default_op_kwargs,
"slack_conn_id": "expected-test-slack-conn-id",
"slack_filename": "test_filename.csv",
"slack_channels": ["#random"],
"slack_initial_comment": "test_comment",
"slack_title": "test_title",
}
op = SqlToSlackApiFileOperator(task_id="test_send_file", **op_kwargs)
# Mock empty query results
mock_df = mock.MagicMock()
mock_df.configure_mock(**{"empty.return_value": True})
mock_get_query_results.return_value = mock_df
op.execute(mock.MagicMock)
mock_slack_hook_cls.assert_called_once()
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.SlackHook")
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.BaseSqlToSlackOperator._get_query_results")
def test_null_output_skip_sending_file(self, mock_get_query_results, mock_slack_hook_cls):
op_kwargs = {
**self.default_op_kwargs,
"slack_conn_id": "expected-test-slack-conn-id",
"slack_filename": "test_filename.csv",
"slack_channels": ["#random"],
"slack_initial_comment": "test_comment",
"slack_title": "test_title",
"action_on_empty_df": "skip",
}
op = SqlToSlackApiFileOperator(task_id="test_send_file", **op_kwargs)
# Mock empty query results
mock_df = mock.MagicMock()
mock_df.configure_mock(**{"empty.return_value": True})
mock_get_query_results.return_value = mock_df
with pytest.raises(AirflowSkipException):
op.execute(mock.MagicMock())
mock_slack_hook_cls.assert_not_called()
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.SlackHook")
@mock.patch("airflow.providers.slack.transfers.sql_to_slack.BaseSqlToSlackOperator._get_query_results")
def test_null_output_raise_error(self, mock_get_query_results, mock_slack_hook_cls):
op_kwargs = {
**self.default_op_kwargs,
"slack_conn_id": "expected-test-slack-conn-id",
"slack_filename": "test_filename.csv",
"slack_channels": ["#random"],
"slack_initial_comment": "test_comment",
"slack_title": "test_title",
"action_on_empty_df": "error",
}
op = SqlToSlackApiFileOperator(task_id="test_send_file", **op_kwargs)
# Mock empty query results
mock_df = mock.MagicMock()
mock_df.configure_mock(**{"empty.return_value": True})
mock_get_query_results.return_value = mock_df
with pytest.raises(ValueError, match=r"output df must be non-empty\. Failing"):
op.execute(mock.MagicMock())
mock_slack_hook_cls.assert_not_called()
| TestSqlToSlackApiFileOperator |
python | weaviate__weaviate-python-client | weaviate/collections/classes/config.py | {
"start": 58150,
"end": 59902
} | class ____(_PropertyBase):
data_type: DataType
index_filterable: bool
index_range_filters: bool
index_searchable: bool
nested_properties: Optional[List[NestedProperty]]
tokenization: Optional[Tokenization]
vectorizer_config: Optional[PropertyVectorizerConfig]
vectorizer: Optional[str]
vectorizer_configs: Optional[Dict[str, PropertyVectorizerConfig]]
def to_dict(self) -> Dict[str, Any]:
out = super().to_dict()
out["dataType"] = [self.data_type.value]
out["indexFilterable"] = self.index_filterable
out["indexSearchable"] = self.index_searchable
out["indexRangeFilters"] = self.index_range_filters
out["tokenization"] = self.tokenization.value if self.tokenization else None
if self.nested_properties is not None and len(self.nested_properties) > 0:
out["nestedProperties"] = [np.to_dict() for np in self.nested_properties]
module_config: Dict[str, Any] = {}
if self.vectorizer is not None:
module_config[self.vectorizer] = {}
if self.vectorizer_config is not None:
assert self.vectorizer is not None
module_config[self.vectorizer] = {
"skip": self.vectorizer_config.skip,
"vectorizePropertyName": self.vectorizer_config.vectorize_property_name,
}
if self.vectorizer_configs is not None:
module_config = {
k: {"skip": v.skip, "vectorizePropertyName": v.vectorize_property_name}
for k, v in self.vectorizer_configs.items()
}
if len(module_config) > 0:
out["moduleConfig"] = module_config
return out
PropertyConfig = _Property
@dataclass
| _Property |