language | repo | path | class_span | source | target |
|---|---|---|---|---|---|
python | psf__black | tests/data/cases/comments9.py | {
"start": 1978,
"end": 3160
} | class ____:
    # First method has no empty lines between bare class def.
    # More comments.
    def first_method(self):
        pass
# Regression test for https://github.com/psf/black/issues/3454.
def foo():
    pass
    # Trailing comment that belongs to this function
@decorator1
@decorator2  # fmt: skip
def bar():
    pass
# Regression test for https://github.com/psf/black/issues/3454.
def foo():
    pass
    # Trailing comment that belongs to this function.
# NOTE this comment only has one empty line below, and the formatter
# should enforce two blank lines.
@decorator1
# A standalone comment
def bar():
    pass

# output

# Test for https://github.com/psf/black/issues/246.
some = statement


# This comment should be split from the statement above by two lines.
def function():
    pass


some = statement


# This multiline comments section
# should be split from the statement
# above by two lines.
def function():
    pass


some = statement


# This comment should be split from the statement above by two lines.
async def async_function():
    pass


some = statement


# This comment should be split from the statement above by two lines.
 | MyClass |
python | encode__django-rest-framework | tests/models.py | {
"start": 649,
"end": 739
} | class ____(RESTFrameworkModel):
    name = models.CharField(max_length=100)
 | ManyToManyTarget |
python | altair-viz__altair | altair/vegalite/v6/schema/core.py | {
"start": 1216936,
"end": 1223242
} | class ____(Spec, NonNormalizedSpec):
    """
    ConcatSpecGenericSpec schema wrapper.

    Base interface for a generalized concatenation specification.

    Parameters
    ----------
    concat : Sequence[dict, :class:`Spec`, :class:`FacetSpec`, :class:`LayerSpec`, :class:`RepeatSpec`, :class:`FacetedUnitSpec`, :class:`LayerRepeatSpec`, :class:`NonLayerRepeatSpec`, :class:`ConcatSpecGenericSpec`, :class:`HConcatSpecGenericSpec`, :class:`VConcatSpecGenericSpec`]
        A list of views to be concatenated.
    align : dict, :class:`LayoutAlign`, :class:`RowColLayoutAlign`, Literal['all', 'each', 'none']
        The alignment to apply to grid rows and columns. The supported string values are
        ``"all"``, ``"each"``, and ``"none"``.

        * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply
          placed one after the other.
        * For ``"each"``, subviews will be aligned into a clean grid structure, but each row
          or column may be of variable size.
        * For ``"all"``, subviews will be aligned and each row or column will be sized
          identically based on the maximum observed size. String values for this property
          will be applied to both grid rows and columns.

        Alternatively, an object value of the form ``{"row": string, "column": string}`` can
        be used to supply different alignments for rows and columns.

        **Default value:** ``"all"``.
    bounds : Literal['full', 'flush']
        The bounds calculation method to use for determining the extent of a sub-plot. One
        of ``full`` (the default) or ``flush``.

        * If set to ``full``, the entire calculated bounds (including axes, title, and
          legend) will be used.
        * If set to ``flush``, only the specified width and height values for the sub-view
          will be used. The ``flush`` setting can be useful when attempting to place
          sub-plots without axes or legends into a uniform grid structure.

        **Default value:** ``"full"``
    center : bool, dict, :class:`RowColboolean`
        Boolean flag indicating if subviews should be centered relative to their respective
        rows or columns.

        An object value of the form ``{"row": boolean, "column": boolean}`` can be used to
        supply different centering values for rows and columns.

        **Default value:** ``false``
    columns : float
        The number of columns to include in the view composition layout.

        **Default value**: ``undefined`` -- An infinite number of columns (a single row)
        will be assumed. This is equivalent to ``hconcat`` (for ``concat``) and to using the
        ``column`` channel (for ``facet`` and ``repeat``).

        **Note**:

        1) This property is only for:

        * the general (wrappable) ``concat`` operator (not ``hconcat``/``vconcat``)
        * the ``facet`` and ``repeat`` operator with one field/repetition definition
          (without row/column nesting)

        2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat``)
        and to using the ``row`` channel (for ``facet`` and ``repeat``).
    data : dict, :class:`Data`, :class:`UrlData`, :class:`Generator`, :class:`NamedData`, :class:`DataSource`, :class:`InlineData`, :class:`SphereGenerator`, :class:`SequenceGenerator`, :class:`GraticuleGenerator`, None
        An object describing the data source. Set to ``null`` to ignore the parent's data
        source. If no data is set, it is derived from the parent.
    description : str
        Description of this mark for commenting purpose.
    name : str
        Name of the visualization for later reference.
    resolve : dict, :class:`Resolve`
        Scale, axis, and legend resolutions for view composition specifications.
    spacing : dict, float, :class:`RowColnumber`
        The spacing in pixels between sub-views of the composition operator. An object of
        the form ``{"row": number, "column": number}`` can be used to set different spacing
        values for rows and columns.

        **Default value**: Depends on ``"spacing"`` property of `the view composition
        configuration <https://vega.github.io/vega-lite/docs/config.html#view-config>`__
        (``20`` by default)
    title : str, dict, :class:`Text`, Sequence[str], :class:`TitleParams`
        Title for the plot.
    transform : Sequence[dict, :class:`Transform`, :class:`BinTransform`, :class:`FoldTransform`, :class:`LoessTransform`, :class:`PivotTransform`, :class:`StackTransform`, :class:`ExtentTransform`, :class:`FilterTransform`, :class:`ImputeTransform`, :class:`LookupTransform`, :class:`SampleTransform`, :class:`WindowTransform`, :class:`DensityTransform`, :class:`FlattenTransform`, :class:`QuantileTransform`, :class:`TimeUnitTransform`, :class:`AggregateTransform`, :class:`CalculateTransform`, :class:`RegressionTransform`, :class:`JoinAggregateTransform`]
        An array of data transformations such as filter and new field calculation.
    """

    _schema = {"$ref": "#/definitions/ConcatSpec<GenericSpec>"}

    def __init__(
        self,
        concat: Optional[Sequence[SchemaBase | Map]] = Undefined,
        align: Optional[SchemaBase | Map | LayoutAlign_T] = Undefined,
        bounds: Optional[Literal["full", "flush"]] = Undefined,
        center: Optional[bool | SchemaBase | Map] = Undefined,
        columns: Optional[float] = Undefined,
        data: Optional[SchemaBase | ChartDataType | Map | None] = Undefined,
        description: Optional[str] = Undefined,
        name: Optional[str] = Undefined,
        resolve: Optional[SchemaBase | Map] = Undefined,
        spacing: Optional[float | SchemaBase | Map] = Undefined,
        title: Optional[str | SchemaBase | Sequence[str] | Map] = Undefined,
        transform: Optional[Sequence[SchemaBase | Map]] = Undefined,
        **kwds,
    ):
        super().__init__(
            concat=concat,
            align=align,
            bounds=bounds,
            center=center,
            columns=columns,
            data=data,
            description=description,
            name=name,
            resolve=resolve,
            spacing=spacing,
            title=title,
            transform=transform,
            **kwds,
        )
 | ConcatSpecGenericSpec |
python | pypa__warehouse | tests/unit/accounts/test_core.py | {
"start": 936,
"end": 1750
} | class ____:
    def test_with_user_context_no_macaroon(self, db_request):
        user = UserFactory.create()
        user_ctx = UserContext(user, None)
        request = pretend.stub(identity=user_ctx)
        assert accounts._user(request) is user

    def test_with_user_token_context_macaroon(self, db_request):
        user = UserFactory.create()
        user_ctx = UserContext(user, pretend.stub())
        request = pretend.stub(identity=user_ctx)
        assert accounts._user(request) is user

    def test_without_user_identity(self):
        nonuser = pretend.stub()
        request = pretend.stub(identity=nonuser)
        assert accounts._user(request) is None

    def test_without_identity(self):
        request = pretend.stub(identity=None)
        assert accounts._user(request) is None
 | TestUser |
python | jazzband__django-model-utils | tests/models.py | {
"start": 10577,
"end": 10740
} | class ____(models.Model):
    NAMED_STATUS = Choices((0, "no", "No"), (1, "yes", "Yes"))
    status = StatusField(choices_name='NAMED_STATUS')
 | StatusFieldChoicesName |
python | getsentry__sentry | tests/sentry/seer/autofix/test_autofix.py | {
"start": 49602,
"end": 52232
} | class ____(TestCase):
    def setUp(self) -> None:
        super().setUp()
        self.organization = self.create_organization()
        self.project = self.create_project(organization=self.organization)
        self.trace_id = "1234567890abcdef1234567890abcdef"
        self.now = before_now(minutes=0)

    @patch("sentry.snuba.ourlogs.OurLogs.run_table_query")
    def test_merging_consecutive_logs(self, mock_query) -> None:
        # Simulate logs with identical message/severity in sequence
        dt = self.now
        logs = [
            {
                "project.id": self.project.id,
                "timestamp": (dt - timedelta(seconds=3)).isoformat(),
                "message": "foo",
                "severity": "info",
            },
            {
                "project.id": self.project.id,
                "timestamp": (dt - timedelta(seconds=2)).isoformat(),
                "message": "foo",
                "severity": "info",
            },
            {
                "project.id": self.project.id,
                "timestamp": (dt - timedelta(seconds=1)).isoformat(),
                "message": "bar",
                "severity": "error",
            },
            {
                "project.id": self.project.id,
                "timestamp": dt.isoformat(),
                "message": "foo",
                "severity": "info",
            },
        ]
        mock_query.return_value = {"data": logs}
        # Use a mock event with datetime at dt
        event = Mock()
        event.trace_id = self.trace_id
        event.datetime = dt
        project = self.project
        # Patch project.organization to avoid DB hits
        project.organization = self.organization
        result = _get_logs_for_event(event, project)
        assert result is not None
        merged = result["logs"]
        # The first two "foo" logs should be merged (consecutive), the last "foo" is not consecutive
        foo_merged = [
            log for log in merged if log["message"] == "foo" and log.get("consecutive_count") == 2
        ]
        foo_single = [
            log for log in merged if log["message"] == "foo" and "consecutive_count" not in log
        ]
        bar = [log for log in merged if log["message"] == "bar"]
        assert len(foo_merged) == 1
        assert len(foo_single) == 1
        assert len(bar) == 1
        # Order: merged foo, bar, single foo
        assert merged[0]["message"] == "foo" and merged[0]["consecutive_count"] == 2
        assert merged[1]["message"] == "bar"
        assert merged[2]["message"] == "foo" and "consecutive_count" not in merged[2]
 | TestGetLogsForEvent |
python | pyca__cryptography | tests/x509/test_x509.py | {
"start": 5222,
"end": 20964
} | class ____:
    def test_load_pem_crl(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )
        assert isinstance(crl, x509.CertificateRevocationList)
        fingerprint = binascii.hexlify(crl.fingerprint(hashes.SHA1()))
        assert fingerprint == b"191b3428bf9d0dafa4edd42bc98603e182614c57"
        assert isinstance(crl.signature_hash_algorithm, hashes.SHA256)
        assert (
            crl.signature_algorithm_oid
            == SignatureAlgorithmOID.RSA_WITH_SHA256
        )

    def test_load_der_crl(self, backend):
        crl = _load_cert(
            os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
            x509.load_der_x509_crl,
        )
        assert isinstance(crl, x509.CertificateRevocationList)
        fingerprint = binascii.hexlify(crl.fingerprint(hashes.SHA1()))
        assert fingerprint == b"dd3db63c50f4c4a13e090f14053227cb1011a5ad"
        assert isinstance(crl.signature_hash_algorithm, hashes.SHA256)

    def test_load_large_crl(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_almost_10k.pem"),
            x509.load_pem_x509_crl,
        )
        assert len(crl) == 9999

    def test_empty_crl_no_sequence(self, backend):
        # The SEQUENCE for revoked certificates is optional so let's
        # test that we handle it properly.
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_empty_no_sequence.der"),
            x509.load_der_x509_crl,
        )
        assert len(crl) == 0
        with pytest.raises(IndexError):
            crl[0]
        assert crl.get_revoked_certificate_by_serial_number(12) is None
        assert list(iter(crl)) == []

    def test_invalid_pem(self, backend):
        with pytest.raises(ValueError):
            x509.load_pem_x509_crl(b"notacrl", backend)
        pem_bytes = _load_cert(
            os.path.join("x509", "custom", "valid_signature_cert.pem"),
            lambda data: data,
        )
        with pytest.raises(ValueError):
            x509.load_pem_x509_crl(pem_bytes, backend)

    def test_invalid_der(self, backend):
        with pytest.raises(ValueError):
            x509.load_der_x509_crl(b"notacrl", backend)

    def test_invalid_time(self, backend):
        with pytest.raises(ValueError, match="TBSCertList::this_update"):
            _load_cert(
                os.path.join("x509", "custom", "crl_invalid_time.der"),
                x509.load_der_x509_crl,
            )

    def test_unknown_signature_algorithm(self, backend):
        crl = _load_cert(
            os.path.join(
                "x509", "custom", "crl_md2_unknown_crit_entry_ext.pem"
            ),
            x509.load_pem_x509_crl,
        )
        with raises_unsupported_algorithm(None):
            crl.signature_hash_algorithm

    def test_invalid_version(self, backend):
        with pytest.raises(x509.InvalidVersion):
            _load_cert(
                os.path.join("x509", "custom", "crl_bad_version.pem"),
                x509.load_pem_x509_crl,
            )

    def test_issuer(self, backend):
        crl = _load_cert(
            os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
            x509.load_der_x509_crl,
        )
        assert isinstance(crl.issuer, x509.Name)
        assert list(crl.issuer) == [
            x509.NameAttribute(x509.OID_COUNTRY_NAME, "US"),
            x509.NameAttribute(
                x509.OID_ORGANIZATION_NAME, "Test Certificates 2011"
            ),
            x509.NameAttribute(x509.OID_COMMON_NAME, "Good CA"),
        ]
        assert crl.issuer.get_attributes_for_oid(x509.OID_COMMON_NAME) == [
            x509.NameAttribute(x509.OID_COMMON_NAME, "Good CA")
        ]

    def test_equality(self, backend):
        crl1 = _load_cert(
            os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
            x509.load_der_x509_crl,
        )
        crl2 = _load_cert(
            os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
            x509.load_der_x509_crl,
        )
        crl3 = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )
        assert crl1 == crl2
        assert crl1 != crl3
        assert crl1 != object()

    def test_comparison(self, backend):
        crl1 = _load_cert(
            os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
            x509.load_der_x509_crl,
        )
        with pytest.raises(TypeError):
            crl1 < crl1  # type: ignore[operator]

    def test_update_dates(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )
        with pytest.warns(utils.DeprecatedIn42):
            assert isinstance(crl.next_update, datetime.datetime)
            assert isinstance(crl.last_update, datetime.datetime)
            assert crl.next_update.isoformat() == "2016-01-01T00:00:00"
            assert crl.last_update.isoformat() == "2015-01-01T00:00:00"
        assert isinstance(crl.next_update_utc, datetime.datetime)
        assert isinstance(crl.last_update_utc, datetime.datetime)
        assert crl.next_update_utc.isoformat() == "2016-01-01T00:00:00+00:00"
        assert crl.last_update_utc.isoformat() == "2015-01-01T00:00:00+00:00"

    def test_no_next_update(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_no_next_update.pem"),
            x509.load_pem_x509_crl,
        )
        with pytest.warns(utils.DeprecatedIn42):
            assert crl.next_update is None
        assert crl.next_update_utc is None

    def test_unrecognized_extension(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_unrecognized_extension.der"),
            x509.load_der_x509_crl,
        )
        unrecognized = x509.UnrecognizedExtension(
            x509.ObjectIdentifier("1.2.3.4.5"),
            b"abcdef",
        )
        ext = crl.extensions.get_extension_for_oid(unrecognized.oid)
        assert ext.value == unrecognized

    def test_revoked_cert_retrieval(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )
        for r in crl:
            assert isinstance(r, x509.RevokedCertificate)
        # Check that len() works for CRLs.
        assert len(crl) == 12
        it = iter(crl)
        assert len(typing.cast(typing.Sized, it)) == 12
        next(it)
        assert len(typing.cast(typing.Sized, it)) == 11

    def test_get_revoked_certificate_by_serial_number(self, backend):
        crl = _load_cert(
            os.path.join(
                "x509", "PKITS_data", "crls", "LongSerialNumberCACRL.crl"
            ),
            x509.load_der_x509_crl,
        )
        serial_number = 725064303890588110203033396814564464046290047507
        revoked = crl.get_revoked_certificate_by_serial_number(serial_number)
        assert isinstance(revoked, x509.RevokedCertificate)
        assert revoked.serial_number == serial_number
        assert crl.get_revoked_certificate_by_serial_number(500) is None

    def test_revoked_cert_retrieval_retain_only_revoked(self, backend):
        """
        This test attempts to trigger the crash condition described in
        https://github.com/pyca/cryptography/issues/2557
        PyPy does gc at its own pace, so it will only be reliable on CPython.
        """
        revoked = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )[11]
        with pytest.warns(utils.DeprecatedIn42):
            assert revoked.revocation_date == datetime.datetime(
                2015, 1, 1, 0, 0
            )
        assert revoked.revocation_date_utc == datetime.datetime(
            2015, 1, 1, 0, 0, tzinfo=datetime.timezone.utc
        )
        assert revoked.serial_number == 11

    def test_extensions(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_ian_aia_aki.pem"),
            x509.load_pem_x509_crl,
        )
        crl_number = crl.extensions.get_extension_for_oid(
            ExtensionOID.CRL_NUMBER
        )
        aki = crl.extensions.get_extension_for_class(
            x509.AuthorityKeyIdentifier
        )
        aia = crl.extensions.get_extension_for_class(
            x509.AuthorityInformationAccess
        )
        ian = crl.extensions.get_extension_for_class(
            x509.IssuerAlternativeName
        )
        assert crl_number.value == x509.CRLNumber(1)
        assert crl_number.critical is False
        assert aki.value == x509.AuthorityKeyIdentifier(
            key_identifier=(b"yu\xbb\x84:\xcb,\xdez\t\xbe1\x1bC\xbc\x1c*MSX"),
            authority_cert_issuer=None,
            authority_cert_serial_number=None,
        )
        assert aia.value == x509.AuthorityInformationAccess(
            [
                x509.AccessDescription(
                    AuthorityInformationAccessOID.CA_ISSUERS,
                    x509.DNSName("cryptography.io"),
                )
            ]
        )
        assert ian.value == x509.IssuerAlternativeName(
            [x509.UniformResourceIdentifier("https://cryptography.io")]
        )

    def test_delta_crl_indicator(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_delta_crl_indicator.pem"),
            x509.load_pem_x509_crl,
        )
        dci = crl.extensions.get_extension_for_oid(
            ExtensionOID.DELTA_CRL_INDICATOR
        )
        assert dci.value == x509.DeltaCRLIndicator(12345678901234567890)
        assert dci.critical is True

    def test_signature(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )
        assert crl.signature == binascii.unhexlify(
            b"536a5a0794f68267361e7bc2f19167a3e667a2ab141535616855d8deb2ba1af"
            b"9fd4546b1fe76b454eb436af7b28229fedff4634dfc9dd92254266219ae0ea8"
            b"75d9ff972e9a2da23d5945f073da18c50a4265bfed9ca16586347800ef49dd1"
            b"6856d7265f4f3c498a57f04dc04404e2bd2e2ada1f5697057aacef779a18371"
            b"c621edc9a5c2b8ec1716e8fa22feeb7fcec0ce9156c8d344aa6ae8d1a5d99d0"
            b"9386df36307df3b63c83908f4a61a0ff604c1e292ad63b349d1082ddd7ae1b7"
            b"c178bba995523ec6999310c54da5706549797bfb1230f5593ba7b4353dade4f"
            b"d2be13a57580a6eb20b5c4083f000abac3bf32cd8b75f23e4c8f4b3a79e1e2d"
            b"58a472b0"
        )

    def test_tbs_certlist_bytes(self, backend):
        crl = _load_cert(
            os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
            x509.load_der_x509_crl,
        )
        ca_cert = _load_cert(
            os.path.join("x509", "PKITS_data", "certs", "GoodCACert.crt"),
            x509.load_der_x509_certificate,
        )
        public_key = ca_cert.public_key()
        assert isinstance(public_key, rsa.RSAPublicKey)
        assert crl.signature_hash_algorithm is not None
        public_key.verify(
            crl.signature,
            crl.tbs_certlist_bytes,
            padding.PKCS1v15(),
            crl.signature_hash_algorithm,
        )

    def test_public_bytes_pem(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_empty.pem"),
            x509.load_pem_x509_crl,
        )
        # Encode it to PEM and load it back.
        crl = x509.load_pem_x509_crl(
            crl.public_bytes(
                encoding=serialization.Encoding.PEM,
            ),
        )
        assert len(crl) == 0
        _check_crl_times(
            crl,
            last_update=datetime.datetime(2015, 12, 20, 23, 44, 47),
            next_update=datetime.datetime(2015, 12, 28, 0, 44, 47),
        )

    def test_public_bytes_der(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_all_reasons.pem"),
            x509.load_pem_x509_crl,
        )
        # Encode it to DER and load it back.
        crl = x509.load_der_x509_crl(
            crl.public_bytes(
                encoding=serialization.Encoding.DER,
            ),
        )
        assert len(crl) == 12
        _check_crl_times(
            crl,
            last_update=datetime.datetime(2015, 1, 1, 0, 0, 0),
            next_update=datetime.datetime(2016, 1, 1, 0, 0, 0),
        )

    @pytest.mark.parametrize(
        ("cert_path", "loader_func", "encoding"),
        [
            (
                os.path.join("x509", "custom", "crl_all_reasons.pem"),
                x509.load_pem_x509_crl,
                serialization.Encoding.PEM,
            ),
            (
                os.path.join("x509", "PKITS_data", "crls", "GoodCACRL.crl"),
                x509.load_der_x509_crl,
                serialization.Encoding.DER,
            ),
        ],
    )
    def test_public_bytes_match(
        self, cert_path, loader_func, encoding, backend
    ):
        crl_bytes = load_vectors_from_file(
            cert_path, lambda pemfile: pemfile.read(), mode="rb"
        )
        crl = loader_func(crl_bytes, backend)
        serialized = crl.public_bytes(encoding)
        assert serialized == crl_bytes

    def test_public_bytes_invalid_encoding(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_empty.pem"),
            x509.load_pem_x509_crl,
        )
        with pytest.raises(TypeError):
            crl.public_bytes("NotAnEncoding")  # type: ignore[arg-type]

    def test_verify_bad(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "invalid_signature_crl.pem"),
            x509.load_pem_x509_crl,
        )
        crt = _load_cert(
            os.path.join("x509", "custom", "invalid_signature_cert.pem"),
            x509.load_pem_x509_certificate,
        )
        public_key = crt.public_key()
        assert isinstance(public_key, rsa.RSAPublicKey)
        assert not crl.is_signature_valid(public_key)
        crl = _load_cert(
            os.path.join("x509", "custom", "crl_inner_outer_mismatch.der"),
            x509.load_der_x509_crl,
        )
        assert not crl.is_signature_valid(public_key)

    def test_verify_good(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "valid_signature_crl.pem"),
            x509.load_pem_x509_crl,
        )
        crt = _load_cert(
            os.path.join("x509", "custom", "valid_signature_cert.pem"),
            x509.load_pem_x509_certificate,
        )
        public_key = crt.public_key()
        assert isinstance(public_key, rsa.RSAPublicKey)
        assert crl.is_signature_valid(public_key)

    def test_verify_argument_must_be_a_public_key(self, backend):
        crl = _load_cert(
            os.path.join("x509", "custom", "valid_signature_crl.pem"),
            x509.load_pem_x509_crl,
        )
        with pytest.raises(TypeError):
            crl.is_signature_valid(
                "not a public key"  # type: ignore[arg-type]
            )
        with pytest.raises(TypeError):
            crl.is_signature_valid(object)  # type: ignore[arg-type]

    def test_crl_issuer_invalid_printable_string(self):
        data = _load_cert(
            os.path.join(
                "x509", "custom", "crl_issuer_invalid_printable_string.der"
            ),
            lambda v: v,
        )
        with pytest.raises(ValueError):
            x509.load_der_x509_crl(data)
 | TestCertificateRevocationList |
python | Textualize__textual | tests/toggles/test_radioset.py | {
"start": 131,
"end": 5209
} | class ____(App[None]):
    def __init__(self):
        super().__init__()
        self.events_received = []

    def compose(self) -> ComposeResult:
        with RadioSet(id="from_buttons"):
            yield RadioButton(id="clickme")
            yield RadioButton()
            yield RadioButton(value=True)
        yield RadioSet("One", "True", "Three", id="from_strings")

    def on_radio_set_changed(self, event: RadioSet.Changed) -> None:
        assert event.radio_set is event.control
        self.events_received.append(
            (
                event.radio_set.id,
                event.index,
                [button.value for button in event.radio_set.query(RadioButton)],
            )
        )


async def test_radio_sets_initial_state():
    """The initial states of the radio sets should be as we specified."""
    async with RadioSetApp().run_test() as pilot:
        assert pilot.app.query_one("#from_buttons", RadioSet).pressed_index == 2
        assert pilot.app.query_one("#from_buttons", RadioSet).pressed_button is not None
        assert pilot.app.query_one("#from_strings", RadioSet).pressed_index == -1
        assert pilot.app.query_one("#from_strings", RadioSet).pressed_button is None
        assert pilot.app.events_received == []


async def test_click_sets_focus():
    """Clicking within a radio set should set focus."""
    async with RadioSetApp().run_test() as pilot:
        pilot.app.set_focus(None)
        assert pilot.app.screen.focused is None
        await pilot.click("#clickme")
        assert pilot.app.screen.focused == pilot.app.query_one("#from_buttons")


async def test_radio_sets_toggle():
    """Test the status of the radio sets after they've been toggled."""
    async with RadioSetApp().run_test() as pilot:
        pilot.app.query_one("#from_buttons", RadioSet)._nodes[0].toggle()
        pilot.app.query_one("#from_strings", RadioSet)._nodes[2].toggle()
        await pilot.pause()
        assert pilot.app.query_one("#from_buttons", RadioSet).pressed_index == 0
        assert pilot.app.query_one("#from_buttons", RadioSet).pressed_button is not None
        assert pilot.app.query_one("#from_strings", RadioSet).pressed_index == 2
        assert pilot.app.query_one("#from_strings", RadioSet).pressed_button is not None
        assert pilot.app.events_received == [
            ("from_buttons", 0, [True, False, False]),
            ("from_strings", 2, [False, False, True]),
        ]


async def test_radioset_same_button_mash():
    """Mashing the same button should have no effect."""
    async with RadioSetApp().run_test() as pilot:
        assert pilot.app.query_one("#from_buttons", RadioSet).pressed_index == 2
        pilot.app.query_one("#from_buttons", RadioSet)._nodes[2].toggle()
        assert pilot.app.query_one("#from_buttons", RadioSet).pressed_index == 2
        assert pilot.app.events_received == []


async def test_radioset_inner_navigation():
    """Using the cursor keys should navigate between buttons in a set."""
    async with RadioSetApp().run_test() as pilot:
        for key, landing in (
            ("down", 1),
            ("up", 0),
            ("right", 1),
            ("left", 0),
            ("up", 2),
            ("down", 0),
        ):
            await pilot.press(key, "enter")
            assert (
                pilot.app.query_one("#from_buttons", RadioSet).pressed_button
                == pilot.app.query_one("#from_buttons").children[landing]
            )
    async with RadioSetApp().run_test() as pilot:
        assert pilot.app.screen.focused is pilot.app.screen.query_one("#from_buttons")
        await pilot.press("tab")
        assert pilot.app.screen.focused is pilot.app.screen.query_one("#from_strings")
        assert pilot.app.query_one("#from_strings", RadioSet)._selected == 0
        await pilot.press("down")
        assert pilot.app.query_one("#from_strings", RadioSet)._selected == 1


async def test_radioset_inner_navigation_post_build():
    class EmptyRadioSetApp(App[None]):
        def compose(self) -> ComposeResult:
            yield RadioSet()

        def on_mount(self) -> None:
            # This isn't encouraged; but neither is it currently prohibited;
            # so let's test.
            for n in range(5):
                self.query_one(RadioSet).mount(RadioButton(id=f"rb{n}"))

    async with EmptyRadioSetApp().run_test() as pilot:
        assert pilot.app.query_one(RadioSet)._selected is None
        await pilot.press("up")
        assert pilot.app.query_one(RadioSet)._selected == 4


async def test_radioset_breakout_navigation():
    """Shift/Tabbing while in a radioset should move to the previous/next focusable after the set itself."""
    async with RadioSetApp().run_test() as pilot:
        assert pilot.app.screen.focused is pilot.app.query_one("#from_buttons")
        await pilot.press("tab")
        assert pilot.app.screen.focused is pilot.app.query_one("#from_strings")
        await pilot.press("shift+tab")
        assert pilot.app.screen.focused is pilot.app.query_one("#from_buttons")
 | RadioSetApp |
python | pytorch__pytorch | torch/_inductor/fx_passes/micro_pipeline_tp.py | {
"start": 15766,
"end": 40174
} | class ____(_Matmul):
A_scale_node: torch.fx.Node
B_scale_node: torch.fx.Node
bias_node: torch.fx.Node | None
result_scale_node: torch.fx.Node | None
out_dtype: torch.dtype | None
use_fast_accum: bool
pre_mm_reshape: torch.fx.Node | None
post_mm_reshape: torch.fx.Node | None
def __post_init__(self):
super().__post_init__()
self.arg_ancestor_nodes |= _find_ancestors(self.A_scale_node)
self.arg_ancestor_nodes |= _find_ancestors(self.B_scale_node)
@classmethod
def from_match(cls, match: list[torch.fx.Node]) -> "_ScaledMatmul":
assert len(match) in (1, 3)
assert match[0].target in (
aten._scaled_mm.default,
aten.reshape.default,
)
def get_arg(node: torch.fx.Node, idx: int, default: Any) -> Any:
if idx >= len(node.args):
return default
return node.args[idx]
# Use mm_node with 2D args for both A and B, even if this is a "reshape -> mm -> reshape" pattern.
# We will store the reshapes in pre_mm_reshape and post_mm_reshape, to be referenced later to
# produce the correct output shapes, reduce-scatter along the correct dimensions, etc.
is_reshape_mm_reshape_pattern = match[0].target is aten.reshape.default
mm_node = match[1] if is_reshape_mm_reshape_pattern else match[0]
pre_mm_reshape = match[0] if is_reshape_mm_reshape_pattern else None
post_mm_reshape = match[-1] if is_reshape_mm_reshape_pattern else None
A_node = cast("torch.fx.Node", mm_node.args[0])
B_node = cast("torch.fx.Node", mm_node.args[1])
A_scale_node = cast("torch.fx.Node", mm_node.args[2])
B_scale_node = cast("torch.fx.Node", mm_node.args[3])
return _ScaledMatmul(
nodes=match,
A_node=A_node,
B_node=B_node,
A_scale_node=A_scale_node,
B_scale_node=B_scale_node,
bias_node=get_arg(mm_node, 4, None),
result_scale_node=get_arg(mm_node, 5, None),
out_dtype=get_arg(mm_node, 6, None),
use_fast_accum=get_arg(mm_node, 7, False),
pre_mm_reshape=pre_mm_reshape,
post_mm_reshape=post_mm_reshape,
)
def _find_reshape_mm_reshape(node: torch.fx.Node) -> list[_Matmul]:
if node.target != aten.reshape.default:
return []
matches = []
for mm_node in node.users:
if mm_node.target not in (aten.mm.default, aten._scaled_mm.default):
continue
for reshape_node in mm_node.users:
if reshape_node.target != aten.reshape.default:
continue
# Since the reshape -> mm -> reshape pattern would be subsumed into
# the fused op, we only match the patterns where the shape of the
# second reshape is matches the mm result produced by the fused op.
matmul_input_node = cast("torch.fx.Node", node.args[0])
B_node = cast("torch.fx.Node", mm_node.args[1])
matmul_out_shape = torch.Size(
[
*_get_tensor(matmul_input_node).shape[:-1],
_get_tensor(B_node).shape[-1],
]
)
if _get_tensor(reshape_node).shape != matmul_out_shape:
continue
matches.append([node, mm_node, reshape_node])
# If for some rare reason mm_node is being reshaped by two
# different reshape nodes, we only include mm_node once in the
# parsing result.
break
matmuls = []
for match in matches:
mm_node = match[1]
if mm_node.target is aten.mm.default:
matmul = _Matmul.from_match(match)
matmuls.append(matmul)
elif mm_node.target is aten._scaled_mm.default:
matmul = _ScaledMatmul.from_match(match)
matmuls.append(matmul)
else:
raise AssertionError(
"Expect the node's target to be either aten.mm.default or "
f"aten._scaled_mm.default. Got {mm_node.target}."
)
return matmuls
def _find_consumer_matmuls(node: torch.fx.Node) -> list[_Matmul]:
"""
Find the matmuls that use `node` as the lhs argument.
"""
matmuls = []
for user in node.users:
# ND matmuls
if user.target is aten.reshape.default:
matmuls.extend(_find_reshape_mm_reshape(user))
# 2D matmuls
elif user.target is aten.mm.default:
matmul = _Matmul.from_match(match=[user])
matmuls.append(matmul)
elif user.target is aten._scaled_mm.default:
matmul = _ScaledMatmul.from_match([user])
matmuls.append(matmul)
return matmuls
def _insert_fused_all_gather_matmul(
graph: torch.fx.Graph,
matmuls: list[_Matmul],
shard_node: torch.fx.Node,
gather_dim: int,
group_name: str,
) -> torch.fx.Node:
mm_types = OrderedSet(map(type, matmuls))
assert len(mm_types) == 1
mm_type = next(iter(mm_types))
if mm_type == _Matmul:
B_nodes = [matmul.B_node for matmul in matmuls]
return graph.call_function(
torch.ops.symm_mem.fused_all_gather_matmul.default,
args=(shard_node, B_nodes, gather_dim, group_name),
kwargs={"return_A": True},
)
elif mm_type == _ScaledMatmul:
scaled_matmuls = cast("list[_ScaledMatmul]", matmuls)
return graph.call_function(
torch.ops.symm_mem.fused_all_gather_scaled_matmul.default,
args=(
shard_node,
[matmul.B_node for matmul in scaled_matmuls],
scaled_matmuls[0].A_scale_node,
[matmul.B_scale_node for matmul in scaled_matmuls],
gather_dim,
group_name,
[matmul.bias_node for matmul in scaled_matmuls],
[matmul.result_scale_node for matmul in scaled_matmuls],
[matmul.out_dtype for matmul in scaled_matmuls],
[matmul.use_fast_accum for matmul in scaled_matmuls],
),
)
else:
raise AssertionError(f"Unexpected matmul match type: {mm_type}")
def fuse_all_gather_matmul(all_gather: _AllGatherMatch) -> None:
"""
Fused the pattern
A = all_gather_tensor(A_shard, gather_dim, group_name)
C_0 = torch.matmul(A, B_0)
C_1 = torch.matmul(A, B_1)
C_2 = torch.matmul(A, B_2)
...
into
A, Cs = torch.ops.symm_mem.fused_all_gather_matmul(
A_shard, [B_0, B_1, B_2, ...], gather_dim, group_name,
)
"""
if (
not torch.distributed.is_available()
or not torch.distributed.is_nccl_available()
):
return
from torch.distributed._symmetric_memory import (
is_symm_mem_enabled_for_group,
restride_A_shard_for_fused_all_gather_matmul,
)
shard_node, ag_node, ag_res_node, gather_dim, group_name = (
all_gather.shard_node,
all_gather.ag_node,
all_gather.res_node,
all_gather.gather_dim,
all_gather.group_name,
)
if not is_symm_mem_enabled_for_group(group_name):
return
filter_matmul = None
if _is_last_dim(_get_tensor(shard_node), gather_dim):
# Decomposed mms should not be too small
if _get_tensor(shard_node).shape[-1] < 1024:
return
# scaled_mm is not supported yet for last dim
def _filter_out_scaled_matmul(matmul: _Matmul):
return not isinstance(matmul, _ScaledMatmul)
filter_matmul = _filter_out_scaled_matmul
# Find consumer matmuls
matmuls = _find_consumer_matmuls(ag_res_node)
# The matmuls are only fusible if non-A args don't depend on the all-gather
# result node
matmuls = [
matmul
for matmul in matmuls
if all_gather.res_node not in matmul.arg_ancestor_nodes
]
if len(matmuls) == 0 or len(OrderedSet(map(type, matmuls))) != 1:
return
if _is_last_dim(_get_tensor(shard_node), gather_dim) and len(
all_gather.res_node.users
) > len(matmuls):
# The ag-split-cat result is used by more than just the matmuls, so it
# would have to be materialized anyway, which can add overhead.
return
if filter_matmul and not filter_matmul(matmuls[0]):
return
# Fuse the all_gather_tensor with the eligible matmuls
graph = ag_node.graph
with graph.inserting_before(ag_node):
if not _is_last_dim(_get_tensor(shard_node), gather_dim):
if "val" in shard_node.meta:
restrided = restride_A_shard_for_fused_all_gather_matmul(
_get_tensor(shard_node),
gather_dim,
)
shard_node = graph.call_function(
inductor_prims.force_stride_order,
args=(shard_node, restrided.stride()),
)
fused_node = _insert_fused_all_gather_matmul(
graph, matmuls, shard_node, gather_dim, group_name
)
new_ag_node = graph.call_function(
operator.getitem,
args=(fused_node, 0),
)
new_out_nodes = graph.call_function(
operator.getitem,
args=(fused_node, 1),
)
for idx, matmul in enumerate(matmuls):
new_out_node = graph.call_function(
operator.getitem,
args=(new_out_nodes, idx),
)
matmul.replace_with(new_out_node)
matmul.erase()
all_gather.replace_with(new_ag_node)
all_gather.erase()
# If the new_ag_node has no users, we tell the fused op to not return
# it. This creates more optimization opportunities.
if len(new_ag_node.users) == 0:
graph.erase_node(new_ag_node)
kwargs = dict(fused_node.kwargs)
if "return_A" in kwargs:
kwargs["return_A"] = False
fused_node.kwargs = kwargs
# Raise ancestors of non-A args that are topologically ordered between
# ag_res_node and the matmul above fused_node.
order = {node: idx for idx, node in enumerate(graph.nodes)}
nodes_to_raise = sorted(
OrderedSet(x for matmul in matmuls for x in matmul.arg_ancestor_nodes),
key=lambda x: order[x],
)
for node in nodes_to_raise:
if order[node] > order[fused_node]:
fused_node.prepend(node)
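The final loop above moves any required ancestor that currently sits after the fused node to just before it, preserving relative order. A list-based sketch of that reordering (hypothetical `raise_before` helper with string node names, not part of this module):

```python
# Sketch: "raise" nodes that appear after `fused` but are needed by it,
# keeping their relative order, mirroring the prepend loop above.
def raise_before(nodes, fused, to_raise):
    order = {n: i for i, n in enumerate(nodes)}
    raised = sorted(
        (n for n in to_raise if order[n] > order[fused]), key=lambda n: order[n]
    )
    rest = [n for n in nodes if n not in raised]
    i = rest.index(fused)
    return rest[:i] + raised + rest[i:]

assert raise_before(["a", "fused", "b", "c"], "fused", {"c"}) == ["a", "c", "fused", "b"]
```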
def _scatter_dim_after_reshape(
reshape_node: torch.fx.Node | None, orig_scatter_dim: int
) -> int:
"""
Given a reshape node and the original scatter dim for the target tensor,
returns the new scatter dim for the reshaped tensor.
"""
# if there was no pre-mm reshape, scatter dim will not change.
if not reshape_node:
return orig_scatter_dim
reshape_op_output_tensor = _get_tensor(reshape_node)
assert reshape_op_output_tensor.ndim == 2, (
"reshape must produce 2D tensor for scaled_mm"
)
assert len(reshape_node.args) >= 1, "reshape node must have at least 1 arg"
input_tensor_node = cast(torch.fx.Node, reshape_node.args[0])
reshape_op_input_tensor = _get_tensor(input_tensor_node)
assert reshape_op_input_tensor.ndim > reshape_op_output_tensor.ndim, (
"reshape must be from 3D+ to 2D"
)
# Note: for an N-D tensor to be reshaped into 2D, either the leading dims or the ending dims must
# be collapsed to a single dim. First determine which of these happened.
input_shape = reshape_op_input_tensor.shape
output_shape = reshape_op_output_tensor.shape
leading_dims_collapsed = output_shape[0] == prod(input_shape[:-1])
# Case 1: scatter dim 0 always maps to 0 after any reshape from 3D+ to 2D, regardless of whether
# the leading dims or the ending dims were collapsed.
if orig_scatter_dim == 0:
return 0
# Case 2: scatter dim "ndim-1" always maps to 1 after any reshape from 3D+ to 2D, regardless of whether
# the leading dims or the ending dims were collapsed.
if orig_scatter_dim == reshape_op_input_tensor.ndim - 1:
return 1
# Case 3: scatter dim was one of the middle dims (between 0 and ndim-1).
# if the leading dims were collapsed, the new scatter dim will be 0.
# if the ending dims were collapsed, the new scatter dim will be 1.
return 0 if leading_dims_collapsed else 1
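The three cases above can be exercised with a shape-only sketch (assumption: plain shape tuples stand in for fx nodes and their tensor metadata):

```python
# Sketch of the scatter-dim remapping rule for a 3D+ -> 2D reshape.
from math import prod

def scatter_dim_after_reshape(input_shape, output_shape, orig_scatter_dim):
    if orig_scatter_dim == 0:
        return 0  # dim 0 always stays dim 0
    if orig_scatter_dim == len(input_shape) - 1:
        return 1  # the last dim always becomes dim 1
    # middle dims: 0 if the leading dims were collapsed, else 1
    leading_dims_collapsed = output_shape[0] == prod(input_shape[:-1])
    return 0 if leading_dims_collapsed else 1

# (2, 3, 4) -> (6, 4) collapses the leading dims, so middle dim 1 maps to 0
assert scatter_dim_after_reshape((2, 3, 4), (6, 4), 1) == 0
```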
def _find_producer_matmul(node: torch.fx.Node) -> _Matmul | None:
"""
Returns producer matmul node if found, otherwise returns None.
"""
if node.target is aten.mm.default:
return _Matmul.from_match(match=[node])
elif node.target is aten._scaled_mm.default:
return _ScaledMatmul.from_match(match=[node])
elif node.target is aten.reshape.default:
reshape_node_1 = node
mm_node = reshape_node_1.args[0]
assert isinstance(mm_node, torch.fx.Node)
if mm_node.target not in (aten.mm.default, aten._scaled_mm.default):
return None
reshape_node_0 = mm_node.args[0]
assert isinstance(reshape_node_0, torch.fx.Node)
if reshape_node_0.target != aten.reshape.default:
return None
if mm_node.target is aten.mm.default:
return _Matmul.from_match(match=[reshape_node_0, mm_node, reshape_node_1])
elif mm_node.target is aten._scaled_mm.default:
return _ScaledMatmul.from_match(
match=[reshape_node_0, mm_node, reshape_node_1]
)
return None
def _insert_fused_matmul_reduce_scatter(
graph: torch.fx.Graph,
matmul: _Matmul,
reduce_op: str,
orig_scatter_dim: int,
group_name: str,
scatter_dim_after_reshape: int, # only used for reshape -> scaled_mm -> reshape pattern
output_shape: list[int], # only used for reshape -> scaled_mm -> reshape pattern
) -> torch.fx.Node:
if type(matmul) is _Matmul:
return graph.call_function(
torch.ops.symm_mem.fused_matmul_reduce_scatter.default,
args=(
matmul.A_node,
matmul.B_node,
reduce_op,
orig_scatter_dim,
group_name,
),
)
elif type(matmul) is _ScaledMatmul:
return graph.call_function(
torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter.default,
args=(
matmul.A_node,
matmul.B_node,
matmul.A_scale_node,
matmul.B_scale_node,
reduce_op,
orig_scatter_dim,
scatter_dim_after_reshape,
group_name,
output_shape,
matmul.bias_node,
matmul.result_scale_node,
matmul.out_dtype,
matmul.use_fast_accum,
),
)
else:
raise AssertionError(f"Unexpected matmul match type: {type(matmul)}")
def fuse_matmul_reduce_scatter(reduce_scatter: _ReduceScatterMatch) -> None:
"""
Fuse the pattern
reduce_scatter_tensor(A @ B, scatter_dim, group_name)
into
torch.ops.symm_mem.fused_matmul_reduce_scatter(
A, B, scatter_dim, group_name,
)
Returns early, leaving the graph unchanged, if the pattern is not eligible for fusion.
"""
if (
not torch.distributed.is_available()
or not torch.distributed.is_nccl_available()
):
return
from torch.distributed._symmetric_memory import (
is_symm_mem_enabled_for_group,
restride_A_for_fused_matmul_reduce_scatter,
)
(
input_node,
_reduce_scatter_node,
rs_wait_tensor_node,
reduce_op,
orig_scatter_dim,
group_name,
) = (
reduce_scatter.input_node,
reduce_scatter.reduce_scatter_node,
reduce_scatter.wait_tensor_node,
reduce_scatter.reduce_op,
reduce_scatter.scatter_dim,
reduce_scatter.group_name,
)
if not is_symm_mem_enabled_for_group(group_name):
return
filter_matmul = None
if _is_last_dim(_get_tensor(input_node), orig_scatter_dim):
# scaled_mm is not supported yet for last dim mm+rs
def _filter_out_scaled_matmul(matmul: _Matmul):
return not isinstance(matmul, _ScaledMatmul)
filter_matmul = _filter_out_scaled_matmul
# Currently fused_matmul_reduce_scatter doesn't return the matmul result,
# so we can't apply the fusion if the matmul result is used by multiple
# users. This is not a fundamental limitation of the fused op and can be
# addressed if needed.
if len(input_node.users) != 1:
log.warning(
"matmul result has more than one user, skipping fused_matmul_reduce_scatter fusion."
)
return
matmul = _find_producer_matmul(input_node)
if matmul is None:
log.warning(
"no producer matmul found for reduce scatter, skipping fuse_matmul_reduce_scatter fusion"
)
return
if filter_matmul and not filter_matmul(matmul):
return
if rs_wait_tensor_node in matmul.arg_ancestor_nodes:
log.warning(
"reduce-scatter result node is an ancestor of matmul, skipping fuse_matmul_reduce_scatter fusion"
)
return
# We need to track 3 values for the fused scaled mm reduce scatter implementation:
# 1. The scatter dim before the reshape, which was assigned using the original (a,b,c) @ (c,d) = (a,b,d) dims.
# 2. The scatter dim after the reshape, to use when we are doing the 2D (a*b,c) @ (c,d) = (a,b,d) scaled mm op.
# 3. Store expected potentially 3D+ mm output shape, so we can reshape the 2D mm output to the intended
# 3D+ shape before applying reduce-scatter, and to prevent shape errors with subsequent ops.
# If 'A' was reshaped from 3D+ -> 2D for the mm, we need to determine the new scatter dim after the reshape
# for the fused matmul reduce scatter implementation to use.
if matmul.pre_mm_reshape:
scatter_dim_after_maybe_reshape = _scatter_dim_after_reshape(
matmul.pre_mm_reshape, orig_scatter_dim
)
else:
scatter_dim_after_maybe_reshape = orig_scatter_dim
# If the 2D mm output was reshaped from 2D -> 3D+, we need to store the intended output shape for the
# fused matmul reduce scatter implementation to use.
if matmul.post_mm_reshape:
output_shape = list(_get_tensor(matmul.post_mm_reshape).shape)
else:
A_orig_shape = list(_get_tensor(matmul.A_node).shape)
B_shape = list(_get_tensor(matmul.B_node).shape)
output_shape = [*A_orig_shape[:-1], B_shape[-1]]
graph = rs_wait_tensor_node.graph
with graph.inserting_before(rs_wait_tensor_node):
# Restride A tensor before fused op, for optimal perf in fused matmul reduce scatter
if "val" in matmul.A_node.meta:
restrided = restride_A_for_fused_matmul_reduce_scatter(
_get_tensor(matmul.A_node),
scatter_dim_after_maybe_reshape,
)
matmul.A_node = graph.call_function(
inductor_prims.force_stride_order,
args=(matmul.A_node, restrided.stride()),
)
# Replace matched subgraph with fused matmul reduce scatter node
fused_node = _insert_fused_matmul_reduce_scatter(
graph,
matmul,
reduce_op,
orig_scatter_dim,
group_name,
scatter_dim_after_maybe_reshape,
output_shape,
)
reduce_scatter.replace_with(fused_node)
reduce_scatter.erase()
matmul.erase()
order = {node: idx for idx, node in enumerate(graph.nodes)}
nodes_to_raise = sorted(
matmul.arg_ancestor_nodes,
key=lambda x: order[x],
)
for node in nodes_to_raise:
if order[node] > order[fused_node]:
fused_node.prepend(node)
log.debug("successfully fused matmul reduce scatter")
def _get_node_to_ancestors(
graph: torch.fx.Graph,
) -> dict[torch.fx.Node, OrderedSet[torch.fx.Node]]:
"""
Compute the ancestors for all nodes in a graph.
"""
node_to_ancestors = defaultdict(OrderedSet[torch.fx.Node]) # type: ignore[var-annotated]
for node in graph.nodes:
node_to_ancestors[node] = OrderedSet(node.all_input_nodes)
for dep in node.all_input_nodes:
node_to_ancestors[node] |= node_to_ancestors[dep]
return node_to_ancestors
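The ancestor computation relies on `graph.nodes` being topologically ordered, so each node's ancestor set can be built from its direct inputs' already-computed sets in one pass. A dict-based sketch (plain `set` instead of `OrderedSet`, an adjacency dict in topological order instead of an fx graph):

```python
# Sketch: transitive ancestors over a topologically ordered DAG.
from collections import defaultdict

def node_to_ancestors(deps):
    # deps maps node -> list of direct inputs; keys are in topological order
    ancestors = defaultdict(set)
    for node, inputs in deps.items():
        ancestors[node] = set(inputs)
        for dep in inputs:
            ancestors[node] |= ancestors[dep]
    return ancestors

anc = node_to_ancestors({"a": [], "b": ["a"], "c": ["b"]})
assert anc["c"] == {"a", "b"}
```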
def _get_collective_to_overlappable_nodes(
graph: torch.fx.Graph,
) -> dict[torch.fx.Node, list[torch.fx.Node]]:
"""
For each collective in the graph, find nodes that are neither ancestors nor
descendants of the collective.
"""
def is_collective(node) -> bool:
# Only consider all-gather and reduce-scatter in the context of
# micro-pipeline TP.
return node.target in [
torch.ops._c10d_functional.all_gather_into_tensor.default,
torch.ops._c10d_functional.reduce_scatter_tensor.default,
]
node_to_ancestors = _get_node_to_ancestors(graph)
collective_to_overlappable_nodes = defaultdict(list)
for node in graph.nodes:
if not is_collective(node):
continue
for x in graph.nodes:
if (
node not in node_to_ancestors[x]
and x not in node_to_ancestors[node]
and x.op == "call_function"
):
collective_to_overlappable_nodes[node].append(x)
return collective_to_overlappable_nodes
def _get_unexposed_collectives(graph: torch.fx.Graph) -> list[torch.fx.Node]:
"""
Find all unexposed collectives in the graph.
Because we don't have the runtime estimate, this function is a rough
estimation using the following strong/hand-wavy assumptions:
- Only a predefined set of "compute intensive" operation can hide a collective.
- Any "compute intensive" operation can hide exactly one collective.
"""
def _is_compute_intensive(node: torch.fx.Node) -> bool:
return node.target is torch.ops.aten.mm.default
collective_to_overlapping_candidates = defaultdict(list)
available_nodes = OrderedSet[torch.fx.Node]()
collective_to_overlappable_nodes = _get_collective_to_overlappable_nodes(graph)
for collective, overlappable_nodes in collective_to_overlappable_nodes.items():
candidates = [x for x in overlappable_nodes if _is_compute_intensive(x)]
collective_to_overlapping_candidates[collective] = candidates
available_nodes.update(candidates)
unexposed_collectives = []
for (
collective,
overlapping_candidates,
) in collective_to_overlapping_candidates.items():
# Each collective consumes exactly one overlapping candidate
for x in overlapping_candidates:
if x in available_nodes:
unexposed_collectives.append(collective)
available_nodes.remove(x)
break
return unexposed_collectives
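The greedy assignment above — each compute-intensive candidate hides at most one collective — can be sketched without an fx graph (assumption: string node names and plain containers):

```python
# Sketch: greedily match each collective to one still-available compute op.
def unexposed(collective_to_candidates):
    available, hidden = set(), []
    for cands in collective_to_candidates.values():
        available.update(cands)
    for coll, cands in collective_to_candidates.items():
        for x in cands:
            if x in available:
                hidden.append(coll)  # this collective can be hidden
                available.remove(x)  # each compute op hides exactly one
                break
    return hidden

# only one mm is available, so only the first collective is considered hidden
assert unexposed({"ag0": ["mm0"], "rs0": ["mm0"]}) == ["ag0"]
```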
def micro_pipeline_tp_pass(graph: torch.fx.Graph):
all_gathers = find_all_gather_patterns(graph)
reduce_scatters = find_reduce_scatter_patterns(graph)
# When a collective can be hidden through either simple overlapping or
# micro-pipeline TP, we prefer simple overlapping to avoid the overhead
# associated with decomposition. If reorder_for_compute_comm_overlap is
# enabled, we identify collectives that can be hidden through simple
# overlapping and exclude them from micro-pipeline TP candidates.
if config.reorder_for_compute_comm_overlap:
unexposed_collectives = _get_unexposed_collectives(graph)
all_gathers = [x for x in all_gathers if x.ag_node not in unexposed_collectives]
reduce_scatters = [
x
for x in reduce_scatters
if x.reduce_scatter_node not in unexposed_collectives
]
if not all_gathers and not reduce_scatters:
log.warning(
"async TP found no matching all-gather/reduce-scatter patterns for fusion"
)
for all_gather in all_gathers:
fuse_all_gather_matmul(all_gather)
for reduce_scatter in reduce_scatters:
fuse_matmul_reduce_scatter(reduce_scatter)
| _ScaledMatmul |
python | apache__airflow | providers/apache/cassandra/tests/unit/apache/cassandra/hooks/test_cassandra.py | {
"start": 1820,
"end": 9023
} | class ____:
def test_init_with_valid_connection(self, mock_cassandra_hook):
assert isinstance(mock_cassandra_hook.cluster, Cluster)
assert mock_cassandra_hook.cluster.contact_points == ["127.0.0.1", "127.0.0.2"]
assert mock_cassandra_hook.cluster.port == 9042
assert isinstance(mock_cassandra_hook.cluster.auth_provider, PlainTextAuthProvider)
assert (
mock_cassandra_hook.cluster.load_balancing_policy.__class__.__name__ == "DCAwareRoundRobinPolicy"
)
assert mock_cassandra_hook.cluster.cql_version == "3.4.4"
assert mock_cassandra_hook.cluster.ssl_options == {"ca_certs": "/path/to/certs"}
assert mock_cassandra_hook.cluster.protocol_version == 4
assert mock_cassandra_hook.keyspace == "test_keyspace"
def test_get_conn_session_exist(self, mock_cassandra_hook):
mock_cassandra_hook.session = Mock()
mock_cassandra_hook.session.is_shutdown = False
hook_session = mock_cassandra_hook.get_conn()
assert isinstance(hook_session, Mock)
assert not hook_session.is_shutdown
def test_get_conn_session_not_exist(self, mock_cassandra_hook):
mock_cassandra_hook.cluster.connect = Mock()
hook_session = mock_cassandra_hook.get_conn()
assert isinstance(hook_session, Mock)
def test_get_cluster(self, mock_cassandra_hook):
cluster = mock_cassandra_hook.get_cluster()
assert cluster.contact_points == ["127.0.0.1", "127.0.0.2"]
assert cluster.port == 9042
assert isinstance(cluster.auth_provider, PlainTextAuthProvider)
assert cluster.load_balancing_policy.__class__.__name__ == "DCAwareRoundRobinPolicy"
assert cluster.cql_version == "3.4.4"
assert cluster.ssl_options == {"ca_certs": "/path/to/certs"}
assert cluster.protocol_version == 4
assert mock_cassandra_hook.keyspace == "test_keyspace"
def test_shutdown_cluster(self, mock_cassandra_hook):
mock_cassandra_hook.cluster = Mock()
mock_cassandra_hook.cluster.is_shutdown = False
mock_cassandra_hook.cluster.shutdown.return_value = None
mock_cassandra_hook.shutdown_cluster()
assert not mock_cassandra_hook.cluster.is_shutdown
mock_cassandra_hook.cluster.shutdown.assert_called_once()
def test_get_lb_policy_dc_aware_round_robin_policy(self, mock_cassandra_hook):
policy_args = {"local_dc": "dc1", "used_hosts_per_remote_dc": 2}
lb_policy = mock_cassandra_hook.get_lb_policy("DCAwareRoundRobinPolicy", policy_args)
assert isinstance(lb_policy, DCAwareRoundRobinPolicy)
assert lb_policy.local_dc == "dc1"
assert lb_policy.used_hosts_per_remote_dc == 2
def test_get_lb_policy_token_aware_policy_dc_aware_round_robin_policy(self, mock_cassandra_hook):
policy_args = {
"child_load_balancing_policy": "DCAwareRoundRobinPolicy",
"child_load_balancing_policy_args": {
"local_dc": "dc1",
"used_hosts_per_remote_dc": 3,
},
}
lb_policy = mock_cassandra_hook.get_lb_policy("TokenAwarePolicy", policy_args)
assert isinstance(lb_policy, TokenAwarePolicy)
assert isinstance(lb_policy._child_policy, DCAwareRoundRobinPolicy)
assert lb_policy._child_policy.local_dc == "dc1"
assert lb_policy._child_policy.used_hosts_per_remote_dc == 3
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook.get_conn")
def test_table_exists(self, mock_get_conn, mock_cassandra_hook):
mock_cluster_metadata = Mock()
mock_cluster_metadata.keyspaces = {"test_keyspace": Mock(tables={"test_table": None})}
mock_get_conn.return_value.cluster.metadata = mock_cluster_metadata
result = mock_cassandra_hook.table_exists("test_keyspace.test_table")
assert result
mock_get_conn.assert_called_once()
def test_sanitize_input_valid(self, mock_cassandra_hook):
input_string = "valid_table_name"
sanitized_string = mock_cassandra_hook._sanitize_input(input_string)
assert sanitized_string == input_string
def test_sanitize_input_invalid(self, mock_cassandra_hook):
with pytest.raises(ValueError, match=r"Invalid input: invalid_table_name_with_%_characters"):
mock_cassandra_hook._sanitize_input("invalid_table_name_with_%_characters")
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook.get_conn")
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook._sanitize_input")
def test_record_exists_table(self, mock_sanitize_input, mock_get_conn, mock_cassandra_hook):
table = "test_table"
keys = {"key1": "value1", "key2": "value2"}
mock_sanitize_input.return_value = table
mock_get_conn.return_value.execute.return_value.one = Mock()
result = mock_cassandra_hook.record_exists(table, keys)
assert result
mock_sanitize_input.assert_called_with(table)
assert mock_sanitize_input.call_count == 2
mock_get_conn.return_value.execute.assert_called_once_with(
"SELECT * FROM test_table.test_table WHERE key1=%(key1)s AND key2=%(key2)s",
{"key1": "value1", "key2": "value2"},
)
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook.get_conn")
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook._sanitize_input")
def test_record_exists_keyspace_table(self, mock_sanitize_input, mock_get_conn, mock_cassandra_hook):
table = "test_keyspace.test_table"
keys = {"key1": "value1", "key2": "value2"}
mock_sanitize_input.return_value = "test_table"
mock_get_conn.return_value.execute.return_value.one = Mock()
result = mock_cassandra_hook.record_exists(table, keys)
assert result
mock_sanitize_input.assert_called_with("test_table")
assert mock_sanitize_input.call_count == 3
mock_get_conn.return_value.execute.assert_called_once_with(
"SELECT * FROM test_table.test_table WHERE key1=%(key1)s AND key2=%(key2)s",
{"key1": "value1", "key2": "value2"},
)
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook.get_conn")
@patch("airflow.providers.apache.cassandra.hooks.cassandra.CassandraHook._sanitize_input")
def test_record_exists_exception(self, mock_sanitize_input, mock_get_conn, mock_cassandra_hook):
table = "test_keyspace.test_table"
keys = {"key1": "value1", "key2": "value2"}
mock_sanitize_input.return_value = "test_table"
mock_get_conn.return_value.execute.side_effect = Exception("Test exception")
result = mock_cassandra_hook.record_exists(table, keys)
assert not result
mock_sanitize_input.assert_called_with("test_table")
assert mock_sanitize_input.call_count == 3
mock_get_conn.return_value.execute.assert_called_once_with(
"SELECT * FROM test_table.test_table WHERE key1=%(key1)s AND key2=%(key2)s",
{"key1": "value1", "key2": "value2"},
)
| TestCassandraHook |
python | PyCQA__pylint | tests/functional/f/first_arg.py | {
"start": 79,
"end": 441
} | class ____:
# C0202, classmethod
def __new__(something): # [bad-classmethod-argument]
pass
# C0202, classmethod
def class1(cls):
pass
class1 = classmethod(class1) # [no-classmethod-decorator]
def class2(other): # [bad-classmethod-argument]
pass
class2 = classmethod(class2) # [no-classmethod-decorator]
| Obj |
python | tensorflow__tensorflow | tensorflow/python/kernel_tests/metrics_test.py | {
"start": 120903,
"end": 124320
} | class ____(test.TestCase):
def setUp(self):
ops.reset_default_graph()
@test_util.run_deprecated_v1
def testVars(self):
metrics.mean_relative_error(
predictions=array_ops.ones((10, 1)),
labels=array_ops.ones((10, 1)),
normalizer=array_ops.ones((10, 1)))
_assert_metric_variables(
self, ('mean_relative_error/count:0', 'mean_relative_error/total:0'))
@test_util.run_deprecated_v1
def testMetricsCollection(self):
my_collection_name = '__metrics__'
mean, _ = metrics.mean_relative_error(
predictions=array_ops.ones((10, 1)),
labels=array_ops.ones((10, 1)),
normalizer=array_ops.ones((10, 1)),
metrics_collections=[my_collection_name])
self.assertListEqual(ops.get_collection(my_collection_name), [mean])
@test_util.run_deprecated_v1
def testUpdatesCollection(self):
my_collection_name = '__updates__'
_, update_op = metrics.mean_relative_error(
predictions=array_ops.ones((10, 1)),
labels=array_ops.ones((10, 1)),
normalizer=array_ops.ones((10, 1)),
updates_collections=[my_collection_name])
self.assertListEqual(ops.get_collection(my_collection_name), [update_op])
@test_util.run_deprecated_v1
def testValueTensorIsIdempotent(self):
predictions = random_ops.random_normal((10, 3), seed=1)
labels = random_ops.random_normal((10, 3), seed=2)
normalizer = random_ops.random_normal((10, 3), seed=3)
error, update_op = metrics.mean_relative_error(labels, predictions,
normalizer)
with self.cached_session():
self.evaluate(variables.local_variables_initializer())
# Run several updates.
for _ in range(10):
self.evaluate(update_op)
# Then verify idempotency.
initial_error = self.evaluate(error)
for _ in range(10):
self.assertEqual(initial_error, self.evaluate(error))
@test_util.run_deprecated_v1
def testSingleUpdateNormalizedByLabels(self):
np_predictions = np.asarray([2, 4, 6, 8], dtype=np.float32)
np_labels = np.asarray([1, 3, 2, 3], dtype=np.float32)
expected_error = np.mean(
np.divide(np.absolute(np_predictions - np_labels), np_labels))
predictions = constant_op.constant(
np_predictions, shape=(1, 4), dtype=dtypes_lib.float32)
labels = constant_op.constant(np_labels, shape=(1, 4))
error, update_op = metrics.mean_relative_error(
labels, predictions, normalizer=labels)
with self.cached_session():
self.evaluate(variables.local_variables_initializer())
self.assertEqual(expected_error, self.evaluate(update_op))
self.assertEqual(expected_error, self.evaluate(error))
@test_util.run_deprecated_v1
def testSingleUpdateNormalizedByZeros(self):
np_predictions = np.asarray([2, 4, 6, 8], dtype=np.float32)
predictions = constant_op.constant(
np_predictions, shape=(1, 4), dtype=dtypes_lib.float32)
labels = constant_op.constant(
[1, 3, 2, 3], shape=(1, 4), dtype=dtypes_lib.float32)
error, update_op = metrics.mean_relative_error(
labels, predictions, normalizer=array_ops.zeros_like(labels))
with self.cached_session():
self.evaluate(variables.local_variables_initializer())
self.assertEqual(0.0, self.evaluate(update_op))
self.assertEqual(0.0, self.evaluate(error))
| MeanRelativeErrorTest |
python | catalyst-team__catalyst | catalyst/contrib/optimizers/adamp.py | {
"start": 104,
"end": 6320
} | class ____(Optimizer):
"""Implements AdamP algorithm.
The original Adam algorithm was proposed in
`Adam: A Method for Stochastic Optimization`_.
The AdamP variant was proposed in
`Slowing Down the Weight Norm Increase in Momentum-based Optimizers`_.
Arguments:
params: iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay coefficient
(default: 0)
delta: threshold that determines whether
a set of parameters is scale invariant or not (default: 0.1)
wd_ratio: relative weight decay applied on scale-invariant
parameters compared to that applied on scale-variant parameters
(default: 0.1)
nesterov (boolean, optional): enables Nesterov momentum
(default: False)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _Slowing Down the Weight Norm Increase in Momentum-based Optimizers:
https://arxiv.org/abs/2006.08217
Original source code: https://github.com/clovaai/AdamP
"""
def __init__(
self,
params,
lr=1e-3,
betas=(0.9, 0.999),
eps=1e-8,
weight_decay=0,
delta=0.1,
wd_ratio=0.1,
nesterov=False,
):
"""
Args:
params: iterable of parameters to optimize
or dicts defining parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients
used for computing running averages of gradient
and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay coefficient
(default: 0)
delta: threshold that determines whether
a set of parameters is scale invariant or not (default: 0.1)
wd_ratio: relative weight decay applied on scale-invariant
parameters compared to that applied on scale-variant parameters
(default: 0.1)
nesterov (boolean, optional): enables Nesterov momentum
(default: False)
"""
defaults = dict( # noqa: C408
lr=lr,
betas=betas,
eps=eps,
weight_decay=weight_decay,
delta=delta,
wd_ratio=wd_ratio,
nesterov=nesterov,
)
super(AdamP, self).__init__(params, defaults)
def _channel_view(self, x):
return x.view(x.size(0), -1)
def _layer_view(self, x):
return x.view(1, -1)
def _cosine_similarity(self, x, y, eps, view_func):
x = view_func(x)
y = view_func(y)
return F.cosine_similarity(x, y, dim=1, eps=eps).abs_()
def _projection(self, p, grad, perturb, delta, wd_ratio, eps):
wd = 1
expand_size = [-1] + [1] * (len(p.shape) - 1)
for view_func in [self._channel_view, self._layer_view]:
cosine_sim = self._cosine_similarity(grad, p.data, eps, view_func)
if cosine_sim.max() < delta / math.sqrt(view_func(p.data).size(1)):
p_n = p.data / view_func(p.data).norm(dim=1).view(expand_size).add_(eps)
perturb -= p_n * view_func(p_n * perturb).sum(dim=1).view(expand_size)
wd = wd_ratio
return perturb, wd
return perturb, wd
def step(self, closure=None):
"""
Performs a single optimization step (parameter update).
Arguments:
closure: A closure that reevaluates the model and
returns the loss. Optional for most optimizers.
Returns:
computed loss
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group["params"]:
if p.grad is None:
continue
grad = p.grad.data
beta1, beta2 = group["betas"]
nesterov = group["nesterov"]
state = self.state[p]
# State initialization
if len(state) == 0:
state["step"] = 0
state["exp_avg"] = torch.zeros_like(p.data)
state["exp_avg_sq"] = torch.zeros_like(p.data)
# Adam
exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
state["step"] += 1
bias_correction1 = 1 - beta1 ** state["step"]
bias_correction2 = 1 - beta2 ** state["step"]
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(
group["eps"]
)
step_size = group["lr"] / bias_correction1
if nesterov:
perturb = (beta1 * exp_avg + (1 - beta1) * grad) / denom
else:
perturb = exp_avg / denom
# Projection
wd_ratio = 1
if len(p.shape) > 1:
perturb, wd_ratio = self._projection(
p, grad, perturb, group["delta"], group["wd_ratio"], group["eps"]
)
# Weight decay
if group["weight_decay"] > 0:
p.data.mul_(1 - group["lr"] * group["weight_decay"] * wd_ratio)
# Step
p.data.add_(perturb, alpha=-step_size)
return loss
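The bias-corrected first and second moments used in `step()` reduce, for a single scalar parameter with no projection or weight decay, to the following sketch (hypothetical `adam_update` helper for illustration, not part of this class):

```python
# Sketch: plain-Adam update on one scalar parameter starting at 0.
import math

def adam_update(grad_history, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    exp_avg = exp_avg_sq = 0.0
    p = 0.0
    for step, g in enumerate(grad_history, start=1):
        exp_avg = beta1 * exp_avg + (1 - beta1) * g
        exp_avg_sq = beta2 * exp_avg_sq + (1 - beta2) * g * g
        # bias-corrected denominator and step size, as in step() above
        denom = math.sqrt(exp_avg_sq / (1 - beta2 ** step)) + eps
        p -= (lr / (1 - beta1 ** step)) * exp_avg / denom
    return p

# one unit gradient moves the parameter by roughly -lr
assert abs(adam_update([1.0]) + 1e-3) < 1e-6
```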
__all__ = ["AdamP"]
| AdamP |
python | dagster-io__dagster | python_modules/libraries/dagster-datahub/dagster_datahub/resources.py | {
"start": 2993,
"end": 3310
} | class ____(Config):
bootstrap: str = Field(description="Kafka Boostrap Servers. Comma delimited")
schema_registry_url: str = Field(description="Schema Registry Location.")
schema_registry_config: dict[str, Any] = Field(
default={}, description="Extra Schema Registry Config."
)
| DatahubConnection |
python | getsentry__sentry | tests/sentry/tasks/test_post_process.py | {
"start": 13795,
"end": 20220
} | class ____(BasePostProgressGroupMixin):
@patch("sentry.rules.processing.processor.RuleProcessor")
def test_rule_processor_backwards_compat(self, mock_processor: MagicMock) -> None:
event = self.create_event(data={}, project_id=self.project.id)
mock_callback = Mock()
mock_futures = [Mock()]
mock_processor.return_value.apply.return_value = [(mock_callback, mock_futures)]
self.call_post_process_group(
is_new=True,
is_regression=False,
is_new_group_environment=True,
event=event,
)
mock_processor.assert_called_once_with(EventMatcher(event), True, False, True, False, False)
mock_processor.return_value.apply.assert_called_once_with()
mock_callback.assert_called_once_with(EventMatcher(event), mock_futures)
@patch("sentry.rules.processing.processor.RuleProcessor")
def test_rule_processor(self, mock_processor: MagicMock) -> None:
event = self.create_event(data={"message": "testing"}, project_id=self.project.id)
mock_callback = Mock()
mock_futures = [Mock()]
mock_processor.return_value.apply.return_value = [(mock_callback, mock_futures)]
self.call_post_process_group(
is_new=True,
is_regression=False,
is_new_group_environment=True,
event=event,
)
mock_processor.return_value.apply.assert_called_once_with()
mock_callback.assert_called_once_with(EventMatcher(event), mock_futures)
@mock_redis_buffer()
def test_rule_processor_buffer_values(self) -> None:
# Test that pending buffer values for `times_seen` are applied to the group and that alerts
# fire as expected
from sentry.models.rule import Rule
MOCK_RULES = ("sentry.rules.filters.issue_occurrences.IssueOccurrencesFilter",)
with (
mock.patch("sentry.buffer.backend.get", buffer.backend.get),
mock.patch("sentry.buffer.backend.incr", buffer.backend.incr),
patch("sentry.constants._SENTRY_RULES", MOCK_RULES),
patch("sentry.rules.rules", init_registry()) as rules,
):
MockAction = mock.Mock()
MockAction.id = "tests.sentry.tasks.post_process.tests.MockAction"
MockAction.return_value = mock.Mock(spec=EventAction)
MockAction.return_value.after.return_value = []
rules.add(MockAction)
conditions = [
{
"id": "sentry.rules.filters.issue_occurrences.IssueOccurrencesFilter",
"value": 10,
},
]
actions = [{"id": "tests.sentry.tasks.post_process.tests.MockAction"}]
Rule.objects.filter(project=self.project).delete()
Rule.objects.create(
project=self.project, data={"conditions": conditions, "actions": actions}
)
event = self.create_event(
data={"message": "testing", "fingerprint": ["group-1"]}, project_id=self.project.id
)
event_2 = self.create_event(
data={"message": "testing", "fingerprint": ["group-1"]}, project_id=self.project.id
)
self.call_post_process_group(
is_new=True,
is_regression=False,
is_new_group_environment=True,
event=event,
)
event.group.update(times_seen=2)
assert MockAction.return_value.after.call_count == 0
buffer.backend.incr(Group, {"times_seen": 15}, filters={"id": event.group.id})
self.call_post_process_group(
is_new=True,
is_regression=False,
is_new_group_environment=True,
event=event_2,
)
assert MockAction.return_value.after.call_count == 1
@patch("sentry.rules.processing.processor.RuleProcessor")
def test_group_refresh(self, mock_processor: MagicMock) -> None:
event = self.create_event(data={"message": "testing"}, project_id=self.project.id)
group1 = event.group
group2 = self.create_group(project=self.project)
assert event.group_id == group1.id
assert event.group == group1
with self.tasks():
merge_groups([group1.id], group2.id)
mock_callback = Mock()
mock_futures = [Mock()]
mock_processor.return_value.apply.return_value = [(mock_callback, mock_futures)]
self.call_post_process_group(
is_new=True,
is_regression=False,
is_new_group_environment=True,
event=event,
)
# Ensure that rule processing sees the merged group.
mock_processor.assert_called_with(
EventMatcher(event, group=group2), True, False, True, False, False
)
@patch("sentry.rules.processing.processor.RuleProcessor")
def test_group_last_seen_buffer(self, mock_processor: MagicMock) -> None:
first_event_date = timezone.now() - timedelta(days=90)
event1 = self.create_event(
data={"message": "testing"},
project_id=self.project.id,
)
group1 = event1.group
group1.update(last_seen=first_event_date)
event2 = self.create_event(data={"message": "testing"}, project_id=self.project.id)
# Mock set the last_seen value to the first event date
# To simulate the update to last_seen being buffered
event2.group.last_seen = first_event_date
event2.group.update(last_seen=first_event_date)
assert event2.group_id == group1.id
mock_callback = Mock()
mock_futures = [Mock()]
mock_processor.return_value.apply.return_value = [(mock_callback, mock_futures)]
self.call_post_process_group(
is_new=False,
is_regression=True,
is_new_group_environment=False,
event=event2,
)
mock_processor.assert_called_with(
EventMatcher(event2, group=group1), False, True, False, False, False
)
sent_group_date: datetime = mock_processor.call_args[0][0].group.last_seen
# Check that last_seen was updated to be at least the new event's date
assert abs(sent_group_date - event2.datetime) < timedelta(seconds=10)
| RuleProcessorTestMixin |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 1257505,
"end": 1257726
} | class ____(sgqlc.types.Type, TeamAuditEntryData):
"""Metadata for a team membership for org.restore_member actions"""
__schema__ = github_schema
__field_names__ = ()
| OrgRestoreMemberMembershipTeamAuditEntryData |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-facebook-marketing/source_facebook_marketing/streams/streams.py | {
"start": 8158,
"end": 11361
} | class ____(FBMarketingStream):
"""See: https://developers.facebook.com/docs/marketing-api/reference/ad-account"""
use_batch = False
def __init__(self, **kwargs):
super().__init__(**kwargs)
self._fields_dict = {}
def get_task_permissions(self, account_id: str) -> Set[str]:
"""https://developers.facebook.com/docs/marketing-api/reference/ad-account/assigned_users/"""
res = set()
me = User(fbid="me", api=self._api.api)
for business_user in me.get_business_users():
assigned_users = self._api.get_account(account_id=account_id).get_assigned_users(
params={"business": business_user["business"].get_id()}
)
for assigned_user in assigned_users:
if business_user.get_id() == assigned_user.get_id():
res.update(set(assigned_user["tasks"]))
return res
def fields(self, account_id: str, **kwargs) -> List[str]:
if self._fields_dict.get(account_id):
return self._fields_dict.get(account_id)
properties = super().fields(**kwargs)
# https://developers.facebook.com/docs/marketing-apis/guides/javascript-ads-dialog-for-payments/
# To access "funding_source_details", the user making the API call must have a MANAGE task permission for
# that specific ad account.
permissions = self.get_task_permissions(account_id=account_id)
if "funding_source_details" in properties and "MANAGE" not in permissions:
properties.remove("funding_source_details")
if "is_prepay_account" in properties and "MANAGE" not in permissions:
properties.remove("is_prepay_account")
self._fields_dict[account_id] = properties
return properties
def list_objects(self, params: Mapping[str, Any], account_id: str) -> Iterable:
"""noop in case of AdAccount"""
fields = self.fields(account_id=account_id)
try:
return [FBAdAccount(self._api.get_account(account_id=account_id).get_id()).api_get(fields=fields)]
except FacebookRequestError as e:
# This is a workaround for cases when account seem to have all the required permissions
# but despite that is not allowed to get `owner` field. See (https://github.com/airbytehq/oncall/issues/3167)
if e.api_error_code() == 200 and e.api_error_message() == "(#200) Requires business_management permission to manage the object":
fields.remove("owner")
return [FBAdAccount(self._api.get_account(account_id=account_id).get_id()).api_get(fields=fields)]
# FB api returns a non-obvious error when accessing the `funding_source_details` field
# even though user is granted all the required permissions (`MANAGE`)
# https://github.com/airbytehq/oncall/issues/3031
if e.api_error_code() == 100 and e.api_error_message() == "Unsupported request - method type: get":
fields.remove("funding_source_details")
return [FBAdAccount(self._api.get_account(account_id=account_id).get_id()).api_get(fields=fields)]
raise e
| AdAccount |
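The `AdAccount.fields` method above gates certain fields behind the `MANAGE` task permission. The gating logic can be sketched in isolation; `filter_fields` and its arguments are hypothetical names, not part of the connector's API:

```python
def filter_fields(properties, permissions):
    # Hypothetical sketch of the permission gating in AdAccount.fields:
    # fields that require the MANAGE task are dropped when it is absent.
    restricted = {"funding_source_details", "is_prepay_account"}
    if "MANAGE" in permissions:
        return list(properties)
    return [p for p in properties if p not in restricted]

kept = filter_fields(["id", "name", "is_prepay_account"], {"ADVERTISE"})
# kept == ["id", "name"]
full = filter_fields(["id", "funding_source_details"], {"MANAGE"})
```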
python | qdrant__qdrant-client | qdrant_client/http/models/models.py | {
"start": 85878,
"end": 86038
} | class ____(BaseModel, extra="forbid"):
id: "ExtendedPointId" = Field(..., description="")
vector: "VectorStruct" = Field(..., description="")
| PointVectors |
python | apache__airflow | task-sdk/src/airflow/sdk/api/datamodels/_generated.py | {
"start": 6254,
"end": 6762
} | class ____(BaseModel):
"""
Schema for updating TaskInstance to 'RUNNING' state with minimal required fields.
"""
model_config = ConfigDict(
extra="forbid",
)
state: Annotated[Literal["running"] | None, Field(title="State")] = "running"
hostname: Annotated[str, Field(title="Hostname")]
unixname: Annotated[str, Field(title="Unixname")]
pid: Annotated[int, Field(title="Pid")]
start_date: Annotated[AwareDatetime, Field(title="Start Date")]
| TIEnterRunningPayload |
python | huggingface__transformers | src/transformers/models/luke/modeling_luke.py | {
"start": 24680,
"end": 26109
} | class ____(nn.Module):
def __init__(self, config):
super().__init__()
self.self = LukeSelfAttention(config)
self.output = LukeSelfOutput(config)
def forward(
self,
word_hidden_states,
entity_hidden_states,
attention_mask=None,
output_attentions=False,
):
word_size = word_hidden_states.size(1)
self_outputs = self.self(
word_hidden_states,
entity_hidden_states,
attention_mask,
output_attentions,
)
if entity_hidden_states is None:
concat_self_outputs = self_outputs[0]
concat_hidden_states = word_hidden_states
else:
concat_self_outputs = torch.cat(self_outputs[:2], dim=1)
concat_hidden_states = torch.cat([word_hidden_states, entity_hidden_states], dim=1)
attention_output = self.output(concat_self_outputs, concat_hidden_states)
word_attention_output = attention_output[:, :word_size, :]
if entity_hidden_states is None:
entity_attention_output = None
else:
entity_attention_output = attention_output[:, word_size:, :]
# add attentions if we output them
outputs = (word_attention_output, entity_attention_output) + self_outputs[2:]
return outputs
# Copied from transformers.models.bert.modeling_bert.BertIntermediate
| LukeAttention |
python | huggingface__transformers | src/transformers/models/instructblip/modeling_instructblip.py | {
"start": 30384,
"end": 37480
} | class ____(InstructBlipPreTrainedModel):
"""
Querying Transformer (Q-Former), used in InstructBLIP. Slightly modified from BLIP-2 as it also takes the
instruction as input.
"""
_supports_attention_backend = False # adds position on attn weights before last matmul
_supports_flash_attn = False
_supports_sdpa = False
_supports_flex_attn = False
_can_record_outputs = {
"hidden_states": InstructBlipQFormerLayer,
"attentions": [
OutputRecorder(InstructBlipQFormerMultiHeadAttention, index=1, layer_name=".attention"),
],
"cross_attentions": [
OutputRecorder(InstructBlipQFormerMultiHeadAttention, index=1, layer_name=".crossattention"),
],
}
def __init__(self, config: InstructBlipQFormerConfig):
super().__init__(config)
self.config = config
self.embeddings = InstructBlipQFormerEmbeddings(config)
self.encoder = InstructBlipQFormerEncoder(config)
self.post_init()
def get_input_embeddings(self):
return self.embeddings.word_embeddings
def set_input_embeddings(self, value):
self.embeddings.word_embeddings = value
def get_extended_attention_mask(
self,
attention_mask: torch.Tensor,
input_shape: tuple[int],
device: torch.device,
has_query: bool = False,
) -> torch.Tensor:
"""
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
Arguments:
attention_mask (`torch.Tensor`):
Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
input_shape (`tuple[int]`):
The shape of the input to the model.
device: (`torch.device`):
The device of the input to the model.
Returns:
`torch.Tensor` The extended attention mask, with a the same dtype as `attention_mask.dtype`.
"""
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
if attention_mask.dim() == 3:
extended_attention_mask = attention_mask[:, None, :, :]
elif attention_mask.dim() == 2:
# Provided a padding mask of dimensions [batch_size, seq_length]
# - the model is an encoder, so make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
extended_attention_mask = attention_mask[:, None, None, :]
else:
raise ValueError(
f"Wrong shape for input_ids (shape {input_shape}) or attention_mask (shape {attention_mask.shape})",
)
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -10000.0 for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
return extended_attention_mask
@check_model_inputs()
@auto_docstring
def forward(
self,
input_ids: torch.LongTensor,
attention_mask: Optional[torch.FloatTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
query_embeds: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.FloatTensor] = None,
**kwargs: Unpack[TransformersKwargs],
) -> Union[tuple[torch.FloatTensor], BaseModelOutputWithPoolingAndCrossAttentions]:
r"""
query_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Hidden states to be used in the attention computation. If cross-attention,
will be used for the query (i.e., key and value will use the encoder_hidden_states).
"""
if input_ids is None and query_embeds is None:
raise ValueError("You have to specify query_embeds when input_ids is None")
query_length = query_embeds.shape[1] if query_embeds is not None else 0
embedding_output = self.embeddings(
input_ids=input_ids,
position_ids=position_ids,
query_embeds=query_embeds,
)
input_shape = embedding_output.size()[:-1]
batch_size, seq_length = input_shape
device = embedding_output.device
if attention_mask is None:
attention_mask = torch.ones(((batch_size, seq_length)), device=device)
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, device)
# If a 2D or 3D attention mask is provided for the cross-attention
# we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
if encoder_hidden_states is not None:
if isinstance(encoder_hidden_states, list):
encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size()
else:
encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
if isinstance(encoder_attention_mask, list):
encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask]
elif encoder_attention_mask is None:
encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
else:
encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
else:
encoder_extended_attention_mask = None
encoder_outputs: BaseModelOutput = self.encoder(
embedding_output,
attention_mask=extended_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_extended_attention_mask,
query_length=query_length,
**kwargs,
)
sequence_output = encoder_outputs.last_hidden_state
pooled_output = sequence_output[:, 0, :]
return BaseModelOutputWithPoolingAndCrossAttentions(
last_hidden_state=sequence_output,
pooler_output=pooled_output,
)
@auto_docstring(
custom_intro="""
InstructBLIP base Model consisting of language model, qformer and vision encoder.
"""
)
| InstructBlipQFormerModel |
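`get_extended_attention_mask` above converts a 1/0 padding mask into an additive bias (`(1.0 - mask) * -10000.0`) that is added to raw attention scores before the softmax. A torch-free sketch of that conversion, using nested lists in place of tensors (the function name is an assumption for illustration):

```python
def extend_padding_mask(mask):
    # mask: [batch][seq] with 1 = attend, 0 = ignore.
    # Returns an additive bias shaped [batch][1][1][seq]:
    # 0.0 where attention is allowed, -10000.0 where it is masked,
    # mirroring (1.0 - extended_attention_mask) * -10000.0 above.
    return [[[[(1.0 - m) * -10000.0 for m in row]]] for row in mask]

bias = extend_padding_mask([[1, 1, 0]])
# bias[0][0][0] == [0.0, 0.0, -10000.0]
```

Adding `-10000.0` to a masked position's score makes its softmax weight effectively zero, which is why the comment in the model calls this "the same as removing these entirely".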
python | bokeh__bokeh | src/bokeh/models/graphs.py | {
"start": 3389,
"end": 3666
} | class ____(GraphCoordinates):
'''
Node coordinate expression obtained from ``LayoutProvider``
'''
# explicit __init__ to support Init signatures
def __init__(self, *args: Any, **kwargs: Any) -> None:
super().__init__(*args, **kwargs)
| NodeCoordinates |
python | doocs__leetcode | solution/2400-2499/2475.Number of Unequal Triplets in Array/Solution.py | {
"start": 0,
"end": 375
} | class ____:
def unequalTriplets(self, nums: List[int]) -> int:
n = len(nums)
ans = 0
for i in range(n):
for j in range(i + 1, n):
for k in range(j + 1, n):
ans += (
nums[i] != nums[j] and nums[j] != nums[k] and nums[i] != nums[k]
)
return ans
| Solution |
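The `unequalTriplets` record above uses an O(n³) brute force. An equivalent O(n) counting sketch (a common alternative, not part of the record): group equal values, and for each group count triplets that take one element from earlier groups, one from this group, and one from later groups.

```python
from collections import Counter

def unequal_triplets(nums):
    # For each value group of size c, elements to its left (already seen
    # groups) and right (remaining groups) are guaranteed unequal to it.
    ans, left, n = 0, 0, len(nums)
    for c in Counter(nums).values():
        right = n - left - c
        ans += left * c * right
        left += c
    return ans

result = unequal_triplets([4, 4, 2, 4, 3])  # 3, matching the brute force
```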
python | pytorch__pytorch | benchmarks/dynamo/pr_time_benchmarks/check_results.py | {
"start": 133,
"end": 264
} | class ____:
benchmark_name: str
metric_name: str
expected_value: int
noise_margin: float
@dataclass
| ExpectedFileEntry |
python | PyCQA__pylint | tests/pyreverse/functional/class_diagrams/property_decorator/property_decorator.py | {
"start": 24,
"end": 461
} | class ____:
"""Test class for property decorators with annotated return type"""
def __init__(self):
self._x = 0
@property
def x(self) -> int:
"""This is a getter for x"""
return self._x
@x.setter
def x(self, value):
"""This is a setter for x"""
self._x = value
@x.deleter
def x(self):
"""This is a deleter for x"""
del self._x
| AnnotatedPropertyTest |
python | kamyu104__LeetCode-Solutions | Python/binary-tree-level-order-traversal-ii.py | {
"start": 154,
"end": 773
} | class ____(object):
def levelOrderBottom(self, root):
"""
:type root: TreeNode
:rtype: List[List[int]]
"""
if root is None:
return []
result, current = [], [root]
while current:
next_level, vals = [], []
for node in current:
vals.append(node.val)
if node.left:
next_level.append(node.left)
if node.right:
next_level.append(node.right)
current = next_level
result.append(vals)
return result[::-1]
| Solution |
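The `levelOrderBottom` record above does a BFS and reverses the collected levels. A minimal standalone sketch with a local `TreeNode` (the record assumes one is defined elsewhere):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order_bottom(root):
    # BFS level by level, then reverse so the deepest level comes first.
    if root is None:
        return []
    result, current = [], [root]
    while current:
        result.append([n.val for n in current])
        current = [c for n in current for c in (n.left, n.right) if c]
    return result[::-1]

tree = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
levels = level_order_bottom(tree)  # [[15, 7], [9, 20], [3]]
```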
python | pytorch__pytorch | test/mobile/model_test/builtin_ops.py | {
"start": 2295,
"end": 2978
} | class ____(torch.nn.Module):
def forward(self):
s = "abcde"
# list
l = ["1", "2", "test"]
l.reverse()
l.reverse()
l[1] = "3"
l.extend(["4"])
# str dict
d = {"key": 1}
d.clear()
d.update({"key": 0})
if "key" in d:
d["key"] = 2
# int dict
d2 = {0: 100}
if 0 in d2:
d2.clear()
d2[0] = 100
return len(
s[torch.tensor(1)],
d["key"],
d2[0],
d.keys(),
d.items(),
d.values(),
d2.values(),
l.pop(),
)
| TSCollectionOpsModule |
python | apache__airflow | airflow-core/src/airflow/api_fastapi/core_api/datamodels/ui/common.py | {
"start": 1127,
"end": 1249
} | class ____(BaseModel):
"""Base Edge serializer for responses."""
source_id: str
target_id: str
| BaseEdgeResponse |
python | doocs__leetcode | solution/1200-1299/1258.Synonymous Sentences/Solution.py | {
"start": 0,
"end": 542
} | class ____:
def __init__(self, n):
self.p = list(range(n))
self.size = [1] * n
def find(self, x):
if self.p[x] != x:
self.p[x] = self.find(self.p[x])
return self.p[x]
def union(self, a, b):
pa, pb = self.find(a), self.find(b)
if pa != pb:
if self.size[pa] > self.size[pb]:
self.p[pb] = pa
self.size[pa] += self.size[pb]
else:
self.p[pa] = pb
self.size[pb] += self.size[pa]
| UnionFind |
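The `UnionFind` record above implements union by size with path compression. A usage sketch (reproducing the class so the example is self-contained): after unioning 0–1 and 1–2, those three elements share a root while 3 remains in its own set.

```python
class UnionFind:
    def __init__(self, n):
        self.p = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path compression: point x directly at its root.
        if self.p[x] != x:
            self.p[x] = self.find(self.p[x])
        return self.p[x]

    def union(self, a, b):
        # Union by size: attach the smaller tree under the larger root.
        pa, pb = self.find(a), self.find(b)
        if pa != pb:
            if self.size[pa] > self.size[pb]:
                self.p[pb] = pa
                self.size[pa] += self.size[pb]
            else:
                self.p[pa] = pb
                self.size[pb] += self.size[pa]

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
same = uf.find(0) == uf.find(2)   # True: connected via 1
apart = uf.find(0) == uf.find(3)  # False: 3 was never unioned
```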
python | falconry__falcon | tests/test_httpstatus.py | {
"start": 6397,
"end": 7037
} | class ____:
def on_get(self, req, res):
res.data = b'foo'
http_status = HTTPStatus(745)
assert http_status.status_code == 745
raise http_status
def on_post(self, req, res):
res.media = {'a': 1}
http_status = HTTPStatus(falcon.HTTP_725)
assert http_status.status_code == 725
raise http_status
def on_put(self, req, res):
res.text = 'foo'
raise HTTPStatus(falcon.HTTP_719)
@pytest.fixture()
def body_client(asgi, util):
app = util.create_app(asgi)
app.add_route('/status', NoBodyResource())
return testing.TestClient(app)
| NoBodyResource |
python | great-expectations__great_expectations | great_expectations/datasource/fluent/invalid_datasource.py | {
"start": 1297,
"end": 3688
} | class ____(DataAsset):
"""
A DataAsset that is invalid.
The DataAsset itself may be valid, but it is classified as invalid because its parent Datasource or sibling assets are invalid.
""" # noqa: E501 # FIXME CoP
type: str = "invalid"
name: str = "invalid"
class Config:
extra = "ignore"
def _raise_type_error(self) -> NoReturn:
"""
Raise a TypeError indicating that the Asset is invalid.
If available, raise from the original config error that caused the Datasource to be invalid.
"""
error = TypeError(f"{self.name} Asset is invalid")
if datasource := getattr(self, "datasource", None):
raise error from datasource.config_error
raise error
@override
def test_connection(self) -> None:
if datasource := getattr(self, "datasource", None):
raise TestConnectionError( # noqa: TRY003 # FIXME CoP
f"The Datasource configuration for {self.name} is invalid and cannot be used. Please fix the error and try again" # noqa: E501 # FIXME CoP
) from datasource.config_error
# the asset should always have a datasource, but if it doesn't, we should still raise an error # noqa: E501 # FIXME CoP
raise TestConnectionError( # noqa: TRY003 # FIXME CoP
"This Asset configuration is invalid and cannot be used. Please fix the error and try again" # noqa: E501 # FIXME CoP
)
@override
def add_batch_definition(self, name: str, partitioner: Any | None = None) -> NoReturn:
self._raise_type_error()
@override
def build_batch_request(
self,
options: dict | None = None,
batch_slice: Any = None,
partitioner: Any = None,
) -> NoReturn:
self._raise_type_error()
@override
def get_batch_identifiers_list(self, batch_request: BatchRequest) -> List[dict]:
self._raise_type_error()
@override
def get_batch(self, batch_request: BatchRequest) -> Batch:
self._raise_type_error()
@override
def sort_batches(
self, batch_list: List[Batch], partitioner: PartitionerSortingProtocol
) -> List[Batch]:
self._raise_type_error()
@override
def get_batch_parameters_keys(self, partitioner: ColumnPartitioner | None = None) -> NoReturn:
self._raise_type_error()
| InvalidAsset |
python | doocs__leetcode | solution/1700-1799/1704.Determine if String Halves Are Alike/Solution.py | {
"start": 0,
"end": 252
} | class ____:
def halvesAreAlike(self, s: str) -> bool:
cnt, n = 0, len(s) >> 1
vowels = set('aeiouAEIOU')
for i in range(n):
cnt += s[i] in vowels
cnt -= s[i + n] in vowels
return cnt == 0
| Solution |
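The `halvesAreAlike` record above tallies vowels with a single counter: increment for the first half, decrement for the second, and compare to zero. A functional sketch of the same idea (the function name is an assumption):

```python
def halves_are_alike(s: str) -> bool:
    # Net vowel count: +1 per vowel in the first half,
    # -1 per vowel in the second; zero means the halves match.
    vowels = set("aeiouAEIOU")
    n = len(s) // 2
    cnt = sum(c in vowels for c in s[:n]) - sum(c in vowels for c in s[n:])
    return cnt == 0

r1 = halves_are_alike("book")      # True: 'bo' and 'ok' have one vowel each
r2 = halves_are_alike("textbook")  # False: 'text' has 1 vowel, 'book' has 2
```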
python | pytorch__pytorch | torch/fx/passes/split_utils.py | {
"start": 1178,
"end": 11679
} | class ____:
"""
A component serves as a container for a subgraph we want to create afterwards.
"""
graph: torch.fx.Graph
order: int
name: str
# Stores the placeholder nodes in `graph`.
input_placeholders: list = field(default_factory=list)
# Store the nodes in original graph that are placeholder in `graph`.
orig_inputs: list = field(default_factory=list)
# Store the nodes in original graph that are outputs in `graph`.
orig_outputs: list = field(default_factory=list)
# Mapping from get_attr node in original graph to get_attr node in `graph`.
getattr_maps: dict[torch.fx.Node, torch.fx.Node] = field(default_factory=dict)
constructor_args: list[str] = field(default_factory=list)
gm: Optional[torch.fx.GraphModule] = None
@compatibility(is_backward_compatible=False)
def split_by_tags(
gm: torch.fx.GraphModule,
tags: list[str],
return_fqn_mapping: bool = False,
return_tuple: bool = False,
GraphModuleCls: type[torch.fx.GraphModule] = torch.fx.GraphModule,
) -> Union[torch.fx.GraphModule, tuple[torch.fx.GraphModule, dict[str, str]]]:
"""
Splits a GraphModule using tags on its graph nodes. We honor the order of
tags. For example, we have tags = ["a", "b", "c"], the function will create
the initial submodules in the order of "a", "b", "c".
To set a tag:
gm.graph.nodes[idx].tag = "mytag"
This will result in all nodes with the same tag being extracted and placed in their
own submodule. For placeholder, output and get_attr node, the tag is ignored. placeholder
and output nodes are created when needed while get_attr nodes get copied to submodules
where they are used.
Given the following module def:
class SimpleModule(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear1 = torch.nn.Linear(...)
self.linear2 = torch.nn.Linear(...)
self.linear3 = torch.nn.Linear(...)
def forward(self, in1, in2):
r1 = self.linear1(in1)
r2 = self.linear2(in2)
r3 = torch.cat([r1, r2])
return self.linear3(r3)
Marking the node corresponding to in1 with the tag sc.REQUEST_ONLY.lower() results in the following split:
ro:
def forward(self, in1):
self = self.root
linear1 = self.linear1(in1)
return linear1
main:
def forward(self, in2, linear1):
self = self.root
linear2 = self.linear2(in2)
cat_1 = torch.cat([linear1, linear2])
linear3 = self.linear3(cat_1)
return linear3
main:
def forward(self, in1, in2):
self = self.root
ro_0 = self.ro_0(in1)
main_1 = self.main_1(in2, ro_0)
return main_1
Returns:
split_gm: torch fx graph after split
orig_to_split_fqn_mapping: a map between the original fqn and the fqn
after split for call_module and get_attr.
"""
def flatten(x: torch.fx.node.Argument) -> NodeList:
"""
Stores nodes in x to a list and returns the list.
"""
r: NodeList = []
map_arg(x, r.append)
return r
# Mapping from node in original module to node in created submodule.
node_remapping: dict[torch.fx.Node, torch.fx.Node] = {}
# Mapping from node in original module or created submodules to
# corresponding component.
node_to_component: dict[torch.fx.Node, Component] = {}
# Mapping from tag to the corresponding component.
tag_to_component: dict[str, Component] = {}
# Stores all components.
all_components: list[Component] = []
# Stores nodes that will be used in main graph.
used_in_main: dict[torch.fx.Node, None] = {}
# Main graph after split.
main_g = torch.fx.Graph()
# Mapping from node in original module to node in main graph after split.
main_remapping: dict[torch.fx.Node, torch.fx.Node] = {}
# Output node of original module.
output_node: Optional[torch.fx.Node] = None
# Create a component for each tag, we don't expect to create other components afterwards.
for tag in tags:
comp = Component(torch.fx.Graph(), len(all_components), f"{tag}")
all_components.append(comp)
tag_to_component[tag] = comp
# Traverse the nodes in original graph and take care of them.
for node in gm.graph.nodes:
if node.op == "output":
if output_node is not None:
raise RuntimeError("Multiple output nodes in graph!")
output_node = node
continue
# Placeholders in the original graph get copied to main graph.
if node.op == "placeholder":
main_remapping[node] = main_g.placeholder(node.name, type_expr=node.type)
main_remapping[node].meta = copy.copy(node.meta)
continue
# Get_attr nodes are ignored because we are not tagging them.
# Instead, we copy them directly to the submodules use them afterwards.
if node.op == "get_attr":
continue
# Now we process callable nodes which are nodes with op of call_module,
# call_function or call_method. Every callable nodes should be tagged.
assert hasattr(node, "tag"), f"Node does not have tag: {node.format_node()}"
upstream_components = [
node_to_component[x]
for x in flatten(node.args) + flatten(node.kwargs)
if x.op not in {"placeholder", "get_attr"}
]
comp = tag_to_component[node.tag]
node_to_component[node] = comp
# Max order of upperstream components.
mx = max((c.order for c in upstream_components), default=0)
# Expect the component for `node` has higher order then its upstream components.
assert comp.order >= mx, (
f"Component {comp.name} order must be >= max of its upstream components, order={comp.order} and max={mx}"
)
# Map a input of `node` to nodes in the component's graph.
def remap_func(x):
# If input is a get_attr node, copy it to current component's graph.
# Returns the get_attr node in current component's graph.
if x.op == "get_attr":
if x not in comp.getattr_maps:
comp.getattr_maps[x] = comp.graph.get_attr(
x.target, type_expr=x.type
)
comp.getattr_maps[x].meta = copy.copy(x.meta)
return comp.getattr_maps[x]
# If input is not a placeholder, it should have been put into a component
# already. If it's the current component then we return the corresponding
# node in the component.
if x.op != "placeholder" and node_to_component[x] == comp:
return node_remapping[x]
# If input is a placeholder or it's in other components, we want to make it
# as a placeholder in current component's graph.
if x not in comp.orig_inputs:
comp.orig_inputs.append(x)
placeholder = comp.graph.placeholder(x.name, type_expr=x.type)
placeholder.meta = copy.copy(x.meta)
comp.input_placeholders.append(placeholder)
used_in_main[x] = None
return comp.input_placeholders[comp.orig_inputs.index(x)]
n = comp.graph.node_copy(node, remap_func)
n.tag = node.tag # type: ignore[attr-defined]
node_remapping[node] = n
node_to_component[n] = comp
if output_node is None:
raise RuntimeError("Graph had no output node!")
for x in flatten(output_node.args[0]):
if x.op == "get_attr":
# We don't need components mapping for nodes of type "get_attr"
# that are consumed by the output. Only need to make sure we create
# corresponding counterparts in the resulting graph.
main_remapping[x] = main_g.get_attr(x.name, type_expr=x.type)
else:
# All component results consumed by the output node should be
# marked as "used in main".
used_in_main[x] = None
# If a node is used in main graph then we mark it as an output in the component
# it belongs to.
for n in used_in_main:
if n.op != "placeholder":
node_to_component[n].orig_outputs.append(n)
# Now we create a graphmodule for each component.
orig_to_split_fqn_mapping: dict[str, str] = {}
for comp in all_components:
outs = tuple(map(node_remapping.__getitem__, comp.orig_outputs))
if return_tuple:
comp.graph.output(outs)
else:
# Take care of the args of FX output node. If there's a single
# output then the output node args is like (output_single), else
# if there're multiple outputs then the output node args is like
# ((output_0, output_1, ...)).
comp.graph.output(outs[0] if len(outs) == 1 else outs)
comp.gm, comp_orig_to_split_fqn_mapping = lift_subgraph_as_module(
gm, subgraph=comp.graph, comp_name=comp.name
)
orig_to_split_fqn_mapping.update(comp_orig_to_split_fqn_mapping)
# Create a call_module node in main graph.
main_node = main_g.call_module(
comp.name,
args=tuple(map(main_remapping.__getitem__, comp.orig_inputs)),
kwargs=None,
)
if len(outs) == 1 and not return_tuple:
main_remapping[comp.orig_outputs[0]] = main_node
else:
for i, o in enumerate(comp.orig_outputs):
# Use Proxy to record getitem access.
main_remapping[o] = torch.fx.Proxy(main_node)[i].node # type: ignore[index]
main_g.output(map_arg(output_node.args[0], main_remapping.__getitem__))
main_root = HolderModule({comp.name: comp.gm for comp in all_components})
main_g._codegen = gm.graph._codegen
# If the output nodes consumes get_attr directly in the original graph,
# then we need to make sure get_attr is copied to the new graph.
for x in flatten(output_node.args[0]):
if x.op == "get_attr":
setattr(main_root, x.name, getattr_recursive(gm, x.target)) # type: ignore[arg-type]
result_gm = GraphModuleCls(main_root, main_g)
if return_fqn_mapping:
return result_gm, orig_to_split_fqn_mapping
return result_gm
| Component |
python | huggingface__transformers | templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py | {
"start": 4965,
"end": 37545
} | class ____:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
validation_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
test_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input test data file to predict the label on (a text file)."},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
max_predict_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of prediction examples to this "
"value if set."
},
)
def __post_init__(self):
if (
self.dataset_name is None
and self.train_file is None
and self.validation_file is None
and self.test_file is None
):
raise ValueError("Need either a dataset name or a training/validation/test file.")
else:
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."
if self.test_file is not None:
extension = self.test_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`test_file` should be a csv, a json or a txt file."
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
# Log on each process the small summary:
logger.warning(
        f"Process rank: {training_args.local_process_index}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {bool(training_args.parallel_mode.value == 'distributed')}, 16-bits training: {training_args.fp16}"
)
logger.info(f"Training/evaluation parameters {training_args}")
# Set seed before initializing model.
set_seed(training_args.seed)
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(
data_args.dataset_name,
data_args.dataset_config_name,
trust_remote_code=model_args.trust_remote_code,
)
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
extension = data_args.train_file.split(".")[-1]
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.validation_file.split(".")[-1]
if data_args.test_file is not None:
data_files["test"] = data_args.test_file
extension = data_args.test_file.split(".")[-1]
if extension == "txt":
extension = "text"
raw_datasets = load_dataset(extension, data_files=data_files)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
{%- if cookiecutter.can_train_from_scratch == "True" %}
config_kwargs = {
"cache_dir": model_args.cache_dir,
"revision": model_args.model_revision,
"token": model_args.token,
"trust_remote_code": model_args.trust_remote_code,
}
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, **config_kwargs)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
tokenizer_kwargs = {
"cache_dir": model_args.cache_dir,
"use_fast": model_args.use_fast_tokenizer,
"revision": model_args.model_revision,
"token": model_args.token,
"trust_remote_code": model_args.trust_remote_code,
}
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script. "
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if model_args.model_name_or_path:
model = {{cookiecutter.model_class}}.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
token=model_args.token,
trust_remote_code=model_args.trust_remote_code,
)
else:
logger.info("Training new model from scratch")
model = {{cookiecutter.model_class}}.from_config(config)
model.resize_token_embeddings(len(tokenizer))
{%- elif cookiecutter.can_train_from_scratch == "False" %}
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
# num_labels=num_labels, Uncomment if you have a certain number of labels
finetuning_task=data_args.task_name,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
token=model_args.token,
trust_remote_code=model_args.trust_remote_code,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast_tokenizer,
revision=model_args.model_revision,
token=model_args.token,
trust_remote_code=model_args.trust_remote_code,
)
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
token=model_args.token,
trust_remote_code=model_args.trust_remote_code,
)
{% endif %}
# Preprocessing the datasets.
# First we tokenize all the texts.
if training_args.do_train:
column_names = raw_datasets["train"].column_names
elif training_args.do_eval:
column_names = raw_datasets["validation"].column_names
elif training_args.do_predict:
column_names = raw_datasets["test"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
return tokenizer(examples[text_column_name], padding="max_length", truncation=True)
if training_args.do_train:
if "train" not in raw_datasets:
raise ValueError("--do_train requires a train dataset")
train_dataset = raw_datasets["train"]
if data_args.max_train_samples is not None:
# Select Sample from Dataset
train_dataset = train_dataset.select(range(data_args.max_train_samples))
# tokenize train dataset in batch
with training_args.main_process_first(desc="train dataset map tokenization"):
train_dataset = train_dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=not data_args.overwrite_cache,
)
if training_args.do_eval:
if "validation" not in raw_datasets:
raise ValueError("--do_eval requires a validation dataset")
eval_dataset = raw_datasets["validation"]
# Selecting samples from dataset
if data_args.max_eval_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
# tokenize validation dataset
with training_args.main_process_first(desc="validation dataset map tokenization"):
eval_dataset = eval_dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=not data_args.overwrite_cache,
)
if training_args.do_predict:
if "test" not in raw_datasets:
raise ValueError("--do_predict requires a test dataset")
predict_dataset = raw_datasets["test"]
# Selecting samples from dataset
if data_args.max_predict_samples is not None:
predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))
# tokenize predict dataset
with training_args.main_process_first(desc="prediction dataset map tokenization"):
predict_dataset = predict_dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=not data_args.overwrite_cache,
)
# Data collator
    data_collator = default_data_collator if not training_args.fp16 else DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
processing_class=tokenizer,
data_collator=data_collator,
)
# Training
if training_args.do_train:
{%- if cookiecutter.can_train_from_scratch == "False" %}
if os.path.isdir(model_args.model_name_or_path):
checkpoint = model_args.model_name_or_path
else:
checkpoint = None
{%- elif cookiecutter.can_train_from_scratch == "True" %}
if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path):
checkpoint = model_args.model_name_or_path
else:
checkpoint = None
{% endif %}
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model() # Saves the tokenizer too for easy upload
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# Evaluation
if training_args.do_eval:
logger.info("*** Evaluate ***")
metrics = trainer.evaluate()
max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
# Prediction
if training_args.do_predict:
logger.info("*** Predict ***")
predictions, labels, metrics = trainer.predict(predict_dataset)
max_predict_samples = data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)
metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset))
trainer.log_metrics("predict", metrics)
trainer.save_metrics("predict", metrics)
# write custom code for saving predictions according to task
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
main()
{%- elif cookiecutter.with_trainer == "False" %}
import argparse
import logging
import math
import os
import random
import torch
import datasets
from datasets import load_dataset, load_metric
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
import transformers
from accelerate import Accelerator
from transformers import (
CONFIG_MAPPING,
MODEL_MAPPING,
AutoConfig,
{{cookiecutter.model_class}},
AutoTokenizer,
DataCollatorWithPadding,
    PretrainedConfig,
SchedulerType,
default_data_collator,
get_scheduler,
set_seed,
)
logger = logging.getLogger(__name__)
{%- if cookiecutter.can_train_from_scratch == "True" %}
# You should update this to your particular problem to have better documentation of `model_type`
MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
{% endif %}
def parse_args():
parser = argparse.ArgumentParser(description="Finetune a transformers model on a text classification task")
parser.add_argument(
"--dataset_name",
type=str,
default=None,
help="The name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--dataset_config_name",
type=str,
default=None,
        help="The configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--trust_remote_code",
action="store_true",
help=(
"Whether to trust the execution of code from datasets/models defined on the Hub."
" This option should only be set to `True` for repositories you trust and in which you have read the"
" code, as it will execute code present on the Hub on your local machine."
),
)
parser.add_argument(
"--train_file", type=str, default=None, help="A csv or a json file containing the training data."
)
parser.add_argument(
"--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
)
parser.add_argument(
"--max_length",
type=int,
default=128,
help=(
"The maximum total input sequence length after tokenization. Sequences longer than this will be truncated,"
" sequences shorter will be padded if `--pad_to_max_length` is passed."
),
)
parser.add_argument(
"--pad_to_max_length",
action="store_true",
help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.",
)
parser.add_argument(
"--model_name_or_path",
type=str,
help="Path to pretrained model or model identifier from huggingface.co/models.",
required=True,
)
parser.add_argument(
"--config_name",
type=str,
default=None,
help="Pretrained config name or path if not the same as model_name",
)
parser.add_argument(
"--tokenizer_name",
type=str,
default=None,
help="Pretrained tokenizer name or path if not the same as model_name",
)
parser.add_argument(
"--use_slow_tokenizer",
action="store_true",
help="If passed, will use a slow tokenizer (not backed by the Hugging Face Tokenizers library).",
)
parser.add_argument(
"--per_device_train_batch_size",
type=int,
default=8,
help="Batch size (per device) for the training dataloader.",
)
parser.add_argument(
"--per_device_eval_batch_size",
type=int,
default=8,
help="Batch size (per device) for the evaluation dataloader.",
)
parser.add_argument(
"--learning_rate",
type=float,
default=5e-5,
help="Initial learning rate (after the potential warmup period) to use.",
)
parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
parser.add_argument(
"--max_train_steps",
type=int,
default=None,
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
)
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
        help="Number of update steps to accumulate before performing a backward/update pass.",
)
parser.add_argument(
"--lr_scheduler_type",
type=SchedulerType,
default="linear",
help="The scheduler type to use.",
choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
)
parser.add_argument(
"--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
)
parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
{%- if cookiecutter.can_train_from_scratch == "True" %}
parser.add_argument(
"--model_type",
type=str,
default=None,
help="Model type to use if training from scratch.",
choices=MODEL_TYPES,
)
{% endif %}
args = parser.parse_args()
# Sanity checks
    if args.dataset_name is None and args.train_file is None and args.validation_file is None:
        raise ValueError("Need either a dataset name or a training/validation file.")
else:
if args.train_file is not None:
extension = args.train_file.split(".")[-1]
assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
if args.validation_file is not None:
extension = args.validation_file.split(".")[-1]
assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
if args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
return args
def main():
args = parse_args()
# Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
accelerator = Accelerator()
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state)
    # Set up logging; we only want one process per machine to log things on the screen.
# accelerator.is_local_main_process is only True for one process per machine.
logger.setLevel(logging.INFO if accelerator.is_local_main_process else logging.ERROR)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(
args.dataset_name, args.dataset_config_name, trust_remote_code=args.trust_remote_code
)
else:
data_files = {}
if args.train_file is not None:
data_files["train"] = args.train_file
extension = args.train_file.split(".")[-1]
if args.validation_file is not None:
data_files["validation"] = args.validation_file
extension = args.validation_file.split(".")[-1]
raw_datasets = load_dataset(extension, data_files=data_files)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
{%- if cookiecutter.can_train_from_scratch == "True" %}
    if args.config_name:
        config = AutoConfig.from_pretrained(args.config_name)
    elif args.model_name_or_path:
config = AutoConfig.from_pretrained(args.model_name_or_path)
else:
config = CONFIG_MAPPING[args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
    if args.tokenizer_name:
        tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, use_fast=not args.use_slow_tokenizer)
    elif args.model_name_or_path:
        tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script. "
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
    if args.model_name_or_path:
        model = {{cookiecutter.model_class}}.from_pretrained(
            args.model_name_or_path,
            from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
)
else:
logger.info("Training new model from scratch")
model = {{cookiecutter.model_class}}.from_config(config)
model.resize_token_embeddings(len(tokenizer))
{%- elif cookiecutter.can_train_from_scratch == "False" %}
config = AutoConfig.from_pretrained(
        args.config_name if args.config_name else args.model_name_or_path,
        # num_labels=num_labels, Uncomment if you have a certain number of labels
        finetuning_task=args.task_name,
)
tokenizer = AutoTokenizer.from_pretrained(
        args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
use_fast=not args.use_slow_tokenizer,
)
model = AutoModelForSequenceClassification.from_pretrained(
        args.model_name_or_path,
        from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
)
{% endif %}
# Preprocessing the datasets.
# First we tokenize all the texts.
column_names = raw_datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
padding = "max_length" if args.pad_to_max_length else False
def tokenize_function(examples):
result = tokenizer(examples[text_column_name], padding=padding, max_length=args.max_length, truncation=True)
if "label" in examples:
result["labels"] = examples["label"]
return result
processed_datasets = raw_datasets.map(
        tokenize_function, batched=True, remove_columns=raw_datasets["train"].column_names
)
train_dataset = processed_datasets["train"]
eval_dataset = processed_datasets["validation"]
# Log a few random samples from the training set:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# DataLoaders creation:
if args.pad_to_max_length:
        # If padding was already done to max length, we use the default data collator that will just convert everything
# to tensors.
data_collator = default_data_collator
else:
# Otherwise, `DataCollatorWithPadding` will apply dynamic padding for us (by padding to the maximum length of
# the samples passed). When using mixed precision, we add `pad_to_multiple_of=8` to pad all tensors to multiple
# of 8s, which will enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
# For fp8, we pad to multiple of 16.
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=pad_to_multiple_of)
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=data_collator, batch_size=args.per_device_train_batch_size
)
eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size)
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
# Prepare everything with our `accelerator`.
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader
)
    # Note -> the training dataloader needs to be prepared before we grab its length below (because its length will be
    # shorter in a multi-process setup)
# Scheduler and math around the number of training steps.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if args.max_train_steps is None:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
else:
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps,
num_training_steps=args.max_train_steps,
)
# TODO Get the proper metric function
# metric = load_metric(xxx)
# Train!
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {args.num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {args.max_train_steps}")
# Only show the progress bar once on each machine.
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
completed_steps = 0
for epoch in range(args.num_train_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
outputs = model(**batch)
loss = outputs.loss
loss = loss / args.gradient_accumulation_steps
accelerator.backward(loss)
if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
completed_steps += 1
if completed_steps >= args.max_train_steps:
break
model.eval()
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
metric.add_batch(
predictions=accelerator.gather(predictions),
references=accelerator.gather(batch["labels"]),
)
eval_metric = metric.compute()
logger.info(f"epoch {epoch}: {eval_metric}")
if args.output_dir is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)
if __name__ == "__main__":
main()
{% endif %}
| DataTrainingArguments |
python | getsentry__sentry | src/sentry/integrations/msteams/card_builder/block.py | {
"start": 1696,
"end": 1783
} | class ____(TypedDict):
type: Literal["ActionSet"]
actions: list[Action]
| ActionSet |
python | apache__airflow | providers/amazon/src/airflow/providers/amazon/aws/waiters/base_waiter.py | {
"start": 969,
"end": 1910
} | class ____:
"""
Used to create custom Boto3 Waiters.
For more details, see airflow/providers/amazon/aws/waiters/README.md
"""
def __init__(self, client: boto3.client, model_config: dict, deferrable: bool = False) -> None:
self.model = WaiterModel(model_config)
self.client = client
self.deferrable = deferrable
def _get_async_waiter_with_client(self, waiter_name: str):
from aiobotocore.waiter import create_waiter_with_client as create_async_waiter_with_client
return create_async_waiter_with_client(
waiter_name=waiter_name, waiter_model=self.model, client=self.client
)
def waiter(self, waiter_name: str) -> Waiter:
if self.deferrable:
return self._get_async_waiter_with_client(waiter_name=waiter_name)
return create_waiter_with_client(waiter_name=waiter_name, waiter_model=self.model, client=self.client)
| BaseBotoWaiter |
python | huggingface__transformers | src/transformers/modeling_utils.py | {
"start": 33404,
"end": 33467
} | class ____(Enum):
inputs = 0
outputs = 1
| PipelineParallel |
python | coleifer__peewee | tests/regressions.py | {
"start": 24695,
"end": 25475
} | class ____(TestModel):
upc = CharField(primary_key=True)
product_id = CharField()
color = CharField()
class Meta:
constraints = [SQL('FOREIGN KEY (product_id, color) REFERENCES '
'product(id, color)')]
@hybrid_property
def product(self):
if not hasattr(self, '_product'):
self._product = Product.get((Product.id == self.product_id) &
(Product.color == self.color))
return self._product
@product.setter
def product(self, obj):
self._product = obj
self.product_id = obj.id
self.color = obj.color
@product.expression
def product(cls):
return (Product.id == cls.product_id) & (Product.color == cls.color)
| Sku |
python | kamyu104__LeetCode-Solutions | Python/maximum-value-of-k-coins-from-piles.py | {
"start": 90,
"end": 642
} | class ____(object):
def maxValueOfCoins(self, piles, k):
"""
:type piles: List[List[int]]
:type k: int
:rtype: int
"""
dp = [0]
for pile in piles:
new_dp = [0]*min(len(dp)+len(pile), k+1)
for i in xrange(len(dp)):
curr = 0
for j in xrange(min(k-i, len(pile))+1):
new_dp[i+j] = max(new_dp[i+j], dp[i]+curr)
curr += pile[j] if j < len(pile) else 0
dp = new_dp
return dp[-1]
| Solution |
python | django__django | django/template/defaulttags.py | {
"start": 14332,
"end": 14500
} | class ____(Node):
def __init__(self, node):
self.node = node
def render(self, context):
self.node.reset(context)
return ""
| ResetCycleNode |
python | xlwings__xlwings | xlwings/constants.py | {
"start": 118437,
"end": 120961
} | class ____:
xlBlankRow = 19 # from enum XlTableStyleElementType
xlColumnStripe1 = 7 # from enum XlTableStyleElementType
xlColumnStripe2 = 8 # from enum XlTableStyleElementType
xlColumnSubheading1 = 20 # from enum XlTableStyleElementType
xlColumnSubheading2 = 21 # from enum XlTableStyleElementType
xlColumnSubheading3 = 22 # from enum XlTableStyleElementType
xlFirstColumn = 3 # from enum XlTableStyleElementType
xlFirstHeaderCell = 9 # from enum XlTableStyleElementType
xlFirstTotalCell = 11 # from enum XlTableStyleElementType
xlGrandTotalColumn = 4 # from enum XlTableStyleElementType
xlGrandTotalRow = 2 # from enum XlTableStyleElementType
xlHeaderRow = 1 # from enum XlTableStyleElementType
xlLastColumn = 4 # from enum XlTableStyleElementType
xlLastHeaderCell = 10 # from enum XlTableStyleElementType
xlLastTotalCell = 12 # from enum XlTableStyleElementType
xlPageFieldLabels = 26 # from enum XlTableStyleElementType
xlPageFieldValues = 27 # from enum XlTableStyleElementType
xlRowStripe1 = 5 # from enum XlTableStyleElementType
xlRowStripe2 = 6 # from enum XlTableStyleElementType
xlRowSubheading1 = 23 # from enum XlTableStyleElementType
xlRowSubheading2 = 24 # from enum XlTableStyleElementType
xlRowSubheading3 = 25 # from enum XlTableStyleElementType
xlSlicerHoveredSelectedItemWithData = 33 # from enum XlTableStyleElementType
xlSlicerHoveredSelectedItemWithNoData = 35 # from enum XlTableStyleElementType
xlSlicerHoveredUnselectedItemWithData = 32 # from enum XlTableStyleElementType
xlSlicerHoveredUnselectedItemWithNoData = 34 # from enum XlTableStyleElementType
xlSlicerSelectedItemWithData = 30 # from enum XlTableStyleElementType
xlSlicerSelectedItemWithNoData = 31 # from enum XlTableStyleElementType
xlSlicerUnselectedItemWithData = 28 # from enum XlTableStyleElementType
xlSlicerUnselectedItemWithNoData = 29 # from enum XlTableStyleElementType
xlSubtotalColumn1 = 13 # from enum XlTableStyleElementType
xlSubtotalColumn2 = 14 # from enum XlTableStyleElementType
xlSubtotalColumn3 = 15 # from enum XlTableStyleElementType
xlSubtotalRow1 = 16 # from enum XlTableStyleElementType
xlSubtotalRow2 = 17 # from enum XlTableStyleElementType
xlSubtotalRow3 = 18 # from enum XlTableStyleElementType
xlTotalRow = 2 # from enum XlTableStyleElementType
xlWholeTable = 0 # from enum XlTableStyleElementType
| TableStyleElementType |
python | PyCQA__pylint | tests/functional/g/generic_alias/generic_alias_collections.py | {
"start": 2503,
"end": 2547
} | class ____(abc.ABC):
pass
| CustomAbstractCls1 |
python | wandb__wandb | wandb/sdk/artifacts/_generated/artifact_type.py | {
"start": 513,
"end": 696
} | class ____(GQLResult):
name: str
ArtifactType.model_rebuild()
ArtifactTypeProject.model_rebuild()
ArtifactTypeProjectArtifact.model_rebuild()
| ArtifactTypeProjectArtifactArtifactType |
python | hyperopt__hyperopt | hyperopt/tests/test_base.py | {
"start": 4153,
"end": 7051
} | class ____(unittest.TestCase):
def setUp(self):
self.trials = Trials()
def test_valid(self):
trials = self.trials
f = trials.insert_trial_doc
fine = ok_trial("ID", 1, 2, 3)
# --original runs fine
f(fine)
# -- take out each mandatory root key
def knockout(key):
rval = copy.deepcopy(fine)
del rval[key]
return rval
for key in TRIAL_KEYS:
self.assertRaises(InvalidTrial, f, knockout(key))
# -- take out each mandatory misc key
def knockout2(key):
rval = copy.deepcopy(fine)
del rval["misc"][key]
return rval
for key in TRIAL_MISC_KEYS:
self.assertRaises(InvalidTrial, f, knockout2(key))
def test_insert_sync(self):
trials = self.trials
assert len(trials) == 0
trials.insert_trial_doc(ok_trial("a", 8))
assert len(trials) == 0
trials.insert_trial_doc(ok_trial(5, a=1, b=3))
assert len(trials) == 0
trials.insert_trial_docs([ok_trial(tid=4, a=2, b=3), ok_trial(tid=9, a=4, b=3)])
assert len(trials) == 0
trials.refresh()
assert len(trials) == 4, len(trials)
assert len(trials) == len(trials.specs)
assert len(trials) == len(trials.results)
assert len(trials) == len(trials.miscs)
trials.insert_trial_docs(
trials.new_trial_docs(
["id0", "id1"],
[dict(a=1), dict(a=2)],
[dict(status="new"), dict(status="new")],
[
dict(tid="id0", idxs={}, vals={}, cmd=None),
dict(tid="id1", idxs={}, vals={}, cmd=None),
],
)
)
assert len(trials) == 4
assert len(trials) == len(trials.specs)
assert len(trials) == len(trials.results)
assert len(trials) == len(trials.miscs)
trials.refresh()
assert len(trials) == 6
assert len(trials) == len(trials.specs)
assert len(trials) == len(trials.results)
assert len(trials) == len(trials.miscs)
def test_best_trial(self):
trials = self.trials
assert len(trials) == 0
# It should throw a reasonable error when no valid trials exist.
trials.insert_trial_doc(create_fake_trial(0, loss=np.NaN))
trials.refresh()
with self.assertRaises(AllTrialsFailed):
assert trials.best_trial is None
# It should work even with some trials with NaN losses.
trials.insert_trial_doc(create_fake_trial(1, loss=1.0))
trials.insert_trial_doc(create_fake_trial(2, loss=np.NaN))
trials.insert_trial_doc(create_fake_trial(3, loss=0.5))
trials.refresh()
best_trial = trials.best_trial
self.assertEqual(best_trial["tid"], 3)
| TestTrials |
python | sqlalchemy__sqlalchemy | lib/sqlalchemy/dialects/postgresql/pg8000.py | {
"start": 4685,
"end": 4746
} | class ____(_PGNumericCommon, sqltypes.Float):
pass
| _PGFloat |
python | aimacode__aima-python | ipyviews.py | {
"start": 2617,
"end": 5574
} | class ____:
""" View for grid world. Uses XYEnviornment in agents.py as model.
world: an instance of XYEnviornment.
block_size: size of individual blocks in pixes.
default_fill: color of blocks. A hex value or name should be passed.
"""
def __init__(self, world, block_size=30, default_fill="white"):
self.time = time.time()
self.world = world
self.labels = defaultdict(str) # locations as keys
self.representation = {"default": {"type": "color", "source": default_fill}}
self.block_size = block_size
def object_name(self):
globals_in_main = {x: getattr(__main__, x) for x in dir(__main__)}
for x in globals_in_main:
if isinstance(globals_in_main[x], type(self)):
if globals_in_main[x].time == self.time:
return x
def set_label(self, coordinates, label):
""" Add lables to a particular block of grid.
coordinates: a tuple of (row, column).
rows and columns are 0 indexed.
"""
self.labels[coordinates] = label
def set_representation(self, thing, repr_type, source):
""" Set the representation of different things in the
environment.
thing: a thing object.
repr_type : type of representation can be either "color" or "img"
source: Hex value in case of color. Image path in case of image.
"""
thing_class_name = thing.__class__.__name__
if repr_type not in ("img", "color"):
raise ValueError('Invalid repr_type passed. Possible types are img/color')
self.representation[thing_class_name] = {"type": repr_type, "source": source}
def handle_click(self, coordinates):
""" This method needs to be overidden. Make sure to include a
self.show() call at the end. """
self.show()
def map_to_render(self):
default_representation = {"val": "default", "tooltip": ""}
world_map = [[copy.deepcopy(default_representation) for _ in range(self.world.width)]
for _ in range(self.world.height)]
for thing in self.world.things:
row, column = thing.location
thing_class_name = thing.__class__.__name__
if thing_class_name not in self.representation:
raise KeyError('Representation not found for {}'.format(thing_class_name))
world_map[row][column]["val"] = thing.__class__.__name__
for location, label in self.labels.items():
row, column = location
world_map[row][column]["tooltip"] = label
return json.dumps(world_map)
def show(self):
clear_output()
total_html = _GRID_WORLD_HTML.format(
self.object_name(), self.map_to_render(),
self.block_size, json.dumps(self.representation), _JS_GRID_WORLD)
display(HTML(total_html))
| GridWorldView |
python | ethereum__web3.py | web3/contract/async_contract.py | {
"start": 19466,
"end": 20428
} | class ____(BaseContractConstructor):
# mypy types
w3: "AsyncWeb3[Any]"
@combomethod
async def transact(self, transaction: TxParams | None = None) -> HexBytes:
return await self.w3.eth.send_transaction(self._get_transaction(transaction))
@combomethod
async def build_transaction(self, transaction: TxParams | None = None) -> TxParams:
"""
Build the transaction dictionary without sending
"""
built_transaction = self._build_transaction(transaction)
return await async_fill_transaction_defaults(self.w3, built_transaction)
@combomethod
async def estimate_gas(
self,
transaction: TxParams | None = None,
block_identifier: BlockIdentifier | None = None,
) -> int:
transaction = self._estimate_gas(transaction)
return await self.w3.eth.estimate_gas(
transaction, block_identifier=block_identifier
)
| AsyncContractConstructor |
python | pallets__itsdangerous | tests/test_itsdangerous/test_signer.py | {
"start": 406,
"end": 3435
} | class ____:
@pytest.fixture()
def signer_factory(self):
return partial(Signer, secret_key="secret-key")
@pytest.fixture()
def signer(self, signer_factory):
return signer_factory()
def test_signer(self, signer):
signed = signer.sign("my string")
assert isinstance(signed, bytes)
assert signer.validate(signed)
out = signer.unsign(signed)
assert out == b"my string"
def test_no_separator(self, signer):
signed = signer.sign("my string")
signed = signed.replace(signer.sep, b"*", 1)
assert not signer.validate(signed)
with pytest.raises(BadSignature):
signer.unsign(signed)
def test_broken_signature(self, signer):
signed = signer.sign("b")
bad_signed = signed[:-1]
bad_sig = bad_signed.rsplit(b".", 1)[1]
assert not signer.verify_signature(b"b", bad_sig)
with pytest.raises(BadSignature) as exc_info:
signer.unsign(bad_signed)
assert exc_info.value.payload == b"b"
def test_changed_value(self, signer):
signed = signer.sign("my string")
signed = signed.replace(b"my", b"other", 1)
assert not signer.validate(signed)
with pytest.raises(BadSignature):
signer.unsign(signed)
def test_invalid_separator(self, signer_factory):
with pytest.raises(ValueError) as exc_info:
signer_factory(sep="-")
assert "separator cannot be used" in str(exc_info.value)
@pytest.mark.parametrize(
"key_derivation", ("concat", "django-concat", "hmac", "none")
)
def test_key_derivation(self, signer_factory, key_derivation):
signer = signer_factory(key_derivation=key_derivation)
assert signer.unsign(signer.sign("value")) == b"value"
def test_invalid_key_derivation(self, signer_factory):
signer = signer_factory(key_derivation="invalid")
with pytest.raises(TypeError):
signer.derive_key()
def test_digest_method(self, signer_factory):
signer = signer_factory(digest_method=hashlib.md5)
assert signer.unsign(signer.sign("value")) == b"value"
@pytest.mark.parametrize(
"algorithm", (None, NoneAlgorithm(), HMACAlgorithm(), _ReverseAlgorithm())
)
def test_algorithm(self, signer_factory, algorithm):
signer = signer_factory(algorithm=algorithm)
assert signer.unsign(signer.sign("value")) == b"value"
if algorithm is None:
assert signer.algorithm.digest_method == signer.digest_method
def test_secret_keys(self):
signer = Signer("a")
signed = signer.sign("my string")
assert isinstance(signed, bytes)
signer = Signer(["a", "b"])
assert signer.validate(signed)
out = signer.unsign(signed)
assert out == b"my string"
def test_abstract_algorithm():
alg = SigningAlgorithm()
with pytest.raises(NotImplementedError):
alg.get_signature(b"a", b"b")
| TestSigner |
python | gevent__gevent | src/gevent/_ffi/watcher.py | {
"start": 18568,
"end": 19382
} | class ____(object):
# hack for libuv which doesn't extend watcher
_CALL_SUPER_INIT = True
def __init__(self, loop, pid, trace=0, ref=True):
if not loop.default:
raise TypeError('child watchers are only available on the default loop')
loop.install_sigchld()
self._pid = pid
if self._CALL_SUPER_INIT:
super(ChildMixin, self).__init__(loop, ref=ref, args=(pid, trace))
def _format(self):
return ' pid=%r rstatus=%r' % (self.pid, self.rstatus)
@property
def pid(self):
return self._pid
@property
def rpid(self):
# The received pid, the result of the waitpid() call.
return self._rpid
_rpid = None
_rstatus = 0
@property
def rstatus(self):
return self._rstatus
| ChildMixin |
python | fastapi__sqlmodel | docs_src/tutorial/create_db_and_table/tutorial003_py310.py | {
"start": 62,
"end": 572
} | class ____(SQLModel, table=True): # (3)!
id: int | None = Field(default=None, primary_key=True) # (4)!
name: str # (5)!
secret_name: str # (6)!
age: int | None = None # (7)!
sqlite_file_name = "database.db" # (8)!
sqlite_url = f"sqlite:///{sqlite_file_name}" # (9)!
engine = create_engine(sqlite_url, echo=True) # (10)!
def create_db_and_tables(): # (11)!
SQLModel.metadata.create_all(engine) # (12)!
if __name__ == "__main__": # (13)!
create_db_and_tables() # (14)!
| Hero |
python | pytorch__pytorch | test/dynamo/test_debug_utils.py | {
"start": 2938,
"end": 7342
} | class ____(TestCase):
def test_aot_graph_parser(self, device):
def forward(
self,
primals_1: "f32[1001, 6]",
primals_2: "f32[1001]",
primals_3: "f32[1001, 64]",
primals_4: "f32[4190]",
primals_5: "f32[4190]",
primals_6: "f32[1739, 4190]",
primals_48: "f32[6144, 4191]",
):
_tensor_constant0: "i64[4190]" = self._tensor_constant0
lift_fresh_copy: "i64[4190]" = torch.ops.aten.lift_fresh_copy.default(
_tensor_constant0
)
_tensor_constant0 = None
index: "f32[6144, 4190]" = torch.ops.aten.index.Tensor( # noqa: F841
primals_48, [None, lift_fresh_copy]
)
lift_fresh_copy = None
_tensor_constant1: "i64[6]" = self._tensor_constant1
lift_fresh_copy_1: "i64[6]" = torch.ops.aten.lift_fresh_copy.default(
_tensor_constant1
)
_tensor_constant1 = None
index_1: "f32[6144, 6]" = torch.ops.aten.index.Tensor(
primals_48, [None, lift_fresh_copy_1]
)
primals_48 = lift_fresh_copy_1 = None
permute: "f32[6, 1001]" = torch.ops.aten.permute.default(primals_1, [1, 0])
primals_1 = None
addmm: "f32[6144, 1001]" = torch.ops.aten.addmm.default(
primals_2, index_1, permute
)
primals_2 = permute = None
amax: "f32[6144, 1]" = torch.ops.aten.amax.default(addmm, [-1], True)
sub: "f32[6144, 1001]" = torch.ops.aten.sub.Tensor(addmm, amax)
exp: "f32[6144, 1001]" = torch.ops.aten.exp.default(sub)
sub = None
sum_1: "f32[6144, 1]" = torch.ops.aten.sum.dim_IntList(exp, [-1], True)
div: "f32[6144, 1001]" = torch.ops.aten.div.Tensor(exp, sum_1)
exp = None
full_default: "i32[6144, 1001]" = torch.ops.aten.full.default(
[6144, 1001],
1,
dtype=torch.int32,
layout=torch.strided,
device=device,
pin_memory=False,
)
iota: "i32[1001]" = torch.ops.prims.iota.default(
1001,
start=0,
step=1,
dtype=torch.int32,
device=device,
requires_grad=False,
)
mul: "i32[6144, 1001]" = torch.ops.aten.mul.Tensor(full_default, iota)
full_default = iota = None
iota_1: "i32[6144]" = torch.ops.prims.iota.default(
6144,
start=0,
step=1001,
dtype=torch.int32,
device=device,
requires_grad=False,
)
view: "i32[6150144]" = torch.ops.aten.reshape.default(mul, [-1])
mul = None
view_1: "f32[6150144]" = torch.ops.aten.reshape.default(div, [-1])
div = None
_embedding_bag = torch.ops.aten._embedding_bag.default(
primals_3, view, iota_1, False, 0, False, view_1
)
return _embedding_bag
kwargs = aot_graph_input_parser(forward, device=device)
# runs successfully
forward(**kwargs)
def test_sym_aot_graph_parser(self, device):
def forward(
self,
primals_1: "f32[1001, 6]", # noqa: F821
primals_2: "f32[s0]", # noqa: F821
primals_3: "Sym(s0)", # noqa: F821,
primals_4: "f32[s1]", # noqa: F821,
primals_5: "Sym(s1)", # noqa: F821,
):
_tensor_constant0: "i64[4190]" = self._tensor_constant0
kwargs = aot_graph_input_parser(
forward, device=device, sym_shapes={"s0": 10}, default_sym_shape=5
)
self.assertEqual(list(kwargs["primals_2"].shape), [10])
self.assertEqual(kwargs["primals_3"], 10)
self.assertEqual(list(kwargs["primals_4"].shape), [5])
self.assertEqual(kwargs["primals_5"], 5)
instantiate_device_type_tests(TestDebugUtils, globals())
devices = ["cuda", "hpu"]
instantiate_device_type_tests(TestDebugUtilsDevice, globals(), only_for=devices)
if __name__ == "__main__":
from torch._dynamo.test_case import run_tests
run_tests()
| TestDebugUtilsDevice |
python | pytorch__pytorch | test/distributed/checkpoint/e2e/test_e2e_save_and_load.py | {
"start": 2001,
"end": 2534
} | class ____(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
torch.manual_seed(0)
self.net1 = nn.Linear(8, 16)
self.net2 = nn.Linear(16, 32)
self.net3 = nn.Linear(32, 64)
self.net4 = nn.Linear(64, 8)
def forward(self, x):
x = F.relu(self.net1(x))
x = F.relu(self.net2(x))
x = F.relu(self.net3(x))
x = F.relu(self.net4(x))
return x
def get_input(self):
return torch.rand(8, 8, device=device_type)
| TestDummyModel |
python | pytorch__pytorch | torch/fx/experimental/unification/multipledispatch/conflict.py | {
"start": 280,
"end": 4210
} | class ____(Warning):
pass
def supercedes(a, b):
"""A is consistent and strictly more specific than B"""
if len(a) < len(b):
# only case is if a is empty and b is variadic
return not a and len(b) == 1 and isvariadic(b[-1])
elif len(a) == len(b):
return all(map(issubclass, a, b))
else:
# len(a) > len(b)
p1 = 0
p2 = 0
while p1 < len(a) and p2 < len(b):
cur_a = a[p1]
cur_b = b[p2]
if not (isvariadic(cur_a) or isvariadic(cur_b)):
if not issubclass(cur_a, cur_b):
return False
p1 += 1
p2 += 1
elif isvariadic(cur_a):
assert p1 == len(a) - 1
return p2 == len(b) - 1 and issubclass(cur_a, cur_b)
elif isvariadic(cur_b):
assert p2 == len(b) - 1
if not issubclass(cur_a, cur_b):
return False
p1 += 1
return p2 == len(b) - 1 and p1 == len(a)
def consistent(a, b):
"""It is possible for an argument list to satisfy both A and B"""
# Need to check for empty args
if not a:
return not b or isvariadic(b[0])
if not b:
return not a or isvariadic(a[0])
# Non-empty args check for mutual subclasses
if len(a) == len(b):
return all(issubclass(aa, bb) or issubclass(bb, aa) for aa, bb in zip(a, b))
else:
p1 = 0
p2 = 0
while p1 < len(a) and p2 < len(b):
cur_a = a[p1]
cur_b = b[p2]
if not issubclass(cur_b, cur_a) and not issubclass(cur_a, cur_b):
return False
if not (isvariadic(cur_a) or isvariadic(cur_b)):
p1 += 1
p2 += 1
elif isvariadic(cur_a):
p2 += 1
elif isvariadic(cur_b):
p1 += 1
# We only need to check for variadic ends
# Variadic types are guaranteed to be the last element
return (
isvariadic(cur_a) # type: ignore[possibly-undefined]
and p2 == len(b)
or isvariadic(cur_b) # type: ignore[possibly-undefined]
and p1 == len(a)
)
def ambiguous(a, b):
"""A is consistent with B but neither is strictly more specific"""
return consistent(a, b) and not (supercedes(a, b) or supercedes(b, a))
def ambiguities(signatures):
"""All signature pairs such that A is ambiguous with B"""
signatures = list(map(tuple, signatures))
return {
(a, b)
for a in signatures
for b in signatures
if hash(a) < hash(b)
and ambiguous(a, b)
and not any(supercedes(c, a) and supercedes(c, b) for c in signatures)
}
def super_signature(signatures):
"""A signature that would break ambiguities"""
n = len(signatures[0])
assert all(len(s) == n for s in signatures)
return [max((type.mro(sig[i]) for sig in signatures), key=len)[0] for i in range(n)]
def edge(a, b, tie_breaker=hash):
"""A should be checked before B
Tie broken by tie_breaker, defaults to ``hash``
"""
# A either supersedes B and B does not supersede A or if B does then call
# tie_breaker
return supercedes(a, b) and (
not supercedes(b, a) or tie_breaker(a) > tie_breaker(b)
)
def ordering(signatures):
"""A sane ordering of signatures to check, first to last
Topological sort of edges as given by ``edge`` and ``supercedes``
"""
signatures = list(map(tuple, signatures))
edges = [(a, b) for a in signatures for b in signatures if edge(a, b)]
edges = groupby(operator.itemgetter(0), edges)
for s in signatures:
if s not in edges:
edges[s] = []
edges = {k: [b for a, b in v] for k, v in edges.items()} # type: ignore[assignment, attr-defined]
return _toposort(edges)
| AmbiguityWarning |
python | django__django | tests/admin_registration/tests.py | {
"start": 366,
"end": 454
} | class ____(admin.ModelAdmin):
list_display = ["name"]
save_on_top = True
| NameAdmin |
python | django__django | tests/model_inheritance/models.py | {
"start": 1484,
"end": 1570
} | class ____(Attachment):
url = models.URLField()
#
# Multi-table inheritance
#
| Link |
python | spyder-ide__spyder | spyder/api/plugins/enum.py | {
"start": 195,
"end": 1352
} | class ____:
"""
Convenience class for accessing Spyder internal plugins.
"""
All = "all" # Wildcard to populate REQUIRES with all available plugins
Appearance = 'appearance'
Application = 'application'
Completions = 'completions'
Console = 'internal_console'
Debugger = 'debugger'
Editor = 'editor'
Explorer = 'explorer'
ExternalTerminal = 'external_terminal'
Find = 'find_in_files'
Help = 'help'
History = 'historylog'
IPythonConsole = 'ipython_console'
Layout = 'layout'
MainInterpreter = 'main_interpreter'
MainMenu = 'mainmenu'
OnlineHelp = 'onlinehelp'
OutlineExplorer = 'outline_explorer'
Plots = 'plots'
Preferences = 'preferences'
Profiler = 'profiler'
Projects = 'project_explorer'
Pylint = 'pylint'
PythonpathManager = 'pythonpath_manager'
RemoteClient = 'remoteclient'
Run = 'run'
Shortcuts = 'shortcuts'
StatusBar = 'statusbar'
Switcher = 'switcher'
Toolbar = "toolbar"
Tours = 'tours'
UpdateManager = 'update_manager'
VariableExplorer = 'variable_explorer'
WorkingDirectory = 'workingdir'
| Plugins |
python | django__django | tests/staticfiles_tests/test_storage.py | {
"start": 28107,
"end": 28510
} | class ____(storage.ManifestStaticFilesStorage):
support_js_module_import_aggregation = True
@override_settings(
STORAGES={
**settings.STORAGES,
STATICFILES_STORAGE_ALIAS: {
"BACKEND": (
"staticfiles_tests.test_storage."
"JSModuleImportAggregationManifestStorage"
),
},
}
)
| JSModuleImportAggregationManifestStorage |
python | readthedocs__readthedocs.org | readthedocs/rtd_tests/tests/test_notifications.py | {
"start": 2536,
"end": 3427
} | class ____(TestCase):
@mock.patch("readthedocs.notifications.email.send_email")
def test_email_backend(self, send_email):
class TestNotification(EmailNotification):
name = "foo"
subject = "This is {{ foo.id }}"
context_object_name = "foo"
build = fixture.get(Build)
user = fixture.get(User)
notify = TestNotification(context_object=build, user=user)
notify.send()
send_email.assert_has_calls(
[
mock.call(
template=["builds/notifications/foo_email.txt"],
context=notify.get_context_data(),
subject="This is {}".format(build.id),
template_html=["builds/notifications/foo_email.html"],
recipient=user.email,
),
]
)
| NotificationBackendTests |
python | zarr-developers__zarr-python | src/zarr/core/metadata/v2.py | {
"start": 1270,
"end": 1595
} | class ____(TypedDict):
"""
A typed dictionary model for Zarr format 2 metadata.
"""
zarr_format: Literal[2]
attributes: dict[str, JSON]
# Union of acceptable types for v2 compressors
CompressorLikev2: TypeAlias = dict[str, JSON] | Numcodec | None
@dataclass(frozen=True, kw_only=True)
| ArrayV2MetadataDict |
python | dagster-io__dagster | python_modules/libraries/dagster-airbyte/dagster_airbyte/managed/types.py | {
"start": 7902,
"end": 11195
} | class ____:
"""A user-defined Airbyte connection, pairing an Airbyte source and destination and configuring
which streams to sync.
Args:
name (str): The display name of the connection.
source (AirbyteSource): The source to sync from.
destination (AirbyteDestination): The destination to sync to.
stream_config (Mapping[str, AirbyteSyncMode]): A mapping from stream name to
the sync mode for that stream, including any additional configuration
of primary key or cursor field.
normalize_data (Optional[bool]): Whether to normalize the data in the
destination.
destination_namespace (Optional[Union[AirbyteDestinationNamespace, str]]):
The namespace to sync to in the destination. If set to
AirbyteDestinationNamespace.SAME_AS_SOURCE, the namespace will be the
same as the source namespace. If set to
AirbyteDestinationNamespace.DESTINATION_DEFAULT, the namespace will be
the default namespace for the destination. If set to a string, the
namespace will be that string.
prefix (Optional[str]): A prefix to add to the table names in the destination.
Example:
.. code-block:: python
from dagster_airbyte.managed.generated.sources import FileSource
from dagster_airbyte.managed.generated.destinations import LocalJsonDestination
from dagster_airbyte import AirbyteConnection, AirbyteSyncMode
cereals_csv_source = FileSource(...)
local_json_destination = LocalJsonDestination(...)
cereals_connection = AirbyteConnection(
name="download-cereals",
source=cereals_csv_source,
destination=local_json_destination,
stream_config={"cereals": AirbyteSyncMode.full_refresh_overwrite()},
)
"""
@public
def __init__(
self,
name: str,
source: AirbyteSource,
destination: AirbyteDestination,
stream_config: Mapping[str, AirbyteSyncMode],
normalize_data: Optional[bool] = None,
destination_namespace: Optional[
Union[AirbyteDestinationNamespace, str]
] = AirbyteDestinationNamespace.SAME_AS_SOURCE,
prefix: Optional[str] = None,
):
self.name = check.str_param(name, "name")
self.source = check.inst_param(source, "source", AirbyteSource)
self.destination = check.inst_param(destination, "destination", AirbyteDestination)
self.stream_config = check.mapping_param(
stream_config, "stream_config", key_type=str, value_type=AirbyteSyncMode
)
self.normalize_data = check.opt_bool_param(normalize_data, "normalize_data")
self.destination_namespace = check.opt_inst_param(
destination_namespace, "destination_namespace", (str, AirbyteDestinationNamespace)
)
self.prefix = check.opt_str_param(prefix, "prefix")
def must_be_recreated(self, other: Optional["AirbyteConnection"]) -> bool:
return (
not other
or self.source.must_be_recreated(other.source)
or self.destination.must_be_recreated(other.destination)
)
| AirbyteConnection |
python | run-llama__llama_index | llama-index-integrations/llms/llama-index-llms-bedrock/tests/test_bedrock.py | {
"start": 1615,
"end": 12140
} | class ____:
def __init__(self, expected_prompt: str):
self.expected_prompt = expected_prompt
def mock_stream_completion_with_retry(
self, request_body: str, *args: Any, **kwargs: Any
) -> dict:
assert json.loads(request_body) == {
"inputText": self.expected_prompt,
"textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.1},
}
return {
"ResponseMetadata": {
"HTTPHeaders": {
"connection": "keep-alive",
"content-type": "application/vnd.amazon.eventstream",
"date": "Fri, 20 Oct 2023 11:59:03 GMT",
"transfer-encoding": "chunked",
"x-amzn-bedrock-content-type": "application/json",
"x-amzn-requestid": "ef9af51b-7ba5-4020-3793-f4733226qb84",
},
"HTTPStatusCode": 200,
"RequestId": "ef9af51b-7ba5-4020-3793-f4733226qb84",
"RetryAttempts": 0,
},
"body": MockEventStream(),
"contentType": "application/json",
}
@pytest.mark.parametrize(
("model", "complete_request", "response_body", "chat_request"),
[
(
"amazon.titan-text-express-v1",
'{"inputText": "test prompt", "textGenerationConfig": {"temperature": 0.1, "maxTokenCount": 512}}',
'{"inputTextTokenCount": 3, "results": [{"tokenCount": 14, "outputText": "\\n\\nThis is indeed a test", "completionReason": "FINISH"}]}',
'{"inputText": "user: test prompt\\nassistant: ", "textGenerationConfig": {"temperature": 0.1, "maxTokenCount": 512}}',
),
(
"ai21.j2-grande-instruct",
'{"prompt": "test prompt", "temperature": 0.1, "maxTokens": 512}',
'{"completions": [{"data": {"text": "\\n\\nThis is indeed a test"}}]}',
'{"prompt": "user: test prompt\\nassistant: ", "temperature": 0.1, "maxTokens": 512}',
),
(
"cohere.command-text-v14",
'{"prompt": "test prompt", "temperature": 0.1, "max_tokens": 512}',
'{"generations": [{"text": "\\n\\nThis is indeed a test"}]}',
'{"prompt": "user: test prompt\\nassistant: ", "temperature": 0.1, "max_tokens": 512}',
),
# TODO: these need to get fixed
# (
# "anthropic.claude-instant-v1",
# '{"messages": [{"role": "user", "content": [{"text": "test prompt", "type": "text"}]}], "anthropic_version": "bedrock-2023-05-31", '
# '"temperature": 0.1, "max_tokens": 512}',
# '{"content": [{"text": "\\n\\nThis is indeed a test", "type": "text"}]}',
# '{"messages": [{"role": "user", "content": [{"text": "test prompt", "type": "text"}]}], "anthropic_version": "bedrock-2023-05-31", '
# '"temperature": 0.1, "max_tokens": 512}',
# ),
# (
# "meta.llama2-13b-chat-v1",
# '{"prompt": "<s> [INST] <<SYS>>\\n You are a helpful, respectful and '
# "honest assistant. Always answer as helpfully as possible and follow "
# "ALL given instructions. Do not speculate or make up information. Do "
# "not reference any given instructions or context. \\n<</SYS>>\\n\\n "
# 'test prompt [/INST]", "temperature": 0.1, "max_gen_len": 512}',
# '{"generation": "\\n\\nThis is indeed a test"}',
# '{"prompt": "<s> [INST] <<SYS>>\\n You are a helpful, respectful and '
# "honest assistant. Always answer as helpfully as possible and follow "
# "ALL given instructions. Do not speculate or make up information. Do "
# "not reference any given instructions or context. \\n<</SYS>>\\n\\n "
# 'test prompt [/INST]", "temperature": 0.1, "max_gen_len": 512}',
# ),
# (
# "mistral.mistral-7b-instruct-v0:2",
# '{"prompt": "<s> [INST] <<SYS>>\\n You are a helpful, respectful and '
# "honest assistant. Always answer as helpfully as possible and follow "
# "ALL given instructions. Do not speculate or make up information. Do "
# "not reference any given instructions or context. \\n<</SYS>>\\n\\n "
# 'test prompt [/INST]", "temperature": 0.1, "max_tokens": 512}',
# '{"outputs": [{"text": "\\n\\nThis is indeed a test", "stop_reason": "length"}]}',
# '{"prompt": "<s> [INST] <<SYS>>\\n You are a helpful, respectful and '
# "honest assistant. Always answer as helpfully as possible and follow "
# "ALL given instructions. Do not speculate or make up information. Do "
# "not reference any given instructions or context. \\n<</SYS>>\\n\\n "
# 'test prompt [/INST]", "temperature": 0.1, "max_tokens": 512}',
# ),
],
)
def test_model_basic(
model: str, complete_request: str, response_body: str, chat_request: str
) -> None:
llm = Bedrock(
model=model,
profile_name=None,
region_name="us-east-1",
aws_access_key_id="test",
guardrail_identifier="test",
guardrail_version="test",
trace="ENABLED",
)
bedrock_stubber = Stubber(llm._client)
# response for llm.complete()
bedrock_stubber.add_response(
"invoke_model",
get_invoke_model_response(response_body),
{
"body": complete_request,
"modelId": model,
"guardrailIdentifier": "test",
"guardrailVersion": "test",
"trace": "ENABLED",
},
)
# response for llm.chat()
bedrock_stubber.add_response(
"invoke_model",
get_invoke_model_response(response_body),
{
"body": chat_request,
"modelId": model,
"guardrailIdentifier": "test",
"guardrailVersion": "test",
"trace": "ENABLED",
},
)
bedrock_stubber.activate()
test_prompt = "test prompt"
response = llm.complete(test_prompt)
assert response.text == "\n\nThis is indeed a test"
message = ChatMessage(role="user", content=test_prompt)
chat_response = llm.chat([message])
assert chat_response.message.content == "\n\nThis is indeed a test"
bedrock_stubber.deactivate()
def test_model_streaming(monkeypatch: MonkeyPatch) -> None:
monkeypatch.setattr(
"llama_index.llms.bedrock.base.completion_with_retry",
MockStreamCompletionWithRetry("test prompt").mock_stream_completion_with_retry,
)
llm = Bedrock(
model="amazon.titan-text-express-v1",
profile_name=None,
region_name="us-east-1",
aws_access_key_id="test",
)
test_prompt = "test prompt"
response_gen = llm.stream_complete(test_prompt)
response = list(response_gen)
assert response[-1].text == "\n\nThis is indeed a test"
monkeypatch.setattr(
"llama_index.llms.bedrock.base.completion_with_retry",
MockStreamCompletionWithRetry(
"user: test prompt\nassistant: "
).mock_stream_completion_with_retry,
)
message = ChatMessage(role="user", content=test_prompt)
chat_response_gen = llm.stream_chat([message])
chat_response = list(chat_response_gen)
assert chat_response[-1].message.content == "\n\nThis is indeed a test"
@pytest.mark.parametrize(
("model", "provider_type", "complete_request", "response_body", "chat_request"),
[
(
"arn:aws:bedrock:eu-west-3:011111111111:application-inference-profile/j0ddxltg25q9",
ProviderType.AMAZON,
'{"inputText": "test prompt", "textGenerationConfig": {"temperature": 0.1, "maxTokenCount": 512}}',
'{"inputTextTokenCount": 3, "results": [{"tokenCount": 14, "outputText": "\\n\\nThis is indeed a test", "completionReason": "FINISH"}]}',
'{"inputText": "user: test prompt\\nassistant: ", "textGenerationConfig": {"temperature": 0.1, "maxTokenCount": 512}}',
),
(
"arn:aws:bedrock:eu-west-3:011111111111:application-inference-profile/j0ddxltg25f5",
ProviderType.AI21,
'{"prompt": "test prompt", "temperature": 0.1, "maxTokens": 512}',
'{"completions": [{"data": {"text": "\\n\\nThis is indeed a test"}}]}',
'{"prompt": "user: test prompt\\nassistant: ", "temperature": 0.1, "maxTokens": 512}',
),
(
"arn:aws:bedrock:eu-west-3:011111111111:application-inference-profile/k1ddxltg25f5",
ProviderType.COHERE,
'{"prompt": "test prompt", "temperature": 0.1, "max_tokens": 512}',
'{"generations": [{"text": "\\n\\nThis is indeed a test"}]}',
'{"prompt": "user: test prompt\\nassistant: ", "temperature": 0.1, "max_tokens": 512}',
),
],
)
def test_application_inference_profile(
model: str,
complete_request: str,
response_body: str,
chat_request: str,
provider_type: ProviderType,
) -> None:
llm = Bedrock(
model=model,
profile_name=None,
context_size=7000,
region_name="us-east-1",
aws_access_key_id="test",
guardrail_identifier="test",
guardrail_version="test",
trace="ENABLED",
provider_type=provider_type,
)
bedrock_stubber = Stubber(llm._client)
# response for llm.complete()
bedrock_stubber.add_response(
"invoke_model",
get_invoke_model_response(response_body),
{
"body": complete_request,
"modelId": model,
"guardrailIdentifier": "test",
"guardrailVersion": "test",
"trace": "ENABLED",
},
)
# response for llm.chat()
bedrock_stubber.add_response(
"invoke_model",
get_invoke_model_response(response_body),
{
"body": chat_request,
"modelId": model,
"guardrailIdentifier": "test",
"guardrailVersion": "test",
"trace": "ENABLED",
},
)
bedrock_stubber.activate()
test_prompt = "test prompt"
response = llm.complete(test_prompt)
assert response.text == "\n\nThis is indeed a test"
message = ChatMessage(role="user", content=test_prompt)
chat_response = llm.chat([message])
assert chat_response.message.content == "\n\nThis is indeed a test"
bedrock_stubber.deactivate()
| MockStreamCompletionWithRetry |
python | psf__black | src/black/handle_ipynb_magics.py | {
"start": 10670,
"end": 10944
} | class ____:
name: str
params: str | None
body: str
@property
def header(self) -> str:
if self.params:
return f"%%{self.name} {self.params}"
return f"%%{self.name}"
# ast.NodeVisitor + dataclass = breakage under mypyc.
| CellMagic |
python | pypa__pip | src/pip/_internal/index/sources.py | {
"start": 1365,
"end": 2930
} | class ____:
"""Scans directory and caches results"""
def __init__(self, path: str) -> None:
self._path = path
self._page_candidates: list[str] = []
self._project_name_to_urls: dict[str, list[str]] = defaultdict(list)
self._scanned_directory = False
def _scan_directory(self) -> None:
"""Scans directory once and populates both page_candidates
and project_name_to_urls at the same time
"""
for entry in os.scandir(self._path):
url = path_to_url(entry.path)
if _is_html_file(url):
self._page_candidates.append(url)
continue
# File must have a valid wheel or sdist name,
# otherwise not worth considering as a package
try:
project_filename = parse_wheel_filename(entry.name)[0]
except InvalidWheelFilename:
try:
project_filename = parse_sdist_filename(entry.name)[0]
except InvalidSdistFilename:
continue
self._project_name_to_urls[project_filename].append(url)
self._scanned_directory = True
@property
def page_candidates(self) -> list[str]:
if not self._scanned_directory:
self._scan_directory()
return self._page_candidates
@property
def project_name_to_urls(self) -> dict[str, list[str]]:
if not self._scanned_directory:
self._scan_directory()
return self._project_name_to_urls
| _FlatDirectoryToUrls |
python | charliermarsh__ruff | crates/ruff_linter/resources/test/fixtures/pylint/eq_without_hash.py | {
"start": 1293,
"end": 1395
} | class ____:
match ...:
case int():
def __eq__(self, other): ...
| MaybeEqMatchCase |
python | scrapy__scrapy | scrapy/contracts/__init__.py | {
"start": 566,
"end": 3324
} | class ____:
"""Abstract class for contracts"""
request_cls: type[Request] | None = None
name: str
def __init__(self, method: Callable, *args: Any):
self.testcase_pre = _create_testcase(method, f"@{self.name} pre-hook")
self.testcase_post = _create_testcase(method, f"@{self.name} post-hook")
self.args: tuple[Any, ...] = args
def add_pre_hook(self, request: Request, results: TestResult) -> Request:
if hasattr(self, "pre_process"):
cb = request.callback
assert cb is not None
@wraps(cb)
def wrapper(response: Response, **cb_kwargs: Any) -> list[Any]:
try:
results.startTest(self.testcase_pre)
self.pre_process(response)
results.stopTest(self.testcase_pre)
except AssertionError:
results.addFailure(self.testcase_pre, sys.exc_info())
except Exception:
results.addError(self.testcase_pre, sys.exc_info())
else:
results.addSuccess(self.testcase_pre)
cb_result = cb(response, **cb_kwargs)
if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
raise TypeError("Contracts don't support async callbacks")
return list(cast("Iterable[Any]", iterate_spider_output(cb_result)))
request.callback = wrapper
return request
def add_post_hook(self, request: Request, results: TestResult) -> Request:
if hasattr(self, "post_process"):
cb = request.callback
assert cb is not None
@wraps(cb)
def wrapper(response: Response, **cb_kwargs: Any) -> list[Any]:
cb_result = cb(response, **cb_kwargs)
if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
raise TypeError("Contracts don't support async callbacks")
output = list(cast("Iterable[Any]", iterate_spider_output(cb_result)))
try:
results.startTest(self.testcase_post)
self.post_process(output)
results.stopTest(self.testcase_post)
except AssertionError:
results.addFailure(self.testcase_post, sys.exc_info())
except Exception:
results.addError(self.testcase_post, sys.exc_info())
else:
results.addSuccess(self.testcase_post)
return output
request.callback = wrapper
return request
def adjust_request_args(self, args: dict[str, Any]) -> dict[str, Any]:
return args
| Contract |
python | pypa__pipenv | pipenv/patched/pip/_vendor/distlib/locators.py | {
"start": 41483,
"end": 51026
} | class ____(object):
"""
Locate dependencies for distributions.
"""
def __init__(self, locator=None):
"""
Initialise an instance, using the specified locator
to locate distributions.
"""
self.locator = locator or default_locator
self.scheme = get_scheme(self.locator.scheme)
def add_distribution(self, dist):
"""
Add a distribution to the finder. This will update internal information
about who provides what.
:param dist: The distribution to add.
"""
logger.debug('adding distribution %s', dist)
name = dist.key
self.dists_by_name[name] = dist
self.dists[(name, dist.version)] = dist
for p in dist.provides:
name, version = parse_name_and_version(p)
logger.debug('Add to provided: %s, %s, %s', name, version, dist)
self.provided.setdefault(name, set()).add((version, dist))
def remove_distribution(self, dist):
"""
Remove a distribution from the finder. This will update internal
information about who provides what.
:param dist: The distribution to remove.
"""
logger.debug('removing distribution %s', dist)
name = dist.key
del self.dists_by_name[name]
del self.dists[(name, dist.version)]
for p in dist.provides:
name, version = parse_name_and_version(p)
logger.debug('Remove from provided: %s, %s, %s', name, version, dist)
s = self.provided[name]
s.remove((version, dist))
if not s:
del self.provided[name]
def get_matcher(self, reqt):
"""
Get a version matcher for a requirement.
:param reqt: The requirement
:type reqt: str
:return: A version matcher (an instance of
:class:`distlib.version.Matcher`).
"""
try:
matcher = self.scheme.matcher(reqt)
except UnsupportedVersionError: # pragma: no cover
# XXX compat-mode if cannot read the version
name = reqt.split()[0]
matcher = self.scheme.matcher(name)
return matcher
def find_providers(self, reqt):
"""
Find the distributions which can fulfill a requirement.
:param reqt: The requirement.
:type reqt: str
:return: A set of distribution which can fulfill the requirement.
"""
matcher = self.get_matcher(reqt)
name = matcher.key # case-insensitive
result = set()
provided = self.provided
if name in provided:
for version, provider in provided[name]:
try:
match = matcher.match(version)
except UnsupportedVersionError:
match = False
if match:
result.add(provider)
break
return result
def try_to_replace(self, provider, other, problems):
"""
Attempt to replace one provider with another. This is typically used
when resolving dependencies from multiple sources, e.g. A requires
(B >= 1.0) while C requires (B >= 1.1).
For successful replacement, ``provider`` must meet all the requirements
which ``other`` fulfills.
:param provider: The provider we are trying to replace with.
:param other: The provider we're trying to replace.
:param problems: If False is returned, this will contain what
problems prevented replacement. This is currently
a tuple of the literal string 'cantreplace',
``provider``, ``other`` and the set of requirements
that ``provider`` couldn't fulfill.
:return: True if we can replace ``other`` with ``provider``, else
False.
"""
rlist = self.reqts[other]
unmatched = set()
for s in rlist:
matcher = self.get_matcher(s)
if not matcher.match(provider.version):
unmatched.add(s)
if unmatched:
# can't replace other with provider
problems.add(('cantreplace', provider, other, frozenset(unmatched)))
result = False
else:
# can replace other with provider
self.remove_distribution(other)
del self.reqts[other]
for s in rlist:
self.reqts.setdefault(provider, set()).add(s)
self.add_distribution(provider)
result = True
return result
def find(self, requirement, meta_extras=None, prereleases=False):
"""
Find a distribution and all distributions it depends on.
:param requirement: The requirement specifying the distribution to
find, or a Distribution instance.
:param meta_extras: A list of meta extras such as :test:, :build: and
so on.
:param prereleases: If ``True``, allow pre-release versions to be
returned - otherwise, don't return prereleases
unless they're all that's available.
Return a set of :class:`Distribution` instances and a set of
problems.
The distributions returned should be such that they have the
:attr:`required` attribute set to ``True`` if they were
from the ``requirement`` passed to ``find()``, and they have the
:attr:`build_time_dependency` attribute set to ``True`` unless they
are post-installation dependencies of the ``requirement``.
The problems should be a tuple consisting of the string
``'unsatisfied'`` and the requirement which couldn't be satisfied
by any distribution known to the locator.
"""
self.provided = {}
self.dists = {}
self.dists_by_name = {}
self.reqts = {}
meta_extras = set(meta_extras or [])
if ':*:' in meta_extras:
meta_extras.remove(':*:')
# :meta: and :run: are implicitly included
meta_extras |= set([':test:', ':build:', ':dev:'])
if isinstance(requirement, Distribution):
dist = odist = requirement
logger.debug('passed %s as requirement', odist)
else:
dist = odist = self.locator.locate(requirement, prereleases=prereleases)
if dist is None:
raise DistlibException('Unable to locate %r' % requirement)
logger.debug('located %s', odist)
dist.requested = True
problems = set()
todo = set([dist])
install_dists = set([odist])
while todo:
dist = todo.pop()
name = dist.key # case-insensitive
if name not in self.dists_by_name:
self.add_distribution(dist)
else:
# import pdb; pdb.set_trace()
other = self.dists_by_name[name]
if other != dist:
self.try_to_replace(dist, other, problems)
ireqts = dist.run_requires | dist.meta_requires
sreqts = dist.build_requires
ereqts = set()
if meta_extras and dist in install_dists:
for key in ('test', 'build', 'dev'):
e = ':%s:' % key
if e in meta_extras:
ereqts |= getattr(dist, '%s_requires' % key)
all_reqts = ireqts | sreqts | ereqts
for r in all_reqts:
providers = self.find_providers(r)
if not providers:
logger.debug('No providers found for %r', r)
provider = self.locator.locate(r, prereleases=prereleases)
# If no provider is found and we didn't consider
# prereleases, consider them now.
if provider is None and not prereleases:
provider = self.locator.locate(r, prereleases=True)
if provider is None:
logger.debug('Cannot satisfy %r', r)
problems.add(('unsatisfied', r))
else:
n, v = provider.key, provider.version
if (n, v) not in self.dists:
todo.add(provider)
providers.add(provider)
if r in ireqts and dist in install_dists:
install_dists.add(provider)
logger.debug('Adding %s to install_dists', provider.name_and_version)
for p in providers:
name = p.key
if name not in self.dists_by_name:
self.reqts.setdefault(p, set()).add(r)
else:
other = self.dists_by_name[name]
if other != p:
# see if other can be replaced by p
self.try_to_replace(p, other, problems)
dists = set(self.dists.values())
for dist in dists:
dist.build_time_dependency = dist not in install_dists
if dist.build_time_dependency:
logger.debug('%s is a build-time dependency only.', dist.name_and_version)
logger.debug('find done for %s', odist)
return dists, problems
| DependencyFinder |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-jira/integration_tests/fixtures/data_generator/streams.py | {
"start": 4400,
"end": 6041
} | class ____(IssueComments, GeneratorMixin):
"""
https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-issue-comments/#api-rest-api-3-issue-issueidorkey-comment-post
"""
def generate(self):
issues_stream = Issues(authenticator=self._session.auth, domain=self._domain)
for issue in issues_stream.read_records(sync_mode=SyncMode.full_refresh):
for index in range(20):
payload = json.dumps(
{
"body": {
"type": "doc",
"version": 1,
"content": [
{
"type": "paragraph",
"content": [
{
"text": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
"Pellentesque eget "
"venenatis elit. Duis eu justo eget augue iaculis fermentum. Sed "
"semper quam "
"laoreet nisi egestas at posuere augue semper.",
"type": "text",
}
],
}
],
}
}
)
self.generate_record(payload, stream_slice={"key": issue["key"]})
| IssueCommentsGenerator |
python | Textualize__rich | rich/styled.py | {
"start": 225,
"end": 1234
} | class ____:
"""Apply a style to a renderable.
Args:
renderable (RenderableType): Any renderable.
style (StyleType): A style to apply across the entire renderable.
"""
def __init__(self, renderable: "RenderableType", style: "StyleType") -> None:
self.renderable = renderable
self.style = style
def __rich_console__(
self, console: "Console", options: "ConsoleOptions"
) -> "RenderResult":
style = console.get_style(self.style)
rendered_segments = console.render(self.renderable, options)
segments = Segment.apply_style(rendered_segments, style)
return segments
def __rich_measure__(
self, console: "Console", options: "ConsoleOptions"
) -> Measurement:
return Measurement.get(console, options, self.renderable)
if __name__ == "__main__": # pragma: no cover
from rich import print
from rich.panel import Panel
panel = Styled(Panel("hello"), "on blue")
print(panel)
| Styled |
python | getsentry__sentry-python | sentry_sdk/integrations/langchain.py | {
"start": 5824,
"end": 6045
} | class ____:
span = None # type: Span
children = [] # type: List[WatchedSpan]
is_pipeline = False # type: bool
def __init__(self, span):
# type: (Span) -> None
self.span = span
| WatchedSpan |
python | qdrant__qdrant-client | qdrant_client/http/models/models.py | {
"start": 140887,
"end": 140975
} | class ____(BaseModel):
cancelled: str = Field(..., description="")
| TrackerStatusOneOf1 |
python | doocs__leetcode | solution/1000-1099/1065.Index Pairs of a String/Solution2.py | {
"start": 0,
"end": 362
} | class ____:
def __init__(self):
self.children = [None] * 26
self.is_end = False
def insert(self, word):
node = self
for c in word:
idx = ord(c) - ord('a')
if node.children[idx] is None:
node.children[idx] = Trie()
node = node.children[idx]
node.is_end = True
| Trie |
python | ray-project__ray | ci/ray_ci/doc/test_update_cache_env.py | {
"start": 234,
"end": 2674
} | class ____:
def __init__(self, srcdir: str, doctreedir: str, project: Project, all_docs: dict):
self.srcdir = "srcdir"
self.doctreedir = "doctreedir"
self.project = Project(
"srcdir",
{".rst": "restructuredtext", ".md": "myst-nb", ".ipynb": "myst-nb"},
)
self.project.discover()
self.all_docs = {}
def _generate_test_env():
list_files = ["file1", "file2", "file3", "file4", "file5"]
with tempfile.TemporaryDirectory() as temp_dir:
env = FakeBuildEnv(
os.path.join(temp_dir, "source"),
os.path.join(temp_dir, "_build/doctrees"),
Project(
os.path.join(temp_dir, "source"),
{".rst": "restructuredtext", ".md": "myst-nb", ".ipynb": "myst-nb"},
),
{},
)
p = Project(
env.srcdir,
{".rst": "restructuredtext", ".md": "myst-nb", ".ipynb": "myst-nb"},
)
p.discover()
env.project = p
env.all_docs = {}
# If you have a list of documents, you can add them like this:
current_time = 1234567890
for doc in list_files:
env.all_docs[doc] = current_time
return env
def test_update_environment_pickle():
with tempfile.TemporaryDirectory() as temp_dir:
env = _generate_test_env()
os.makedirs(os.path.join(temp_dir, "doc/_build/doctrees"))
with open(os.path.join(temp_dir, "doc", ENVIRONMENT_PICKLE), "wb+") as f:
pickle.dump(env, f, pickle.HIGHEST_PROTOCOL)
pending_files = ["file1", "file2", "file3"]
update_environment_pickle(temp_dir, pending_files)
with open(os.path.join(temp_dir, "doc", ENVIRONMENT_PICKLE), "rb+") as f:
env = pickle.load(f)
assert env.srcdir == os.path.join(temp_dir, "doc/source")
assert env.doctreedir == os.path.join(temp_dir, "doc/_build/doctrees")
assert env.project.srcdir == os.path.join(temp_dir, "doc/source")
assert len(env.all_docs) == 5
assert env.all_docs["file1"] == 1234567890
assert env.all_docs["file2"] == 1234567890
assert env.all_docs["file3"] == 1234567890
assert env.all_docs["file4"] != 1234567890
assert env.all_docs["file5"] != 1234567890
if __name__ == "__main__":
sys.exit(pytest.main(["-vv", __file__]))
| FakeBuildEnv |
python | PyCQA__pylint | tests/functional/a/abstract/abstract_class_instantiated.py | {
"start": 1335,
"end": 1435
} | class ____(Structure):
def keys(self):
return iter([1, 2, 3])
__iter__ = keys
| Iterator |
python | spack__spack | var/spack/test_repos/spack_repo/builtin_mock/packages/mvapich2/package.py | {
"start": 216,
"end": 558
} | class ____(Package):
homepage = "http://www.homepage.org"
url = "http://www.someurl"
version("1.5", md5="9c5d5d4fe1e17dd12153f40bc5b6dbc0")
variant(
"file_systems",
description="List of the ROMIO file systems to activate",
values=auto_or_any_combination_of("lustre", "gpfs", "nfs", "ufs"),
)
| Mvapich2 |
python | dagster-io__dagster | python_modules/libraries/dagster-airflow/dagster_airflow/links/dagster_link.py | {
"start": 337,
"end": 1086
} | class ____(BaseOperatorLink):
name = "Dagster Cloud" # type: ignore # (airflow 1 compat)
def get_link(self, operator, dttm): # pyright: ignore[reportIncompatibleMethodOverride]
ti = TaskInstance(task=operator, execution_date=dttm)
run_id = ti.xcom_pull(task_ids=operator.task_id, key="run_id")
organization_id = ti.xcom_pull(task_ids=operator.task_id, key="organization_id")
deployment_name = ti.xcom_pull(task_ids=operator.task_id, key="deployment_name")
if run_id and organization_id and deployment_name:
return LINK_FMT.format(
organization_id=organization_id, deployment_name=deployment_name, run_id=run_id
)
else:
return ""
| DagsterLink |
python | realpython__materials | wordcount/tests/realpython/models.py | {
"start": 718,
"end": 2169
} | class ____:
item: Item
status: TestStatus
exception: RealPythonAssertionError | None
@cached_property
def id(self) -> str:
return self.item.nodeid
@cached_property
def function(self) -> Function | None:
if hasattr(self.item, "function"):
return self.item.function
else:
return None
@cached_property
def task_number(self) -> int | None:
if self.function and hasattr(self.function, "task"):
return self.function.task.number
else:
return None
@cached_property
def name(self) -> str:
docstring = self.function.__doc__ if self.function else None
full_name = self.id.split("::")[-1]
if match := re.fullmatch(r"([^\[]+)(\[([^]]+)])?", full_name):
function_name = match.group(1)
params = match.group(3)
pretty_name = (
function_name.removeprefix("test_")
.replace("_", " ")
.capitalize()
)
if params:
if docstring:
return f"{docstring} ({params})"
else:
return f"{pretty_name} ({params})"
else:
if docstring:
return docstring
else:
return pretty_name
else:
return docstring if docstring else full_name
@dataclass
| Test |
python | google__python-fire | fire/trace.py | {
"start": 1550,
"end": 8251
} | class ____:
"""A FireTrace represents the steps taken during a single Fire execution.
A FireTrace consists of a sequence of FireTraceElement objects. Each element
represents an action taken by Fire during a single Fire execution. An action
may be instantiating a class, calling a routine, or accessing a property.
"""
def __init__(self, initial_component, name=None, separator='-', verbose=False,
show_help=False, show_trace=False):
initial_trace_element = FireTraceElement(
component=initial_component,
action=INITIAL_COMPONENT,
)
self.name = name
self.separator = separator
self.elements = [initial_trace_element]
self.verbose = verbose
self.show_help = show_help
self.show_trace = show_trace
def GetResult(self):
"""Returns the component from the last element of the trace."""
return self.GetLastHealthyElement().component
def GetLastHealthyElement(self):
"""Returns the last element of the trace that is not an error.
This element will contain the final component indicated by the trace.
Returns:
The last element of the trace that is not an error.
"""
for element in reversed(self.elements):
if not element.HasError():
return element
return self.elements[0] # The initial element is always healthy.
def HasError(self):
"""Returns whether the Fire execution encountered a Fire usage error."""
return self.elements[-1].HasError()
def AddAccessedProperty(self, component, target, args, filename, lineno):
element = FireTraceElement(
component=component,
action=ACCESSED_PROPERTY,
target=target,
args=args,
filename=filename,
lineno=lineno,
)
self.elements.append(element)
def AddCalledComponent(self, component, target, args, filename, lineno,
capacity, action=CALLED_CALLABLE):
"""Adds an element to the trace indicating that a component was called.
Also applies to instantiating a class.
Args:
component: The result of calling the callable.
target: The name of the callable.
args: The args consumed in order to call this callable.
filename: The file in which the callable is defined, or None if N/A.
lineno: The line number on which the callable is defined, or None if N/A.
capacity: (bool) Whether the callable could have accepted additional args.
action: The value to include as the action in the FireTraceElement.
"""
element = FireTraceElement(
component=component,
action=action,
target=target,
args=args,
filename=filename,
lineno=lineno,
capacity=capacity,
)
self.elements.append(element)
def AddCompletionScript(self, script):
element = FireTraceElement(
component=script,
action=COMPLETION_SCRIPT,
)
self.elements.append(element)
def AddInteractiveMode(self):
element = FireTraceElement(action=INTERACTIVE_MODE)
self.elements.append(element)
def AddError(self, error, args):
element = FireTraceElement(error=error, args=args)
self.elements.append(element)
def AddSeparator(self):
"""Marks that the most recent element of the trace used a separator.
A separator is an argument you can pass to a Fire CLI to separate args left
of the separator from args right of the separator.
Here's an example to demonstrate the separator. Let's say you have a
function that takes a variable number of args, and you want to call that
function, and then upper case the result. Here's how to do it:
# in Python
def display(arg1, arg2='!'):
return arg1 + arg2
# from Bash (the default separator is the hyphen -)
display hello # hello!
display hello upper # helloupper
display hello - upper # HELLO!
Note how the separator caused the display function to be called with the
default value for arg2.
"""
self.elements[-1].AddSeparator()
def _Quote(self, arg):
if arg.startswith('--') and '=' in arg:
prefix, value = arg.split('=', 1)
return shlex.quote(prefix) + '=' + shlex.quote(value)
return shlex.quote(arg)
def GetCommand(self, include_separators=True):
"""Returns the command representing the trace up to this point.
Args:
include_separators: Whether or not to include separators in the command.
Returns:
A string representing a Fire CLI command that would produce this trace.
"""
args = []
if self.name:
args.append(self.name)
for element in self.elements:
if element.HasError():
continue
if element.args:
args.extend(element.args)
if element.HasSeparator() and include_separators:
args.append(self.separator)
if self.NeedsSeparator() and include_separators:
args.append(self.separator)
return ' '.join(self._Quote(arg) for arg in args)
def NeedsSeparator(self):
"""Returns whether a separator should be added to the command.
If the command is a function call, then adding an additional argument to the
command sometimes would add an extra arg to the function call, and sometimes
would add an arg acting on the result of the function call.
This function tells us whether we should add a separator to the command
before adding additional arguments in order to make sure the arg is applied
to the result of the function call, and not the function call itself.
Returns:
Whether a separator should be added to the command if order to keep the
component referred to by the command the same when adding additional args.
"""
element = self.GetLastHealthyElement()
return element.HasCapacity() and not element.HasSeparator()
def __str__(self):
lines = []
for index, element in enumerate(self.elements):
line = f'{index + 1}. {element}'
lines.append(line)
return '\n'.join(lines)
def NeedsSeparatingHyphenHyphen(self, flag='help'):
"""Returns whether a the trace need '--' before '--help'.
'--' is needed when the component takes keyword arguments, when the value of
flag matches one of the argument of the component, or the component takes in
keyword-only arguments(e.g. argument with default value).
Args:
flag: the flag available for the trace
Returns:
True for needed '--', False otherwise.
"""
element = self.GetLastHealthyElement()
component = element.component
spec = inspectutils.GetFullArgSpec(component)
return (spec.varkw is not None
or flag in spec.args
or flag in spec.kwonlyargs)
| FireTrace |
python | openai__openai-python | src/openai/types/responses/function_shell_tool.py | {
"start": 194,
"end": 311
} | class ____(BaseModel):
type: Literal["shell"]
"""The type of the shell tool. Always `shell`."""
| FunctionShellTool |
python | huggingface__transformers | tests/models/metaclip_2/test_modeling_metaclip_2.py | {
"start": 5837,
"end": 7743
} | class ____(ModelTesterMixin):
"""
Subclass of ModelTesterMixin with methods specific to testing MetaClip2 models.
The SDPA equivalence test is overridden here because MetaClip2 models may have test/vision/text+vision inputs,
different output logits, and are not supposed to be used or tested with padding_side="left".
"""
def test_sdpa_can_dispatch_composite_models(self):
for model_class in self.all_model_classes:
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
model = model_class(config)
with tempfile.TemporaryDirectory() as tmpdirname:
model.save_pretrained(tmpdirname)
# Load the model with SDPA (it is the default, but we explicit it for clarity)
model_sdpa = model_class.from_pretrained(tmpdirname, attn_implementation="sdpa")
model_sdpa = model_sdpa.eval().to(torch_device)
# Load model with eager attention
model_eager = model_class.from_pretrained(
tmpdirname,
attn_implementation="eager",
)
model_eager = model_eager.eval().to(torch_device)
if hasattr(model_sdpa, "vision_model"):
self.assertTrue(model_sdpa.vision_model.config._attn_implementation == "sdpa")
self.assertTrue(model_eager.vision_model.config._attn_implementation == "eager")
if hasattr(model_sdpa, "text_model"):
self.assertTrue(model_sdpa.text_model.config._attn_implementation == "sdpa")
self.assertTrue(model_eager.text_model.config._attn_implementation == "eager")
self.assertTrue(model_sdpa.config._attn_implementation == "sdpa")
self.assertTrue(model_eager.config._attn_implementation == "eager")
@require_torch
| MetaClip2ModelTesterMixin |
python | nedbat__coveragepy | coverage/html.py | {
"start": 2922,
"end": 7277
} | class ____:
"""Generate structured data to be turned into HTML reports."""
EMPTY = "(empty)"
def __init__(self, cov: Coverage) -> None:
self.coverage = cov
self.config = self.coverage.config
self.data = self.coverage.get_data()
self.has_arcs = self.data.has_arcs()
if self.config.show_contexts:
if self.data.measured_contexts() == {""}:
self.coverage._warn("No contexts were measured")
self.data.set_query_contexts(self.config.report_contexts)
def data_for_file(self, fr: FileReporter, analysis: Analysis) -> FileData:
"""Produce the data needed for one file's report."""
if self.has_arcs:
missing_branch_arcs = analysis.missing_branch_arcs()
arcs_executed = analysis.arcs_executed
else:
missing_branch_arcs = {}
arcs_executed = []
if self.config.show_contexts:
contexts_by_lineno = self.data.contexts_by_lineno(analysis.filename)
lines = []
branch_stats = analysis.branch_stats()
multiline_map = {}
if hasattr(fr, "multiline_map"):
multiline_map = fr.multiline_map()
for lineno, tokens in enumerate(fr.source_token_lines(), start=1):
# Figure out how to mark this line.
category = category2 = ""
short_annotations = []
long_annotations = []
if lineno in analysis.excluded:
category = "exc"
elif lineno in analysis.missing:
category = "mis"
elif self.has_arcs and lineno in missing_branch_arcs:
category = "par"
mba = missing_branch_arcs[lineno]
if len(mba) == branch_stats[lineno][0]:
# None of the branches were taken from this line.
short_annotations.append("anywhere")
long_annotations.append(
f"line {lineno} didn't jump anywhere: it always raised an exception."
)
else:
for b in missing_branch_arcs[lineno]:
if b < 0:
short_annotations.append("exit")
else:
short_annotations.append(str(b))
long_annotations.append(
fr.missing_arc_description(lineno, b, arcs_executed)
)
elif lineno in analysis.statements:
category = "run"
elif first_line := multiline_map.get(lineno):
if first_line in analysis.excluded:
category2 = "exc2"
elif first_line in analysis.missing:
category2 = "mis2"
elif self.has_arcs and first_line in missing_branch_arcs:
category2 = "par2"
# I don't understand why this last condition is marked as
# partial. If I add an else with an exception, the exception
# is raised.
elif first_line in analysis.statements: # pragma: part covered
category2 = "run2"
contexts = []
contexts_label = ""
context_list = []
if category and self.config.show_contexts:
contexts = human_sorted(c or self.EMPTY for c in contexts_by_lineno.get(lineno, ()))
if contexts == [self.EMPTY]:
contexts_label = self.EMPTY
else:
contexts_label = f"{len(contexts)} ctx"
context_list = contexts
lines.append(
LineData(
tokens=tokens,
number=lineno,
category=category or category2,
contexts=contexts,
contexts_label=contexts_label,
context_list=context_list,
short_annotations=short_annotations,
long_annotations=long_annotations,
)
)
file_data = FileData(
relative_filename=fr.relative_filename(),
nums=analysis.numbers,
lines=lines,
)
return file_data
| HtmlDataGeneration |
python | eventlet__eventlet | tests/patcher_test.py | {
"start": 9510,
"end": 12062
} | class ____(ProcessBase):
def test_orig_thread(self):
new_mod = """import eventlet
eventlet.monkey_patch()
from eventlet import patcher
import threading
_threading = patcher.original('threading')
def test():
print(repr(threading.currentThread()))
t = _threading.Thread(target=test)
t.start()
t.join()
print(len(threading._active))
print(len(_threading._active))
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 4, "\n".join(lines))
assert lines[0].startswith('<Thread'), lines[0]
assert lines[1] == '1', lines
assert lines[2] == '1', lines
def test_tpool(self):
new_mod = """import eventlet
eventlet.monkey_patch()
from eventlet import tpool
import threading
def test():
print(repr(threading.currentThread()))
tpool.execute(test)
print(len(threading._active))
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 3, "\n".join(lines))
assert lines[0].startswith('<Thread'), lines[0]
self.assertEqual(lines[1], "1", lines[1])
def test_greenlet(self):
new_mod = """import eventlet
eventlet.monkey_patch()
from eventlet import event
import threading
evt = event.Event()
def test():
print(repr(threading.currentThread()))
evt.send()
eventlet.spawn_n(test)
evt.wait()
print(len(threading._active))
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 3, "\n".join(lines))
assert lines[0].startswith('<_MainThread'), lines[0]
self.assertEqual(lines[1], "1", lines[1])
def test_greenthread(self):
new_mod = """import eventlet
eventlet.monkey_patch()
import threading
def test():
print(repr(threading.currentThread()))
t = eventlet.spawn(test)
t.wait()
print(len(threading._active))
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 3, "\n".join(lines))
assert lines[0].startswith('<_GreenThread'), lines[0]
self.assertEqual(lines[1], "1", lines[1])
def test_keyerror(self):
new_mod = """import eventlet
eventlet.monkey_patch()
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 1, "\n".join(lines))
| Threading |
python | getsentry__sentry | tests/sentry/db/postgres/schema/safe_migrations/integration/test_migrations.py | {
"start": 11758,
"end": 12351
} | class ____(BaseSafeMigrationTest):
app = "good_flow_delete_simple_app"
migrate_from = "0001"
migrate_to = "0003"
def test(self) -> None:
self._run_migration(self.app, "0001_initial")
assert f"{self.app}_testtable" in connection.introspection.table_names()
self._run_migration(self.app, "0002_set_pending")
assert f"{self.app}_testtable" in connection.introspection.table_names()
self._run_migration(self.app, "0003_delete")
assert f"{self.app}_testtable" not in connection.introspection.table_names()
| DeletionModelGoodDeleteSimple |
python | kennethreitz__tablib | src/tablib/packages/dbfpy/utils.py | {
"start": 3922,
"end": 4835
} | class ____:
"""Value returned from DBF records when field validation fails
The value is not equal to anything except for itself
and equal to all empty values: None, 0, empty string etc.
In other words, invalid value is equal to None and not equal
to None at the same time.
This value yields zero upon explicit conversion to a number type,
empty string for string types, and False for boolean.
"""
def __eq__(self, other):
return not other
def __ne__(self, other):
return not (other is self)
def __bool__(self):
return False
def __int__(self):
return 0
__long__ = __int__
def __float__(self):
return 0.0
def __str__(self):
return ""
def __repr__(self):
return "<INVALID>"
# invalid value is a constant singleton
INVALID_VALUE = _InvalidValue()
# vim: set et sts=4 sw=4 :
| _InvalidValue |
python | getsentry__sentry | tests/sentry/core/endpoints/test_organization_member_team_details.py | {
"start": 7320,
"end": 10744
} | class ____(OrganizationMemberTeamTestBase):
method = "post"
def test_member_can_join_team(self) -> None:
self.login_as(self.member)
self.get_success_response(
self.org.slug, self.member.id, self.team.slug, status_code=status.HTTP_201_CREATED
)
assert OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.member
).exists()
def test_admin_can_join_team(self) -> None:
self.login_as(self.admin)
self.get_success_response(
self.org.slug, self.admin.id, self.team.slug, status_code=status.HTTP_201_CREATED
)
assert OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.admin
).exists()
def test_cannot_join_idp_team(self) -> None:
self.login_as(self.admin)
self.get_error_response(self.org.slug, self.admin.id, self.idp_team.slug, status_code=403)
assert not OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.admin
).exists()
self.login_as(self.member)
self.get_error_response(self.org.slug, self.member.id, self.idp_team.slug, status_code=403)
assert not OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.member
).exists()
def test_member_can_add_member_to_team(self) -> None:
target_member = self.create_member(
organization=self.org, user=self.create_user(), role="member"
)
self.login_as(self.member)
self.get_success_response(
self.org.slug, target_member.id, self.team.slug, status_code=status.HTTP_201_CREATED
)
assert OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=target_member
).exists()
def test_admin_can_add_member_to_team(self) -> None:
self.login_as(self.admin)
self.get_success_response(
self.org.slug, self.member.id, self.team.slug, status_code=status.HTTP_201_CREATED
)
assert OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.member
).exists()
def test_cannot_add_to_idp_team(self) -> None:
target_member = self.create_member(
organization=self.org, user=self.create_user(), role="member"
)
self.login_as(self.member)
self.get_error_response(
self.org.slug, target_member.id, self.idp_team.slug, status_code=403
)
assert not OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=target_member
).exists()
self.login_as(self.admin)
self.get_error_response(self.org.slug, self.member.id, self.idp_team.slug, status_code=403)
assert not OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.member
).exists()
@with_feature("organizations:team-roles")
def test_team_admin_can_add_member(self) -> None:
self.login_as(self.team_admin)
self.get_success_response(
self.org.slug, self.member.id, self.team.slug, status_code=status.HTTP_201_CREATED
)
assert OrganizationMemberTeam.objects.filter(
team=self.team, organizationmember=self.member
).exists()
| CreateWithOpenMembershipTest |
python | ethereum__web3.py | web3/net.py | {
"start": 205,
"end": 837
} | class ____(Module):
_listening: Method[Callable[[], bool]] = Method(
RPC.net_listening,
mungers=[default_root_munger],
)
_peer_count: Method[Callable[[], int]] = Method(
RPC.net_peerCount,
mungers=[default_root_munger],
)
_version: Method[Callable[[], str]] = Method(
RPC.net_version,
mungers=[default_root_munger],
)
@property
def listening(self) -> bool:
return self._listening()
@property
def peer_count(self) -> int:
return self._peer_count()
@property
def version(self) -> str:
return self._version()
| Net |
python | h5py__h5py | h5py/tests/test_dtype.py | {
"start": 12022,
"end": 13115
} | class ____(TestCase):
def test_vlen_utf8(self):
dt = h5py.string_dtype()
string_info = h5py.check_string_dtype(dt)
assert string_info.encoding == 'utf-8'
assert string_info.length is None
assert h5py.check_vlen_dtype(dt) is str
def test_vlen_ascii(self):
dt = h5py.string_dtype(encoding='ascii')
string_info = h5py.check_string_dtype(dt)
assert string_info.encoding == 'ascii'
assert string_info.length is None
assert h5py.check_vlen_dtype(dt) is bytes
def test_fixed_utf8(self):
dt = h5py.string_dtype(length=10)
string_info = h5py.check_string_dtype(dt)
assert string_info.encoding == 'utf-8'
assert string_info.length == 10
assert h5py.check_vlen_dtype(dt) is None
def test_fixed_ascii(self):
dt = h5py.string_dtype(encoding='ascii', length=10)
string_info = h5py.check_string_dtype(dt)
assert string_info.encoding == 'ascii'
assert string_info.length == 10
assert h5py.check_vlen_dtype(dt) is None
| TestStrings |
python | sqlalchemy__sqlalchemy | test/orm/test_mapper.py | {
"start": 74094,
"end": 81171
} | class ____(fixtures.MappedTest):
"""Tests the contract for user classes."""
@classmethod
def define_tables(cls, metadata):
Table(
"ht1",
metadata,
Column(
"id", Integer, primary_key=True, test_needs_autoincrement=True
),
Column("value", String(10)),
)
Table(
"ht2",
metadata,
Column(
"id", Integer, primary_key=True, test_needs_autoincrement=True
),
Column("ht1_id", Integer, ForeignKey("ht1.id")),
Column("value", String(10)),
)
Table(
"ht3",
metadata,
Column(
"id", Integer, primary_key=True, test_needs_autoincrement=True
),
Column("value", String(10)),
)
Table(
"ht4",
metadata,
Column("ht1_id", Integer, ForeignKey("ht1.id"), primary_key=True),
Column("ht3_id", Integer, ForeignKey("ht3.id"), primary_key=True),
)
Table(
"ht5",
metadata,
Column("ht1_id", Integer, ForeignKey("ht1.id"), primary_key=True),
)
Table(
"ht6",
metadata,
Column("ht1a_id", Integer, ForeignKey("ht1.id"), primary_key=True),
Column("ht1b_id", Integer, ForeignKey("ht1.id"), primary_key=True),
Column("value", String(10)),
)
class _ValueBase:
def __init__(self, value="abc", id_=None):
self.id = id_
self.value = value
def __bool__(self):
return False
def __hash__(self):
return hash(self.value)
def __eq__(self, other):
if isinstance(other, type(self)):
return self.value == other.value
return False
def test_comparison_overrides(self):
"""Simple tests to ensure users can supply comparison __methods__.
The suite-level test --options are better suited to detect
        problems; they add selected __methods__ across the board on all
ORM tests. This test simply shoves a variety of operations
through the ORM to catch basic regressions early in a standard
test run.
"""
ht6, ht5, ht4, ht3, ht2, ht1 = (
self.tables.ht6,
self.tables.ht5,
self.tables.ht4,
self.tables.ht3,
self.tables.ht2,
self.tables.ht1,
)
class H1(self._ValueBase):
pass
class H2(self._ValueBase):
pass
class H3(self._ValueBase):
pass
class H6(self._ValueBase):
pass
self.mapper(
H1,
ht1,
properties={
"h2s": relationship(H2, backref="h1"),
"h3s": relationship(H3, secondary=ht4, backref="h1s"),
"h1s": relationship(H1, secondary=ht5, backref="parent_h1"),
"t6a": relationship(
H6, backref="h1a", primaryjoin=ht1.c.id == ht6.c.ht1a_id
),
"t6b": relationship(
H6, backref="h1b", primaryjoin=ht1.c.id == ht6.c.ht1b_id
),
},
)
self.mapper(H2, ht2)
self.mapper(H3, ht3)
self.mapper(H6, ht6)
s = fixture_session(future=True)
s.add_all([H1("abc"), H1("def")])
h1 = H1("ghi")
s.add(h1)
h1.h2s.append(H2("abc"))
h1.h3s.extend([H3(), H3()])
h1.h1s.append(H1())
s.flush()
eq_(s.connection().scalar(select(func.count("*")).select_from(ht1)), 4)
h6 = H6()
s.add(h6)
h6.h1a = h1
h6.h1b = h1
h6 = H6()
h6.h1a = h1
h6.h1b = x = H1()
s.add(x)
h6.h1b.h2s.append(H2("def"))
s.flush()
h1.h2s.extend([H2("abc"), H2("def")])
s.flush()
h1s = s.query(H1).options(sa.orm.joinedload(H1.h2s)).all()
eq_(len(h1s), 5)
self.assert_unordered_result(
h1s,
H1,
{"h2s": []},
{"h2s": []},
{
"h2s": (
H2,
[{"value": "abc"}, {"value": "def"}, {"value": "abc"}],
)
},
{"h2s": []},
{"h2s": (H2, [{"value": "def"}])},
)
h1s = s.query(H1).options(sa.orm.joinedload(H1.h3s)).all()
eq_(len(h1s), 5)
h1s = (
s.query(H1)
.options(
sa.orm.joinedload(H1.t6a).joinedload(H6.h1b),
sa.orm.joinedload(H1.h2s),
sa.orm.joinedload(H1.h3s).joinedload(H3.h1s),
)
.all()
)
eq_(len(h1s), 5)
def test_composite_results(self):
ht2, ht1 = (self.tables.ht2, self.tables.ht1)
class H1(self._ValueBase):
def __init__(self, value, id_, h2s):
self.value = value
self.id = id_
self.h2s = h2s
class H2(self._ValueBase):
def __init__(self, value, id_):
self.value = value
self.id = id_
self.mapper(
H1, ht1, properties={"h2s": relationship(H2, backref="h1")}
)
self.mapper(H2, ht2)
s = fixture_session()
s.add_all(
[
H1(
"abc",
1,
h2s=[H2("abc", id_=1), H2("def", id_=2), H2("def", id_=3)],
),
H1(
"def",
2,
h2s=[H2("abc", id_=4), H2("abc", id_=5), H2("def", id_=6)],
),
]
)
s.commit()
eq_(
[
(h1.value, h1.id, h2.value, h2.id)
for h1, h2 in s.query(H1, H2)
.join(H1.h2s)
.order_by(H1.id, H2.id)
],
[
("abc", 1, "abc", 1),
("abc", 1, "def", 2),
("abc", 1, "def", 3),
("def", 2, "abc", 4),
("def", 2, "abc", 5),
("def", 2, "def", 6),
],
)
def test_nonzero_len_recursion(self):
ht1 = self.tables.ht1
class H1:
def __len__(self):
return len(self.get_value())
def get_value(self):
self.value = "foobar"
return self.value
class H2:
def __bool__(self):
return bool(self.get_value())
def get_value(self):
self.value = "foobar"
return self.value
self.mapper(H1, ht1)
self.mapper(H2, ht1)
h1 = H1()
h1.value = "Asdf"
h1.value = "asdf asdf" # ding
h2 = H2()
h2.value = "Asdf"
h2.value = "asdf asdf" # ding
| RequirementsTest |
python | numpy__numpy | numpy/linalg/_linalg.py | {
"start": 2312,
"end": 2563
} | class ____(NamedTuple):
U: NDArray[Any]
S: NDArray[Any]
Vh: NDArray[Any]
array_function_dispatch = functools.partial(
overrides.array_function_dispatch, module='numpy.linalg'
)
fortran_int = intc
@set_module('numpy.linalg')
| SVDResult |
python | dagster-io__dagster | docs/sphinx/_ext/dagster-sphinx/dagster_sphinx/configurable.py | {
"start": 4657,
"end": 6468
} | class ____(DataDocumenter):
objtype = "configurable"
directivetype = "data"
@classmethod
def can_document_member( # pyright: ignore[reportIncompatibleMethodOverride]
cls, member: Any, _membername: str, _isattr: bool, _parent: Any
) -> bool:
return isinstance(member, ConfigurableDefinition) or (
isinstance(member, type) and issubclass(member, ConfigurableClass)
)
def add_content(self, more_content) -> None:
source_name = self.get_sourcename()
self.add_line("", source_name)
# explicit visual linebreak
self.add_line("|", source_name)
self.add_line("", source_name)
if inspect.isfunction(self.object):
# self.object is a function that returns a configurable class eg build_snowflake_io_manager
obj = self.object([])
else:
obj = self.object
obj = cast(
"Union[ConfigurableDefinition, type[ConfigurableClass], ConfigurableResource]", obj
)
config_field = None
if isinstance(obj, ConfigurableDefinition):
config_field = check.not_none(obj.config_schema).as_field()
elif inspect.isclass(obj) and (
issubclass(obj, ConfigurableResource) or issubclass(obj, ConfigurableResourceFactory)
):
config_field = infer_schema_from_config_class(obj)
elif isinstance(obj, type) and issubclass(obj, ConfigurableClass):
config_field = Field(obj.config_type())
for line in config_field_to_lines(config_field):
self.add_line(line, source_name)
self.add_line("", source_name)
# do this call at the bottom so that config schema is first thing in documentation
super().add_content(more_content)
| ConfigurableDocumenter |
python | spyder-ide__spyder | spyder/plugins/completion/providers/languageserver/conftabs/advanced.py | {
"start": 597,
"end": 9079
} | class ____(SpyderPreferencesTab):
"""PyLS advanced configuration tab."""
TITLE = _('Advanced')
def __init__(self, parent):
super().__init__(parent)
lsp_advanced_group = QGroupBox(_(
'Python Language Server configuration'))
advanced_label = QLabel(
_("<b>Warning</b>: Only modify these values if "
"you know what you're doing!"))
advanced_label.setWordWrap(True)
advanced_label.setAlignment(Qt.AlignJustify)
# Advanced settings checkbox
self.advanced_options_check = self.create_checkbox(
_("Enable advanced settings"), 'advanced/enabled')
# Advanced options
self.advanced_module = self.create_lineedit(
_("Module for the Python language server: "),
'advanced/module', alignment=Qt.Horizontal,
word_wrap=False)
self.advanced_host = self.create_lineedit(
_("IP Address and port to bind the server to: "),
'advanced/host', alignment=Qt.Horizontal,
word_wrap=False)
self.advanced_port = self.create_spinbox(
":", "", 'advanced/port', min_=1, max_=65535, step=1)
self.external_server = self.create_checkbox(
_("This is an external server"),
'advanced/external')
self.use_stdio = self.create_checkbox(
_("Use stdio pipes to communicate with server"),
'advanced/stdio')
self.use_stdio.checkbox.stateChanged.connect(self.disable_tcp)
self.external_server.checkbox.stateChanged.connect(self.disable_stdio)
# Advanced layout
advanced_g_layout = QGridLayout()
advanced_g_layout.addWidget(self.advanced_module.label, 1, 0)
advanced_g_layout.addWidget(self.advanced_module.textbox, 1, 1)
advanced_g_layout.addWidget(self.advanced_host.label, 2, 0)
advanced_host_port_g_layout = QGridLayout()
advanced_host_port_g_layout.addWidget(self.advanced_host.textbox, 1, 0)
advanced_host_port_g_layout.addWidget(self.advanced_port.plabel, 1, 1)
advanced_host_port_g_layout.addWidget(self.advanced_port.spinbox, 1, 2)
advanced_g_layout.addLayout(advanced_host_port_g_layout, 2, 1)
# External server and stdio options layout
advanced_server_layout = QVBoxLayout()
advanced_server_layout.addWidget(self.external_server)
advanced_server_layout.addWidget(self.use_stdio)
advanced_options_layout = QVBoxLayout()
advanced_options_layout.addLayout(advanced_g_layout)
advanced_options_layout.addLayout(advanced_server_layout)
# Set advanced options enabled/disabled
advanced_options_widget = QWidget()
advanced_options_widget.setLayout(advanced_options_layout)
advanced_options_widget.setEnabled(self.get_option('advanced/enabled'))
self.advanced_options_check.checkbox.toggled.connect(
advanced_options_widget.setEnabled)
self.advanced_options_check.checkbox.toggled.connect(
self.show_advanced_warning)
# Advanced options layout
advanced_layout = QVBoxLayout()
advanced_layout.addWidget(advanced_label)
advanced_layout.addWidget(self.advanced_options_check)
advanced_layout.addWidget(advanced_options_widget)
lsp_advanced_group.setLayout(advanced_layout)
layout = QVBoxLayout()
layout.addWidget(lsp_advanced_group)
self.setLayout(layout)
def disable_tcp(self, state):
if state == Qt.Checked:
self.advanced_host.textbox.setEnabled(False)
self.advanced_port.spinbox.setEnabled(False)
self.external_server.checkbox.stateChanged.disconnect()
self.external_server.checkbox.setChecked(False)
self.external_server.checkbox.setEnabled(False)
else:
self.advanced_host.textbox.setEnabled(True)
self.advanced_port.spinbox.setEnabled(True)
self.external_server.checkbox.setChecked(False)
self.external_server.checkbox.setEnabled(True)
self.external_server.checkbox.stateChanged.connect(
self.disable_stdio)
def disable_stdio(self, state):
if state == Qt.Checked:
self.advanced_host.textbox.setEnabled(True)
self.advanced_port.spinbox.setEnabled(True)
self.advanced_module.textbox.setEnabled(False)
self.use_stdio.stateChanged.disconnect()
self.use_stdio.setChecked(False)
self.use_stdio.setEnabled(False)
else:
self.advanced_host.textbox.setEnabled(True)
self.advanced_port.spinbox.setEnabled(True)
self.advanced_module.textbox.setEnabled(True)
self.use_stdio.setChecked(False)
self.use_stdio.setEnabled(True)
self.use_stdio.stateChanged.connect(self.disable_tcp)
@Slot(bool)
def show_advanced_warning(self, state):
"""
Show a warning when trying to modify the PyLS advanced
settings.
"""
# Don't show warning if the option is already enabled.
# This avoids showing it when the Preferences dialog
# is created.
if self.get_option('advanced/enabled'):
return
# Show warning when toggling the button state
if state:
QMessageBox.warning(
self,
_("Warning"),
_("<b>Modifying these options can break code completion!!</b>"
"<br><br>"
"If that's the case, please reset your Spyder preferences "
"by going to the menu"
"<br><br>"
"<tt>Tools > Reset all preferences to default</tt>"
"<br><br>"
"instead of reporting a bug."))
def is_valid(self):
host = self.advanced_host.textbox.text()
# If host is not local, the server must be external
# and we need to automatically check the corresponding
# option
if host not in ['127.0.0.1', 'localhost']:
self.external_server.checkbox.setChecked(True)
# Checks for external PyLS
if self.external_server.checkbox.isChecked():
port = int(self.advanced_port.spinbox.text())
# Check that host and port of the current server are
# different from the new ones provided to connect to
# an external server.
lsp = self.plugin.get_provider('lsp')
pyclient = lsp.clients.get('python')
if pyclient is not None:
instance = pyclient['instance']
if (instance is not None and
not pyclient['config']['external']):
if (instance.server_host == host and
instance.server_port == port):
self.report_no_address_change()
return False
# Check connection to LSP server using a TCP socket
response = check_connection_port(host, port)
if not response:
self.report_no_external_server(host, port, 'python')
return False
return True
def report_no_external_server(self, host, port, language):
"""
Report that connection couldn't be established with
an external server.
"""
QMessageBox.critical(
self,
_("Error"),
_("It appears there is no {language} language server listening "
"at address:"
"<br><br>"
"<tt>{host}:{port}</tt>"
"<br><br>"
"Please verify that the provided information is correct "
"and try again.").format(host=host, port=port,
language=language.capitalize())
)
def report_no_address_change(self):
"""
Report that server address has no changed after checking the
external server option.
"""
QMessageBox.critical(
self,
_("Error"),
_("The address of the external server you are trying to connect "
"to is the same as the one of the current internal server "
"started by Spyder."
"<br><br>"
"Please provide a different address!")
)
| AdvancedConfigTab |
python | yaml__pyyaml | lib/yaml/loader.py | {
"start": 864,
"end": 1180
} | class ____(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
SafeConstructor.__init__(self)
Resolver.__init__(self)
| SafeLoader |
python | django__django | tests/admin_changelist/models.py | {
"start": 1545,
"end": 1672
} | class ____(models.Model):
name = models.CharField(max_length=30)
group = models.ForeignKey(Group, models.CASCADE)
| Concert |
python | tensorflow__tensorflow | tensorflow/python/training/experimental/mixed_precision_test.py | {
"start": 1579,
"end": 8614
} | class ____(test.TestCase, parameterized.TestCase):
IGNORE_PERF_VAR = 'TF_AUTO_MIXED_PRECISION_GRAPH_REWRITE_IGNORE_PERFORMANCE'
def setUp(self):
super(MixedPrecisionTest, self).setUp()
# Enable the tests to be run on pre-Volta GPUs by telling the grappler pass
# to ignore performance and always transform the graph.
self._original_ignore_perf_value = os.getenv(self.IGNORE_PERF_VAR)
os.environ[self.IGNORE_PERF_VAR] = '1'
def tearDown(self):
    # Set the IGNORE_PERF_VAR variable back to its original value.
if self._original_ignore_perf_value is not None:
os.environ[self.IGNORE_PERF_VAR] = self._original_ignore_perf_value
else:
del os.environ[self.IGNORE_PERF_VAR]
mixed_precision.disable_mixed_precision_graph_rewrite_v1()
super(MixedPrecisionTest, self).tearDown()
@test_util.run_in_graph_and_eager_modes
def test_wrap_optimizer(self):
opt = gradient_descent_v1.GradientDescentOptimizer(1.0)
opt = mixed_precision.enable_mixed_precision_graph_rewrite_v1(opt, 123.)
self.assertIsInstance(
opt, loss_scale_optimizer_v1.MixedPrecisionLossScaleOptimizer)
self.assertEqual(self.evaluate(opt._loss_scale()), 123.)
@test_util.run_in_graph_and_eager_modes
def test_optimizer_errors(self):
opt = 1
expected_regex = ('"opt" must be an instance of a tf.train.Optimizer or '
'a tf.keras.optimizers.Optimizer, but got')
with self.assertRaisesRegex(ValueError, expected_regex):
mixed_precision.enable_mixed_precision_graph_rewrite_v1(opt)
self.assertFalse(config.get_optimizer_experimental_options()
.get('auto_mixed_precision', False))
opt = gradient_descent_v1.GradientDescentOptimizer(1.0)
opt = loss_scale_optimizer_v1.MixedPrecisionLossScaleOptimizer(opt,
'dynamic')
with self.assertRaisesRegex(
ValueError, '"opt" must not already be an instance of a '
'MixedPrecisionLossScaleOptimizer.'):
mixed_precision.enable_mixed_precision_graph_rewrite_v1(opt)
self.assertFalse(config.get_optimizer_experimental_options()
.get('auto_mixed_precision', False))
@test_util.run_in_graph_and_eager_modes()
def test_register_loss_scale_wrapper_with_2_arguments(self):
class MyOptimizer:
pass
class MyLossScaleOptimizer(MyOptimizer):
def __init__(self, inner_optimizer, loss_scale):
self.inner_optimizer = inner_optimizer
self.loss_scale = loss_scale
mixed_precision.register_loss_scale_wrapper(MyOptimizer,
MyLossScaleOptimizer)
opt = MyOptimizer()
opt = mixed_precision.enable_mixed_precision_graph_rewrite_v1(opt, 123.)
self.assertIsInstance(opt, MyLossScaleOptimizer)
self.assertEqual(opt.loss_scale, 123.)
@test_util.run_in_graph_and_eager_modes()
def test_register_loss_scale_wrapper_with_3_arguments(self):
class MyOptimizer:
pass
class MyLossScaleOptimizer(MyOptimizer):
def __init__(self, inner_optimizer, loss_scale):
self.inner_optimizer = inner_optimizer
self.loss_scale = loss_scale
is_called = False
def create_lso(inner_optimizer, loss_scale):
nonlocal is_called
is_called = True
return MyLossScaleOptimizer(inner_optimizer, loss_scale)
mixed_precision.register_loss_scale_wrapper(MyOptimizer,
create_lso,
MyLossScaleOptimizer)
opt = MyOptimizer()
opt = mixed_precision.enable_mixed_precision_graph_rewrite_v1(opt, 123.)
self.assertIsInstance(opt, MyLossScaleOptimizer)
self.assertEqual(opt.loss_scale, 123.)
self.assertTrue(is_called)
@test_util.run_gpu_only
@test_util.run_in_graph_and_eager_modes
@test_util.disable_tfrt('Grappler rewrite doesn\'t apply to tfrt.')
def test_grappler_pass_enabled(self):
opt = gradient_descent_v1.GradientDescentOptimizer(1.0)
mixed_precision.enable_mixed_precision_graph_rewrite_v1(opt, 123.)
var = variables.Variable([[1.0]])
def overflow_in_float16():
out = var * 2 ** 10
out = math_ops.matmul(out, out)
return array_ops.reshape(out, ())
if context.executing_eagerly():
f = def_function.function(overflow_in_float16)
self.assertEqual(f().numpy(), float('Inf'))
# Outside a def_function.function, the grappler pass will not be applied.
self.assertAlmostEqual(overflow_in_float16().numpy(), 2 ** 20)
# Test disabling mixed precision.
mixed_precision.disable_mixed_precision_graph_rewrite_v1()
self.assertEqual(f().numpy(), 2 ** 20)
else:
with session.Session() as sess:
out = overflow_in_float16()
sess.run(var.initializer)
self.assertEqual(sess.run(out), float('Inf'))
# Test Session will enable the auto_mixed_precision grappler pass in a
# ConfigProto passed by the user
with session.Session(config=config_pb2.ConfigProto()) as sess:
out = overflow_in_float16()
sess.run(var.initializer)
self.assertEqual(sess.run(out), float('Inf'))
# Test disabling mixed precision.
mixed_precision.disable_mixed_precision_graph_rewrite_v1()
with session.Session() as sess:
out = overflow_in_float16()
sess.run(var.initializer)
self.assertAlmostEqual(sess.run(out), 2 ** 20)
@test.mock.patch.object(tf_logging, 'warn')
def test_warn_if_session_already_exists(self, mock_warn):
# Set this to False, so Sessions created in previous tests do not trigger
# the warning.
mixed_precision_global_state.set_non_mixed_precision_session_created(False)
with session.Session():
mixed_precision.enable_mixed_precision_graph_rewrite_v1(
gradient_descent_v1.GradientDescentOptimizer(1.0))
mock_warn.assert_any_call(
'You already have existing Sessions that do not use mixed precision. '
'enable_mixed_precision_graph_rewrite() will not affect these '
'Sessions.')
@test.mock.patch.object(tf_logging, 'warn')
def test_do_not_warn_if_session_does_not_already_exist(self, mock_warn):
# Set this to False, so Sessions created in previous tests do not trigger
# the warning.
mixed_precision_global_state.set_non_mixed_precision_session_created(False)
mixed_precision.enable_mixed_precision_graph_rewrite_v1(
gradient_descent_v1.GradientDescentOptimizer(1.0))
with session.Session():
# Make sure the "You already have existing Sessions" warning was not
# issued, since the Session was only created after
# enable_mixed_precision_graph_rewrite.
for call_arg in mock_warn.call_args_list:
msg = call_arg[0][0]
self.assertNotIn('You already have existing Sessions that do not use '
'mixed precision', msg)
if __name__ == '__main__':
test.main()
| MixedPrecisionTest |
python | protocolbuffers__protobuf | python/google/protobuf/json_format.py | {
"start": 1863,
"end": 1931
} | class ____(Error):
"""Thrown in case of parsing error."""
| ParseError |
python | automl__auto-sklearn | autosklearn/estimators.py | {
"start": 58229,
"end": 63301
} | class ____(AutoSklearnEstimator, ClassifierMixin):
"""This class implements the classification task."""
def fit(self, X, y, X_test=None, y_test=None, feat_type=None, dataset_name=None):
"""Fit *auto-sklearn* to given training set (X, y).
Fit both optimizes the machine learning models and builds an ensemble
out of them.
Parameters
----------
X : array-like or sparse matrix of shape = [n_samples, n_features]
The training input samples.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target classes.
X_test : array-like or sparse matrix of shape = [n_samples, n_features]
Test data input samples. Will be used to save test predictions for
            all models. This allows evaluating the performance of Auto-sklearn
over time.
y_test : array-like, shape = [n_samples] or [n_samples, n_outputs]
Test data target classes. Will be used to calculate the test error
            of all models. This allows evaluating the performance of
Auto-sklearn over time.
feat_type : list, optional (default=None)
List of str of `len(X.shape[1])` describing the attribute type.
Possible types are `Categorical` and `Numerical`. `Categorical`
attributes will be automatically One-Hot encoded. The values
used for a categorical attribute must be integers, obtained for
example by `sklearn.preprocessing.LabelEncoder
<https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html>`_.
dataset_name : str, optional (default=None)
Create nicer output. If None, a string will be determined by the
md5 hash of the dataset.
Returns
-------
self
"""
# AutoSklearn does not handle sparse y for now
y = convert_if_sparse(y)
# Before running anything else, first check that the
# type of data is compatible with auto-sklearn. Legal target
# types are: binary, multiclass, multilabel-indicator.
target_type = type_of_target(y)
supported_types = ["binary", "multiclass", "multilabel-indicator"]
if target_type not in supported_types:
raise ValueError(
"Classification with data of type {} is "
"not supported. Supported types are {}. "
"You can find more information about scikit-learn "
"data types in: "
"https://scikit-learn.org/stable/modules/multiclass.html"
"".format(target_type, supported_types)
)
# remember target type for using in predict_proba later.
self.target_type = target_type
super().fit(
X=X,
y=y,
X_test=X_test,
y_test=y_test,
feat_type=feat_type,
dataset_name=dataset_name,
)
# After fit, a classifier is expected to define classes_
# A list of class labels known to the classifier, mapping each label
# to a numerical index used in the model representation our output.
self.classes_ = self.automl_.InputValidator.target_validator.classes_
return self
def predict(self, X, batch_size=None, n_jobs=1):
"""Predict classes for X.
Parameters
----------
X : array-like or sparse matrix of shape = [n_samples, n_features]
Returns
-------
y : array of shape = [n_samples] or [n_samples, n_labels]
The predicted classes.
"""
return super().predict(X, batch_size=batch_size, n_jobs=n_jobs)
def predict_proba(self, X, batch_size=None, n_jobs=1):
"""Predict probabilities of classes for all samples X.
Parameters
----------
X : array-like or sparse matrix of shape = [n_samples, n_features]
batch_size : int (optional)
Number of data points to predict for (predicts all points at once
if ``None``.
n_jobs : int
Returns
-------
y : array of shape = [n_samples, n_classes] or [n_samples, n_labels]
The predicted class probabilities.
"""
pred_proba = super().predict_proba(X, batch_size=batch_size, n_jobs=n_jobs)
# Check if all probabilities sum up to 1.
# Assert only if target type is not multilabel-indicator.
if self.target_type not in ["multilabel-indicator"]:
assert np.allclose(
np.sum(pred_proba, axis=1), np.ones_like(pred_proba[:, 0])
), "prediction probability does not sum up to 1!"
# Check that all probability values lie between 0 and 1.
assert (pred_proba >= 0).all() and (
pred_proba <= 1
).all(), "found prediction probability value outside of [0, 1]!"
return pred_proba
def _get_automl_class(self):
return AutoMLClassifier
| AutoSklearnClassifier |
python | pypa__warehouse | tests/unit/manage/views/test_teams.py | {
"start": 23812,
"end": 30645
} | class ____:
@pytest.fixture
def organization(self, _enable_organizations, pyramid_user):
organization = OrganizationFactory.create()
OrganizationRoleFactory.create(
organization=organization,
user=pyramid_user,
role_name=OrganizationRoleType.Owner,
)
return organization
@pytest.fixture
def organization_project(self, organization):
project = ProjectFactory.create(organization=organization)
OrganizationProjectFactory(organization=organization, project=project)
return project
@pytest.fixture
def organization_member(self, organization):
member = UserFactory.create()
OrganizationRoleFactory.create(
organization=organization,
user=member,
role_name=OrganizationRoleType.Member,
)
return member
@pytest.fixture
def organization_team(self, organization, organization_member):
team = TeamFactory(organization=organization)
TeamRoleFactory.create(team=team, user=organization_member)
return team
def test_change_role(
self,
db_request,
pyramid_user,
organization_member,
organization_team,
organization_project,
monkeypatch,
):
role = TeamProjectRoleFactory.create(
team=organization_team,
project=organization_project,
role_name=TeamProjectRoleType.Owner,
)
new_role_name = TeamProjectRoleType.Maintainer
db_request.method = "POST"
db_request.POST = MultiDict(
{"role_id": role.id, "team_project_role_name": new_role_name}
)
db_request.session = pretend.stub(
flash=pretend.call_recorder(lambda *a, **kw: None)
)
db_request.route_path = pretend.call_recorder(lambda *a, **kw: "/the-redirect")
send_team_collaborator_role_changed_email = pretend.call_recorder(
lambda *a, **kw: None
)
monkeypatch.setattr(
team_views,
"send_team_collaborator_role_changed_email",
send_team_collaborator_role_changed_email,
)
send_role_changed_as_team_collaborator_email = pretend.call_recorder(
lambda *a, **kw: None
)
monkeypatch.setattr(
team_views,
"send_role_changed_as_team_collaborator_email",
send_role_changed_as_team_collaborator_email,
)
result = team_views.change_team_project_role(organization_project, db_request)
assert role.role_name == new_role_name
assert db_request.route_path.calls == [
pretend.call("manage.project.roles", project_name=organization_project.name)
]
assert send_team_collaborator_role_changed_email.calls == [
pretend.call(
db_request,
{pyramid_user},
team=organization_team,
submitter=pyramid_user,
project_name=organization_project.name,
role=new_role_name.value,
)
]
assert send_role_changed_as_team_collaborator_email.calls == [
pretend.call(
db_request,
{organization_member},
team=organization_team,
submitter=pyramid_user,
project_name=organization_project.name,
role=new_role_name.value,
)
]
assert db_request.session.flash.calls == [
pretend.call("Changed permissions", queue="success")
]
assert isinstance(result, HTTPSeeOther)
assert result.headers["Location"] == "/the-redirect"
entry = (
db_request.db.query(JournalEntry)
.options(joinedload(JournalEntry.submitted_by))
.one()
)
assert entry.name == organization_project.name
assert entry.action == f"change Owner {organization_team.name} to Maintainer"
assert entry.submitted_by == db_request.user
def test_change_role_invalid_role_name(self, pyramid_request, organization_project):
pyramid_request.method = "POST"
pyramid_request.POST = MultiDict(
{
"role_id": str(uuid.uuid4()),
"team_project_role_name": "Invalid Role Name",
}
)
pyramid_request.route_path = pretend.call_recorder(
lambda *a, **kw: "/the-redirect"
)
result = team_views.change_team_project_role(
organization_project, pyramid_request
)
assert pyramid_request.route_path.calls == [
pretend.call("manage.project.roles", project_name=organization_project.name)
]
assert isinstance(result, HTTPSeeOther)
assert result.headers["Location"] == "/the-redirect"
def test_change_missing_role(self, db_request, organization_project):
missing_role_id = str(uuid.uuid4())
db_request.method = "POST"
db_request.POST = MultiDict(
{"role_id": missing_role_id, "team_project_role_name": "Owner"}
)
db_request.session = pretend.stub(
flash=pretend.call_recorder(lambda *a, **kw: None)
)
db_request.route_path = pretend.call_recorder(lambda *a, **kw: "/the-redirect")
result = team_views.change_team_project_role(organization_project, db_request)
assert db_request.session.flash.calls == [
pretend.call("Could not find permissions", queue="error")
]
assert isinstance(result, HTTPSeeOther)
assert result.headers["Location"] == "/the-redirect"
def test_change_own_owner_role(
self,
db_request,
organization_member,
organization_team,
organization_project,
):
role = TeamProjectRoleFactory.create(
team=organization_team,
project=organization_project,
role_name=TeamProjectRoleType.Owner,
)
db_request.method = "POST"
db_request.user = organization_member
db_request.POST = MultiDict(
{"role_id": role.id, "team_project_role_name": "Maintainer"}
)
db_request.session = pretend.stub(
flash=pretend.call_recorder(lambda *a, **kw: None)
)
db_request.route_path = pretend.call_recorder(lambda *a, **kw: "/the-redirect")
result = team_views.change_team_project_role(organization_project, db_request)
assert db_request.session.flash.calls == [
pretend.call("Cannot remove your own team as Owner", queue="error")
]
assert isinstance(result, HTTPSeeOther)
assert result.headers["Location"] == "/the-redirect"
| TestChangeTeamProjectRole |
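The test above leans heavily on `pretend.call_recorder` / `pretend.call` to assert exactly how collaborators were invoked. A minimal stdlib re-implementation of that recording pattern (a sketch of the idea, not the real `pretend` library) looks like this:

```python
# Minimal sketch of the call-recording pattern used throughout the test:
# wrap a function so every invocation is stored and can be compared later.
class Call:
    """Value object holding one recorded invocation."""
    def __init__(self, *args, **kwargs):
        self.args, self.kwargs = args, kwargs

    def __eq__(self, other):
        return (self.args, self.kwargs) == (other.args, other.kwargs)

    def __repr__(self):
        return f"Call(args={self.args!r}, kwargs={self.kwargs!r})"

def call_recorder(func):
    """Return a wrapper that appends a Call for each invocation."""
    def wrapper(*args, **kwargs):
        wrapper.calls.append(Call(*args, **kwargs))
        return func(*args, **kwargs)
    wrapper.calls = []
    return wrapper

# Usage mirroring the flash stub in the test above:
flash = call_recorder(lambda *a, **kw: None)
flash("Changed permissions", queue="success")
print(flash.calls)
```

The real `pretend` library provides the same shape of API (`calls` list compared against `pretend.call(...)` values); this sketch only shows why equality-comparable call records make the assertions in the test read declaratively.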
python | facelessuser__pymdown-extensions | pymdownx/magiclink.py | {
"start": 34224,
"end": 35225
} | class ____(_MagiclinkShorthandPattern):
"""Convert @user/repo to links."""
ANCESTOR_EXCLUDES = ('a',)
def handleMatch(self, m, data):
"""Handle email link patterns."""
text = m.group('mention')[1:]
parts = text.split(':')
if len(parts) > 1:
provider = parts[0]
user = parts[1]
else:
provider = self.provider
user = parts[0]
repo = m.group('mention_repo')
el = etree.Element("a")
el.set('href', '{}/{}/{}'.format(self.provider_info[provider]['url'], user, repo))
el.set(
'title',
"{} {}: {}/{}".format(
self.provider_info[provider]['provider'], self.labels.get('repository', 'Repository'), user, repo
)
)
el.set('class', f'magiclink magiclink-{provider} magiclink-repository')
el.text = md_util.AtomicString(f'{user}/{repo}')
return el, m.start(0), m.end(0)
| MagiclinkRepositoryPattern |
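The `handleMatch` logic above splits an optional `provider:` prefix off the mention before building the link. A standalone sketch of that parsing (the regex here is a simplified stand-in, not the extension's actual `mention` pattern):

```python
import re

# Simplified stand-in for the extension's mention pattern (an assumption):
# matches "@user/repo" or "@provider:user/repo".
MENTION_RE = re.compile(r"@(?P<mention>[\w:-]+)/(?P<mention_repo>[\w.-]+)")

def parse_mention(text, default_provider="github"):
    """Return (provider, user, repo) for the first mention in text, or None."""
    m = MENTION_RE.search(text)
    if not m:
        return None
    # Same split as handleMatch: an explicit "provider:" prefix wins,
    # otherwise fall back to the configured default provider.
    parts = m.group("mention").split(":")
    if len(parts) > 1:
        provider, user = parts[0], parts[1]
    else:
        provider, user = default_provider, parts[0]
    return provider, user, m.group("mention_repo")

print(parse_mention("@gitlab:alice/widgets"))  # ('gitlab', 'alice', 'widgets')
print(parse_mention("@bob/tools"))             # ('github', 'bob', 'tools')
```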
python | pytorch__pytorch | benchmarks/dynamo/pr_time_benchmarks/benchmarks/aotdispatcher_partitioner2.py | {
"start": 69,
"end": 1384
} | class ____(BenchmarkBase):
def __init__(self):
super().__init__(
category="aotdispatcher_partitioner",
backend="aot_eager_decomp_partition",
device="cpu",
)
def name(self):
return f"{self.category()}_{self.device()}2"
def description(self):
return """
Partitioner benchmark with many parallel use chains.
See https://github.com/pytorch/pytorch/issues/145081"""
def _prepare_once(self):
self.x = torch.randn(4, 4, requires_grad=True)
def _prepare(self):
torch._dynamo.reset()
def _work(self):
@torch.compile(backend=self.backend(), fullgraph=True)
def f(x):
tmps = [x + i for i in range(16)]
tmps = [x + tmp for tmp in tmps]
for i in range(len(tmps) - 4):
tmps[i] = tmps[i].sin().mul(tmps[i])
tmps[i + 1] -= tmps[i]
tmps[i + 2] -= tmps[i]
tmps[i + 3] -= tmps[i]
return sum(tmps)
f(self.x)
def main():
result_path = sys.argv[1]
all = [
Benchmark(),
]
for benchmark in all:
benchmark.enable_compile_time_instruction_count().collect_all().append_results(
result_path
)
if __name__ == "__main__":
main()
| Benchmark |
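The stress case in `_work` above is the shape of the graph, not the arithmetic: each `tmps[i]` feeds three later chain elements, producing many overlapping use chains for the partitioner. A scalar re-implementation using plain floats (torch and the compiler deliberately stripped out) makes that dependency structure easy to inspect:

```python
import math

# Scalar version of the compiled function `f` from the benchmark above,
# with floats standing in for 4x4 tensors. This only illustrates the
# many-parallel-use-chain structure; the real benchmark runs the tensor
# version under torch.compile with the aot_eager_decomp_partition backend.
def f(x):
    tmps = [x + i for i in range(16)]
    tmps = [x + tmp for tmp in tmps]
    for i in range(len(tmps) - 4):
        # tmps[i] is recomputed, then consumed by the next three slots,
        # so every iteration fans out into three downstream chains.
        tmps[i] = math.sin(tmps[i]) * tmps[i]
        tmps[i + 1] -= tmps[i]
        tmps[i + 2] -= tmps[i]
        tmps[i + 3] -= tmps[i]
    return sum(tmps)

print(f(0.5))
```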
python | readthedocs__readthedocs.org | readthedocs/search/views.py | {
"start": 1260,
"end": 5734
} | class ____(TemplateView):
"""
Global search enabled for logged out users and anyone using the dashboard.
Query params:
- q: search term
- type: type of document to search (project or file)
- language: project language to filter by (only valid if type is project)
"""
http_method_names = ["get"]
max_search_results = 50
available_facets = ["language"]
template_name = "search/elastic_search.html"
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
user_input = UserInput(
query=self.request.GET.get("q"),
type=self.request.GET.get("type", "file"),
language=self.request.GET.get("language"),
)
if user_input.type == "file":
context.update(self._searh_files())
else:
context.update(self._search_projects(user_input, self.request))
return context
def _searh_files(self):
results, facets = [], {}
search_query = ""
total_count = 0
query = self.request.GET.get("q")
if query:
search_executor = SearchExecutor(
request=self.request,
query=query,
arguments_required=False,
default_all=not settings.ALLOW_PRIVATE_REPOS,
)
search_query = search_executor.parser.query
use_advanced_query = should_use_advanced_query(search_executor.projects)
search = search_executor.search(use_advanced_query=use_advanced_query)
if search:
results = search[: self.max_search_results].execute()
facets = results.facets
total_count = results.hits.total["value"]
results = PageSearchSerializer(
results,
projects=search_executor.projects,
many=True,
).data
return {
"query": query,
"search_query": search_query,
"results": results,
"facets": facets,
"total_count": total_count,
"type": "file",
}
def _search_projects(self, user_input, request):
total_count = 0
projects = []
# If we allow private projects,
# we only search on projects the user belongs or have access to.
if settings.ALLOW_PRIVATE_REPOS:
projects = list(Project.objects.for_user(request.user).values_list("slug", flat=True))
# Make sure we always have projects to filter by if we allow private projects.
if settings.ALLOW_PRIVATE_REPOS and not projects:
results, facets = [], {}
else:
results, facets = self._search(
user_input=user_input,
projects=projects,
use_advanced_query=True,
)
if results:
total_count = results.hits.total["value"]
results = ProjectSearchSerializer(results, many=True).data
context = user_input._asdict()
context.update(
{
"search_query": user_input.query,
"results": results,
"total_count": total_count,
"facets": facets,
}
)
return context
def _search(self, *, user_input, projects, use_advanced_query):
"""Return search results and facets given a `user_input` and `projects` to filter by."""
if not user_input.query:
return [], {}
filters = {}
for avail_facet in self.available_facets:
value = getattr(user_input, avail_facet, None)
if value:
filters[avail_facet] = value
search = ProjectSearch(
query=user_input.query,
filters=filters,
projects=projects,
use_advanced_query=use_advanced_query,
)
# pep8 and blank don't agree on having a space before :.
results = search[: self.max_search_results].execute() # noqa
facets = results.facets
# Make sure the selected facets are displayed,
# even when they return 0 results.
for facet in facets:
value = getattr(user_input, facet, None)
if value and value not in (name for name, *_ in facets[facet]):
facets[facet].insert(0, (value, 0, True))
return results, facets
| GlobalSearchView |
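The loop at the end of `_search` above back-fills selected facet values that returned zero hits, so the UI can still render them as active filters. Extracted as a standalone helper (a sketch with hypothetical names, using the same `(name, count, selected)` tuple shape as the view):

```python
# Sketch of the facet back-fill at the end of _search: if the user
# filtered on a facet value that returned no results, insert it at the
# front with a zero count and selected=True so it still displays.
def backfill_facets(facets, selected):
    for facet, value in selected.items():
        if value and value not in (name for name, *_ in facets.get(facet, [])):
            facets.setdefault(facet, []).insert(0, (value, 0, True))
    return facets

facets = {"language": [("en", 12, False)]}
print(backfill_facets(facets, {"language": "es"}))
# {'language': [('es', 0, True), ('en', 12, False)]}
```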
python | plotly__plotly.py | plotly/graph_objs/scatterpolar/_textfont.py | {
"start": 233,
"end": 17129
} | class ____(_BaseTraceHierarchyType):
_parent_path_str = "scatterpolar"
_path_str = "scatterpolar.textfont"
_valid_props = {
"color",
"colorsrc",
"family",
"familysrc",
"lineposition",
"linepositionsrc",
"shadow",
"shadowsrc",
"size",
"sizesrc",
"style",
"stylesrc",
"textcase",
"textcasesrc",
"variant",
"variantsrc",
"weight",
"weightsrc",
}
@property
def color(self):
"""
The 'color' property is a color and may be specified as:
- A hex string (e.g. '#ff0000')
- An rgb/rgba string (e.g. 'rgb(255,0,0)')
- An hsl/hsla string (e.g. 'hsl(0,100%,50%)')
- An hsv/hsva string (e.g. 'hsv(0,100%,100%)')
- A named CSS color: see https://plotly.com/python/css-colors/ for a list
- A list or array of any of the above
Returns
-------
str|numpy.ndarray
"""
return self["color"]
@color.setter
def color(self, val):
self["color"] = val
@property
def colorsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `color`.
The 'colorsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["colorsrc"]
@colorsrc.setter
def colorsrc(self, val):
self["colorsrc"] = val
@property
def family(self):
"""
HTML font family - the typeface that will be applied by the web
browser. The web browser can only apply a font if it is
available on the system where it runs. Provide multiple font
families, separated by commas, to indicate the order in which
to apply fonts if they aren't available.
The 'family' property is a string and must be specified as:
- A non-empty string
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
str|numpy.ndarray
"""
return self["family"]
@family.setter
def family(self, val):
self["family"] = val
@property
def familysrc(self):
"""
Sets the source reference on Chart Studio Cloud for `family`.
The 'familysrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["familysrc"]
@familysrc.setter
def familysrc(self, val):
self["familysrc"] = val
@property
def lineposition(self):
"""
Sets the kind of decoration line(s) with text, such as an
"under", "over" or "through" as well as combinations e.g.
"under+over", etc.
The 'lineposition' property is a flaglist and may be specified
as a string containing:
- Any combination of ['under', 'over', 'through'] joined with '+' characters
(e.g. 'under+over')
OR exactly one of ['none'] (e.g. 'none')
- A list or array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["lineposition"]
@lineposition.setter
def lineposition(self, val):
self["lineposition"] = val
@property
def linepositionsrc(self):
"""
Sets the source reference on Chart Studio Cloud for
`lineposition`.
The 'linepositionsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["linepositionsrc"]
@linepositionsrc.setter
def linepositionsrc(self, val):
self["linepositionsrc"] = val
@property
def shadow(self):
"""
Sets the shape and color of the shadow behind text. "auto"
places minimal shadow and applies contrast text font color. See
https://developer.mozilla.org/en-US/docs/Web/CSS/text-shadow
for additional options.
The 'shadow' property is a string and must be specified as:
- A string
- A number that will be converted to a string
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
str|numpy.ndarray
"""
return self["shadow"]
@shadow.setter
def shadow(self, val):
self["shadow"] = val
@property
def shadowsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `shadow`.
The 'shadowsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["shadowsrc"]
@shadowsrc.setter
def shadowsrc(self, val):
self["shadowsrc"] = val
@property
def size(self):
"""
The 'size' property is a number and may be specified as:
- An int or float in the interval [1, inf]
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
int|float|numpy.ndarray
"""
return self["size"]
@size.setter
def size(self, val):
self["size"] = val
@property
def sizesrc(self):
"""
Sets the source reference on Chart Studio Cloud for `size`.
The 'sizesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["sizesrc"]
@sizesrc.setter
def sizesrc(self, val):
self["sizesrc"] = val
@property
def style(self):
"""
Sets whether a font should be styled with a normal or italic
face from its family.
The 'style' property is an enumeration that may be specified as:
- One of the following enumeration values:
['normal', 'italic']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["style"]
@style.setter
def style(self, val):
self["style"] = val
@property
def stylesrc(self):
"""
Sets the source reference on Chart Studio Cloud for `style`.
The 'stylesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["stylesrc"]
@stylesrc.setter
def stylesrc(self, val):
self["stylesrc"] = val
@property
def textcase(self):
"""
Sets capitalization of text. It can be used to make text appear
in all-uppercase or all-lowercase, or with each word
capitalized.
The 'textcase' property is an enumeration that may be specified as:
- One of the following enumeration values:
['normal', 'word caps', 'upper', 'lower']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["textcase"]
@textcase.setter
def textcase(self, val):
self["textcase"] = val
@property
def textcasesrc(self):
"""
Sets the source reference on Chart Studio Cloud for `textcase`.
The 'textcasesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["textcasesrc"]
@textcasesrc.setter
def textcasesrc(self, val):
self["textcasesrc"] = val
@property
def variant(self):
"""
Sets the variant of the font.
The 'variant' property is an enumeration that may be specified as:
- One of the following enumeration values:
['normal', 'small-caps', 'all-small-caps',
'all-petite-caps', 'petite-caps', 'unicase']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["variant"]
@variant.setter
def variant(self, val):
self["variant"] = val
@property
def variantsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `variant`.
The 'variantsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["variantsrc"]
@variantsrc.setter
def variantsrc(self, val):
self["variantsrc"] = val
@property
def weight(self):
"""
Sets the weight (or boldness) of the font.
The 'weight' property is a integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [1, 1000]
OR exactly one of ['normal', 'bold'] (e.g. 'bold')
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
int|numpy.ndarray
"""
return self["weight"]
@weight.setter
def weight(self, val):
self["weight"] = val
@property
def weightsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `weight`.
The 'weightsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["weightsrc"]
@weightsrc.setter
def weightsrc(self, val):
self["weightsrc"] = val
@property
def _prop_descriptions(self):
return """\
color
colorsrc
Sets the source reference on Chart Studio Cloud for
`color`.
family
HTML font family - the typeface that will be applied by
the web browser. The web browser can only apply a font
if it is available on the system where it runs. Provide
multiple font families, separated by commas, to
indicate the order in which to apply fonts if they
aren't available.
familysrc
Sets the source reference on Chart Studio Cloud for
`family`.
lineposition
Sets the kind of decoration line(s) with text, such as
an "under", "over" or "through" as well as combinations
e.g. "under+over", etc.
linepositionsrc
Sets the source reference on Chart Studio Cloud for
`lineposition`.
shadow
Sets the shape and color of the shadow behind text.
"auto" places minimal shadow and applies contrast text
font color. See https://developer.mozilla.org/en-
US/docs/Web/CSS/text-shadow for additional options.
shadowsrc
Sets the source reference on Chart Studio Cloud for
`shadow`.
size
sizesrc
Sets the source reference on Chart Studio Cloud for
`size`.
style
Sets whether a font should be styled with a normal or
italic face from its family.
stylesrc
Sets the source reference on Chart Studio Cloud for
`style`.
textcase
Sets capitalization of text. It can be used to make
text appear in all-uppercase or all-lowercase, or with
each word capitalized.
textcasesrc
Sets the source reference on Chart Studio Cloud for
`textcase`.
variant
Sets the variant of the font.
variantsrc
Sets the source reference on Chart Studio Cloud for
`variant`.
weight
Sets the weight (or boldness) of the font.
weightsrc
Sets the source reference on Chart Studio Cloud for
`weight`.
"""
def __init__(
self,
arg=None,
color=None,
colorsrc=None,
family=None,
familysrc=None,
lineposition=None,
linepositionsrc=None,
shadow=None,
shadowsrc=None,
size=None,
sizesrc=None,
style=None,
stylesrc=None,
textcase=None,
textcasesrc=None,
variant=None,
variantsrc=None,
weight=None,
weightsrc=None,
**kwargs,
):
"""
Construct a new Textfont object
Sets the text font.
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.scatterpolar.Textfont`
color
colorsrc
Sets the source reference on Chart Studio Cloud for
`color`.
family
HTML font family - the typeface that will be applied by
the web browser. The web browser can only apply a font
if it is available on the system where it runs. Provide
multiple font families, separated by commas, to
indicate the order in which to apply fonts if they
aren't available.
familysrc
Sets the source reference on Chart Studio Cloud for
`family`.
lineposition
Sets the kind of decoration line(s) with text, such as
an "under", "over" or "through" as well as combinations
e.g. "under+over", etc.
linepositionsrc
Sets the source reference on Chart Studio Cloud for
`lineposition`.
shadow
Sets the shape and color of the shadow behind text.
"auto" places minimal shadow and applies contrast text
font color. See https://developer.mozilla.org/en-
US/docs/Web/CSS/text-shadow for additional options.
shadowsrc
Sets the source reference on Chart Studio Cloud for
`shadow`.
size
sizesrc
Sets the source reference on Chart Studio Cloud for
`size`.
style
Sets whether a font should be styled with a normal or
italic face from its family.
stylesrc
Sets the source reference on Chart Studio Cloud for
`style`.
textcase
Sets capitalization of text. It can be used to make
text appear in all-uppercase or all-lowercase, or with
each word capitalized.
textcasesrc
Sets the source reference on Chart Studio Cloud for
`textcase`.
variant
Sets the variant of the font.
variantsrc
Sets the source reference on Chart Studio Cloud for
`variant`.
weight
Sets the weight (or boldness) of the font.
weightsrc
Sets the source reference on Chart Studio Cloud for
`weight`.
Returns
-------
Textfont
"""
super().__init__("textfont")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError("""\
The first argument to the plotly.graph_objs.scatterpolar.Textfont
constructor must be a dict or
an instance of :class:`plotly.graph_objs.scatterpolar.Textfont`""")
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
self._set_property("color", arg, color)
self._set_property("colorsrc", arg, colorsrc)
self._set_property("family", arg, family)
self._set_property("familysrc", arg, familysrc)
self._set_property("lineposition", arg, lineposition)
self._set_property("linepositionsrc", arg, linepositionsrc)
self._set_property("shadow", arg, shadow)
self._set_property("shadowsrc", arg, shadowsrc)
self._set_property("size", arg, size)
self._set_property("sizesrc", arg, sizesrc)
self._set_property("style", arg, style)
self._set_property("stylesrc", arg, stylesrc)
self._set_property("textcase", arg, textcase)
self._set_property("textcasesrc", arg, textcasesrc)
self._set_property("variant", arg, variant)
self._set_property("variantsrc", arg, variantsrc)
self._set_property("weight", arg, weight)
self._set_property("weightsrc", arg, weightsrc)
self._process_kwargs(**dict(arg, **kwargs))
self._skip_invalid = False
| Textfont |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.