| language | repo | path | class_span | source | target |
|---|---|---|---|---|---|
python | django__django | tests/template_tests/filter_tests/test_truncatewords.py | {
"start": 1032,
"end": 1840
} | class ____(SimpleTestCase):
def test_truncate(self):
self.assertEqual(truncatewords("A sentence with a few words in it", 1), "A …")
def test_truncate2(self):
self.assertEqual(
truncatewords("A sentence with a few words in it", 5),
"A sentence with a few …",
)
def test_overtruncate(self):
self.assertEqual(
truncatewords("A sentence with a few words in it", 100),
"A sentence with a few words in it",
)
def test_invalid_number(self):
self.assertEqual(
truncatewords("A sentence with a few words in it", "not a number"),
"A sentence with a few words in it",
)
def test_non_string_input(self):
self.assertEqual(truncatewords(123, 2), "123")
| FunctionTests |
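The tests in the row above pin down the contract of Django's `truncatewords` filter: truncate after N words with an ellipsis, pass the value through unchanged on a non-numeric limit, and coerce non-string input. A minimal sketch of that contract (not Django's actual implementation, which lives in `django.template.defaultfilters`):

```python
def truncatewords(value, arg):
    """Truncate `value` after `arg` words, appending an ellipsis.

    Mirrors the behavior the tests above exercise: a non-numeric limit
    returns the value unchanged, and non-string input is coerced via str().
    """
    try:
        length = int(arg)
    except (ValueError, TypeError):
        # Invalid limit: return the original value untouched.
        return value
    words = str(value).split()
    if len(words) <= length:
        # Nothing to truncate; still normalize to a string.
        return str(value)
    return " ".join(words[:length]) + " …"
```

Running the sketch against the same inputs reproduces the expected strings, including the over-truncate and non-string cases.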
python | mitmproxy__pdoc | pdoc/__init__.py | {
"start": 10635,
"end": 15772
} | class ____(BaseModel):
@computed_field(description="Docs for field a.")
@property
def a(self) -> int:
...
```
## ...render math formulas?
Run `pdoc --math`, and pdoc will render formulas in your docstrings. See
[`math_demo`](https://pdoc.dev/docs/math/math_demo.html) for details.
## ...render Mermaid diagrams?
Run `pdoc --mermaid`, and pdoc will render mermaid diagrams in your docstrings. See
[`mermaid_demo`](https://pdoc.dev/docs/mermaid/mermaid_demo.html) for details.
## ...add my project's logo?
See [*Customizing pdoc*](#customizing-pdoc).
## ...include Markdown files?
You can include external Markdown files in your documentation by using reStructuredText's
`.. include::` directive. For example, a common pattern is to include your project's README in your top-level `__init__.py` like this:
```python
"""
.. include:: ../README.md
"""
```
You can also include only parts of a file with the
[`start-line`, `end-line`, `start-after`, and `end-before` options](https://docutils.sourceforge.io/docs/ref/rst/directives.html#including-an-external-document-fragment):
```python
"""
.. include:: ../README.md
:start-line: 1
:end-before: Changelog
"""
```
## ...add a title page?
The landing page for your documentation is your project's top-level `<modulename>/__init__.py` file.
Adding a module-level docstring here is a great way to introduce users to your project.
For example, the documentation you are reading right now is sourced from
[`pdoc/__init__.py`](https://github.com/mitmproxy/pdoc/blob/main/pdoc/__init__.py).
You can also include your title page from a [Markdown file](#include-markdown-files).
If you have multiple top-level modules, a custom title page requires modifying the `index.html.jinja2` template.
You can find an example in [#410](https://github.com/mitmproxy/pdoc/issues/410).
## ...edit pdoc's HTML template?
For more advanced customization, we can edit pdoc's
[default HTML template](https://github.com/mitmproxy/pdoc/blob/main/pdoc/templates/default/module.html.jinja2),
which uses the
[Jinja2](https://jinja.palletsprojects.com/) templating language.
Let's assume you want to replace the logo with a custom button. We first find the right location in the template by searching
for "logo", which shows us that the logo is defined in a Jinja2 block named `nav_title`.
We now extend the default template by creating a file titled `module.html.jinja2` in the current directory
with the following contents:
```html+jinja
{% extends "default/module.html.jinja2" %}
{% block nav_title %}
<button>Donate dog food</button>
{% endblock %}
```
We then specify our custom template directory when invoking pdoc...
```shell
pdoc -t . ./demo.py
```
...and the updated documentation – with button – renders! 🎉
See [`examples/`](https://github.com/mitmproxy/pdoc/tree/main/examples/)
for more examples.
## ...pass arguments to the Jinja2 template?
If you need to pass additional data to pdoc's Jinja2 templates,
you can use system environment variables.
For example,
[`examples/custom-template/module.html.jinja2`](https://github.com/mitmproxy/pdoc/blob/main/examples/custom-template/module.html.jinja2)
shows how to include a version number in the rendered HTML.
## ...integrate pdoc into other systems?
pdoc's HTML and CSS are written in a way that the default template can be easily adjusted
to produce standalone HTML fragments that can be embedded in other systems.
This makes it possible to integrate pdoc with almost every CMS or static site generator.
The only limitation is that you need to retain pdoc's directory structure
if you would like to link between modules.
To do so, [create a custom `frame.html.jinja2` template](#edit-pdocs-html-template) which only emits CSS and the main
page contents instead of a full standalone HTML document:
```html+jinja
{% block content %}{% endblock %}
{% filter minify_css %}
{% block style %}
{# The same CSS files as in pdoc's default template, except for layout.css.
You may leave out Bootstrap Reboot, which corrects inconsistencies across browsers
but may conflict with your website's stylesheet. #}
<style>{% include "resources/bootstrap-reboot.min.css" %}</style>
<style>{% include "syntax-highlighting.css" %}</style>
<style>{% include "theme.css" %}</style>
<style>{% include "content.css" %}</style>
{% endblock %}
{% endfilter %}
```
This should be enough to produce HTML files that can be embedded into other pages.
All CSS selectors are prefixed with `.pdoc` so that pdoc's page style does not interfere with the rest of your website.
You can find a full example for mkdocs in [`examples/mkdocs`](https://github.com/mitmproxy/pdoc/tree/main/examples/mkdocs/).
# Docstring Inheritance
pdoc extends the standard use of docstrings in two important ways:
by introducing variable docstrings (see [*How can I document variables?*](#document-variables)),
and by allowing functions and classes to inherit docstrings and type annotations.
This is useful to not unnecessarily repeat information. Consider this example:
```python
| ComputedFoo |
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/dataclass5.py | {
"start": 1122,
"end": 1203
} | class ____:
x: int
foo3 = F(3) < F(3)
reveal_type(foo3, expected_text="bool")
| F |
python | modin-project__modin | modin/core/storage_formats/pandas/parsers.py | {
"start": 29943,
"end": 30699
} | class ____(PandasParser): # pragma: no cover
@staticmethod
@doc(
_doc_parse_func,
parameters="""fname : str, path object, pandas.HDFStore or file-like object
Name of the file, path pandas.HDFStore or file-like object to read.""",
)
def parse(fname, **kwargs):
kwargs["key"] = kwargs.pop("_key", None)
num_splits = kwargs.pop("num_splits", None)
if num_splits is None:
return pandas.read_hdf(fname, **kwargs)
df = pandas.read_hdf(fname, **kwargs)
# Append the length of the index here to build it externally
return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]
@doc(_doc_pandas_parser_class, data_type="FEATHER files")
| PandasHDFParser |
python | langchain-ai__langchain | libs/core/langchain_core/vectorstores/in_memory.py | {
"start": 740,
"end": 15690
} | class ____(VectorStore):
"""In-memory vector store implementation.
Uses a dictionary, and computes cosine similarity for search using numpy.
Setup:
Install `langchain-core`.
```bash
pip install -U langchain-core
```
Key init args — indexing params:
embedding_function: Embeddings
Embedding function to use.
Instantiate:
```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
vector_store = InMemoryVectorStore(OpenAIEmbeddings())
```
Add Documents:
```python
from langchain_core.documents import Document
document_1 = Document(id="1", page_content="foo", metadata={"baz": "bar"})
document_2 = Document(id="2", page_content="thud", metadata={"bar": "baz"})
document_3 = Document(id="3", page_content="i will be deleted :(")
documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents)
```
Inspect documents:
```python
top_n = 10
for index, (id, doc) in enumerate(vector_store.store.items()):
if index < top_n:
# docs have keys 'id', 'vector', 'text', 'metadata'
print(f"{id}: {doc['text']}")
else:
break
```
Delete Documents:
```python
vector_store.delete(ids=["3"])
```
Search:
```python
results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
```
```txt
* thud [{'bar': 'baz'}]
```
Search with filter:
```python
def _filter_function(doc: Document) -> bool:
return doc.metadata.get("bar") == "baz"
results = vector_store.similarity_search(
query="thud", k=1, filter=_filter_function
)
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
```
```txt
* thud [{'bar': 'baz'}]
```
Search with score:
```python
results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
```
```txt
* [SIM=0.832268] foo [{'baz': 'bar'}]
```
Async:
```python
# add documents
# await vector_store.aadd_documents(documents=documents)
# delete documents
# await vector_store.adelete(ids=["3"])
# search
# results = vector_store.asimilarity_search(query="thud", k=1)
# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
```
```txt
* [SIM=0.832268] foo [{'baz': 'bar'}]
```
Use as Retriever:
```python
retriever = vector_store.as_retriever(
search_type="mmr",
search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
)
retriever.invoke("thud")
```
```txt
[Document(id='2', metadata={'bar': 'baz'}, page_content='thud')]
```
"""
def __init__(self, embedding: Embeddings) -> None:
"""Initialize with the given embedding function.
Args:
embedding: embedding function to use.
"""
# TODO: would be nice to change to
# dict[str, Document] at some point (will be a breaking change)
self.store: dict[str, dict[str, Any]] = {}
self.embedding = embedding
@property
@override
def embeddings(self) -> Embeddings:
return self.embedding
@override
def delete(self, ids: Sequence[str] | None = None, **kwargs: Any) -> None:
if ids:
for _id in ids:
self.store.pop(_id, None)
@override
async def adelete(self, ids: Sequence[str] | None = None, **kwargs: Any) -> None:
self.delete(ids)
@override
def add_documents(
self,
documents: list[Document],
ids: list[str] | None = None,
**kwargs: Any,
) -> list[str]:
texts = [doc.page_content for doc in documents]
vectors = self.embedding.embed_documents(texts)
if ids and len(ids) != len(texts):
msg = (
f"ids must be the same length as texts. "
f"Got {len(ids)} ids and {len(texts)} texts."
)
raise ValueError(msg)
id_iterator: Iterator[str | None] = (
iter(ids) if ids else iter(doc.id for doc in documents)
)
ids_ = []
for doc, vector in zip(documents, vectors, strict=False):
doc_id = next(id_iterator)
doc_id_ = doc_id or str(uuid.uuid4())
ids_.append(doc_id_)
self.store[doc_id_] = {
"id": doc_id_,
"vector": vector,
"text": doc.page_content,
"metadata": doc.metadata,
}
return ids_
@override
async def aadd_documents(
self, documents: list[Document], ids: list[str] | None = None, **kwargs: Any
) -> list[str]:
texts = [doc.page_content for doc in documents]
vectors = await self.embedding.aembed_documents(texts)
if ids and len(ids) != len(texts):
msg = (
f"ids must be the same length as texts. "
f"Got {len(ids)} ids and {len(texts)} texts."
)
raise ValueError(msg)
id_iterator: Iterator[str | None] = (
iter(ids) if ids else iter(doc.id for doc in documents)
)
ids_: list[str] = []
for doc, vector in zip(documents, vectors, strict=False):
doc_id = next(id_iterator)
doc_id_ = doc_id or str(uuid.uuid4())
ids_.append(doc_id_)
self.store[doc_id_] = {
"id": doc_id_,
"vector": vector,
"text": doc.page_content,
"metadata": doc.metadata,
}
return ids_
@override
def get_by_ids(self, ids: Sequence[str], /) -> list[Document]:
"""Get documents by their ids.
Args:
ids: The IDs of the documents to get.
Returns:
A list of `Document` objects.
"""
documents = []
for doc_id in ids:
doc = self.store.get(doc_id)
if doc:
documents.append(
Document(
id=doc["id"],
page_content=doc["text"],
metadata=doc["metadata"],
)
)
return documents
@override
async def aget_by_ids(self, ids: Sequence[str], /) -> list[Document]:
"""Async get documents by their ids.
Args:
ids: The IDs of the documents to get.
Returns:
A list of `Document` objects.
"""
return self.get_by_ids(ids)
def _similarity_search_with_score_by_vector(
self,
embedding: list[float],
k: int = 4,
filter: Callable[[Document], bool] | None = None, # noqa: A002
) -> list[tuple[Document, float, list[float]]]:
# get all docs with fixed order in list
docs = list(self.store.values())
if filter is not None:
docs = [
doc
for doc in docs
if filter(
Document(
id=doc["id"], page_content=doc["text"], metadata=doc["metadata"]
)
)
]
if not docs:
return []
similarity = cosine_similarity([embedding], [doc["vector"] for doc in docs])[0]
# get the indices ordered by similarity score
top_k_idx = similarity.argsort()[::-1][:k]
return [
(
Document(
id=doc_dict["id"],
page_content=doc_dict["text"],
metadata=doc_dict["metadata"],
),
float(similarity[idx].item()),
doc_dict["vector"],
)
for idx in top_k_idx
# Assign using walrus operator to avoid multiple lookups
if (doc_dict := docs[idx])
]
def similarity_search_with_score_by_vector(
self,
embedding: list[float],
k: int = 4,
filter: Callable[[Document], bool] | None = None, # noqa: A002
**_kwargs: Any,
) -> list[tuple[Document, float]]:
"""Search for the most similar documents to the given embedding.
Args:
embedding: The embedding to search for.
k: The number of documents to return.
filter: A function to filter the documents.
Returns:
A list of tuples of Document objects and their similarity scores.
"""
return [
(doc, similarity)
for doc, similarity, _ in self._similarity_search_with_score_by_vector(
embedding=embedding, k=k, filter=filter
)
]
@override
def similarity_search_with_score(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> list[tuple[Document, float]]:
embedding = self.embedding.embed_query(query)
return self.similarity_search_with_score_by_vector(
embedding,
k,
**kwargs,
)
@override
async def asimilarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> list[tuple[Document, float]]:
embedding = await self.embedding.aembed_query(query)
return self.similarity_search_with_score_by_vector(
embedding,
k,
**kwargs,
)
@override
def similarity_search_by_vector(
self,
embedding: list[float],
k: int = 4,
**kwargs: Any,
) -> list[Document]:
docs_and_scores = self.similarity_search_with_score_by_vector(
embedding,
k,
**kwargs,
)
return [doc for doc, _ in docs_and_scores]
@override
async def asimilarity_search_by_vector(
self, embedding: list[float], k: int = 4, **kwargs: Any
) -> list[Document]:
return self.similarity_search_by_vector(embedding, k, **kwargs)
@override
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> list[Document]:
return [doc for doc, _ in self.similarity_search_with_score(query, k, **kwargs)]
@override
async def asimilarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> list[Document]:
return [
doc
for doc, _ in await self.asimilarity_search_with_score(query, k, **kwargs)
]
@override
def max_marginal_relevance_search_by_vector(
self,
embedding: list[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
*,
filter: Callable[[Document], bool] | None = None,
**kwargs: Any,
) -> list[Document]:
prefetch_hits = self._similarity_search_with_score_by_vector(
embedding=embedding,
k=fetch_k,
filter=filter,
)
if not _HAS_NUMPY:
msg = (
"numpy must be installed to use max_marginal_relevance_search: "
"`pip install numpy`"
)
raise ImportError(msg)
mmr_chosen_indices = maximal_marginal_relevance(
np.array(embedding, dtype=np.float32),
[vector for _, _, vector in prefetch_hits],
k=k,
lambda_mult=lambda_mult,
)
return [prefetch_hits[idx][0] for idx in mmr_chosen_indices]
@override
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> list[Document]:
embedding_vector = self.embedding.embed_query(query)
return self.max_marginal_relevance_search_by_vector(
embedding_vector,
k,
fetch_k,
lambda_mult=lambda_mult,
**kwargs,
)
@override
async def amax_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> list[Document]:
embedding_vector = await self.embedding.aembed_query(query)
return self.max_marginal_relevance_search_by_vector(
embedding_vector,
k,
fetch_k,
lambda_mult=lambda_mult,
**kwargs,
)
@classmethod
@override
def from_texts(
cls,
texts: list[str],
embedding: Embeddings,
metadatas: list[dict] | None = None,
**kwargs: Any,
) -> InMemoryVectorStore:
store = cls(
embedding=embedding,
)
store.add_texts(texts=texts, metadatas=metadatas, **kwargs)
return store
@classmethod
@override
async def afrom_texts(
cls,
texts: list[str],
embedding: Embeddings,
metadatas: list[dict] | None = None,
**kwargs: Any,
) -> InMemoryVectorStore:
store = cls(
embedding=embedding,
)
await store.aadd_texts(texts=texts, metadatas=metadatas, **kwargs)
return store
@classmethod
def load(
cls, path: str, embedding: Embeddings, **kwargs: Any
) -> InMemoryVectorStore:
"""Load a vector store from a file.
Args:
path: The path to load the vector store from.
embedding: The embedding to use.
**kwargs: Additional arguments to pass to the constructor.
Returns:
A VectorStore object.
"""
path_: Path = Path(path)
with path_.open("r", encoding="utf-8") as f:
store = load(json.load(f))
vectorstore = cls(embedding=embedding, **kwargs)
vectorstore.store = store
return vectorstore
def dump(self, path: str) -> None:
"""Dump the vector store to a file.
Args:
path: The path to dump the vector store to.
"""
path_: Path = Path(path)
path_.parent.mkdir(exist_ok=True, parents=True)
with path_.open("w", encoding="utf-8") as f:
json.dump(dumpd(self.store), f, indent=2)
| InMemoryVectorStore |
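The core retrieval step in the `InMemoryVectorStore` snippet above is a cosine-similarity scan over every stored vector followed by a top-k selection. A dependency-free sketch of just that step, using a hypothetical `store` dict shaped like the one in the snippet (`id -> {"vector", "text", ...}`); it is an illustration, not langchain-core's actual numpy-based code path:

```python
import math


def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_k(query_vec, store, k=4):
    # Score every stored vector against the query, then keep the k best.
    scored = [
        (doc_id, cosine_similarity(query_vec, rec["vector"]))
        for doc_id, rec in store.items()
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

This is O(n) per query, which matches the snippet's design choice: an in-memory store trades indexing structures for simplicity.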
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/dataclassTransform3.py | {
"start": 456,
"end": 776
} | class ____:
def __init__(self, *, init: bool = True, default: Any | None = None) -> None: ...
def model_field(
*, init: bool = True, default: Any | None = None, alias: str | None = None
) -> Any: ...
@__dataclass_transform__(
kw_only_default=True,
field_specifiers=(ModelField, model_field),
)
| ModelField |
python | getsentry__sentry | src/sentry/backup/comparators.py | {
"start": 28075,
"end": 29324
} | class ____(JSONScrubbingComparator):
"""Comparator for fields that are lists of unordered elements, which simply orders them before
doing the comparison."""
def compare(self, on: InstanceID, left: Any, right: Any) -> list[ComparatorFinding]:
findings = []
fields = sorted(self.fields)
for f in fields:
if left["fields"].get(f) is None and right["fields"].get(f) is None:
continue
lv = left["fields"][f] or []
rv = right["fields"][f] or []
if sorted(lv) != sorted(rv):
findings.append(
ComparatorFinding(
kind=self.get_kind(),
on=on,
left_pk=left["pk"],
right_pk=right["pk"],
reason=f"""the left value ({lv}) of the unordered list field `{f}` was not equal to the right value ({rv})""",
)
)
return findings
# Note: we could also use the `uuid` Python uuid module for this, but it is finicky and accepts some
# weird syntactic variations that are not very common and may cause weird failures when they are
# rejected elsewhere.
| UnorderedListComparator |
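The comparator in the row above reduces "unordered list equality" to sorting both sides before comparing. A stripped-down sketch of that idea, with hypothetical plain-dict inputs rather than Sentry's `InstanceID`/`ComparatorFinding` machinery:

```python
def unordered_diff(left, right, fields):
    """Return the fields whose list values differ once element order is ignored."""
    findings = []
    for f in sorted(fields):
        # Treat a missing or None field as an empty list, as the comparator does.
        lv = left.get(f) or []
        rv = right.get(f) or []
        if sorted(lv) != sorted(rv):
            findings.append(f)
    return findings
```

Sorting both sides is the simplest order-insensitive comparison, but note it requires the elements to be mutually comparable; a multiset (`collections.Counter`) comparison would lift that restriction.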
python | langchain-ai__langchain | libs/core/langchain_core/stores.py | {
"start": 7862,
"end": 8458
} | class ____(InMemoryBaseStore[Any]):
"""In-memory store for any type of data.
Attributes:
store: The underlying dictionary that stores the key-value pairs.
Examples:
```python
from langchain.storage import InMemoryStore
store = InMemoryStore()
store.mset([("key1", "value1"), ("key2", "value2")])
store.mget(["key1", "key2"])
# ['value1', 'value2']
store.mdelete(["key1"])
list(store.yield_keys())
# ['key2']
list(store.yield_keys(prefix="k"))
# ['key2']
```
"""
| InMemoryStore |
python | pandas-dev__pandas | asv_bench/benchmarks/timeseries.py | {
"start": 5388,
"end": 6986
} | class ____:
params = ["DataFrame", "Series"]
param_names = ["constructor"]
def setup(self, constructor):
N = 10000
M = 10
rng = date_range(start="1/1/1990", periods=N, freq="53s")
data = {
"DataFrame": DataFrame(np.random.randn(N, M)),
"Series": Series(np.random.randn(N)),
}
self.ts = data[constructor]
self.ts.index = rng
self.ts2 = self.ts.copy()
self.ts2.iloc[250:5000] = np.nan
self.ts3 = self.ts.copy()
self.ts3.iloc[-5000:] = np.nan
self.dates = date_range(start="1/1/1990", periods=N * 10, freq="5s")
self.date = self.dates[0]
self.date_last = self.dates[-1]
self.date_early = self.date - timedelta(10)
# test speed of pre-computing NAs.
def time_asof(self, constructor):
self.ts.asof(self.dates)
# should be roughly the same as above.
def time_asof_nan(self, constructor):
self.ts2.asof(self.dates)
# test speed of the code path for a scalar index
# without *while* loop
def time_asof_single(self, constructor):
self.ts.asof(self.date)
# test speed of the code path for a scalar index
# before the start. should be the same as above.
def time_asof_single_early(self, constructor):
self.ts.asof(self.date_early)
# test the speed of the code path for a scalar index
# with a long *while* loop. should still be much
# faster than pre-computing all the NAs.
def time_asof_nan_single(self, constructor):
self.ts3.asof(self.date_last)
| AsOf |
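The `AsOf` benchmark above times `Series.asof`/`DataFrame.asof`, whose semantics are "the last non-NaN value at or before each requested label". A small pure-Python sketch of that lookup on a sorted index (an illustration of the semantics being benchmarked, not pandas' implementation):

```python
import bisect


def asof(index, values, when):
    """Return the last non-None value whose sorted index label is <= `when`."""
    # Position of the rightmost label <= `when`.
    pos = bisect.bisect_right(index, when) - 1
    # Walk back past missing values, mirroring asof's NaN handling.
    while pos >= 0 and values[pos] is None:
        pos -= 1
    return values[pos] if pos >= 0 else None
```

The backward walk over missing values is exactly the "long *while* loop" code path the benchmark's `time_asof_nan_single` case stresses.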
python | sympy__sympy | sympy/tensor/array/ndim_array.py | {
"start": 18878,
"end": 19136
} | class ____(NDimArray, Basic):
_op_priority = 11.0
def __hash__(self):
return Basic.__hash__(self)
def as_immutable(self):
return self
def as_mutable(self):
raise NotImplementedError("abstract method")
| ImmutableNDimArray |
python | PrefectHQ__prefect | src/integrations/prefect-databricks/prefect_databricks/models/jobs.py | {
"start": 75075,
"end": 76103
} | class ____(BaseModel):
"""
See source code for the fields' description.
"""
model_config = ConfigDict(extra="allow", frozen=True)
s3: Optional[S3StorageInfo] = Field(
None,
alias="S3",
description=(
"S3 location of init script. Destination and either region or endpoint must"
' be provided. For example, `{ "s3": { "destination" :'
' "s3://init_script_bucket/prefix", "region" : "us-west-2" } }`'
),
)
dbfs: Optional[DbfsStorageInfo] = Field(
None,
description=(
"DBFS location of init script. Destination must be provided. For example,"
' `{ "dbfs" : { "destination" : "dbfs:/home/init_script" } }`'
),
)
file: Optional[FileStorageInfo] = Field(
None,
description=(
"File location of init script. Destination must be provided. For example,"
' `{ "file" : { "destination" : "file:/my/local/file.sh" } }`'
),
)
| InitScriptInfo |
python | MTrajK__coding-problems | Trees/zigzag_level_order_traversal.py | {
"start": 649,
"end": 1566
} | class ____:
def __init__(self, val, left=None, right=None):
self.val = val
self.left = left
self.right = right
def zigzag_level_order_traversal(root):
results = []
queue = deque()
# save nodes and levels in queue
queue.append((root, 0))
while queue:
node, lvl = queue.popleft()
if node is None:
continue
if len(results) < lvl + 1:
results.append([])
results[lvl].append(node.val)
lvl += 1
queue.append((node.left, lvl))
queue.append((node.right, lvl))
# reverse odd level
for i in range(1, len(results), 2):
results[i] = results[i][::-1]
return results
###########
# Testing #
###########
# Test 1
# Correct result => [[3], [20, 9], [15, 7]]
tree = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(zigzag_level_order_traversal(tree)) | TreeNode |
python | anthropics__anthropic-sdk-python | src/anthropic/types/beta/beta_bash_code_execution_tool_result_error.py | {
"start": 213,
"end": 476
} | class ____(BaseModel):
error_code: Literal[
"invalid_tool_input", "unavailable", "too_many_requests", "execution_time_exceeded", "output_file_too_large"
]
type: Literal["bash_code_execution_tool_result_error"]
| BetaBashCodeExecutionToolResultError |
python | django-haystack__django-haystack | test_haystack/whoosh_tests/test_whoosh_backend.py | {
"start": 43167,
"end": 48076
} | class ____(WhooshTestCase):
fixtures = ["bulk_data.json"]
def setUp(self):
super().setUp()
# Stow.
self.old_ui = connections["whoosh"].get_unified_index()
self.ui = UnifiedIndex()
self.wmmi = WhooshMockSearchIndex()
self.wamsi = WhooshAnotherMockSearchIndex()
self.ui.build(indexes=[self.wmmi, self.wamsi])
self.sb = connections["whoosh"].get_backend()
connections["whoosh"]._index = self.ui
self.sb.setup()
self.raw_whoosh = self.sb.index
self.parser = QueryParser(self.sb.content_field_name, schema=self.sb.schema)
self.sb.delete_index()
self.wmmi.update()
self.wamsi.update()
self.sqs = SearchQuerySet("whoosh")
def tearDown(self):
connections["whoosh"]._index = self.old_ui
super().tearDown()
# We expect failure here because, despite not changing the code, Whoosh
# 2.5.1 returns incorrect counts/results. Huzzah.
@unittest.expectedFailure
def test_more_like_this(self):
mlt = self.sqs.more_like_this(MockModel.objects.get(pk=22))
self.assertEqual(mlt.count(), 22)
self.assertEqual(
sorted([result.pk for result in mlt]),
sorted(
[
"9",
"8",
"7",
"6",
"5",
"4",
"3",
"2",
"1",
"21",
"20",
"19",
"18",
"17",
"16",
"15",
"14",
"13",
"12",
"11",
"10",
"23",
]
),
)
self.assertEqual(len([result.pk for result in mlt]), 22)
alt_mlt = self.sqs.filter(name="daniel3").more_like_this(
MockModel.objects.get(pk=13)
)
self.assertEqual(alt_mlt.count(), 8)
self.assertEqual(
sorted([result.pk for result in alt_mlt]),
sorted(["4", "3", "22", "19", "17", "16", "10", "23"]),
)
self.assertEqual(len([result.pk for result in alt_mlt]), 8)
alt_mlt_with_models = self.sqs.models(MockModel).more_like_this(
MockModel.objects.get(pk=11)
)
self.assertEqual(alt_mlt_with_models.count(), 22)
self.assertEqual(
sorted([result.pk for result in alt_mlt_with_models]),
sorted(
[
"9",
"8",
"7",
"6",
"5",
"4",
"3",
"2",
"1",
"22",
"21",
"20",
"19",
"18",
"17",
"16",
"15",
"14",
"13",
"12",
"10",
"23",
]
),
)
self.assertEqual(len([result.pk for result in alt_mlt_with_models]), 22)
if hasattr(MockModel.objects, "defer"):
# Make sure MLT works with deferred bits.
mi = MockModel.objects.defer("foo").get(pk=22)
deferred = self.sqs.models(MockModel).more_like_this(mi)
self.assertEqual(deferred.count(), 22)
self.assertEqual(
sorted([result.pk for result in deferred]),
sorted(
[
"9",
"8",
"7",
"6",
"5",
"4",
"3",
"2",
"1",
"21",
"20",
"19",
"18",
"17",
"16",
"15",
"14",
"13",
"12",
"11",
"10",
"23",
]
),
)
self.assertEqual(len([result.pk for result in deferred]), 22)
# Ensure that swapping the ``result_class`` works.
self.assertTrue(
isinstance(
self.sqs.result_class(MockSearchResult).more_like_this(
MockModel.objects.get(pk=21)
)[0],
MockSearchResult,
)
)
@override_settings(DEBUG=True)
| LiveWhooshMoreLikeThisTestCase |
python | Delgan__loguru | tests/exceptions/source/others/exception_in_property.py | {
"start": 138,
"end": 373
} | class ____:
@property
def value(self):
try:
1 / 0
except:
logger.opt(exception=True).debug("test")
return None
else:
return "Never"
a = A()
value = a.value
| A |
python | prompt-toolkit__python-prompt-toolkit | src/prompt_toolkit/output/vt100.py | {
"start": 3888,
"end": 5007
} | class ____:
"""
Cache which maps (r, g, b) tuples to 16 ansi colors.
:param bg: Cache for background colors, instead of foreground.
"""
def __init__(self, bg: bool = False) -> None:
self.bg = bg
self._cache: dict[Hashable, _ColorCodeAndName] = {}
def get_code(
self, value: tuple[int, int, int], exclude: Sequence[str] = ()
) -> _ColorCodeAndName:
"""
Return a (ansi_code, ansi_name) tuple. (E.g. ``(44, 'ansiblue')``.) for
a given (r,g,b) value.
"""
key: Hashable = (value, tuple(exclude))
cache = self._cache
if key not in cache:
cache[key] = self._get(value, exclude)
return cache[key]
def _get(
self, value: tuple[int, int, int], exclude: Sequence[str] = ()
) -> _ColorCodeAndName:
r, g, b = value
match = _get_closest_ansi_color(r, g, b, exclude=exclude)
# Turn color name into code.
if self.bg:
code = BG_ANSI_COLORS[match]
else:
code = FG_ANSI_COLORS[match]
return code, match
| _16ColorCache |
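Behind `_16ColorCache._get` sits `_get_closest_ansi_color`, a nearest-color match over the 16-color ANSI palette. A sketch of the usual approach, squared RGB distance over a hypothetical `palette` dict, with the same `exclude` escape hatch as the cache above (not prompt_toolkit's actual distance function, which also weights saturation):

```python
def closest_ansi(rgb, palette, exclude=()):
    """Pick the palette name whose RGB value is nearest by squared distance."""
    r, g, b = rgb
    best_name, best_dist = None, None
    for name, (pr, pg, pb) in palette.items():
        if name in exclude:
            # Allow callers to rule out colors, e.g. to keep fg != bg.
            continue
        d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2
        if best_dist is None or d < best_dist:
            best_name, best_dist = name, d
    return best_name
```

Since the palette is tiny and fixed, the cache in the snippet exists only to amortize this scan across repeated (r, g, b) lookups.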
python | python__mypy | mypyc/irbuild/targets.py | {
"start": 657,
"end": 1171
} | class ____(AssignmentTarget):
"""base[index] as assignment target"""
def __init__(self, base: Value, index: Value) -> None:
self.base = base
self.index = index
# TODO: object_rprimitive won't be right for user-defined classes. Store the
# lvalue type in mypy and use a better type to avoid unneeded boxing.
self.type = object_rprimitive
def __repr__(self) -> str:
return f"AssignmentTargetIndex({self.base!r}, {self.index!r})"
| AssignmentTargetIndex |
python | getsentry__sentry | tests/sentry/release_health/test_tasks.py | {
"start": 22447,
"end": 22574
} | class ____(BaseTestReleaseMonitor, BaseMetricsTestCase):
backend_class = MetricReleaseMonitorBackend
| TestMetricReleaseMonitor |
python | run-llama__llama_index | llama-index-integrations/readers/llama-index-readers-discord/llama_index/readers/discord/base.py | {
"start": 2763,
"end": 5497
} | class ____(BasePydanticReader):
"""
Discord reader.
Reads conversations from channels.
Args:
discord_token (Optional[str]): Discord token. If not provided, we
assume the environment variable `DISCORD_TOKEN` is set.
"""
is_remote: bool = True
discord_token: str
def __init__(self, discord_token: Optional[str] = None) -> None:
"""Initialize with parameters."""
try:
import discord # noqa: F401
except ImportError:
raise ImportError(
"`discord.py` package not found, please run `pip install discord.py`"
)
if discord_token is None:
discord_token = os.environ["DISCORD_TOKEN"]
if discord_token is None:
raise ValueError(
"Must specify `discord_token` or set environment "
"variable `DISCORD_TOKEN`."
)
super().__init__(discord_token=discord_token)
@classmethod
def class_name(cls) -> str:
"""Get the name identifier of the class."""
return "DiscordReader"
def _read_channel(
self, channel_id: int, limit: Optional[int] = None, oldest_first: bool = True
) -> List[Document]:
"""Read channel."""
return asyncio.get_event_loop().run_until_complete(
read_channel(
self.discord_token, channel_id, limit=limit, oldest_first=oldest_first
)
)
def load_data(
self,
channel_ids: List[int],
limit: Optional[int] = None,
oldest_first: bool = True,
) -> List[Document]:
"""
Load data from the input directory.
Args:
channel_ids (List[int]): List of channel ids to read.
limit (Optional[int]): Maximum number of messages to read.
oldest_first (bool): Whether to read oldest messages first.
Defaults to `True`.
Returns:
List[Document]: List of documents.
"""
results: List[Document] = []
for channel_id in channel_ids:
if not isinstance(channel_id, int):
raise ValueError(
f"Channel id {channel_id} must be an integer, "
f"not {type(channel_id)}."
)
channel_documents = self._read_channel(
channel_id, limit=limit, oldest_first=oldest_first
)
results += channel_documents
return results
if __name__ == "__main__":
reader = DiscordReader()
logger.info("initialized reader")
output = reader.load_data(channel_ids=[1057178784895348746], limit=10)
logger.info(output)
| DiscordReader |
python | pytorch__pytorch | torch/_export/serde/schema.py | {
"start": 1852,
"end": 1961
} | class ____(_Union):
as_expr: Annotated[SymExpr, 10]
as_int: Annotated[int, 20]
@_union_dataclass
| SymInt |
python | ray-project__ray | python/ray/dag/tests/experimental/test_collective_dag.py | {
"start": 778,
"end": 2255
} | class ____(CPUCommunicator):
"""
Use a mock communicator to test the actor schedules.
"""
def __init__(self, world_size: int, actor_handles: List["ray.actor.ActorHandle"]):
self._world_size = world_size
self._actor_handles = actor_handles
def send(self, value: "torch.Tensor", peer_rank: int) -> None:
raise NotImplementedError
def recv(
self,
shape: Tuple[int],
dtype: "torch.dtype",
peer_rank: int,
allocator: Optional[
Callable[[Tuple[int], "torch.dtype"], "torch.Tensor"]
] = None,
) -> "torch.Tensor":
raise NotImplementedError
def allgather(
self,
send_buf: "torch.Tensor",
recv_buf: "torch.Tensor",
) -> None:
raise NotImplementedError
def allreduce(
self,
send_buf: "torch.Tensor",
recv_buf: "torch.Tensor",
op: ReduceOp,
) -> None:
raise NotImplementedError
def reducescatter(
self,
send_buf: "torch.Tensor",
recv_buf: "torch.Tensor",
op: ReduceOp,
) -> None:
raise NotImplementedError
@property
def recv_stream(self) -> Optional["cp.cuda.ExternalStream"]:
raise NotImplementedError
@property
def send_stream(self) -> Optional["cp.cuda.ExternalStream"]:
raise NotImplementedError
def destroy(self) -> None:
raise NotImplementedError
@ray.remote
| MockCommunicator |
python | ray-project__ray | release/nightly_tests/dataset/image_loader_microbenchmark.py | {
"start": 9823,
"end": 19437
} | class ____(StreamingDataset):
def __init__(
self,
s3_bucket: str,
num_physical_nodes,
cache_dir: str,
transforms: Callable,
cache_limit=None,
epoch_size=None,
) -> None:
super().__init__(
remote=s3_bucket,
local=cache_dir,
cache_limit=cache_limit,
epoch_size=epoch_size,
# Set StreamingDataset to read sequentially.
shuffle=False,
num_canonical_nodes=num_physical_nodes,
)
self.transforms = transforms
def __getitem__(self, idx: int) -> Any:
obj = super().__getitem__(idx)
image = obj["image"]
label = obj["label"]
return self.transforms(image), label
def get_mosaic_dataloader(
mosaic_data_root,
batch_size,
num_physical_nodes,
epoch_size=None,
num_workers=None,
cache_limit=None,
):
# MosaicML StreamingDataset.
use_s3 = mosaic_data_root.startswith("s3://")
if not use_s3:
assert epoch_size is None, "epoch_size not supported for streaming.LocalDataset"
assert (
cache_limit is None
), "cache_limit not supported for streaming.LocalDataset"
if use_s3:
MOSAIC_CACHE = "/tmp/mosaic_cache"
try:
import shutil
shutil.rmtree(MOSAIC_CACHE)
except (OSError, FileNotFoundError):
pass
streaming.base.util.clean_stale_shared_memory()
print(f"Initializing mosaic StreamingDataset, cache_limit={cache_limit}")
mosaic_ds = S3MosaicDataset(
s3_bucket=mosaic_data_root,
num_physical_nodes=num_physical_nodes,
cache_dir=MOSAIC_CACHE,
cache_limit=cache_limit,
epoch_size=epoch_size,
transforms=get_transform(True),
)
else:
mosaic_ds = MosaicDataset(mosaic_data_root, transforms=get_transform(True))
if num_workers is None:
num_workers = os.cpu_count()
print(f"Initializing torch DataLoader with {num_workers} workers.")
mosaic_dl = torch.utils.data.DataLoader(
mosaic_ds,
batch_size=batch_size,
num_workers=num_workers,
drop_last=True,
)
return mosaic_dl
def get_ray_mosaic_dataset(mosaic_data_root):
mds_source = MdsDatasource(mosaic_data_root)
return ray.data.read_datasource(mds_source)
def get_ray_parquet_dataset(parquet_data_root, parallelism=None):
if parallelism is not None:
ray_dataset = ray.data.read_parquet(
parquet_data_root, override_num_blocks=parallelism
)
else:
ray_dataset = ray.data.read_parquet(parquet_data_root)
ray_dataset = ray_dataset.map(decode_image_crop_and_flip)
return ray_dataset
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--data-root",
default=None,
type=str,
help='Directory path with TFRecords. Filenames should start with "train".',
)
parser.add_argument(
"--parquet-data-root",
default=None,
type=str,
help="Directory path with Parquet files.",
)
parser.add_argument(
"--mosaic-data-root",
default=None,
type=str,
help="Directory path with MDS files.",
)
parser.add_argument(
"--tf-data-root",
default=None,
type=str,
help="Directory path with TFRecords.",
)
parser.add_argument(
"--batch-size",
default=32,
type=int,
help="Batch size to use.",
)
parser.add_argument(
"--num-epochs",
default=3,
type=int,
help="Number of epochs to run. The throughput for the last epoch will be kept.",
)
parser.add_argument(
"--output-file",
default=None,
type=str,
help="Output CSV path.",
)
args = parser.parse_args()
metrics = {}
benchmark = Benchmark()
if args.data_root is not None:
# tf.data, load images.
tf_dataset = tf.keras.preprocessing.image_dataset_from_directory(
args.data_root,
batch_size=args.batch_size,
image_size=(DEFAULT_IMAGE_SIZE, DEFAULT_IMAGE_SIZE),
)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"tf_data",
tf_dataset,
)
# tf.data, with transform.
tf_dataset = tf.keras.preprocessing.image_dataset_from_directory(args.data_root)
tf_dataset = tf_dataset.map(lambda img, label: (tf_crop_and_flip(img), label))
tf_dataset.unbatch().batch(args.batch_size)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"tf_data+transform",
tf_dataset,
)
# torch, load images.
torch_dataset = build_torch_dataset(
args.data_root,
args.batch_size,
transform=torchvision.transforms.Compose(
[
torchvision.transforms.Resize(
(DEFAULT_IMAGE_SIZE, DEFAULT_IMAGE_SIZE)
),
torchvision.transforms.ToTensor(),
]
),
)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"torch",
torch_dataset,
)
# torch, with transform.
torch_dataset = build_torch_dataset(
args.data_root, args.batch_size, transform=get_transform(True)
)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"torch+transform",
torch_dataset,
)
# ray.data, load images.
ray_dataset = ray.data.read_images(
args.data_root, mode="RGB", size=(DEFAULT_IMAGE_SIZE, DEFAULT_IMAGE_SIZE)
)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"ray_data",
ray_dataset.iter_torch_batches(batch_size=args.batch_size),
)
# ray.data, with transform.
ray_dataset = ray.data.read_images(args.data_root, mode="RGB").map(
crop_and_flip_image
)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"ray_data+map_transform",
ray_dataset.iter_batches(batch_size=args.batch_size),
)
# Pass size to read_images when using map_batches to make sure that all
# batches have rows with the same dimensions.
ray_dataset = ray.data.read_images(
args.data_root, mode="RGB", size=(256, 256)
).map_batches(crop_and_flip_image_batch)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"ray_data+transform",
ray_dataset.iter_torch_batches(batch_size=args.batch_size),
)
ray_dataset = ray.data.read_images(
args.data_root, mode="RGB", size=(256, 256)
).map_batches(crop_and_flip_image_batch, zero_copy_batch=True)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"ray_data+transform+zerocopy",
ray_dataset.iter_torch_batches(batch_size=args.batch_size),
)
if args.tf_data_root is not None:
tf_dataset = build_tfrecords_tf_dataset(args.tf_data_root, args.batch_size)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"tf_data_tfrecords+transform",
tf_dataset,
)
ray_dataset = ray.data.read_tfrecords(args.tf_data_root)
ray_dataset = ray_dataset.map_batches(
decode_crop_and_flip_tf_record_batch,
batch_size=args.batch_size,
batch_format="pandas",
)
for i in range(args.num_epochs):
tf_dataset = ray_dataset.to_tf(
batch_size=args.batch_size,
feature_columns="image",
label_columns="label",
)
benchmark.run_iterate_ds(
"ray_data_tfrecords+transform",
tf_dataset,
)
if args.parquet_data_root is not None:
ray_dataset = get_ray_parquet_dataset(args.parquet_data_root, parallelism=128)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"ray_data_parquet+map_transform",
ray_dataset.iter_torch_batches(batch_size=args.batch_size),
)
print(ray_dataset.stats())
if args.mosaic_data_root is not None:
num_workers = None
mosaic_dl = get_mosaic_dataloader(
args.mosaic_data_root,
batch_size=args.batch_size,
num_physical_nodes=1,
num_workers=num_workers,
)
for i in range(args.num_epochs):
benchmark.run_iterate_ds("mosaicml_mds", mosaic_dl)
# ray.data.
use_s3 = args.mosaic_data_root.startswith("s3://")
if not use_s3:
mds_source = MdsDatasource(args.mosaic_data_root)
ray_dataset = ray.data.read_datasource(mds_source)
ray_dataset = ray_dataset.map(crop_and_flip_image)
for i in range(args.num_epochs):
benchmark.run_iterate_ds(
"ray_data_mds+map_transform",
ray_dataset.iter_torch_batches(batch_size=args.batch_size),
)
benchmark.write_result()
| S3MosaicDataset |
python | django__django | tests/migrations/test_fake_initial_case_insensitive/initial/0001_initial.py | {
"start": 43,
"end": 845
} | class ____(migrations.Migration):
initial = True
operations = [
migrations.CreateModel(
name="fakeinitialmodel",
fields=[
("id", models.AutoField(primary_key=True)),
("field", models.CharField(max_length=20)),
(
"field_mixed_case",
models.CharField(max_length=20, db_column="FiEld_MiXeD_CaSe"),
),
(
"fake_initial_mode",
models.ManyToManyField(
"migrations.FakeInitialModel", db_table="m2m_MiXeD_CaSe"
),
),
],
options={
"db_table": "migrations_MiXeD_CaSe_MoDel",
},
),
]
| Migration |
python | lepture__authlib | authlib/oauth2/client.py | {
"start": 636,
"end": 18938
} | class ____:
"""Construct a new OAuth 2 protocol client.
:param session: Requests session object to communicate with
authorization server.
:param client_id: Client ID, which you get from client registration.
:param client_secret: Client Secret, which you get from registration.
:param token_endpoint_auth_method: client authentication method for
token endpoint.
:param revocation_endpoint_auth_method: client authentication method for
revocation endpoint.
:param scope: Scope that you needed to access user resources.
:param state: Shared secret to prevent CSRF attack.
:param redirect_uri: Redirect URI you registered as callback.
:param code_challenge_method: PKCE method name, only S256 is supported.
:param token: A dict of token attributes such as ``access_token``,
``token_type`` and ``expires_at``.
:param token_placement: The place to put token in HTTP request. Available
values: "header", "body", "uri".
:param update_token: A function for you to update the token. It accepts an
:class:`OAuth2Token` as parameter.
:param leeway: Time window in seconds before the actual expiration of the
authentication token, that the token is considered expired and will
be refreshed.
"""
client_auth_class = ClientAuth
token_auth_class = TokenAuth
oauth_error_class = OAuth2Error
EXTRA_AUTHORIZE_PARAMS = ("response_mode", "nonce", "prompt", "login_hint")
SESSION_REQUEST_PARAMS = []
def __init__(
self,
session,
client_id=None,
client_secret=None,
token_endpoint_auth_method=None,
revocation_endpoint_auth_method=None,
scope=None,
state=None,
redirect_uri=None,
code_challenge_method=None,
token=None,
token_placement="header",
update_token=None,
leeway=60,
**metadata,
):
self.session = session
self.client_id = client_id
self.client_secret = client_secret
self.state = state
if token_endpoint_auth_method is None:
if client_secret:
token_endpoint_auth_method = "client_secret_basic"
else:
token_endpoint_auth_method = "none"
self.token_endpoint_auth_method = token_endpoint_auth_method
if revocation_endpoint_auth_method is None:
if client_secret:
revocation_endpoint_auth_method = "client_secret_basic"
else:
revocation_endpoint_auth_method = "none"
self.revocation_endpoint_auth_method = revocation_endpoint_auth_method
self.scope = scope
self.redirect_uri = redirect_uri
self.code_challenge_method = code_challenge_method
self.token_auth = self.token_auth_class(token, token_placement, self)
self.update_token = update_token
token_updater = metadata.pop("token_updater", None)
if token_updater:
raise ValueError(
"update token has been redesigned, check out the documentation"
)
self.metadata = metadata
self.compliance_hook = {
"access_token_response": set(),
"refresh_token_request": set(),
"refresh_token_response": set(),
"revoke_token_request": set(),
"introspect_token_request": set(),
}
self._auth_methods = {}
self.leeway = leeway
def register_client_auth_method(self, auth):
"""Extend client authenticate for token endpoint.
:param auth: an instance to sign the request
"""
if isinstance(auth, tuple):
self._auth_methods[auth[0]] = auth[1]
else:
self._auth_methods[auth.name] = auth
def client_auth(self, auth_method):
if isinstance(auth_method, str) and auth_method in self._auth_methods:
auth_method = self._auth_methods[auth_method]
return self.client_auth_class(
client_id=self.client_id,
client_secret=self.client_secret,
auth_method=auth_method,
)
@property
def token(self):
return self.token_auth.token
@token.setter
def token(self, token):
self.token_auth.set_token(token)
def create_authorization_url(self, url, state=None, code_verifier=None, **kwargs):
"""Generate an authorization URL and state.
:param url: Authorization endpoint url, must be HTTPS.
:param state: An optional state string for CSRF protection. If not
given it will be generated for you.
:param code_verifier: An optional code_verifier for code challenge.
:param kwargs: Extra parameters to include.
:return: authorization_url, state
"""
if state is None:
state = generate_token()
response_type = self.metadata.get("response_type", "code")
response_type = kwargs.pop("response_type", response_type)
if "redirect_uri" not in kwargs:
kwargs["redirect_uri"] = self.redirect_uri
if "scope" not in kwargs:
kwargs["scope"] = self.scope
if (
code_verifier
and response_type == "code"
and self.code_challenge_method == "S256"
):
kwargs["code_challenge"] = create_s256_code_challenge(code_verifier)
kwargs["code_challenge_method"] = self.code_challenge_method
for k in self.EXTRA_AUTHORIZE_PARAMS:
if k not in kwargs and k in self.metadata:
kwargs[k] = self.metadata[k]
uri = prepare_grant_uri(
url,
client_id=self.client_id,
response_type=response_type,
state=state,
**kwargs,
)
return uri, state
def fetch_token(
self,
url=None,
body="",
method="POST",
headers=None,
auth=None,
grant_type=None,
state=None,
**kwargs,
):
"""Generic method for fetching an access token from the token endpoint.
:param url: Access Token endpoint URL, if not configured,
``authorization_response`` is used to extract token from
its fragment (implicit way).
:param body: Optional application/x-www-form-urlencoded body to include
in the token request. Prefer kwargs over body.
:param method: The HTTP method used to make the request. Defaults
to POST, but may also be GET. Other methods should
be added as needed.
:param headers: Dict to default request headers with.
:param auth: An auth tuple or method as accepted by requests.
:param grant_type: Use specified grant_type to fetch token.
:param state: Optional "state" value to fetch token.
:return: A :class:`OAuth2Token` object (a dict too).
"""
state = state or self.state
# implicit grant_type
authorization_response = kwargs.pop("authorization_response", None)
if authorization_response and "#" in authorization_response:
return self.token_from_fragment(authorization_response, state)
session_kwargs = self._extract_session_request_params(kwargs)
if authorization_response and "code=" in authorization_response:
grant_type = "authorization_code"
params = parse_authorization_code_response(
authorization_response,
state=state,
)
kwargs["code"] = params["code"]
if grant_type is None:
grant_type = self.metadata.get("grant_type")
if grant_type is None:
grant_type = _guess_grant_type(kwargs)
self.metadata["grant_type"] = grant_type
body = self._prepare_token_endpoint_body(body, grant_type, **kwargs)
if auth is None:
auth = self.client_auth(self.token_endpoint_auth_method)
if headers is None:
headers = DEFAULT_HEADERS
if url is None:
url = self.metadata.get("token_endpoint")
return self._fetch_token(
url, body=body, auth=auth, method=method, headers=headers, **session_kwargs
)
def token_from_fragment(self, authorization_response, state=None):
token = parse_implicit_response(authorization_response, state)
if "error" in token:
raise self.oauth_error_class(
error=token["error"], description=token.get("error_description")
)
self.token = token
return token
def refresh_token(
self, url=None, refresh_token=None, body="", auth=None, headers=None, **kwargs
):
"""Fetch a new access token using a refresh token.
:param url: Refresh Token endpoint, must be HTTPS.
:param refresh_token: The refresh_token to use.
:param body: Optional application/x-www-form-urlencoded body to include
in the token request. Prefer kwargs over body.
:param auth: An auth tuple or method as accepted by requests.
:param headers: Dict to default request headers with.
:return: A :class:`OAuth2Token` object (a dict too).
"""
session_kwargs = self._extract_session_request_params(kwargs)
refresh_token = refresh_token or self.token.get("refresh_token")
if "scope" not in kwargs and self.scope:
kwargs["scope"] = self.scope
body = prepare_token_request(
"refresh_token", body, refresh_token=refresh_token, **kwargs
)
if headers is None:
headers = DEFAULT_HEADERS.copy()
if url is None:
url = self.metadata.get("token_endpoint")
for hook in self.compliance_hook["refresh_token_request"]:
url, headers, body = hook(url, headers, body)
if auth is None:
auth = self.client_auth(self.token_endpoint_auth_method)
return self._refresh_token(
url,
refresh_token=refresh_token,
body=body,
headers=headers,
auth=auth,
**session_kwargs,
)
def ensure_active_token(self, token=None):
if token is None:
token = self.token
if not token.is_expired(leeway=self.leeway):
return True
refresh_token = token.get("refresh_token")
url = self.metadata.get("token_endpoint")
if refresh_token and url:
self.refresh_token(url, refresh_token=refresh_token)
return True
elif self.metadata.get("grant_type") == "client_credentials":
access_token = token["access_token"]
new_token = self.fetch_token(url, grant_type="client_credentials")
if self.update_token:
self.update_token(new_token, access_token=access_token)
return True
def revoke_token(
self,
url,
token=None,
token_type_hint=None,
body=None,
auth=None,
headers=None,
**kwargs,
):
"""Revoke token method defined via `RFC7009`_.
:param url: Revoke Token endpoint, must be HTTPS.
:param token: The token to be revoked.
:param token_type_hint: The type of the token to be revoked.
It can be "access_token" or "refresh_token".
:param body: Optional application/x-www-form-urlencoded body to include
in the token request. Prefer kwargs over body.
:param auth: An auth tuple or method as accepted by requests.
:param headers: Dict to default request headers with.
:return: Revocation Response
.. _`RFC7009`: https://tools.ietf.org/html/rfc7009
"""
if auth is None:
auth = self.client_auth(self.revocation_endpoint_auth_method)
return self._handle_token_hint(
"revoke_token_request",
url,
token=token,
token_type_hint=token_type_hint,
body=body,
auth=auth,
headers=headers,
**kwargs,
)
def introspect_token(
self,
url,
token=None,
token_type_hint=None,
body=None,
auth=None,
headers=None,
**kwargs,
):
"""Implementation of OAuth 2.0 Token Introspection defined via `RFC7662`_.
:param url: Introspection Endpoint, must be HTTPS.
:param token: The token to be introspected.
:param token_type_hint: The type of the token to be introspected.
It can be "access_token" or "refresh_token".
:param body: Optional application/x-www-form-urlencoded body to include
in the token request. Prefer kwargs over body.
:param auth: An auth tuple or method as accepted by requests.
:param headers: Dict to default request headers with.
:return: Introspection Response
.. _`RFC7662`: https://tools.ietf.org/html/rfc7662
"""
if auth is None:
auth = self.client_auth(self.token_endpoint_auth_method)
return self._handle_token_hint(
"introspect_token_request",
url,
token=token,
token_type_hint=token_type_hint,
body=body,
auth=auth,
headers=headers,
**kwargs,
)
def register_compliance_hook(self, hook_type, hook):
"""Register a hook for request/response tweaking.
Available hooks are:
* access_token_response: invoked before token parsing.
* refresh_token_request: invoked before refreshing token.
* refresh_token_response: invoked before refresh token parsing.
* protected_request: invoked before making a request.
* revoke_token_request: invoked before revoking a token.
* introspect_token_request: invoked before introspecting a token.
"""
if hook_type == "protected_request":
self.token_auth.hooks.add(hook)
return
if hook_type not in self.compliance_hook:
raise ValueError(
f"Hook type {hook_type} is not in {self.compliance_hook}."
)
self.compliance_hook[hook_type].add(hook)
def parse_response_token(self, resp):
if resp.status_code >= 500:
resp.raise_for_status()
token = resp.json()
if "error" in token:
raise self.oauth_error_class(
error=token["error"], description=token.get("error_description")
)
self.token = token
return self.token
def _fetch_token(
self, url, body="", headers=None, auth=None, method="POST", **kwargs
):
if method.upper() == "POST":
resp = self.session.post(
url, data=dict(url_decode(body)), headers=headers, auth=auth, **kwargs
)
else:
if "?" in url:
url = "&".join([url, body])
else:
url = "?".join([url, body])
resp = self.session.request(
method, url, headers=headers, auth=auth, **kwargs
)
for hook in self.compliance_hook["access_token_response"]:
resp = hook(resp)
return self.parse_response_token(resp)
def _refresh_token(
self, url, refresh_token=None, body="", headers=None, auth=None, **kwargs
):
resp = self._http_post(url, body=body, auth=auth, headers=headers, **kwargs)
for hook in self.compliance_hook["refresh_token_response"]:
resp = hook(resp)
token = self.parse_response_token(resp)
if "refresh_token" not in token:
self.token["refresh_token"] = refresh_token
if callable(self.update_token):
self.update_token(self.token, refresh_token=refresh_token)
return self.token
def _handle_token_hint(
self,
hook,
url,
token=None,
token_type_hint=None,
body=None,
auth=None,
headers=None,
**kwargs,
):
if token is None and self.token:
token = self.token.get("refresh_token") or self.token.get("access_token")
if body is None:
body = ""
body, headers = prepare_revoke_token_request(
token, token_type_hint, body, headers
)
for compliance_hook in self.compliance_hook[hook]:
url, headers, body = compliance_hook(url, headers, body)
if auth is None:
auth = self.client_auth(self.revocation_endpoint_auth_method)
session_kwargs = self._extract_session_request_params(kwargs)
return self._http_post(url, body, auth=auth, headers=headers, **session_kwargs)
def _prepare_token_endpoint_body(self, body, grant_type, **kwargs):
if grant_type == "authorization_code":
if "redirect_uri" not in kwargs:
kwargs["redirect_uri"] = self.redirect_uri
return prepare_token_request(grant_type, body, **kwargs)
if "scope" not in kwargs and self.scope:
kwargs["scope"] = self.scope
return prepare_token_request(grant_type, body, **kwargs)
def _extract_session_request_params(self, kwargs):
"""Extract parameters for session object from the passing ``**kwargs``."""
rv = {}
for k in self.SESSION_REQUEST_PARAMS:
if k in kwargs:
rv[k] = kwargs.pop(k)
return rv
def _http_post(self, url, body=None, auth=None, headers=None, **kwargs):
return self.session.post(
url, data=dict(url_decode(body)), headers=headers, auth=auth, **kwargs
)
def __del__(self):
del self.session
def _guess_grant_type(kwargs):
if "code" in kwargs:
grant_type = "authorization_code"
elif "username" in kwargs and "password" in kwargs:
grant_type = "password"
else:
grant_type = "client_credentials"
return grant_type
| OAuth2Client |
python | pypa__pip | src/pip/_vendor/tomli/_parser.py | {
"start": 9204,
"end": 10261
} | class ____:
def __init__(self) -> None:
# The parsed content of the TOML document
self.dict: dict[str, Any] = {}
def get_or_create_nest(
self,
key: Key,
*,
access_lists: bool = True,
) -> dict[str, Any]:
cont: Any = self.dict
for k in key:
if k not in cont:
cont[k] = {}
cont = cont[k]
if access_lists and isinstance(cont, list):
cont = cont[-1]
if not isinstance(cont, dict):
raise KeyError("There is no nest behind this key")
return cont # type: ignore[no-any-return]
def append_nest_to_list(self, key: Key) -> None:
cont = self.get_or_create_nest(key[:-1])
last_key = key[-1]
if last_key in cont:
list_ = cont[last_key]
if not isinstance(list_, list):
raise KeyError("An object other than list found behind this key")
list_.append({})
else:
cont[last_key] = [{}]
| NestedDict |
python | bokeh__bokeh | examples/server/app/surface3d/surface3d.py | {
"start": 1038,
"end": 2349
} | class ____(LayoutDOM):
# The special class attribute ``__implementation__`` should contain a string
# of JavaScript (or TypeScript) code that implements the JavaScript side
# of the custom extension model.
__implementation__ = "surface3d.ts"
# Below are all the "properties" for this model. Bokeh properties are
# class attributes that define the fields (and their types) that can be
# communicated automatically between Python and the browser. Properties
# also support type validation. More information about properties in
# can be found here:
#
# https://docs.bokeh.org/en/latest/docs/reference/core/properties.html#bokeh-core-properties
# This is a Bokeh ColumnDataSource that can be updated in the Bokeh
# server by Python code
data_source = Instance(ColumnDataSource)
# The vis.js library that we are wrapping expects data for x, y, and z.
# The data will actually be stored in the ColumnDataSource, but these
# properties let us specify the *name* of the column that should be
# used for each field.
x = String()
y = String()
z = String()
# Any of the available vis.js options for Graph3d can be set by changing
# the contents of this dictionary.
options = Dict(String, Any, default=DEFAULTS)
| Surface3d |
python | chroma-core__chroma | chromadb/auth/token_authn/__init__.py | {
"start": 1884,
"end": 3264
} | class ____(ClientAuthProvider):
"""
Client auth provider for token-based auth. Header key will be either
"Authorization" or "X-Chroma-Token" depending on
`chroma_auth_token_transport_header`. If the header is "Authorization",
the token is passed as a bearer token.
"""
def __init__(self, system: System) -> None:
super().__init__(system)
self._settings = system.settings
system.settings.require("chroma_client_auth_credentials")
self._token = SecretStr(str(system.settings.chroma_client_auth_credentials))
_check_token(self._token.get_secret_value())
if system.settings.chroma_auth_token_transport_header:
_check_allowed_token_headers(
system.settings.chroma_auth_token_transport_header
)
self._token_transport_header = TokenTransportHeader(
system.settings.chroma_auth_token_transport_header
)
else:
self._token_transport_header = TokenTransportHeader.AUTHORIZATION
@override
def authenticate(self) -> ClientAuthHeaders:
val = self._token.get_secret_value()
if self._token_transport_header == TokenTransportHeader.AUTHORIZATION:
val = f"Bearer {val}"
return {
self._token_transport_header.value: SecretStr(val),
}
| TokenAuthClientProvider |
python | python-openxml__python-docx | tests/image/test_jpeg.py | {
"start": 14253,
"end": 16221
} | class ____:
def it_constructs_the_appropriate_marker_object(self, call_fixture):
marker_code, stream_, offset_, marker_cls_ = call_fixture
marker = _MarkerFactory(marker_code, stream_, offset_)
marker_cls_.from_stream.assert_called_once_with(stream_, marker_code, offset_)
assert marker is marker_cls_.from_stream.return_value
# fixtures -------------------------------------------------------
@pytest.fixture(
params=[
JPEG_MARKER_CODE.APP0,
JPEG_MARKER_CODE.APP1,
JPEG_MARKER_CODE.SOF0,
JPEG_MARKER_CODE.SOF7,
JPEG_MARKER_CODE.SOS,
]
)
def call_fixture(
self,
request,
stream_,
offset_,
_App0Marker_,
_App1Marker_,
_SofMarker_,
_Marker_,
):
marker_code = request.param
if marker_code == JPEG_MARKER_CODE.APP0:
marker_cls_ = _App0Marker_
elif marker_code == JPEG_MARKER_CODE.APP1:
marker_cls_ = _App1Marker_
elif marker_code in JPEG_MARKER_CODE.SOF_MARKER_CODES:
marker_cls_ = _SofMarker_
else:
marker_cls_ = _Marker_
return marker_code, stream_, offset_, marker_cls_
@pytest.fixture
def _App0Marker_(self, request):
return class_mock(request, "docx.image.jpeg._App0Marker")
@pytest.fixture
def _App1Marker_(self, request):
return class_mock(request, "docx.image.jpeg._App1Marker")
@pytest.fixture
def _Marker_(self, request):
return class_mock(request, "docx.image.jpeg._Marker")
@pytest.fixture
def offset_(self, request):
return instance_mock(request, int)
@pytest.fixture
def _SofMarker_(self, request):
return class_mock(request, "docx.image.jpeg._SofMarker")
@pytest.fixture
def stream_(self, request):
return instance_mock(request, io.BytesIO)
| Describe_MarkerFactory |
python | astropy__astropy | astropy/modeling/projections.py | {
"start": 18402,
"end": 18609
} | class ____(Projection):
r"""Base class for Cylindrical projections.
Cylindrical projections are so-named because the surface of
projection is a cylinder.
"""
_separable = True
| Cylindrical |
python | arrow-py__arrow | tests/test_factory.py | {
"start": 12828,
"end": 13217
} | class ____:
def test_no_tz(self):
assert_datetime_equality(self.factory.now(), datetime.now().astimezone())
def test_tzinfo(self):
assert_datetime_equality(
self.factory.now(ZoneInfo("EST")), datetime.now(ZoneInfo("EST"))
)
def test_tz_str(self):
assert_datetime_equality(self.factory.now("EST"), datetime.now(ZoneInfo("EST")))
| TestNow |
python | kamyu104__LeetCode-Solutions | Python/3sum-smaller.py | {
"start": 31,
"end": 566
} | class ____(object):
# @param {integer[]} nums
# @param {integer} target
# @return {integer}
def threeSumSmaller(self, nums, target):
nums.sort()
n = len(nums)
count, k = 0, 2
while k < n:
i, j = 0, k - 1
while i < j: # Two Pointers, linear time.
if nums[i] + nums[j] + nums[k] >= target:
j -= 1
else:
count += j - i
i += 1
k += 1
return count
| Solution |
python | keras-team__keras | keras/src/ops/numpy.py | {
"start": 12436,
"end": 14436
} | class ____(Operation):
def __init__(self, axis=None, keepdims=False, *, name=None):
super().__init__(name=name)
if isinstance(axis, int):
axis = [axis]
self.axis = axis
self.keepdims = keepdims
def call(self, x):
return backend.numpy.amax(
x,
axis=self.axis,
keepdims=self.keepdims,
)
def compute_output_spec(self, x):
return KerasTensor(
reduce_shape(x.shape, axis=self.axis, keepdims=self.keepdims),
dtype=x.dtype,
)
@keras_export(["keras.ops.amax", "keras.ops.numpy.amax"])
def amax(x, axis=None, keepdims=False):
"""Returns the maximum of an array or maximum value along an axis.
Args:
x: Input tensor.
axis: Axis along which to compute the maximum.
By default (`axis=None`), find the maximum value in all the
dimensions of the input array.
keepdims: If `True`, axes which are reduced are left in the result as
dimensions that are broadcast to the size of the original
input tensor. Defaults to `False`.
Returns:
An array with the maximum value. If `axis=None`, the result is a scalar
value representing the maximum element in the entire array. If `axis` is
given, the result is an array with the maximum values along
the specified axis.
Examples:
>>> x = keras.ops.convert_to_tensor([[1, 3, 5], [2, 3, 6]])
>>> keras.ops.amax(x)
array(6, dtype=int32)
>>> x = keras.ops.convert_to_tensor([[1, 6, 8], [1, 5, 2]])
>>> keras.ops.amax(x, axis=0)
array([1, 6, 8], dtype=int32)
>>> x = keras.ops.convert_to_tensor([[1, 6, 8], [1, 5, 2]])
>>> keras.ops.amax(x, axis=1, keepdims=True)
array([[8], [5]], dtype=int32)
"""
if any_symbolic_tensors((x,)):
return Amax(axis=axis, keepdims=keepdims).symbolic_call(x)
return backend.numpy.amax(x, axis=axis, keepdims=keepdims)
| Amax |
python | ray-project__ray | doc/source/serve/doc_code/http_guide/streaming_example.py | {
"start": 226,
"end": 1385
} | class ____:
def generate_numbers(self, max: int) -> Generator[str, None, None]:
for i in range(max):
yield str(i)
time.sleep(0.1)
def __call__(self, request: Request) -> StreamingResponse:
max = request.query_params.get("max", "25")
gen = self.generate_numbers(int(max))
return StreamingResponse(gen, status_code=200, media_type="text/plain")
serve.run(StreamingResponder.bind())
r = requests.get("http://localhost:8000?max=10", stream=True)
start = time.time()
r.raise_for_status()
for chunk in r.iter_content(chunk_size=None, decode_unicode=True):
print(f"Got result {round(time.time()-start, 1)}s after start: '{chunk}'")
# __end_example__
r = requests.get("http://localhost:8000?max=10", stream=True)
r.raise_for_status()
for i, chunk in enumerate(r.iter_content(chunk_size=None, decode_unicode=True)):
assert chunk == str(i)
# __begin_cancellation__
import asyncio
import time
from typing import AsyncGenerator
import requests
from starlette.responses import StreamingResponse
from starlette.requests import Request
from ray import serve
@serve.deployment
| StreamingResponder |
python | sqlalchemy__sqlalchemy | lib/sqlalchemy/ext/associationproxy.py | {
"start": 38423,
"end": 41899
} | class ____(AssociationProxyInstance[_T]):
"""an :class:`.AssociationProxyInstance` where we cannot determine
the type of target object.
"""
_is_canonical = False
def _ambiguous(self) -> NoReturn:
raise AttributeError(
"Association proxy %s.%s refers to an attribute '%s' that is not "
"directly mapped on class %s; therefore this operation cannot "
"proceed since we don't know what type of object is referred "
"towards"
% (
self.owning_class.__name__,
self.target_collection,
self.value_attr,
self.target_class,
)
)
def get(self, obj: Any) -> Any:
if obj is None:
return self
else:
return super().get(obj)
def __eq__(self, obj: object) -> NoReturn:
self._ambiguous()
def __ne__(self, obj: object) -> NoReturn:
self._ambiguous()
def any(
self,
criterion: Optional[_ColumnExpressionArgument[bool]] = None,
**kwargs: Any,
) -> NoReturn:
self._ambiguous()
def has(
self,
criterion: Optional[_ColumnExpressionArgument[bool]] = None,
**kwargs: Any,
) -> NoReturn:
self._ambiguous()
@util.memoized_property
def _lookup_cache(self) -> Dict[Type[Any], AssociationProxyInstance[_T]]:
# mapping of <subclass>->AssociationProxyInstance.
# e.g. proxy is A-> A.b -> B -> B.b_attr, but B.b_attr doesn't exist;
# only B1(B) and B2(B) have "b_attr", keys in here would be B1, B2
return {}
def _non_canonical_get_for_object(
self, parent_instance: Any
) -> AssociationProxyInstance[_T]:
if parent_instance is not None:
actual_obj = getattr(parent_instance, self.target_collection)
if actual_obj is not None:
try:
insp = inspect(actual_obj)
except exc.NoInspectionAvailable:
pass
else:
mapper = insp.mapper
instance_class = mapper.class_
if instance_class not in self._lookup_cache:
self._populate_cache(instance_class, mapper)
try:
return self._lookup_cache[instance_class]
except KeyError:
pass
# no object or ambiguous object given, so return "self", which
# is a proxy with generally only instance-level functionality
return self
def _populate_cache(
self, instance_class: Any, mapper: Mapper[Any]
) -> None:
prop = orm.class_mapper(self.owning_class).get_property(
self.target_collection
)
if mapper.isa(prop.mapper):
target_class = instance_class
try:
target_assoc = self._cls_unwrap_target_assoc_proxy(
target_class, self.value_attr
)
except AttributeError:
pass
else:
self._lookup_cache[instance_class] = self._construct_for_assoc(
cast("AssociationProxyInstance[_T]", target_assoc),
self.parent,
self.owning_class,
target_class,
self.value_attr,
)
| AmbiguousAssociationProxyInstance |
python | crytic__slither | slither/tools/doctor/checks/__init__.py | {
"start": 292,
"end": 585
} | class ____:
title: str
function: Callable[..., None]
ALL_CHECKS: List[Check] = [
Check("PATH configuration", check_slither_path),
Check("Software versions", show_versions),
Check("Project platform", detect_platform),
Check("Project compilation", compile_project),
]
| Check |
python | ray-project__ray | python/ray/experimental/collective/operations.py | {
"start": 5455,
"end": 6151
} | class ____:
"""Wrapper for NCCL all-reduce."""
def bind(
self,
input_nodes: List["ray.dag.DAGNode"],
op: ReduceOp = ReduceOp.SUM,
transport: Optional[Union[str, Communicator]] = None,
) -> List[CollectiveOutputNode]:
if not isinstance(op, ReduceOp):
raise ValueError(f"Unexpected operation: {op}")
return _bind(input_nodes, AllReduceOp(reduceOp=op), transport)
def __call__(
self,
tensor,
group_name: str = "default",
op: RayReduceOp = RayReduceOp.SUM,
):
from ray.util.collective.collective import allreduce
return allreduce(tensor, group_name, op)
| AllReduceWrapper |
python | streamlit__streamlit | lib/tests/streamlit/data_test_cases.py | {
"start": 3486,
"end": 3546
} | class ____(UserDict): # type: ignore
pass
| UserDictExample |
python | airbytehq__airbyte | airbyte-integrations/connectors/destination-aws-datalake/destination_aws_datalake/config_reader.py | {
"start": 76,
"end": 428
} | class ____(enum.Enum):
IAM_ROLE = "IAM Role"
IAM_USER = "IAM User"
@staticmethod
def from_string(s: str):
if s == "IAM Role":
return CredentialsType.IAM_ROLE
elif s == "IAM User":
return CredentialsType.IAM_USER
else:
raise ValueError(f"Unknown auth mode: {s}")
| CredentialsType |
python | huggingface__transformers | src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py | {
"start": 7086,
"end": 14127
} | class ____(nn.Module):
def __init__(self, config, use_bias=False, layer_idx=None):
super().__init__()
self.num_attention_heads = config.num_attention_heads
self.hidden_size = config.hidden_size
self.head_size = self.hidden_size // self.num_attention_heads
if layer_idx is None:
logger.warning_once(
f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
"lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
"when creating this class."
)
self.layer_idx = layer_idx
partial_rotary_factor = config.rope_parameters.get("partial_rotary_factor", 1.0)
self.rotary_ndims = int(self.head_size * partial_rotary_factor)
self.attention_dropout = nn.Dropout(config.attention_dropout)
self.norm_factor = math.sqrt(self.head_size)
self.query_key_value = nn.Linear(config.hidden_size, 3 * config.hidden_size, bias=False)
self.dense = nn.Linear(config.hidden_size, config.hidden_size, bias=False)
# Activate bias if the last layer
self.use_bias = use_bias
self.dense_bias = nn.Parameter(torch.zeros(config.hidden_size)) if use_bias else None
def forward(
self,
hidden_states: torch.FloatTensor,
attention_mask: torch.FloatTensor,
position_ids: torch.LongTensor,
layer_past: Optional[Cache] = None,
use_cache: Optional[bool] = False,
output_attentions: Optional[bool] = False,
cache_position: Optional[torch.LongTensor] = None,
position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None,
):
# Compute QKV
# Attention heads [batch, seq_len, hidden_size]
# --> [batch, seq_len, (np * 3 * head_size)]
qkv = self.query_key_value(hidden_states)
# [batch, seq_len, (num_heads * 3 * head_size)]
# --> [batch, seq_len, num_heads, 3 * head_size]
new_qkv_shape = qkv.size()[:-1] + (self.num_attention_heads, 3 * self.head_size)
qkv = qkv.view(*new_qkv_shape)
# [batch, seq_len, num_attention_heads, 3 * head_size] --> 3 [batch, num_attention_heads, seq_len, head_size]
query = qkv[..., : self.head_size].permute(0, 2, 1, 3)
key = qkv[..., self.head_size : 2 * self.head_size].permute(0, 2, 1, 3)
value = qkv[..., 2 * self.head_size :].permute(0, 2, 1, 3)
# Compute rotary embeddings on rotary_ndims
query_rot = query[..., : self.rotary_ndims]
query_pass = query[..., self.rotary_ndims :]
key_rot = key[..., : self.rotary_ndims]
key_pass = key[..., self.rotary_ndims :]
cos, sin = position_embeddings
query, key = apply_rotary_pos_emb(query_rot, key_rot, cos, sin)
query = torch.cat((query, query_pass), dim=-1).contiguous()
key = torch.cat((key, key_pass), dim=-1).contiguous()
# Cache QKV values
if layer_past is not None:
cache_kwargs = {
"sin": sin,
"cos": cos,
"partial_rotation_size": self.rotary_ndims,
"cache_position": cache_position,
}
key, value = layer_past.update(key, value, self.layer_idx, cache_kwargs)
# Compute attention
attn_output, attn_weights = self._attn(query, key, value, attention_mask)
# Reshape outputs
attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_size)
attn_output = self.dense(attn_output)
return attn_output, attn_weights, self.dense_bias
@classmethod
def _split_heads(cls, tensor, num_attention_heads, attn_head_size):
"""
Splits hidden dim into attn_head_size and num_attention_heads
"""
# tensor: [bs, seq_len, hidden_size]
new_shape = tensor.size()[:-1] + (num_attention_heads, attn_head_size)
# -> [bs, seq_len, num_attention_heads, attn_head_size]
tensor = tensor.view(new_shape)
# -> [bs, num_attention_heads, seq_len, attn_head_size]
tensor = tensor.permute(0, 2, 1, 3)
return tensor
@classmethod
def _merge_heads(cls, tensor, num_attention_heads, attn_head_size):
"""
Merges attn_head_size dim and num_attn_heads dim into hidden dim
"""
# tensor [bs, num_attention_heads, seq_len, attn_head_size]
tensor = tensor.permute(0, 2, 1, 3).contiguous()
# -> [bs, seq_len, num_attention_heads, attn_head_size]
tensor = tensor.view(tensor.size(0), tensor.size(1), num_attention_heads * attn_head_size)
# -> [bs, seq_len, hidden_size]
return tensor
def _attn(self, query, key, value, attention_mask=None):
# q, k, v: [bs, num_attention_heads, seq_len, attn_head_size]
# compute causal mask from causal mask buffer
batch_size, num_attention_heads, query_length, attn_head_size = query.size()
key_length = key.size(-2)
query = query.view(batch_size * num_attention_heads, query_length, attn_head_size)
key = key.view(batch_size * num_attention_heads, key_length, attn_head_size)
# [batch_size * num_heads, q_length, kv_length]
attn_scores = torch.zeros(
batch_size * num_attention_heads,
query_length,
key_length,
dtype=query.dtype,
device=key.device,
)
attention_scores = torch.baddbmm(
attn_scores,
query,
key.transpose(1, 2),
beta=1.0,
alpha=1.0 / self.norm_factor,
)
attention_scores = attention_scores.view(batch_size, num_attention_heads, query_length, -1)
if attention_mask is not None: # no matter the length, we just slice it
causal_mask = attention_mask[:, :, :, : key.shape[-2]]
attention_scores = attention_scores + causal_mask
attn_weights = nn.functional.softmax(attention_scores, dim=-1)
attn_weights = self.attention_dropout(attn_weights)
attn_weights = attn_weights.to(value.dtype)
attn_output = torch.matmul(attn_weights, value)
return attn_output, attn_weights
def bias_dropout_add(x: Tensor, bias: Tensor, residual: Optional[Tensor], prob: float, training: bool) -> Tensor:
"""add bias to x, apply dropout and residual connection
Args:
x (Tensor): main path of output
bias (Tensor): None or attn_bias of the last attention layer
residual (Optional[Tensor]): residual value
prob (float): dropout probability
training (bool): whether in training mode or not
Returns:
Tensor: dropout(x + bias) + residual
"""
if bias is not None:
x = x + bias
out = torch.nn.functional.dropout(x, p=prob, training=training)
if residual is not None:
out = residual + out
return out
| GPTNeoXJapaneseAttention |
python | pytorch__pytorch | torch/utils/data/dataset.py | {
"start": 12909,
"end": 14673
} | class ____(Dataset[_T_co]):
r"""Dataset as a concatenation of multiple datasets.
This class is useful to assemble different existing datasets.
Args:
datasets (sequence): List of datasets to be concatenated
"""
datasets: list[Dataset[_T_co]]
cumulative_sizes: list[int]
@staticmethod
def cumsum(sequence):
r, s = [], 0
for e in sequence:
l = len(e)
r.append(l + s)
s += l
return r
def __init__(self, datasets: Iterable[Dataset]) -> None:
super().__init__()
self.datasets = list(datasets)
if len(self.datasets) == 0:
raise AssertionError("datasets should not be an empty iterable")
for d in self.datasets:
if isinstance(d, IterableDataset):
raise AssertionError("ConcatDataset does not support IterableDataset")
self.cumulative_sizes = self.cumsum(self.datasets)
def __len__(self) -> int:
return self.cumulative_sizes[-1]
def __getitem__(self, idx):
if idx < 0:
if -idx > len(self):
raise ValueError(
"absolute value of index should not exceed dataset length"
)
idx = len(self) + idx
dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
if dataset_idx == 0:
sample_idx = idx
else:
sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
return self.datasets[dataset_idx][sample_idx]
@property
@deprecated(
"`cummulative_sizes` attribute is renamed to `cumulative_sizes`",
category=FutureWarning,
)
def cummulative_sizes(self):
return self.cumulative_sizes
| ConcatDataset |
python | chroma-core__chroma | chromadb/errors.py | {
"start": 2093,
"end": 2296
} | class ____(ChromaError):
@overrides
def code(self) -> int:
return 409
@classmethod
@overrides
def name(cls) -> str:
return "UniqueConstraintError"
| UniqueConstraintError |
python | ray-project__ray | release/ray_release/exception.py | {
"start": 3496,
"end": 3578
} | class ____(CommandError):
exit_code = ExitCode.PREPARE_ERROR
| PrepareCommandError |
python | Textualize__textual | tests/select/test_value.py | {
"start": 239,
"end": 3899
} | class ____(App[None]):
def __init__(self, initial_value=Select.BLANK):
self.initial_value = initial_value
super().__init__()
def compose(self):
yield Select[int](SELECT_OPTIONS, value=self.initial_value)
async def test_initial_value_is_validated():
"""The initial value should be respected if it is a legal value.
Regression test for https://github.com/Textualize/textual/discussions/3037.
"""
app = SelectApp(1)
async with app.run_test():
assert app.query_one(Select).value == 1
async def test_value_unknown_option_raises_error():
"""Setting the value to an unknown value raises an error."""
app = SelectApp()
async with app.run_test():
with pytest.raises(InvalidSelectValueError):
app.query_one(Select).value = "french fries"
async def test_initial_value_inside_compose_is_validated():
"""Setting the value to an unknown value inside compose should raise an error."""
class SelectApp(App[None]):
def compose(self):
s = Select[int](SELECT_OPTIONS)
s.value = 73
yield s
app = SelectApp()
with pytest.raises(InvalidSelectValueError):
async with app.run_test():
pass
async def test_value_assign_to_blank():
"""Setting the value to BLANK should work with default `allow_blank` value."""
app = SelectApp(1)
async with app.run_test():
select = app.query_one(Select)
assert select.value == 1
select.value = Select.BLANK
assert select.is_blank()
async def test_default_value_is_picked_if_allow_blank_is_false():
"""The initial value should be picked by default if allow_blank=False."""
class SelectApp(App[None]):
def compose(self):
yield Select[int](SELECT_OPTIONS, allow_blank=False)
app = SelectApp()
async with app.run_test():
assert app.query_one(Select).value == 0
async def test_initial_value_is_picked_if_allow_blank_is_false():
"""The initial value should be respected even if allow_blank=False."""
class SelectApp(App[None]):
def compose(self):
yield Select[int](SELECT_OPTIONS, value=2, allow_blank=False)
app = SelectApp()
async with app.run_test():
assert app.query_one(Select).value == 2
async def test_set_value_to_blank_with_allow_blank_false():
"""Setting the value to BLANK with allow_blank=False should raise an error."""
class SelectApp(App[None]):
def compose(self):
yield Select[int](SELECT_OPTIONS, allow_blank=False)
app = SelectApp()
async with app.run_test():
with pytest.raises(InvalidSelectValueError):
app.query_one(Select).value = Select.BLANK
async def test_set_options_resets_value_to_blank():
"""Resetting the options should reset the value to BLANK."""
class SelectApp(App[None]):
def compose(self):
yield Select[int](SELECT_OPTIONS, value=2)
app = SelectApp()
async with app.run_test():
select = app.query_one(Select)
assert select.value == 2
select.set_options(MORE_OPTIONS)
assert select.is_blank()
async def test_set_options_resets_value_if_allow_blank_is_false():
"""Resetting the options should reset the value if allow_blank=False."""
class SelectApp(App[None]):
def compose(self):
yield Select[int](SELECT_OPTIONS, allow_blank=False)
app = SelectApp()
async with app.run_test():
select = app.query_one(Select)
assert select.value == 0
select.set_options(MORE_OPTIONS)
assert select.value > 2
| SelectApp |
python | scipy__scipy | benchmarks/benchmarks/sparse_csgraph_matching.py | {
"start": 2103,
"end": 3078
} | class ____(Benchmark):
sizes = range(100, 401, 100)
param_names = ['shapes', 'input_type']
params = [
[(i, i) for i in sizes] + [(i, 2 * i) for i in sizes],
['random_uniform', 'random_uniform_sparse', 'random_uniform_integer',
'random_geometric', 'random_two_cost', 'machol_wien']
]
def setup(self, shape, input_type):
rng = np.random.default_rng(42)
input_func = {'random_uniform': random_uniform,
'random_uniform_sparse': random_uniform_sparse,
'random_uniform_integer': random_uniform_integer,
'random_geometric': random_geometric,
'random_two_cost': random_two_cost,
'machol_wien': machol_wien}[input_type]
self.biadjacency_matrix = input_func(shape, rng)
def time_evaluation(self, *args):
min_weight_full_bipartite_matching(self.biadjacency_matrix)
| MinWeightFullBipartiteMatching |
python | charliermarsh__ruff | crates/ruff_linter/resources/test/fixtures/flake8_bugbear/B017_1.py | {
"start": 183,
"end": 560
} | class ____(unittest.TestCase):
def call_form_raises(self) -> None:
self.assertRaises(Exception, something_else)
self.assertRaises(BaseException, something_else)
def test_pytest_call_form() -> None:
pytest.raises(Exception, something_else)
pytest.raises(BaseException, something_else)
pytest.raises(Exception, something_else, match="hello")
| Foobar |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-gridly/source_gridly/source.py | {
"start": 568,
"end": 2712
} | class ____(HttpStream, ABC):
url_base = Helpers.base_url
primary_key = "id"
current_page = 1
limit = 100
def __init__(self, view_id: str, view_name: str, schema: Dict[str, Any], **kwargs):
super().__init__(**kwargs)
self.view_id = view_id
self.view_name = view_name
self.schema = schema
@property
def name(self):
return self.view_name
def get_json_schema(self) -> Mapping[str, Any]:
return self.schema
def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
total_count = response.headers.get("x-total-count")
total_page = math.ceil(int(total_count) / self.limit)
self.logger.info("Total page: " + str(total_page))
if self.current_page >= total_page:
self.logger.info("No more page to load " + str(self.current_page))
return None
page_token = {"offset": self.current_page * self.limit, "limit": self.limit}
self.current_page += 1
return page_token
def request_params(
self, stream_state: Mapping[str, Any], stream_slice: Mapping[str, any] = None, next_page_token: Mapping[str, Any] = None
) -> MutableMapping[str, Any]:
if next_page_token is None:
return {}
offset = next_page_token.get("offset")
limit = next_page_token.get("limit")
page = '{"offset":' + str(offset) + ',"limit":' + str(limit) + "}"
self.logger.info("Fetching page: " + page)
return {"page": page}
def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
records = response.json()
if isinstance(records, list):
for record in records:
yield Helpers.transform_record(record, self.schema)
else:
raise Exception(f"Unsupported type of response data for stream {self.name}")
def path(
self, stream_state: Mapping[str, Any] = None, stream_slice: Mapping[str, Any] = None, next_page_token: Mapping[str, Any] = None
) -> str:
return f"views/{self.view_id}/records"
# Source
| GridlyStream |
python | streamlit__streamlit | lib/tests/streamlit/runtime/runtime_test_case.py | {
"start": 1732,
"end": 4104
} | class ____(SessionManager):
"""A MockSessionManager used for runtime tests.
This is done so that our runtime tests don't rely on a specific SessionManager
implementation.
"""
def __init__(
self,
session_storage: SessionStorage,
uploaded_file_manager: UploadedFileManager,
script_cache: ScriptCache,
message_enqueued_callback: Callable[[], None] | None,
) -> None:
self._uploaded_file_mgr = uploaded_file_manager
self._script_cache = script_cache
self._message_enqueued_callback = message_enqueued_callback
# Mapping of AppSession.id -> SessionInfo.
self._session_info_by_id: dict[str, SessionInfo] = {}
def connect_session(
self,
client: SessionClient,
script_data: ScriptData,
user_info: dict[str, str | None],
existing_session_id: str | None = None,
session_id_override: str | None = None,
) -> str:
with (
mock.patch(
"streamlit.runtime.scriptrunner.ScriptRunner", new=mock.MagicMock()
),
mock.patch.object(
PagesManager, "get_pages", mock.MagicMock(return_value={})
),
):
session = AppSession(
script_data=script_data,
uploaded_file_manager=self._uploaded_file_mgr,
script_cache=self._script_cache,
message_enqueued_callback=self._message_enqueued_callback,
user_info=user_info,
session_id_override=session_id_override,
)
assert session.id not in self._session_info_by_id, (
f"session.id '{session.id}' registered multiple times!"
)
self._session_info_by_id[session.id] = SessionInfo(client, session)
return session.id
def close_session(self, session_id: str) -> None:
if session_id in self._session_info_by_id:
session_info = self._session_info_by_id[session_id]
del self._session_info_by_id[session_id]
session_info.session.shutdown()
def get_session_info(self, session_id: str) -> SessionInfo | None:
return self._session_info_by_id.get(session_id, None)
def list_sessions(self) -> list[SessionInfo]:
return list(self._session_info_by_id.values())
| MockSessionManager |
python | matplotlib__matplotlib | galleries/examples/user_interfaces/toolmanager_sgskip.py | {
"start": 334,
"end": 1297
} | class ____(ToolBase):
"""List all the tools controlled by the `ToolManager`."""
default_keymap = 'm' # keyboard shortcut
description = 'List Tools'
def trigger(self, *args, **kwargs):
print('_' * 80)
fmt_tool = "{:12} {:45} {}".format
print(fmt_tool('Name (id)', 'Tool description', 'Keymap'))
print('-' * 80)
tools = self.toolmanager.tools
for name in sorted(tools):
if not tools[name].description:
continue
keys = ', '.join(sorted(self.toolmanager.get_tool_keymap(name)))
print(fmt_tool(name, tools[name].description, keys))
print('_' * 80)
fmt_active_toggle = "{!s:12} {!s:45}".format
print("Active Toggle tools")
print(fmt_active_toggle("Group", "Active"))
print('-' * 80)
for group, active in self.toolmanager.active_toggle.items():
print(fmt_active_toggle(group, active))
| ListTools |
python | airbytehq__airbyte | airbyte-ci/connectors/pipelines/tests/test_build_image/test_manifest_only_connectors.py | {
"start": 384,
"end": 5236
} | class ____:
@pytest.fixture
def all_platforms(self):
return BUILD_PLATFORMS
@pytest.fixture
def test_context(self, mocker):
return mocker.Mock(secrets_to_mask=[], targeted_platforms=BUILD_PLATFORMS)
@pytest.fixture
def test_context_with_connector_with_base_image(self, test_context):
test_context.connector.metadata = {
"connectorBuildOptions": {"baseImage": "xyz"},
"dockerImageTag": "0.0.0",
"dockerRepository": "test",
}
return test_context
@pytest.fixture
def mock_connector_directory(self, mocker):
mock_components_file = mocker.Mock()
mock_connector_dir = mocker.Mock()
mock_connector_dir.file.return_value = mock_components_file
return mock_connector_dir, mock_components_file
def _assert_file_not_handled(self, container_mock, file_path):
"""Assert that a specified file_path was not handled by the container_mock"""
assert not any(file_path in call.args[0] for call in container_mock.with_file.call_args_list)
async def test__run_using_base_image_with_mocks(self, mocker, test_context_with_connector_with_base_image, all_platforms):
container_built_from_base = mock_container()
container_built_from_base.with_label.return_value = container_built_from_base
mocker.patch.object(Path, "exists", return_value=True) # Mock Path.exists() to always return True
mocker.patch.object(
manifest_only_connectors.BuildConnectorImages,
"_build_from_base_image",
mocker.AsyncMock(return_value=container_built_from_base),
)
mocker.patch.object(manifest_only_connectors.BuildConnectorImages, "get_step_result", mocker.AsyncMock())
step = manifest_only_connectors.BuildConnectorImages(test_context_with_connector_with_base_image)
step_result = await step._run()
assert step._build_from_base_image.call_count == len(all_platforms)
container_built_from_base.with_exec.assert_called_with(["spec"], use_entrypoint=True)
container_built_from_base.with_label.assert_any_call(
"io.airbyte.version", test_context_with_connector_with_base_image.connector.metadata["dockerImageTag"]
)
container_built_from_base.with_label.assert_any_call(
"io.airbyte.name", test_context_with_connector_with_base_image.connector.metadata["dockerRepository"]
)
assert step_result.status is StepStatus.SUCCESS
for platform in all_platforms:
assert step_result.output[platform] == container_built_from_base
@pytest.mark.parametrize("components_file_exists", [True, False])
async def test__run_using_base_image_with_components_file(
self, mocker, all_platforms, test_context_with_connector_with_base_image, mock_connector_directory, components_file_exists
):
mock_connector_dir, mock_components_file = mock_connector_directory
container_built_from_base = mock_container()
container_built_from_base.with_label.return_value = container_built_from_base
container_built_from_base.with_file.return_value = container_built_from_base
test_context_with_connector_with_base_image.get_connector_dir = mocker.AsyncMock(return_value=mock_connector_dir)
test_context_with_connector_with_base_image.connector.manifest_only_components_path.exists = mocker.Mock(
return_value=components_file_exists
)
mocker.patch.object(
manifest_only_connectors.BuildConnectorImages,
"_get_base_container",
return_value=container_built_from_base,
)
mocker.patch.object(
manifest_only_connectors.BuildConnectorImages,
"get_image_user",
return_value="airbyte",
)
mocker.patch.object(
build_customization,
"apply_airbyte_entrypoint",
return_value=container_built_from_base,
)
mocker.patch.object(
manifest_only_connectors,
"apply_python_development_overrides",
side_effect=mocker.AsyncMock(return_value=container_built_from_base),
)
step = manifest_only_connectors.BuildConnectorImages(test_context_with_connector_with_base_image)
await step._build_connector(all_platforms[0], container_built_from_base)
if components_file_exists:
container_built_from_base.with_file.assert_any_call(
"source_declarative_manifest/components.py", mock_components_file, owner="airbyte"
)
mock_connector_dir.file.assert_any_call("components.py")
else:
self._assert_file_not_handled(container_built_from_base, "source_declarative_manifest/components.py")
| TestBuildConnectorImage |
python | getsentry__sentry | tests/sentry/users/api/serializers/test_user_identity_config.py | {
"start": 509,
"end": 4715
} | class ____(TestCase):
def setUp(self) -> None:
self.user = self.create_user()
self.idp = self.create_identity_provider(type="github", external_id="c3r1zyq9")
def test_user_social_auth(self) -> None:
identity = UserSocialAuth.objects.create(user=self.user, provider="github", uid="uf4romdj")
view = UserIdentityConfig.wrap(identity, Status.CAN_DISCONNECT)
result = serialize(view)
assert result == {
"category": "social-identity",
"id": str(identity.id),
"provider": {"key": "github", "name": "GitHub"},
"name": "uf4romdj",
"status": "can_disconnect",
"isLogin": False,
"organization": None,
"dateAdded": None,
"dateVerified": None,
"dateSynced": None,
}
def test_global_identity(self) -> None:
identity = Identity.objects.create(idp=self.idp, user=self.user, external_id="bk1zbu82")
identity.date_verified += timedelta(hours=1)
identity.save()
view = UserIdentityConfig.wrap(identity, Status.CAN_DISCONNECT)
result = serialize(view)
assert result == {
"category": "global-identity",
"id": str(identity.id),
"provider": {"key": "github", "name": "GitHub"},
"name": "bk1zbu82",
"status": "can_disconnect",
"isLogin": False,
"organization": None,
"dateAdded": identity.date_added,
"dateVerified": identity.date_verified,
"dateSynced": None,
}
@mock.patch("sentry.users.api.serializers.user_identity_config.is_login_provider")
def test_global_login_identity(self, mock_is_login_provider: mock.MagicMock) -> None:
mock_is_login_provider.return_value = True
identity = Identity.objects.create(idp=self.idp, user=self.user, external_id="m9p8bzua")
identity.date_verified += timedelta(hours=1)
identity.save()
view = UserIdentityConfig.wrap(identity, Status.NEEDED_FOR_GLOBAL_AUTH)
result = serialize(view)
assert result == {
"category": "global-identity",
"id": str(identity.id),
"provider": {"key": "github", "name": "GitHub"},
"name": "m9p8bzua",
"status": "needed_for_global_auth",
"isLogin": True,
"organization": None,
"dateAdded": identity.date_added,
"dateVerified": identity.date_verified,
"dateSynced": None,
}
def test_auth_identity(self) -> None:
org = self.create_organization()
provider = AuthProvider.objects.create(organization_id=org.id, provider="dummy")
identity = AuthIdentity.objects.create(
user=self.user, auth_provider=provider, ident="hhyjzna1"
)
identity.last_verified += timedelta(hours=1)
identity.last_synced += timedelta(hours=2)
identity.save()
view = UserIdentityConfig.wrap(identity, Status.NEEDED_FOR_ORG_AUTH)
result = serialize(view)
org_serial = result.pop("organization")
assert org_serial["id"] == str(org.id)
assert org_serial["slug"] == org.slug
assert result == {
"category": "org-identity",
"id": str(identity.id),
"provider": {"key": "dummy", "name": "Dummy"},
"name": "hhyjzna1",
"status": "needed_for_org_auth",
"isLogin": True,
"dateAdded": identity.date_added,
"dateVerified": identity.last_verified,
"dateSynced": identity.last_synced,
}
def test_global_identity_with_integration_provider(self) -> None:
integration_provider = self.create_identity_provider(type="msteams", external_id="ao645i51")
identity = Identity.objects.create(
idp=integration_provider, user=self.user, external_id="5ppj2dip"
)
view = UserIdentityConfig.wrap(identity, Status.CAN_DISCONNECT)
result = serialize(view)
assert result["provider"] == {"key": "msteams", "name": "Microsoft Teams"}
| UserIdentityConfigSerializerTest |
python | tensorflow__tensorflow | tensorflow/python/kernel_tests/array_ops/denormal_test.py | {
"start": 957,
"end": 2930
} | class ____(test.TestCase):
def testPythonHasDenormals(self):
"""Non-tf numpy code should treat denormals correctly."""
for dtype in np.float32, np.float64:
tiny = np.finfo(dtype).tiny
self.assertEqual(tiny, tiny / 16 * 16)
def _flushDenormalsTest(self, dtypes):
if (platform.machine() == "ppc64le" or platform.machine() == "s390x" or
platform.machine() == "aarch64"):
# Disabled denormal_test on power/s390x/aarch64 platform
# Check relevant discussion -
# https://github.com/tensorflow/tensorflow/issues/11902
return
for dtype in dtypes:
tiny = np.finfo(dtype).tiny
# Small shape to test main thread, large shape to test thread pool
for shape in (), (1 << 20,):
flush = 0.1 * constant_op.constant(tiny, shape=shape)
self.assertAllEqual(self.evaluate(flush), np.zeros(shape))
# Make sure the flags don't leak out
self.testPythonHasDenormals()
@test_util.run_in_graph_and_eager_modes(use_gpu=False)
def testFlushDenormalsCPU(self):
# On CPUs, the processor flags flush for both single and double precision.
self._flushDenormalsTest(dtypes=(np.float32, np.float64))
@test_util.run_in_graph_and_eager_modes(use_gpu=True)
def testFlushDenormalsGPU(self):
# On GPUs, only single precision can flush to zero.
self._flushDenormalsTest(dtypes=(np.float32,))
if __name__ == "__main__":
# When eager_op_as_function mode is enabled xla auto-clustering kicks in.
# By default xla does not enable flush-to-zero semantics in the GPU backend.
# This env flag has to be set before the test is setup. Setting it using the
# decorator does not seem to propagate the flag to all required locations.
original_xla_flags = os.environ.get("XLA_FLAGS")
new_xla_flags = "--xla_gpu_ftz=true"
if original_xla_flags:
new_xla_flags = new_xla_flags + " " + original_xla_flags
os.environ["XLA_FLAGS"] = new_xla_flags
test.main()
| DenormalTest |
python | pandas-dev__pandas | pandas/tests/extension/test_common.py | {
"start": 2205,
"end": 3071
} | class ____(pd.arrays.StringArray):
"""Extend StringArray to capture arguments to __getitem__"""
def __getitem__(self, item):
self.last_item_arg = item
return super().__getitem__(item)
def test_ellipsis_index():
# GH#42430 1D slices over extension types turn into N-dimensional slices
# over ExtensionArrays
dtype = pd.StringDtype()
df = pd.DataFrame(
{
"col1": CapturingStringArray(
np.array(["hello", "world"], dtype=object), dtype=dtype
)
}
)
_ = df.iloc[:1]
# String comparison because there's no native way to compare slices.
# Before the fix for GH#42430, last_item_arg would get set to the 2D slice
# (Ellipsis, slice(None, 1, None))
out = df["col1"]._values.last_item_arg
assert str(out) == "slice(None, 1, None)"
| CapturingStringArray |
python | encode__django-rest-framework | tests/test_fields.py | {
"start": 81627,
"end": 82079
} | class ____(FieldValues):
"""
Values for `DictField` with no `child` argument.
"""
valid_inputs = [
({'a': 1, 'b': [4, 5, 6], 1: 123}, {'a': 1, 'b': [4, 5, 6], '1': 123}),
]
invalid_inputs = [
('not a dict', ['Expected a dictionary of items but got type "str".']),
]
outputs = [
({'a': 1, 'b': [4, 5, 6]}, {'a': 1, 'b': [4, 5, 6]}),
]
field = serializers.DictField()
| TestUnvalidatedDictField |
python | huggingface__transformers | src/transformers/models/deepseek_vl/modular_deepseek_vl.py | {
"start": 6976,
"end": 12909
} | class ____(ProcessorMixin):
r"""
Constructs a DeepseekVL processor which wraps a DeepseekVL Image Processor and a Llama tokenizer into a single processor.
[`DeepseekVLProcessor`] offers all the functionalities of [`DeepseekVLImageProcessor`] and [`LlamaTokenizerFast`]. See the
[`~DeepseekVLProcessor.__call__`] and [`~DeepseekVLProcessor.decode`] for more information.
Args:
image_processor ([`DeepseekVLImageProcessor`]):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`]):
The tokenizer is a required input.
chat_template (`str`, *optional*):
A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string.
num_image_tokens (`int`, *optional*, defaults to 576):
The number of special image tokens used as placeholders for visual content in text sequences.
"""
def __init__(
self,
image_processor,
tokenizer,
chat_template=None,
num_image_tokens=576,
):
self.image_token = tokenizer.image_token
self.num_image_tokens = num_image_tokens
super().__init__(image_processor, tokenizer, chat_template=chat_template)
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]] = None,
images: Optional[ImageInput] = None,
**kwargs: Unpack[DeepseekVLProcessorKwargs],
) -> BatchFeature:
"""
Main method to prepare for the model one or several sequences(s) and image(s). This method forwards the `text`
and `kwargs` arguments to LlamaTokenizerFast's [`~LlamaTokenizerFast.__call__`] if `text` is not `None` to encode
the text. To prepare the image(s), this method forwards the `images` and `kwargs` arguments to
        DeepseekVLImageProcessor's [`~DeepseekVLImageProcessor.__call__`] if `images` is not `None`. Please refer to the docstring
of the above two methods for more information.
Args:
text (`str`, `List[str]`, `List[List[str]]`):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
tensor. Both channels-first and channels-last formats are supported.
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors of a particular framework. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return NumPy `np.ndarray` objects.
Returns:
[`BatchFeature`]: A [`BatchFeature`] with the following fields:
- **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`.
- **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
`return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
`None`).
- **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
"""
output_kwargs = self._merge_kwargs(
DeepseekVLProcessorKwargs, tokenizer_init_kwargs=self.tokenizer.init_kwargs, **kwargs
)
if text is None and images is None:
raise ValueError("You must specify either text or images.")
if text is not None:
if isinstance(text, str):
text = [text]
elif not (isinstance(text, (list, tuple)) and all(isinstance(t, str) for t in text)):
raise ValueError("Invalid input text. Please provide a string, or a list of strings")
prompt_strings = []
one_img_tokens = self.image_token * self.num_image_tokens
for prompt in text:
prompt = prompt.replace(self.image_token, one_img_tokens)
prompt_strings.append(prompt)
data = self.tokenizer(prompt_strings, **output_kwargs["text_kwargs"])
# process images if pixel_values are provided
if images is not None:
data["pixel_values"] = self.image_processor(images, **output_kwargs["images_kwargs"])["pixel_values"]
return BatchFeature(data=data)
def batch_decode(self, *args, **kwargs):
"""
This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
refer to the docstring of this method for more information.
"""
return self.tokenizer.batch_decode(*args, **kwargs)
def decode(self, *args, **kwargs):
"""
This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
the docstring of this method for more information.
"""
return self.tokenizer.decode(*args, **kwargs)
@property
def model_input_names(self):
tokenizer_input_names = self.tokenizer.model_input_names
image_processor_input_names = self.image_processor.model_input_names
return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
__all__ = [
"DeepseekVLConfig",
"DeepseekVLPreTrainedModel",
"DeepseekVLModel",
"DeepseekVLForConditionalGeneration",
"DeepseekVLImageProcessor",
"DeepseekVLImageProcessorFast",
"DeepseekVLProcessor",
]
| DeepseekVLProcessor |
python | dagster-io__dagster | python_modules/dagster-graphql/dagster_graphql/schema/logs/events.py | {
"start": 12335,
"end": 13735
} | class ____(graphene.ObjectType, AssetEventMixin):
class Meta:
interfaces = (GrapheneMessageEvent, GrapheneStepEvent, GrapheneDisplayableEvent)
name = "MaterializationEvent"
assetLineage = non_null_list(GrapheneAssetLineageInfo)
def __init__(self, event: EventLogEntry, assetLineage=None):
self._asset_lineage = check.opt_list_param(assetLineage, "assetLineage", AssetLineageInfo)
dagster_event = check.not_none(event.dagster_event)
materialization = dagster_event.step_materialization_data.materialization
super().__init__(**_construct_asset_event_metadata_params(event, materialization))
AssetEventMixin.__init__(
self,
event=event,
metadata=materialization,
)
def resolve_assetLineage(self, _graphene_info: ResolveInfo):
return [
GrapheneAssetLineageInfo(
assetKey=lineage_info.asset_key,
partitions=lineage_info.partitions,
)
for lineage_info in self._asset_lineage
]
GrapheneAssetMaterializationFailureType = graphene.Enum.from_enum(
AssetMaterializationFailureType, name="AssetMaterializationFailureType"
)
GrapheneAssetMaterializationFailureReason = graphene.Enum.from_enum(
AssetMaterializationFailureReason, name="AssetMaterializationFailureReason"
)
| GrapheneMaterializationEvent |
python | scipy__scipy | benchmarks/benchmarks/go_benchmark_functions/go_funcs_S.py | {
"start": 12434,
"end": 13569
} | class ____(Benchmark):
r"""
Schwefel 6 objective function.
This class defines the Schwefel 6 [1]_ global optimization problem. This
is a unimodal minimization problem defined as follows:
.. math::
f_{\text{Schwefel06}}(x) = \max(\lvert x_1 + 2x_2 - 7 \rvert,
\lvert 2x_1 + x_2 - 5 \rvert)
with :math:`x_i \in [-100, 100]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 3]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-100.0] * self.N,
[100.0] * self.N))
self.custom_bounds = ([-10.0, 10.0], [-10.0, 10.0])
self.global_optimum = [[1.0, 3.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return max(abs(x[0] + 2 * x[1] - 7), abs(2 * x[0] + x[1] - 5))
| Schwefel06 |
python | tensorflow__tensorflow | tensorflow/python/ops/resource_variable_ops.py | {
"start": 97005,
"end": 105421
} | class ____(BaseResourceVariable):
"""Represents a future for a read of a variable.
Pretends to be the tensor if anyone looks.
"""
def __init__(self, handle, dtype, shape, in_graph_mode, parent_op, unique_id):
if isinstance(handle, ops.EagerTensor):
handle_name = ""
else:
handle_name = handle.name
# Only create a graph_element if we're in session.run-land as only
# session.run requires a preexisting tensor to evaluate. Otherwise we can
# avoid accidentally reading the variable.
if context.executing_eagerly() or ops.inside_function():
graph_element = None
else:
with ops.control_dependencies([parent_op]):
graph_element = gen_resource_variable_ops.read_variable_op(
handle, dtype)
_maybe_set_handle_data(dtype, handle, graph_element)
super(_UnreadVariable, self).__init__(
handle=handle,
shape=shape,
handle_name=handle_name,
unique_id=unique_id,
dtype=dtype,
graph_element=graph_element)
self._parent_op = parent_op
@property
def name(self):
if self._in_graph_mode:
return self._parent_op.name
else:
return "UnreadVariable"
def value(self):
return self._read_variable_op()
def read_value(self):
return self._read_variable_op()
def _read_variable_op(self):
with ops.control_dependencies([self._parent_op]):
result = gen_resource_variable_ops.read_variable_op(
self._handle, self._dtype)
_maybe_set_handle_data(self._dtype, self._handle, result)
return result
def assign_sub(self, delta, use_locking=None, name=None, read_value=True):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).assign_sub(delta, use_locking, name,
read_value)
def assign_add(self, delta, use_locking=None, name=None, read_value=True):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).assign_add(delta, use_locking, name,
read_value)
def assign(self, value, use_locking=None, name=None, read_value=True):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).assign(value, use_locking, name,
read_value)
def scatter_sub(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_sub(sparse_delta, use_locking,
name)
def scatter_add(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_add(sparse_delta, use_locking,
name)
def scatter_max(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_max(sparse_delta, use_locking,
name)
def scatter_min(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_min(sparse_delta, use_locking,
name)
def scatter_mul(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_mul(sparse_delta, use_locking,
name)
def scatter_div(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_div(sparse_delta, use_locking,
name)
def scatter_update(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable,
self).scatter_update(sparse_delta, use_locking, name)
def batch_scatter_update(self, sparse_delta, use_locking=False, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable,
self).batch_scatter_update(sparse_delta, use_locking, name)
def scatter_nd_sub(self, indices, updates, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_nd_sub(indices, updates, name)
def scatter_nd_add(self, indices, updates, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_nd_add(indices, updates, name)
def scatter_nd_update(self, indices, updates, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable,
self).scatter_nd_update(indices, updates, name)
def scatter_nd_max(self, indices, updates, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_nd_max(indices, updates, name)
def scatter_nd_min(self, indices, updates, name=None):
with ops.control_dependencies([self._parent_op]):
return super(_UnreadVariable, self).scatter_nd_min(indices, updates, name)
@property
def op(self) -> ops.Operation:
"""The op for this variable."""
return self._parent_op
@ops.RegisterGradient("ReadVariableOp")
def _ReadGrad(_, grad):
"""Gradient for read op."""
return grad
def variable_shape(handle, out_type=None):
"""Returns the shape of the variable from the handle.
If the output shape dtype is not specified, it will be set to int64 if
tf_shape_default_int64 is enabled, otherwise it will be set to int32.
Args:
handle: The handle of the variable.
out_type: The dtype of the output shape.
Returns:
The shape of the variable.
"""
if out_type is None:
if flags.config().tf_shape_default_int64.value():
out_type = dtypes.int64
else:
out_type = dtypes.int32
handle_data = get_eager_safe_handle_data(handle)
if handle_data is None or not handle_data.is_set:
return gen_resource_variable_ops.variable_shape(handle, out_type=out_type)
shape_proto = handle_data.shape_and_type[0].shape
if shape_proto.unknown_rank or any(x.size == -1 for x in shape_proto.dim):
return gen_resource_variable_ops.variable_shape(handle, out_type=out_type)
return constant_op.constant([x.size for x in shape_proto.dim], dtype=out_type)
@ops.RegisterGradient("ResourceGather")
def _GatherGrad(op, grad):
"""Gradient for gather op."""
# Build appropriately shaped IndexedSlices
handle = op.inputs[0]
indices = op.inputs[1]
params_shape = variable_shape(handle)
size = array_ops.expand_dims(array_ops.size(indices), 0)
values_shape = array_ops.concat([size, params_shape[1:]], 0)
values = array_ops.reshape(grad, values_shape)
indices = array_ops.reshape(indices, size)
return (indexed_slices.IndexedSlices(values, indices, params_shape), None)
@tf_export("__internal__.ops.is_resource_variable", v1=[])
def is_resource_variable(var):
""""Returns True if `var` is to be considered a ResourceVariable."""
return isinstance(var, BaseResourceVariable) or hasattr(
var, "_should_act_as_resource_variable")
def copy_to_graph_uninitialized(var):
"""Copies an existing variable to a new graph, with no initializer."""
# Like ResourceVariable.__deepcopy__, but does not set an initializer on the
# new variable.
# pylint: disable=protected-access
new_variable = UninitializedVariable(
trainable=var.trainable,
constraint=var._constraint,
shape=var.shape,
dtype=var.dtype,
name=var._shared_name,
synchronization=var.synchronization,
aggregation=var.aggregation,
extra_handle_data=var.handle)
new_variable._maybe_initialize_trackable()
# pylint: enable=protected-access
return new_variable
ops.NotDifferentiable("Assert")
ops.NotDifferentiable("VarIsInitializedOp")
ops.NotDifferentiable("VariableShape")
# TODO(b/246356867): This is the draft implementation. Currently VariableSpec is
# the only class using them. Move them to a separate file when necessary.
| _UnreadVariable |
python | django__django | tests/postgres_tests/__init__.py | {
"start": 719,
"end": 1391
} | class ____(TestCase):
@cached_property
def default_text_search_config(self):
with connection.cursor() as cursor:
cursor.execute("SHOW default_text_search_config")
row = cursor.fetchone()
return row[0] if row else None
def check_default_text_search_config(self):
if self.default_text_search_config != "pg_catalog.english":
self.skipTest("The default text search config is not 'english'.")
@unittest.skipUnless(connection.vendor == "postgresql", "PostgreSQL specific tests")
# To locate the widget's template.
@modify_settings(INSTALLED_APPS={"append": "django.contrib.postgres"})
| PostgreSQLTestCase |
python | streamlit__streamlit | lib/tests/streamlit/elements/button_group_test.py | {
"start": 3122,
"end": 4192
} | class ____:
def test_serialize(self):
option_indices = [5, 6, 7]
serde = _SingleSelectSerde[int](option_indices)
res = serde.serialize(6)
assert res == [1]
def test_serialize_raise_option_does_not_exist(self):
option_indices = [5, 6, 7]
serde = _SingleSelectSerde[int](option_indices)
with pytest.raises(StreamlitAPIException):
serde.serialize(8)
def test_deserialize(self):
option_indices = [5, 6, 7]
serde = _SingleSelectSerde[int](option_indices)
res = serde.deserialize([1])
assert res == 6
def test_deserialize_with_default_value(self):
option_indices = [5, 6, 7]
serde = _SingleSelectSerde[int](option_indices, default_value=[2])
res = serde.deserialize(None)
assert res == 7
def test_deserialize_raise_indexerror(self):
option_indices = [5, 6, 7]
serde = _SingleSelectSerde[int](option_indices)
with pytest.raises(IndexError):
serde.deserialize([3])
| TestSingleSelectSerde |
python | pydantic__pydantic | pydantic/v1/errors.py | {
"start": 16407,
"end": 16551
} | class ____(PydanticValueError):
code = 'payment_card_number.luhn_check'
msg_template = 'card number is not luhn valid'
| LuhnValidationError |
python | gevent__gevent | src/gevent/_socketcommon.py | {
"start": 4202,
"end": 14114
} | class ____(error): # pylint: disable=undefined-variable
def __init__(self):
super(cancel_wait_ex, self).__init__(
EBADF,
'File descriptor was closed in another greenlet')
def cancel_wait(watcher, error=cancel_wait_ex):
"""See :meth:`gevent.hub.Hub.cancel_wait`"""
get_hub().cancel_wait(watcher, error)
def gethostbyname(hostname):
"""
gethostbyname(host) -> address
Return the IP address (a string of the form '255.255.255.255') for a host.
.. seealso:: :doc:`/dns`
"""
return get_hub().resolver.gethostbyname(hostname)
def gethostbyname_ex(hostname):
"""
gethostbyname_ex(host) -> (name, aliaslist, addresslist)
Return the true host name, a list of aliases, and a list of IP addresses,
for a host. The host argument is a string giving a host name or IP number.
Resolve host and port into list of address info entries.
.. seealso:: :doc:`/dns`
"""
return get_hub().resolver.gethostbyname_ex(hostname)
def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
"""
Resolve host and port into list of address info entries.
Translate the host/port argument into a sequence of 5-tuples that contain
all the necessary arguments for creating a socket connected to that service.
host is a domain name, a string representation of an IPv4/v6 address or
None. port is a string service name such as 'http', a numeric port number or
None. By passing None as the value of host and port, you can pass NULL to
the underlying C API.
The family, type and proto arguments can be optionally specified in order to
narrow the list of addresses returned. Passing zero as a value for each of
these arguments selects the full range of results.
.. seealso:: :doc:`/dns`
"""
# Also, on Python 3, we need to translate into the special enums.
# Our lower-level resolvers, including the thread and blocking, which use _socket,
# function simply with integers.
addrlist = get_hub().resolver.getaddrinfo(host, port, family, type, proto, flags)
result = [
# pylint:disable=undefined-variable
(_intenum_converter(af, AddressFamily),
_intenum_converter(socktype, SocketKind),
proto, canonname, sa)
for af, socktype, proto, canonname, sa
in addrlist
]
return result
def _intenum_converter(value, enum_klass):
try:
return enum_klass(value)
except ValueError: # pragma: no cover
return value
def gethostbyaddr(ip_address):
"""
gethostbyaddr(ip_address) -> (name, aliaslist, addresslist)
Return the true host name, a list of aliases, and a list of IP addresses,
for a host. The host argument is a string giving a host name or IP number.
.. seealso:: :doc:`/dns`
"""
return get_hub().resolver.gethostbyaddr(ip_address)
def getnameinfo(sockaddr, flags):
"""
getnameinfo(sockaddr, flags) -> (host, port)
Get host and port for a sockaddr.
.. seealso:: :doc:`/dns`
"""
return get_hub().resolver.getnameinfo(sockaddr, flags)
def getfqdn(name=''):
"""Get fully qualified domain name from name.
An empty argument is interpreted as meaning the local host.
First the hostname returned by gethostbyaddr() is checked, then
possibly existing aliases. In case no FQDN is available, hostname
from gethostname() is returned.
.. versionchanged:: 23.7.0
The IPv6 generic address '::' now returns the result of
``gethostname``, like the IPv4 address '0.0.0.0'.
"""
# pylint: disable=undefined-variable
name = name.strip()
# IPv6 added in a late Python 3.10/3.11 patch release.
# https://github.com/python/cpython/issues/100374
if not name or name in ('0.0.0.0', '::'):
name = gethostname()
try:
hostname, aliases, _ = gethostbyaddr(name)
except error:
pass
else:
aliases.insert(0, hostname)
for name in aliases: # EWW! pylint:disable=redefined-argument-from-local
if isinstance(name, bytes):
if b'.' in name:
break
elif '.' in name:
break
else:
name = hostname
return name
def __send_chunk(socket, data_memory, flags, timeleft, end, timeout=_timeout_error):
"""
Send the complete contents of ``data_memory`` before returning.
This is the core loop around :meth:`send`.
:param timeleft: Either ``None`` if there is no timeout involved,
or a float indicating the timeout to use.
:param end: Either ``None`` if there is no timeout involved, or
a float giving the absolute end time.
:return: An updated value for ``timeleft`` (or None)
:raises timeout: If ``timeleft`` was given and elapsed while
sending this chunk.
"""
data_sent = 0
len_data_memory = len(data_memory)
started_timer = 0
while data_sent < len_data_memory:
chunk = data_memory[data_sent:]
if timeleft is None:
data_sent += socket.send(chunk, flags)
elif started_timer and timeleft <= 0:
# Check before sending to guarantee a check
# happens even if each chunk successfully sends its data
# (especially important for SSL sockets since they have large
# buffers). But only do this if we've actually tried to
# send something once to avoid spurious timeouts on non-blocking
# sockets.
raise timeout('timed out')
else:
started_timer = 1
data_sent += socket.send(chunk, flags, timeout=timeleft)
timeleft = end - time.time()
return timeleft
def _sendall(socket, data_memory, flags,
SOL_SOCKET=__socket__.SOL_SOCKET, # pylint:disable=no-member
SO_SNDBUF=__socket__.SO_SNDBUF): # pylint:disable=no-member
"""
Send the *data_memory* (which should be a memoryview)
using the gevent *socket*, performing well on PyPy.
"""
# On PyPy up through 5.10.0, both PyPy2 and PyPy3, subviews
# (slices) of a memoryview() object copy the underlying bytes the
# first time the builtin socket.send() method is called. On a
# non-blocking socket (that thus calls socket.send() many times)
# with a large input, this results in many repeated copies of an
# ever smaller string, depending on the networking buffering. For
# example, if each send() can process 1MB of a 50MB input, and we
# naively pass the entire remaining subview each time, we'd copy
# 49MB, 48MB, 47MB, etc, thus completely killing performance. To
# workaround this problem, we work in reasonable, fixed-size
# chunks. This results in a 10x improvement to bench_sendall.py,
# while having no measurable impact on CPython (since it doesn't
# copy at all the only extra overhead is a few python function
# calls, which is negligible for large inputs).
# On one macOS machine, PyPy3 5.10.1 produced ~ 67.53 MB/s before this change,
# and ~ 616.01 MB/s after.
# See https://bitbucket.org/pypy/pypy/issues/2091/non-blocking-socketsend-slow-gevent
# Too small of a chunk (the socket's buf size is usually too
# small) results in reduced perf due to *too many* calls to send and too many
# small copies. With a buffer of 143K (the default on my system), for
# example, bench_sendall.py yields ~264MB/s, while using 1MB yields
# ~653MB/s (matching CPython). 1MB is arbitrary and might be better
# chosen, say, to match a page size?
len_data_memory = len(data_memory)
if not len_data_memory:
# Don't try to send empty data at all, no point, and breaks ssl
# See issue 719
return 0
chunk_size = max(socket.getsockopt(SOL_SOCKET, SO_SNDBUF), 1024 * 1024)
data_sent = 0
end = None
timeleft = None
if socket.timeout is not None:
timeleft = socket.timeout
end = time.time() + timeleft
while data_sent < len_data_memory:
chunk_end = min(data_sent + chunk_size, len_data_memory)
chunk = data_memory[data_sent:chunk_end]
timeleft = __send_chunk(socket, chunk, flags, timeleft, end)
data_sent += len(chunk) # Guaranteed it sent the whole thing
# pylint:disable=no-member
_RESOLVABLE_FAMILIES = (__socket__.AF_INET,)
if __socket__.has_ipv6:
_RESOLVABLE_FAMILIES += (__socket__.AF_INET6,)
def _resolve_addr(sock, address):
# Internal method: resolve the AF_INET[6] address using
# getaddrinfo.
if sock.family not in _RESOLVABLE_FAMILIES or not isinstance(address, tuple):
return address
# address is (host, port) (ipv4) or (host, port, flowinfo, scopeid) (ipv6).
# If it's already resolved, no need to go through getaddrinfo() again.
# That can lose precision (e.g., on IPv6, it can lose scopeid). The standard library
# does this in socketmodule.c:setipaddr. (This is only part of the logic, the real
# thing is much more complex.)
try:
if __socket__.inet_pton(sock.family, address[0]):
return address
except AttributeError: # pragma: no cover
# inet_pton might not be available.
pass
except _SocketError:
# Not parseable, needs resolved.
pass
# We don't pass the port to getaddrinfo because the C
# socket module doesn't either (on some systems its
# illegal to do that without also passing socket type and
# protocol). Instead we join the port back at the end.
# See https://github.com/gevent/gevent/issues/1252
host, port = address[:2]
r = getaddrinfo(host, None, sock.family)
address = r[0][-1]
if len(address) == 2:
address = (address[0], port)
else:
address = (address[0], port, address[2], address[3])
return address
timeout_default = object()
| cancel_wait_ex |
python | doocs__leetcode | solution/1300-1399/1389.Create Target Array in the Given Order/Solution.py | {
"start": 0,
"end": 209
} | class ____:
def createTargetArray(self, nums: List[int], index: List[int]) -> List[int]:
target = []
for x, i in zip(nums, index):
target.insert(i, x)
return target
| Solution |
python | tensorflow__tensorflow | tensorflow/python/ops/image_ops_test.py | {
"start": 216995,
"end": 218408
} | class ____(test_util.TensorFlowTestCase):
@test_util.xla_allow_fallback(
"non_max_suppression with dynamic output shape unsupported.")
def testSelectFromThreeClustersWithSoftNMS(self):
boxes_np = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
[0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
scores_np = [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]
max_output_size_np = 6
iou_threshold_np = 0.5
score_threshold_np = 0.0
soft_nms_sigma_np = 0.5
boxes = constant_op.constant(boxes_np)
scores = constant_op.constant(scores_np)
max_output_size = constant_op.constant(max_output_size_np)
iou_threshold = constant_op.constant(iou_threshold_np)
score_threshold = constant_op.constant(score_threshold_np)
soft_nms_sigma = constant_op.constant(soft_nms_sigma_np)
selected_indices, selected_scores = \
image_ops.non_max_suppression_with_scores(
boxes,
scores,
max_output_size,
iou_threshold,
score_threshold,
soft_nms_sigma)
selected_indices, selected_scores = self.evaluate(
[selected_indices, selected_scores])
self.assertAllClose(selected_indices, [3, 0, 1, 5, 4, 2])
self.assertAllClose(selected_scores,
[0.95, 0.9, 0.384, 0.3, 0.256, 0.197],
rtol=1e-2, atol=1e-2)
| NonMaxSuppressionWithScoresTest |
python | tensorflow__tensorflow | tensorflow/python/kernel_tests/array_ops/constant_op_test.py | {
"start": 15806,
"end": 19616
} | class ____(test.TestCase):
def _Zeros(self, shape):
with self.cached_session():
ret = array_ops.zeros(shape)
self.assertEqual(shape, ret.get_shape())
return self.evaluate(ret)
def testConst(self):
self.assertTrue(
np.array_equal(self._Zeros([2, 3]), np.array([[0] * 3] * 2)))
def testScalar(self):
self.assertEqual(0, self._Zeros([]))
self.assertEqual(0, self._Zeros(()))
with self.cached_session():
scalar = array_ops.zeros(constant_op.constant([], dtype=dtypes_lib.int32))
self.assertEqual(0, self.evaluate(scalar))
def testDynamicSizes(self):
np_ans = np.array([[0] * 3] * 2)
with self.cached_session():
# Creates a tensor of 2 x 3.
d = array_ops.fill([2, 3], 12., name="fill")
# Constructs a tensor of zeros of the same dimensions as "d".
z = array_ops.zeros(array_ops.shape(d))
out = self.evaluate(z)
self.assertAllEqual(np_ans, out)
self.assertShapeEqual(np_ans, d)
self.assertShapeEqual(np_ans, z)
@test_util.run_deprecated_v1
def testDtype(self):
with self.cached_session():
d = array_ops.fill([2, 3], 12., name="fill")
self.assertEqual(d.get_shape(), [2, 3])
# Test default type for both constant size and dynamic size
z = array_ops.zeros([2, 3])
self.assertEqual(z.dtype, dtypes_lib.float32)
self.assertEqual([2, 3], z.get_shape())
self.assertAllEqual(z, np.zeros([2, 3]))
z = array_ops.zeros(array_ops.shape(d))
self.assertEqual(z.dtype, dtypes_lib.float32)
self.assertEqual([2, 3], z.get_shape())
self.assertAllEqual(z, np.zeros([2, 3]))
# Test explicit type control
for dtype in [
dtypes_lib.float32, dtypes_lib.float64, dtypes_lib.int32,
dtypes_lib.uint8, dtypes_lib.int16, dtypes_lib.int8,
dtypes_lib.complex64, dtypes_lib.complex128, dtypes_lib.int64,
dtypes_lib.bool, dtypes_lib.string
]:
z = array_ops.zeros([2, 3], dtype=dtype)
self.assertEqual(z.dtype, dtype)
self.assertEqual([2, 3], z.get_shape())
z_value = self.evaluate(z)
self.assertFalse(np.any(z_value))
self.assertEqual((2, 3), z_value.shape)
z = array_ops.zeros(array_ops.shape(d), dtype=dtype)
self.assertEqual(z.dtype, dtype)
self.assertEqual([2, 3], z.get_shape())
z_value = self.evaluate(z)
self.assertFalse(np.any(z_value))
self.assertEqual((2, 3), z_value.shape)
@test_util.disable_tfrt("b/169901260")
def testQint8Dtype(self):
dtype = dtypes_lib.qint8
z = array_ops.zeros([2, 3], dtype=dtype)
self.assertEqual(z.dtype, dtype)
self.assertEqual([2, 3], z.get_shape())
    # cast to int32 so that it can be compared with numpy
    # where [qint|quint][8|16] are not available.
z_value = self.evaluate(math_ops.cast(z, dtypes_lib.int32))
self.assertFalse(np.any(z_value))
@test_util.disable_tfrt("b/169901260")
def testQint16Dtype(self):
dtype = dtypes_lib.qint16
z = array_ops.zeros([2, 3], dtype=dtype)
self.assertEqual(z.dtype, dtype)
self.assertEqual([2, 3], z.get_shape())
    # cast to int32 so that it can be compared with numpy
    # where [qint|quint][8|16] are not available.
z_value = self.evaluate(math_ops.cast(z, dtypes_lib.int32))
self.assertFalse(np.any(z_value))
@test_util.disable_tfrt("b/169901260")
def testQint32Dtype(self):
dtype = dtypes_lib.qint32
z = array_ops.zeros([2, 3], dtype=dtype)
self.assertEqual(z.dtype, dtype)
self.assertEqual([2, 3], z.get_shape())
    # cast to int32 so that it can be compared with numpy
    # where [qint|quint][8|16] are not available.
z_value = self.evaluate(math_ops.cast(z, dtypes_lib.int32))
self.assertFalse(np.any(z_value))
| ZerosTest |
python | doocs__leetcode | solution/2800-2899/2872.Maximum Number of K-Divisible Components/Solution.py | {
"start": 0,
"end": 539
} | class ____:
def maxKDivisibleComponents(
self, n: int, edges: List[List[int]], values: List[int], k: int
) -> int:
def dfs(i: int, fa: int) -> int:
s = values[i]
for j in g[i]:
if j != fa:
s += dfs(j, i)
nonlocal ans
ans += s % k == 0
return s
g = [[] for _ in range(n)]
for a, b in edges:
g[a].append(b)
g[b].append(a)
ans = 0
dfs(0, -1)
return ans
| Solution |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-microsoft-dataverse/source_microsoft_dataverse/streams.py | {
"start": 366,
"end": 3510
} | class ____(HttpStream, ABC):
# Base url will be set by init(), using information provided by the user through config input
url_base = ""
primary_key = ""
def __init__(self, url, stream_name, stream_path, schema, primary_key, odata_maxpagesize, **kwargs):
super().__init__(**kwargs)
self.url_base = url + "/api/data/v9.2/"
self.stream_name = stream_name
self.stream_path = stream_path
self.primary_key = primary_key
self.schema = schema
self.odata_maxpagesize = odata_maxpagesize
@property
def name(self) -> str:
"""Source name"""
return self.stream_name
def get_json_schema(self) -> Mapping[str, Any]:
return self.schema
def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
"""
:param response: the most recent response from the API
:return If there is another page in the result, a mapping (e.g: dict) containing information needed to query the next page in the response.
If there are no more pages in the result, return None.
"""
response_json = response.json()
if "@odata.nextLink" in response_json:
next_link = response_json["@odata.nextLink"]
next_link_params = dict(parse.parse_qsl(parse.urlsplit(next_link).query))
return next_link_params
else:
return None
def request_params(
self, stream_state: Mapping[str, Any], stream_slice: Mapping[str, any] = None, next_page_token: Mapping[str, Any] = None
) -> MutableMapping[str, Any]:
"""
:return a dict containing the parameters to be used in the request
"""
request_params = super().request_params(stream_state)
# If there is not a nextLink(contains "next_page_token") in the response, means it is the last page.
# In this case, the deltatoken is passed instead.
if next_page_token is None:
request_params.update(stream_state)
return request_params
elif next_page_token is not None:
request_params.update(next_page_token)
return request_params
def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
"""
:return an iterable containing each record in the response
"""
for result in response.json()["value"]:
yield result
def request_headers(
self, stream_state: Mapping[str, Any], stream_slice: Mapping[str, Any] = None, next_page_token: Mapping[str, Any] = None
) -> Mapping[str, Any]:
return {
"Cache-Control": "no-cache",
"OData-Version": "4.0",
"Content-Type": "application/json",
"Prefer": "odata.maxpagesize=" + str(self.odata_maxpagesize),
}
def path(
self,
*,
stream_state: Mapping[str, Any] = None,
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> str:
return self.stream_path
# Basic incremental stream
| MicrosoftDataverseStream |
python | microsoft__pyright | packages/pyright-internal/src/tests/samples/property13.py | {
"start": 171,
"end": 294
} | class ____(metaclass=MyMeta):
def __new__(cls, arg) -> "Base": ...
reveal_type(Base.something, expected_text="Base")
| Base |
python | gevent__gevent | src/greentest/3.10/test_socket.py | {
"start": 21793,
"end": 21963
} | class ____(InetTestBase):
"""Base class for UDP-over-IPv4 tests."""
def newSocket(self):
return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
| UDPTestBase |
python | ray-project__ray | python/ray/dashboard/modules/job/tests/test_job_manager.py | {
"start": 36562,
"end": 53385
} | class ____:
async def _tail_and_assert_logs(
self, job_id, job_manager, expected_log="", num_iteration=5
):
i = 0
async for lines in job_manager.tail_job_logs(job_id):
assert all(
s == expected_log
or "Runtime env" in s
or "Running entrypoint for job" in s
for s in lines.strip().split("\n")
)
print(lines, end="")
if i == num_iteration:
break
i += 1
async def test_unknown_job(self, job_manager):
with pytest.raises(RuntimeError, match="Job 'unknown' does not exist."):
async for _ in job_manager.tail_job_logs("unknown"):
pass
async def test_successful_job(self, job_manager):
"""Test tailing logs for a PENDING -> RUNNING -> SUCCESSFUL job."""
start_signal_actor = SignalActor.remote()
with tempfile.TemporaryDirectory() as tmp_dir:
_, tmp_file, job_id = await _run_hanging_command(
job_manager, tmp_dir, start_signal_actor=start_signal_actor
)
# TODO(edoakes): check we get no logs before actor starts (not sure
# how to timeout the iterator call).
job_status = await job_manager.get_job_status(job_id)
assert job_status == JobStatus.PENDING
# Signal job to start.
ray.get(start_signal_actor.send.remote())
await self._tail_and_assert_logs(
job_id, job_manager, expected_log="Waiting...", num_iteration=5
)
# Signal the job to exit by writing to the file.
with open(tmp_file, "w") as f:
print("hello", file=f)
async for lines in job_manager.tail_job_logs(job_id):
assert all(
s == "Waiting..."
or "Runtime env" in s
or "Running entrypoint for job" in s
for s in lines.strip().split("\n")
)
print(lines, end="")
await async_wait_for_condition(
check_job_succeeded, job_manager=job_manager, job_id=job_id
)
async def test_failed_job(self, job_manager):
"""Test tailing logs for a job that unexpectedly exits."""
with tempfile.TemporaryDirectory() as tmp_dir:
pid_file, _, job_id = await _run_hanging_command(job_manager, tmp_dir)
await self._tail_and_assert_logs(
job_id, job_manager, expected_log="Waiting...", num_iteration=5
)
# Kill the job unexpectedly.
with open(pid_file, "r") as f:
os.kill(int(f.read()), signal.SIGKILL)
async for lines in job_manager.tail_job_logs(job_id):
assert all(
s == "Waiting..."
or "Runtime env" in s
or "Running entrypoint for job" in s
for s in lines.strip().split("\n")
)
print(lines, end="")
await async_wait_for_condition(
check_job_failed,
job_manager=job_manager,
job_id=job_id,
expected_error_type=JobErrorType.JOB_ENTRYPOINT_COMMAND_ERROR,
)
# check if the driver is killed
data = await job_manager.get_job_info(job_id)
assert data.driver_exit_code == -signal.SIGKILL
async def test_stopped_job(self, job_manager):
"""Test tailing logs for a job that unexpectedly exits."""
with tempfile.TemporaryDirectory() as tmp_dir:
_, _, job_id = await _run_hanging_command(job_manager, tmp_dir)
await self._tail_and_assert_logs(
job_id, job_manager, expected_log="Waiting...", num_iteration=5
)
# Stop the job via the API.
job_manager.stop_job(job_id)
async for lines in job_manager.tail_job_logs(job_id):
assert all(
s == "Waiting..."
or s == "Terminated"
or "Runtime env" in s
or "Running entrypoint for job" in s
for s in lines.strip().split("\n")
)
print(lines, end="")
await async_wait_for_condition(
check_job_stopped, job_manager=job_manager, job_id=job_id
)
@pytest.mark.asyncio
async def test_stop_job_gracefully(job_manager):
"""
Stop job should send SIGTERM to child process (before trying to kill).
"""
entrypoint = """python -c \"
import sys
import signal
import time
def handler(*args):
print('SIGTERM signal handled!');
sys.exit()
signal.signal(signal.SIGTERM, handler)
while True:
print('Waiting...')
time.sleep(1)\"
"""
job_id = await job_manager.submit_job(entrypoint=entrypoint)
await async_wait_for_condition(
lambda: "Waiting..." in job_manager.get_job_logs(job_id)
)
assert job_manager.stop_job(job_id) is True
await async_wait_for_condition(
check_job_stopped, job_manager=job_manager, job_id=job_id
)
assert "SIGTERM signal handled!" in job_manager.get_job_logs(job_id)
@pytest.mark.asyncio
@pytest.mark.parametrize(
"use_env_var,stop_timeout",
[(True, 10), (False, JobSupervisor.DEFAULT_RAY_JOB_STOP_WAIT_TIME_S)],
)
async def test_stop_job_timeout(job_manager, use_env_var, stop_timeout):
"""
Stop job should send SIGTERM first, then if timeout occurs, send SIGKILL.
"""
entrypoint = """python -c \"
import sys
import signal
import time
def handler(*args):
print('SIGTERM signal handled!');
signal.signal(signal.SIGTERM, handler)
while True:
print('Waiting...')
time.sleep(1)\"
"""
if use_env_var:
job_id = await job_manager.submit_job(
entrypoint=entrypoint,
runtime_env={"env_vars": {"RAY_JOB_STOP_WAIT_TIME_S": str(stop_timeout)}},
)
else:
job_id = await job_manager.submit_job(entrypoint=entrypoint)
await async_wait_for_condition(
lambda: "Waiting..." in job_manager.get_job_logs(job_id)
)
assert job_manager.stop_job(job_id) is True
with pytest.raises(RuntimeError):
await async_wait_for_condition(
check_job_stopped,
job_manager=job_manager,
job_id=job_id,
timeout=stop_timeout - 1,
)
await async_wait_for_condition(
lambda: "SIGTERM signal handled!" in job_manager.get_job_logs(job_id)
)
await async_wait_for_condition(
check_job_stopped,
job_manager=job_manager,
job_id=job_id,
timeout=10,
)
@pytest.mark.asyncio
async def test_logs_streaming(job_manager):
"""Test that logs are streamed during the job, not just at the end."""
stream_logs_script = """
import time
print('STREAMED')
while True:
time.sleep(1)
"""
stream_logs_cmd = f'python -c "{stream_logs_script}"'
job_id = await job_manager.submit_job(entrypoint=stream_logs_cmd)
await async_wait_for_condition(
lambda: "STREAMED" in job_manager.get_job_logs(job_id)
)
job_manager.stop_job(job_id)
@pytest.mark.asyncio
async def test_bootstrap_address(job_manager, monkeypatch):
"""Ensure we always use bootstrap address in job manager even though ray
cluster might be started with http://ip:{dashboard_port} from previous
runs.
"""
ip = ray._private.ray_constants.DEFAULT_DASHBOARD_IP
port = ray._private.ray_constants.DEFAULT_DASHBOARD_PORT
monkeypatch.setenv("RAY_ADDRESS", f"http://{build_address(ip, port)}")
print_ray_address_cmd = (
'python -c"' "import os;" "import ray;" "ray.init();" "print('SUCCESS!');" '"'
)
job_id = await job_manager.submit_job(entrypoint=print_ray_address_cmd)
await async_wait_for_condition(
check_job_succeeded, job_manager=job_manager, job_id=job_id
)
assert "SUCCESS!" in job_manager.get_job_logs(job_id)
@pytest.mark.asyncio
async def test_job_runs_with_no_resources_available(job_manager):
script_path = _driver_script_path("consume_one_cpu.py")
hang_signal_actor = SignalActor.remote()
@ray.remote(num_cpus=ray.available_resources()["CPU"])
def consume_all_cpus():
ray.get(hang_signal_actor.wait.remote())
# Start a hanging task that consumes all CPUs.
hanging_ref = consume_all_cpus.remote()
try:
# Check that the job starts up properly even with no CPUs available.
# The job won't exit until it has a CPU available because it waits for
# a task.
job_id = await job_manager.submit_job(entrypoint=f"python {script_path}")
await async_wait_for_condition(
check_job_running, job_manager=job_manager, job_id=job_id
)
await async_wait_for_condition(
lambda: "Hanging..." in job_manager.get_job_logs(job_id)
)
# Signal the hanging task to exit and release its CPUs.
ray.get(hang_signal_actor.send.remote())
# Check the job succeeds now that resources are available.
await async_wait_for_condition(
check_job_succeeded, job_manager=job_manager, job_id=job_id
)
await async_wait_for_condition(
lambda: "Success!" in job_manager.get_job_logs(job_id)
)
finally:
# Just in case the test fails.
ray.cancel(hanging_ref)
@pytest.mark.asyncio
async def test_failed_job_logs_max_char(job_manager):
"""Test failed jobs does not print out too many logs"""
# Prints 21000 characters
print_large_logs_cmd = (
"python -c \"print('1234567890'* 2100); raise RuntimeError()\""
)
job_id = await job_manager.submit_job(
entrypoint=print_large_logs_cmd,
)
await async_wait_for_condition(
check_job_failed,
job_manager=job_manager,
job_id=job_id,
expected_error_type=JobErrorType.JOB_ENTRYPOINT_COMMAND_ERROR,
)
# Verify the status message length
job_info = await job_manager.get_job_info(job_id)
assert job_info
assert len(job_info.message) == 20000 + len(
"Job entrypoint command failed with exit code 1,"
" last available logs (truncated to 20,000 chars):\n"
)
assert job_info.driver_exit_code == 1
@pytest.mark.asyncio
async def test_simultaneous_drivers(job_manager):
"""Test that multiple drivers can be used to submit jobs at the same time."""
cmd = "python -c 'import ray; ray.init(); ray.shutdown();'"
job_id = await job_manager.submit_job(
entrypoint=f"{cmd} & {cmd} && wait && echo 'done'"
)
await async_wait_for_condition(
check_job_succeeded, job_manager=job_manager, job_id=job_id
)
assert "done" in job_manager.get_job_logs(job_id)
@pytest.mark.asyncio
async def test_monitor_job_pending(job_manager):
"""Test that monitor_job does not error when the job is PENDING."""
# Create a signal actor to keep the job pending.
start_signal_actor = SignalActor.remote()
# Submit a job.
job_id = await job_manager.submit_job(
entrypoint="echo 'hello world'",
_start_signal_actor=start_signal_actor,
)
# Trigger _recover_running_jobs while the job is still pending. This
# will pick up the new pending job.
await job_manager._recover_running_jobs()
# Trigger the job to start.
ray.get(start_signal_actor.send.remote())
# Wait for the job to finish.
await async_wait_for_condition(
check_job_succeeded, job_manager=job_manager, job_id=job_id
)
@pytest.mark.asyncio
@pytest.mark.parametrize(
"call_ray_start",
["ray start --head --num-cpus=1"],
indirect=True,
)
async def test_job_timeout_lack_of_entrypoint_resources(
call_ray_start, tmp_path, monkeypatch # noqa: F811
):
"""Test the timeout when there are not enough resources to schedule the supervisor actor)"""
monkeypatch.setenv(RAY_JOB_START_TIMEOUT_SECONDS_ENV_VAR, "1")
ray.init(address=call_ray_start)
gcs_client = ray._private.worker.global_worker.gcs_client
job_manager = JobManager(gcs_client, tmp_path)
# Submit a job with unsatisfied resource.
job_id = await job_manager.submit_job(
entrypoint="echo 'hello world'",
entrypoint_num_cpus=2,
)
# Wait for the job to timeout.
await async_wait_for_condition(
check_job_failed,
job_manager=job_manager,
job_id=job_id,
expected_error_type=JobErrorType.JOB_SUPERVISOR_ACTOR_START_TIMEOUT,
)
# Check that the job timed out.
job_info = await job_manager.get_job_info(job_id)
assert job_info.status == JobStatus.FAILED
assert "Job supervisor actor failed to start within" in job_info.message
assert job_info.driver_exit_code is None
@pytest.mark.asyncio
async def test_job_pending_timeout(job_manager, monkeypatch):
"""Test the timeout for pending jobs."""
monkeypatch.setenv(RAY_JOB_START_TIMEOUT_SECONDS_ENV_VAR, "0.1")
# Create a signal actor to keep the job pending.
start_signal_actor = SignalActor.remote()
# Submit a job.
job_id = await job_manager.submit_job(
entrypoint="echo 'hello world'",
_start_signal_actor=start_signal_actor,
)
# Trigger _recover_running_jobs while the job is still pending. This
# will pick up the new pending job.
await job_manager._recover_running_jobs()
# Wait for the job to timeout.
await async_wait_for_condition(
check_job_failed,
job_manager=job_manager,
job_id=job_id,
expected_error_type=JobErrorType.JOB_SUPERVISOR_ACTOR_START_TIMEOUT,
)
# Check that the job timed out.
job_info = await job_manager.get_job_info(job_id)
assert job_info.status == JobStatus.FAILED
assert "Job supervisor actor failed to start within" in job_info.message
assert job_info.driver_exit_code is None
@pytest.mark.asyncio
async def test_failed_driver_exit_code(job_manager):
"""Test driver exit code from finished task that failed"""
EXIT_CODE = 10
exit_code_script = f"""
import sys
sys.exit({EXIT_CODE})
"""
exit_code_cmd = f'python -c "{exit_code_script}"'
job_id = await job_manager.submit_job(entrypoint=exit_code_cmd)
# Wait for the job to timeout.
await async_wait_for_condition(
check_job_failed,
job_manager=job_manager,
job_id=job_id,
expected_error_type=JobErrorType.JOB_ENTRYPOINT_COMMAND_ERROR,
)
# Check that the job failed
job_info = await job_manager.get_job_info(job_id)
assert job_info.status == JobStatus.FAILED
assert job_info.driver_exit_code == EXIT_CODE
@pytest.mark.asyncio
async def test_actor_creation_error_not_overwritten(shared_ray_instance, tmp_path):
"""Regression test for: https://github.com/ray-project/ray/issues/40062.
Previously there existed a race condition that could overwrite error messages from
actor creation (such as an invalid `runtime_env`). This would happen
non-deterministically after an initial correct error message was set, so this test
runs many iterations.
Without the fix in place, this test failed consistently.
"""
for _ in range(10):
# Race condition existed when a job was submitted just after constructing the
# `JobManager`, so make a new one in each test iteration.
job_manager = create_job_manager(shared_ray_instance, tmp_path)
job_id = await job_manager.submit_job(
entrypoint="doesn't matter", runtime_env={"working_dir": "path_not_exist"}
)
# `await` many times to yield the `asyncio` loop and verify that the error
# message does not get overwritten.
for _ in range(100):
data = await job_manager.get_job_info(job_id)
assert data.status == JobStatus.FAILED
assert "path_not_exist is not a valid path" in data.message
assert data.driver_exit_code is None
@pytest.mark.asyncio
async def test_no_task_events_exported(shared_ray_instance, tmp_path):
"""Verify that no task events are exported by the JobSupervisor."""
job_manager = create_job_manager(shared_ray_instance, tmp_path)
job_id = await job_manager.submit_job(entrypoint="echo hello")
await async_wait_for_condition(
check_job_succeeded, job_manager=job_manager, job_id=job_id
)
assert "hello" in job_manager.get_job_logs(job_id)
# Assert no task events for the JobSupervisor are exported.
for t in list_tasks():
assert "JobSupervisor" not in t.name
if __name__ == "__main__":
sys.exit(pytest.main(["-v", __file__]))
| TestTailLogs |
python | tornadoweb__tornado | tornado/web.py | {
"start": 134086,
"end": 134195
} | class ____(UIModule):
def render(self) -> str:
return self.handler.xsrf_form_html()
| _xsrf_form_html |
python | apache__airflow | providers/google/src/airflow/providers/google/cloud/operators/dataplex.py | {
"start": 62943,
"end": 66165
} | class ____(GoogleCloudBaseOperator):
"""
Deletes a DataScan DataProfile resource.
:param project_id: Required. The ID of the Google Cloud project that the lake belongs to.
:param region: Required. The ID of the Google Cloud region that the lake belongs to.
:param data_scan_id: Required. Data Profile scan identifier.
:param api_version: The version of the api that will be requested for example 'v1'.
:param retry: A retry object used to retry requests. If `None` is specified, requests
will not be retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete.
Note that if `retry` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
:param gcp_conn_id: The connection ID to use when fetching connection info.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
:return: None
"""
template_fields = ("project_id", "data_scan_id", "impersonation_chain")
def __init__(
self,
project_id: str,
region: str,
data_scan_id: str,
api_version: str = "v1",
retry: Retry | _MethodDefault = DEFAULT,
timeout: float | None = None,
metadata: Sequence[tuple[str, str]] = (),
gcp_conn_id: str = "google_cloud_default",
impersonation_chain: str | Sequence[str] | None = None,
*args,
**kwargs,
) -> None:
super().__init__(*args, **kwargs)
self.project_id = project_id
self.region = region
self.data_scan_id = data_scan_id
self.api_version = api_version
self.retry = retry
self.timeout = timeout
self.metadata = metadata
self.gcp_conn_id = gcp_conn_id
self.impersonation_chain = impersonation_chain
def execute(self, context: Context) -> None:
hook = DataplexHook(
gcp_conn_id=self.gcp_conn_id,
api_version=self.api_version,
impersonation_chain=self.impersonation_chain,
)
self.log.info("Deleting Dataplex Data Profile Scan: %s", self.data_scan_id)
operation = hook.delete_data_scan(
project_id=self.project_id,
region=self.region,
data_scan_id=self.data_scan_id,
retry=self.retry,
timeout=self.timeout,
metadata=self.metadata,
)
hook.wait_for_operation(timeout=self.timeout, operation=operation)
self.log.info("Dataplex Data Profile scan %s deleted successfully!", self.data_scan_id)
| DataplexDeleteDataProfileScanOperator |
python | pandas-dev__pandas | asv_bench/benchmarks/io/csv.py | {
"start": 6975,
"end": 7099
} | class ____:
def data(self, stringio_object):
stringio_object.seek(0)
return stringio_object
| StringIORewind |
python | astropy__astropy | astropy/utils/masked/tests/test_functions.py | {
"start": 19042,
"end": 19129
} | class ____(TestMaskedArrayBroadcast, QuantitySetup):
pass
| TestMaskedQuantityBroadcast |
python | PyCQA__pylint | tests/functional/a/async_functions.py | {
"start": 387,
"end": 1337
} | class ____:
async def some_method(self):
super(OtherClass, self).test() # [bad-super-call]
# +1: [line-too-long]
# +1: [too-many-arguments, too-many-positional-arguments, too-many-return-statements, too-many-branches]
async def complex_function(this, function, has, more, arguments, than,
one, _, should, have):
if 1:
return this
if 1:
return function
if 1:
return has
if 1:
return more
if 1:
return arguments
if 1:
return than
try:
return one
except TypeError:
pass
finally:
pass
if 2:
return should
while True:
pass
if 1:
return have
if 2:
return function
if 3:
pass
# +1: [duplicate-argument-name, dangerous-default-value]
async def func(a, a, b=[]):
return a, b
# +1: [empty-docstring, disallowed-name]
async def foo():
""
| Class |
python | python-attrs__attrs | tests/test_filters.py | {
"start": 1735,
"end": 2896
} | class ____:
"""
Tests for `exclude`.
"""
@pytest.mark.parametrize(
("excl", "value"),
[
((str,), 42),
((int,), "hello"),
((str, fields(C).b), 42),
((int, fields(C).b), "hello"),
(("b",), 42),
(("b",), "hello"),
(("b", str), 42),
(("b", fields(C).b), "hello"),
],
)
def test_allow(self, excl, value):
"""
Return True if class or attribute is not excluded.
"""
e = exclude(*excl)
assert e(fields(C).a, value) is True
@pytest.mark.parametrize(
("excl", "value"),
[
((int,), 42),
((str,), "hello"),
((str, fields(C).a), 42),
((str, fields(C).b), "hello"),
(("a",), 42),
(("a",), "hello"),
(("a", str), 42),
(("a", fields(C).b), "hello"),
],
)
def test_drop_class(self, excl, value):
"""
Return True on non-excluded classes and attributes.
"""
e = exclude(*excl)
assert e(fields(C).a, value) is False
| TestExclude |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 6241,
"end": 7123
} | class ____(sgqlc.types.Enum):
"""The possible errors that will prevent a user from updating a
comment.
Enumeration Choices:
* `ARCHIVED`: Unable to create comment because repository is
archived.
* `DENIED`: You cannot update this comment
* `INSUFFICIENT_ACCESS`: You must be the author or have write
access to this repository to update this comment.
* `LOCKED`: Unable to create comment because issue is locked.
* `LOGIN_REQUIRED`: You must be logged in to update this comment.
* `MAINTENANCE`: Repository is under maintenance.
* `VERIFIED_EMAIL_REQUIRED`: At least one email address must be
verified to update this comment.
"""
__schema__ = github_schema
__choices__ = ("ARCHIVED", "DENIED", "INSUFFICIENT_ACCESS", "LOCKED", "LOGIN_REQUIRED", "MAINTENANCE", "VERIFIED_EMAIL_REQUIRED")
| CommentCannotUpdateReason |
python | tensorflow__tensorflow | tensorflow/python/kernel_tests/linalg/linear_operator_tridiag_test.py | {
"start": 4022,
"end": 5511
} | class ____(
_LinearOperatorTriDiagBase,
linear_operator_test_util.SquareLinearOperatorDerivedClassTest):
"""Most tests done in the base class LinearOperatorDerivedClassTest."""
def tearDown(self):
config.enable_tensor_float_32_execution(self.tf32_keep_)
def setUp(self):
self.tf32_keep_ = config.tensor_float_32_execution_enabled()
config.enable_tensor_float_32_execution(False)
def operator_and_matrix(
self, build_info, dtype, use_placeholder,
ensure_self_adjoint_and_pd=False):
return self.build_operator_and_matrix(
build_info, dtype, use_placeholder,
ensure_self_adjoint_and_pd=ensure_self_adjoint_and_pd,
diagonals_format='compact')
@test_util.disable_xla('Current implementation does not yet support pivoting')
def test_tape_safe(self):
diag = variables_module.Variable([[3., 6., 2.], [2., 4., 2.], [5., 1., 2.]])
operator = linalg_lib.LinearOperatorTridiag(
diag, diagonals_format='compact')
self.check_tape_safe(operator)
def test_convert_variables_to_tensors(self):
diag = variables_module.Variable([[3., 6., 2.], [2., 4., 2.], [5., 1., 2.]])
operator = linalg_lib.LinearOperatorTridiag(
diag, diagonals_format='compact')
with self.cached_session() as sess:
sess.run([diag.initializer])
self.check_convert_variables_to_tensors(operator)
@test_util.with_eager_op_as_function
@test_util.run_all_in_graph_and_eager_modes
| LinearOperatorTriDiagCompactTest |
python | PrefectHQ__prefect | src/integrations/prefect-dbt/tests/cloud/test_runs.py | {
"start": 1592,
"end": 3626
} | class ____:
async def test_list_artifacts_success(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/",
headers={"Authorization": "Bearer my_api_key"},
).mock(return_value=Response(200, json={"data": ["manifest.json"]}))
response = await list_dbt_cloud_run_artifacts.fn(
dbt_cloud_credentials=dbt_cloud_credentials,
run_id=12,
)
assert response == ["manifest.json"]
async def test_list_artifacts_with_step(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/?step=1", # noqa
headers={"Authorization": "Bearer my_api_key"},
).mock(return_value=Response(200, json={"data": ["manifest.json"]}))
response = await list_dbt_cloud_run_artifacts.fn(
dbt_cloud_credentials=dbt_cloud_credentials, run_id=12, step=1
)
assert response == ["manifest.json"]
async def test_list_artifacts_failure(self, dbt_cloud_credentials):
with respx.mock(using="httpx") as respx_mock:
respx_mock.get(
"https://cloud.getdbt.com/api/v2/accounts/123456789/runs/12/artifacts/",
headers={"Authorization": "Bearer my_api_key"},
).mock(
return_value=Response(
500, json={"status": {"user_message": "This is what went wrong"}}
)
)
with pytest.raises(
DbtCloudListRunArtifactsFailed, match="This is what went wrong"
):
await list_dbt_cloud_run_artifacts.fn(
dbt_cloud_credentials=dbt_cloud_credentials,
run_id=12,
)
| TestDbtCloudListRunArtifacts |
python | pandas-dev__pandas | asv_bench/benchmarks/algos/isin.py | {
"start": 8416,
"end": 9062
} | class ____:
params = [
["int64", "int32", "float64", "float32", "object", "Int64", "Float64"],
["random", "monotone"],
]
param_names = ["dtype", "series_type"]
def setup(self, dtype, series_type):
N = 10**7
if series_type == "random":
vals = np.random.randint(0, 10 * N, N)
if series_type == "monotone":
vals = np.arange(N)
self.values = vals.astype(dtype.lower())
M = 10**6 + 1
self.series = Series(np.arange(M)).astype(dtype)
def time_isin(self, dtypes, series_type):
self.series.isin(self.values)
| IsInLongSeriesValuesDominate |
python | pytest-dev__pytest | bench/xunit.py | {
"start": 82,
"end": 236
} | class ____{i}:
@classmethod
def setup_class(cls): pass
def test_1(self): pass
def test_2(self): pass
def test_3(self): pass
"""
)
| Test |
python | sqlalchemy__sqlalchemy | lib/sqlalchemy/engine/interfaces.py | {
"start": 4180,
"end": 4440
} | class ____(Protocol):
"""protocol representing a :pep:`249` database type.
.. versionadded:: 2.0
.. seealso::
`Type Objects <https://www.python.org/dev/peps/pep-0249/#type-objects>`_
- in :pep:`249`
""" # noqa: E501
| DBAPIType |
python | pyinstaller__pyinstaller | bootloader/waflib/Tools/glib2.py | {
"start": 11394,
"end": 12984
} | class ____(glib_gresource_base):
run_str = glib_gresource_base.base_cmd + ' --target=${TGT} ${SRC}'
shell = True
@conf
def find_glib_genmarshal(conf):
conf.find_program('glib-genmarshal', var='GLIB_GENMARSHAL')
@conf
def find_glib_mkenums(conf):
if not conf.env.PERL:
conf.find_program('perl', var='PERL')
conf.find_program('glib-mkenums', interpreter='PERL', var='GLIB_MKENUMS')
@conf
def find_glib_compile_schemas(conf):
conf.find_program('glib-compile-schemas', var='GLIB_COMPILE_SCHEMAS')
def getstr(varname):
return getattr(Options.options, varname, getattr(conf.env, varname, ''))
gsettingsschemadir = getstr('GSETTINGSSCHEMADIR')
if not gsettingsschemadir:
datadir = getstr('DATADIR')
if not datadir:
prefix = conf.env.PREFIX
datadir = os.path.join(prefix, 'share')
gsettingsschemadir = os.path.join(datadir, 'glib-2.0', 'schemas')
conf.env.GSETTINGSSCHEMADIR = gsettingsschemadir
@conf
def find_glib_compile_resources(conf):
conf.find_program('glib-compile-resources', var='GLIB_COMPILE_RESOURCES')
def configure(conf):
conf.find_glib_genmarshal()
conf.find_glib_mkenums()
conf.find_glib_compile_schemas(mandatory=False)
conf.find_glib_compile_resources(mandatory=False)
def options(opt):
gr = opt.add_option_group('Installation directories')
gr.add_option(
'--gsettingsschemadir',
help='GSettings schema location [DATADIR/glib-2.0/schemas]',
default='',
dest='GSETTINGSSCHEMADIR'
)
| glib_gresource_bundle |
python | walkccc__LeetCode | solutions/2770. Maximum Number of Jumps to Reach the Last Index/2770.py | {
"start": 0,
"end": 362
} | class ____:
def maximumJumps(self, nums: list[int], target: int) -> int:
n = len(nums)
# dp[i] := the maximum number of jumps to reach i from 0
dp = [-1] * n
dp[0] = 0
for j in range(1, n):
for i in range(j):
if dp[i] != -1 and abs(nums[j] - nums[i]) <= target:
dp[j] = max(dp[j], dp[i] + 1)
return dp[-1]
| Solution |
python | automl__auto-sklearn | test/test_pipeline/components/data_preprocessing/test_scaling.py | {
"start": 202,
"end": 2419
} | class ____(unittest.TestCase):
def _test_helper(self, Preprocessor, dataset=None, make_sparse=False):
X_train, Y_train, X_test, Y_test = get_dataset(
dataset=dataset,
make_sparse=make_sparse,
)
dataset_properties = {"sparse": make_sparse}
original_X_train = X_train.copy()
configuration_space = Preprocessor(
dataset_properties
).get_hyperparameter_search_space(dataset_properties=dataset_properties)
default = configuration_space.get_default_configuration()
preprocessor = Preprocessor(dataset_properties, random_state=1)
preprocessor.set_hyperparameters(default)
preprocessor = preprocessor.choice
transformer = preprocessor.fit(X_train, Y_train)
return transformer.transform(X_train), original_X_train
def test_boston_is_not_scaled(self):
data = sklearn.datasets.load_boston()["data"]
self.assertGreaterEqual(np.max(data), 100)
def test_default_configuration(self):
transformations = []
for i in range(2):
transformation, original = self._test_helper(
RescalingChoice, dataset="boston"
)
# The maximum is around 1.95 for the transformed array...
self.assertAlmostEqual(np.mean(transformation), 0, places=5)
self.assertAlmostEqual(np.std(transformation), 1, places=5)
self.assertFalse((original == transformation).all())
transformations.append(transformation)
if len(transformations) > 1:
self.assertTrue((transformations[-1] == transformations[-2]).all())
def test_default_configuration_with_sparse_data(self):
preprocessing = self._test_helper(
RescalingChoice, dataset="boston", make_sparse=True
)
transformation, original = preprocessing
self.assertEqual(original.getnnz(), transformation.getnnz())
self.assertTrue(~np.allclose(original.data, transformation.data))
@unittest.skip("Does not work at the moment.")
def test_preprocessing_dtype(self):
super(ScalingComponentTest, self)._test_helper(RescalingChoice)
| ScalingComponentTest |
python | airbytehq__airbyte | airbyte-integrations/connectors/source-github/source_github/github_schema.py | {
"start": 1587208,
"end": 1587386
} | class ____(sgqlc.types.Union):
"""An object that is a member of an enterprise."""
__schema__ = github_schema
__types__ = (EnterpriseUserAccount, User)
| EnterpriseMember |
python | pytorch__pytorch | torch/utils/hooks.py | {
"start": 222,
"end": 3193
} | class ____:
r"""
A handle which provides the capability to remove a hook.
Args:
hooks_dict (dict): A dictionary of hooks, indexed by hook ``id``.
extra_dict (Union[dict, List[dict]]): An additional dictionary or list of
dictionaries whose keys will be deleted when the same keys are
removed from ``hooks_dict``.
"""
id: int
next_id: int = 0
def __init__(self, hooks_dict: Any, *, extra_dict: Any = None) -> None:
self.hooks_dict_ref = weakref.ref(hooks_dict)
self.id = RemovableHandle.next_id
RemovableHandle.next_id += 1
self.extra_dict_ref: tuple = ()
if isinstance(extra_dict, dict):
self.extra_dict_ref = (weakref.ref(extra_dict),)
elif isinstance(extra_dict, list):
self.extra_dict_ref = tuple(weakref.ref(d) for d in extra_dict)
def remove(self) -> None:
hooks_dict = self.hooks_dict_ref()
if hooks_dict is not None and self.id in hooks_dict:
del hooks_dict[self.id]
for ref in self.extra_dict_ref:
extra_dict = ref()
if extra_dict is not None and self.id in extra_dict:
del extra_dict[self.id]
def __getstate__(self):
if self.extra_dict_ref is None:
return (self.hooks_dict_ref(), self.id)
else:
return (self.hooks_dict_ref(), self.id, tuple(ref() for ref in self.extra_dict_ref))
def __setstate__(self, state) -> None:
if state[0] is None:
# create a dead reference
self.hooks_dict_ref = weakref.ref(OrderedDict())
else:
self.hooks_dict_ref = weakref.ref(state[0])
self.id = state[1]
RemovableHandle.next_id = max(RemovableHandle.next_id, self.id + 1)
if len(state) < 3 or state[2] is None:
self.extra_dict_ref = ()
else:
self.extra_dict_ref = tuple(weakref.ref(d) for d in state[2])
def __enter__(self) -> "RemovableHandle":
return self
def __exit__(self, type: Any, value: Any, tb: Any) -> None:
self.remove()
def unserializable_hook(f):
"""
Mark a function as an unserializable hook with this decorator.
This suppresses warnings that would otherwise arise if you attempt
to serialize a tensor that has a hook.
"""
f.__torch_unserializable__ = True
return f
def warn_if_has_hooks(tensor) -> None:
if tensor._backward_hooks:
for k in tensor._backward_hooks:
hook = tensor._backward_hooks[k]
if not hasattr(hook, "__torch_unserializable__"):
warnings.warn(f"backward hook {repr(hook)} on tensor will not be "
"serialized. If this is expected, you can "
"decorate the function with @torch.utils.hooks.unserializable_hook "
"to suppress this warning", stacklevel=2)
| RemovableHandle |
python | huggingface__transformers | src/transformers/models/edgetam/modular_edgetam.py | {
"start": 6235,
"end": 6285
} | class ____(Sam2Attention):
pass
| EdgeTamAttention |
python | kamyu104__LeetCode-Solutions | Python/remove-letter-to-equalize-frequency.py | {
"start": 645,
"end": 1036
} | class ____(object):
def equalFrequency(self, word):
"""
:type word: str
:rtype: bool
"""
cnt = collections.Counter(collections.Counter(word))
for c in word:
cnt[c] -= 1
            if len(collections.Counter(c for c in cnt.values() if c)) == 1:
return True
cnt[c] += 1
return False
| Solution2 |
python | numba__numba | numba/tests/test_buffer_protocol.py | {
"start": 6536,
"end": 8810
} | class ____(MemoryLeakMixin, TestCase):
"""
Test memoryview-specific attributes and operations.
"""
def _arrays(self):
arr = np.arange(12)
yield arr
arr = arr.reshape((3, 4))
yield arr
yield arr.T
yield arr[::2]
arr.setflags(write=False)
yield arr
arr = np.zeros(())
assert arr.ndim == 0
yield arr
def test_ndim(self):
for arr in self._arrays():
m = memoryview(arr)
self.assertPreciseEqual(ndim_usecase(m), arr.ndim)
def test_shape(self):
for arr in self._arrays():
m = memoryview(arr)
self.assertPreciseEqual(shape_usecase(m), arr.shape)
def test_strides(self):
for arr in self._arrays():
m = memoryview(arr)
self.assertPreciseEqual(strides_usecase(m), arr.strides)
def test_itemsize(self):
for arr in self._arrays():
m = memoryview(arr)
self.assertPreciseEqual(itemsize_usecase(m), arr.itemsize)
def test_nbytes(self):
for arr in self._arrays():
m = memoryview(arr)
self.assertPreciseEqual(nbytes_usecase(m), arr.size * arr.itemsize)
def test_readonly(self):
for arr in self._arrays():
m = memoryview(arr)
self.assertIs(readonly_usecase(m), not arr.flags.writeable)
m = memoryview(b"xyz")
self.assertIs(readonly_usecase(m), True)
m = memoryview(bytearray(b"xyz"))
self.assertIs(readonly_usecase(m), False)
def test_contiguous(self):
m = memoryview(bytearray(b"xyz"))
self.assertIs(contiguous_usecase(m), True)
self.assertIs(c_contiguous_usecase(m), True)
self.assertIs(f_contiguous_usecase(m), True)
for arr in self._arrays():
m = memoryview(arr)
# Note `arr.flags.contiguous` is wrong (it mimics c_contiguous)
self.assertIs(contiguous_usecase(m),
arr.flags.f_contiguous or arr.flags.c_contiguous)
self.assertIs(c_contiguous_usecase(m), arr.flags.c_contiguous)
self.assertIs(f_contiguous_usecase(m), arr.flags.f_contiguous)
if __name__ == '__main__':
unittest.main()
| TestMemoryView |
python | sqlalchemy__sqlalchemy | test/base/test_utils.py | {
"start": 14625,
"end": 16183
} | class ____(fixtures.TestBase):
def test_memoized_property(self):
val = [20]
class Foo:
@util.memoized_property
def bar(self):
v = val[0]
val[0] += 1
return v
ne_(Foo.bar, None)
f1 = Foo()
assert "bar" not in f1.__dict__
eq_(f1.bar, 20)
eq_(f1.bar, 20)
eq_(val[0], 21)
eq_(f1.__dict__["bar"], 20)
def test_memoized_instancemethod(self):
val = [20]
class Foo:
@util.memoized_instancemethod
def bar(self):
v = val[0]
val[0] += 1
return v
assert inspect.ismethod(Foo().bar)
ne_(Foo.bar, None)
f1 = Foo()
assert "bar" not in f1.__dict__
eq_(f1.bar(), 20)
eq_(f1.bar(), 20)
eq_(val[0], 21)
def test_memoized_slots(self):
canary = mock.Mock()
class Foob(util.MemoizedSlots):
__slots__ = ("foo_bar", "gogo")
def _memoized_method_gogo(self):
canary.method()
return "gogo"
def _memoized_attr_foo_bar(self):
canary.attr()
return "foobar"
f1 = Foob()
assert_raises(AttributeError, setattr, f1, "bar", "bat")
eq_(f1.foo_bar, "foobar")
eq_(f1.foo_bar, "foobar")
eq_(f1.gogo(), "gogo")
eq_(f1.gogo(), "gogo")
eq_(canary.mock_calls, [mock.call.attr(), mock.call.method()])
| MemoizedAttrTest |
python | ansible__ansible | test/lib/ansible_test/_internal/ci/__init__.py | {
"start": 1857,
"end": 2579
} | class ____(AuthHelper, metaclass=abc.ABCMeta):
"""Authentication helper which generates a key pair on demand."""
def __init__(self) -> None:
super().__init__(pathlib.Path('~/.ansible/test/ansible-core-ci').expanduser())
def sign_request(self, request: dict[str, object], context: AuthContext) -> None:
if not self.private_key_file.exists():
self.generate_key_pair()
super().sign_request(request, context)
def generate_key_pair(self) -> None:
"""Generate key pair."""
self.private_key_file.parent.mkdir(parents=True, exist_ok=True)
raw_command(['ssh-keygen', '-q', '-f', str(self.private_key_file), '-N', ''], capture=True)
| GeneratingAuthHelper |
python | great-expectations__great_expectations | contrib/great_expectations_semantic_types_expectations/great_expectations_semantic_types_expectations/expectations/expect_column_values_to_be_valid_uuid.py | {
"start": 881,
"end": 2633
} | class ____(ColumnMapMetricProvider):
# This is the id string that will be used to reference your metric.
condition_metric_name = "column_values.valid_uuid"
# This method implements the core logic for the PandasExecutionEngine
@column_condition_partial(engine=PandasExecutionEngine)
def _pandas(cls, column, **kwargs):
return column.apply(lambda x: is_valid_uuid(x))
# This method defines the business logic for evaluating your metric when using a SqlAlchemyExecutionEngine
@column_condition_partial(engine=SqlAlchemyExecutionEngine)
def _sqlalchemy(cls, column, _dialect, **kwargs):
"""
Please note that there is a stricter version to verify GUID, as can be seen in the following link:
https://www.techtarget.com/searchwindowsserver/definition/GUID-global-unique-identifier#:~:text=RFC%204122%20specification.-,How%20does%20GUID%20work%3F,-GUIDs%20are%20constructed
However, since the UUID package doesn't seem to enforce it, the chosen regex was the less stricter.
For future purposes, the stricter pattern can be found here as well, commented out.
"""
# regex_pattern = '^(urn:uuid:)?\{?[A-Fa-f0-9]{8}-?[A-Fa-f0-9]{4}-?[1-5][A-Fa-f0-9]{3}-?[89ABab][A-Fa-f0-9]{3}-?[A-Fa-f0-9]{12}\}?$'
regex_pattern = "^(urn:uuid:)?\\{?[0-9a-fA-F]{8}(-?[0-9a-fA-F]{4}){3}-?[0-9a-fA-F]{12}\\}?$"
return column.regexp_match(regex_pattern)
# This method defines the business logic for evaluating your metric when using a SparkDFExecutionEngine
# @column_condition_partial(engine=SparkDFExecutionEngine)
# def _spark(cls, column, **kwargs):
# raise NotImplementedError
# This class defines the Expectation itself
| ColumnValuesToBeValidUUID |
python | dagster-io__dagster | python_modules/dagster-graphql/dagster_graphql/schema/roots/mutation.py | {
"start": 6871,
"end": 7212
} | class ____(graphene.Union):
"""The output from deleting a run."""
class Meta:
types = (
GrapheneDeletePipelineRunSuccess,
GrapheneUnauthorizedError,
GraphenePythonError,
GrapheneRunNotFoundError,
)
name = "DeletePipelineRunResult"
| GrapheneDeletePipelineRunResult |
python | dagster-io__dagster | python_modules/libraries/dagster-shared/dagster_shared_tests/test_record.py | {
"start": 11256,
"end": 11380
} | class ____:
def __init__(self, s: str):
self.s = s
@record_custom(field_to_new_mapping={"foo_str": "foo"})
| Complex |
python | jazzband__django-polymorphic | src/polymorphic/tests/models.py | {
"start": 5787,
"end": 6118
} | class ____(PolymorphicModel):
# Also test whether foreign keys receive the manager:
field1 = models.CharField(max_length=30) # needed as MyManager uses it
fk = models.ForeignKey(
ParentModelWithManager, on_delete=models.CASCADE, related_name="childmodel_set"
)
objects = MyManager()
| ChildModelWithManager |
python | conda__conda | conda/activate.py | {
"start": 33505,
"end": 35170
} | class ____(_Activator):
pathsep_join = ":".join
sep = "/"
path_conversion = staticmethod(win_path_to_unix if on_win else _path_identity)
script_extension = ".sh"
tempfile_extension = None # output to stdout
command_join = "\n"
needs_line_ending_fix = True
# Using `unset %s` would cause issues for people running
# with shell flag -u set (error on unset).
unset_var_tmpl = "export %s=''" # unset %s
export_var_tmpl = "export %s='%s'"
path_var_tmpl = "export %s=\"$(cygpath '%s')\"" if on_win else export_var_tmpl
set_var_tmpl = "%s='%s'"
run_script_tmpl = ". \"`cygpath '%s'`\"" if on_win else '. "%s"'
hook_source_path = Path(
CONDA_PACKAGE_ROOT,
"shell",
"etc",
"profile.d",
"conda.sh",
)
inline_hook_source = True
def _update_prompt(self, set_vars, conda_prompt_modifier):
ps1 = os.getenv("PS1", "")
if "POWERLINE_COMMAND" in ps1:
# Defer to powerline (https://github.com/powerline/powerline) if it's in use.
return
current_prompt_modifier = os.getenv("CONDA_PROMPT_MODIFIER")
if current_prompt_modifier:
ps1 = re.sub(re.escape(current_prompt_modifier), r"", ps1)
# Because we're using single-quotes to set shell variables, we need to handle the
# proper escaping of single quotes that are already part of the string.
# Best solution appears to be https://stackoverflow.com/a/1250279
ps1 = ps1.replace("'", "'\"'\"'")
set_vars.update(
{
"PS1": conda_prompt_modifier + ps1,
}
)
| PosixActivator |
python | Pylons__pyramid | tests/test_config/test_init.py | {
"start": 287,
"end": 42905
} | class ____(unittest.TestCase):
def _makeOne(self, *arg, **kw):
from pyramid.config import Configurator
config = Configurator(*arg, **kw)
return config
def _getViewCallable(
self,
config,
ctx_iface=None,
request_iface=None,
name='',
exception_view=False,
):
from pyramid.interfaces import (
IExceptionViewClassifier,
IView,
IViewClassifier,
)
if exception_view: # pragma: no cover
classifier = IExceptionViewClassifier
else:
classifier = IViewClassifier
return config.registry.adapters.lookup(
(classifier, request_iface, ctx_iface),
IView,
name=name,
default=None,
)
def _registerEventListener(self, config, event_iface=None):
if event_iface is None: # pragma: no cover
from zope.interface import Interface
event_iface = Interface
L = []
def subscriber(*event):
L.extend(event)
config.registry.registerHandler(subscriber, (event_iface,))
return L
def _makeRequest(self, config):
request = DummyRequest()
request.registry = config.registry
return request
def test_ctor_no_registry(self):
import sys
from pyramid.config import Configurator
from pyramid.interfaces import IRendererFactory, ISettings
config = Configurator()
this_pkg = sys.modules['tests.test_config']
self.assertTrue(config.registry.getUtility(ISettings))
self.assertEqual(config.package, this_pkg)
config.commit()
self.assertTrue(config.registry.getUtility(IRendererFactory, 'json'))
self.assertTrue(config.registry.getUtility(IRendererFactory, 'string'))
def test_begin(self):
from pyramid.config import Configurator
config = Configurator()
manager = DummyThreadLocalManager()
config.manager = manager
config.begin()
self.assertEqual(
manager.pushed, {'registry': config.registry, 'request': None}
)
self.assertEqual(manager.popped, False)
def test_begin_with_request(self):
from pyramid.config import Configurator
config = Configurator()
request = object()
manager = DummyThreadLocalManager()
config.manager = manager
config.begin(request=request)
self.assertEqual(
manager.pushed, {'registry': config.registry, 'request': request}
)
self.assertEqual(manager.popped, False)
def test_begin_overrides_request(self):
from pyramid.config import Configurator
config = Configurator()
manager = DummyThreadLocalManager()
req = object()
# set it up for auto-propagation
pushed = {'registry': config.registry, 'request': None}
manager.pushed = pushed
config.manager = manager
config.begin(req)
self.assertTrue(manager.pushed is not pushed)
self.assertEqual(manager.pushed['request'], req)
self.assertEqual(manager.pushed['registry'], config.registry)
def test_begin_propagates_request_for_same_registry(self):
from pyramid.config import Configurator
config = Configurator()
manager = DummyThreadLocalManager()
req = object()
pushed = {'registry': config.registry, 'request': req}
manager.pushed = pushed
config.manager = manager
config.begin()
self.assertTrue(manager.pushed is not pushed)
self.assertEqual(manager.pushed['request'], req)
self.assertEqual(manager.pushed['registry'], config.registry)
def test_begin_does_not_propagate_request_for_diff_registry(self):
from pyramid.config import Configurator
config = Configurator()
manager = DummyThreadLocalManager()
req = object()
pushed = {'registry': object(), 'request': req}
manager.pushed = pushed
config.manager = manager
config.begin()
self.assertTrue(manager.pushed is not pushed)
self.assertEqual(manager.pushed['request'], None)
self.assertEqual(manager.pushed['registry'], config.registry)
def test_end(self):
from pyramid.config import Configurator
config = Configurator()
manager = DummyThreadLocalManager()
pushed = manager.pushed
config.manager = manager
config.end()
self.assertEqual(manager.pushed, pushed)
self.assertEqual(manager.popped, True)
def test_context_manager(self):
from pyramid.config import Configurator
config = Configurator()
manager = DummyThreadLocalManager()
config.manager = manager
view = lambda r: None
with config as ctx:
self.assertTrue(config is ctx)
self.assertEqual(
manager.pushed, {'registry': config.registry, 'request': None}
)
self.assertFalse(manager.popped)
config.add_view(view)
self.assertTrue(manager.popped)
config.add_view(view) # did not raise a conflict because of commit
config.commit()
def test_ctor_with_package_registry(self):
import sys
from pyramid.config import Configurator
pkg = sys.modules['pyramid']
config = Configurator(package=pkg)
self.assertEqual(config.package, pkg)
def test_ctor_noreg_custom_settings(self):
from pyramid.interfaces import ISettings
settings = {'reload_templates': True, 'mysetting': True}
config = self._makeOne(settings=settings)
settings = config.registry.getUtility(ISettings)
self.assertEqual(settings['reload_templates'], True)
self.assertEqual(settings['debug_authorization'], False)
self.assertEqual(settings['mysetting'], True)
def test_ctor_noreg_debug_logger_None_default(self):
from pyramid.interfaces import IDebugLogger
config = self._makeOne()
logger = config.registry.getUtility(IDebugLogger)
self.assertEqual(logger.name, 'tests.test_config')
def test_ctor_noreg_debug_logger_non_None(self):
from pyramid.interfaces import IDebugLogger
logger = object()
config = self._makeOne(debug_logger=logger)
result = config.registry.getUtility(IDebugLogger)
self.assertEqual(logger, result)
def test_ctor_security_policy(self):
from pyramid.interfaces import ISecurityPolicy
policy = object()
config = self._makeOne(security_policy=policy)
config.commit()
result = config.registry.getUtility(ISecurityPolicy)
self.assertEqual(policy, result)
def test_ctor_authentication_policy(self):
from pyramid.interfaces import IAuthenticationPolicy
policy = object()
config = self._makeOne(authentication_policy=policy)
config.commit()
result = config.registry.getUtility(IAuthenticationPolicy)
self.assertEqual(policy, result)
def test_ctor_authorization_policy_only(self):
policy = object()
config = self._makeOne(authorization_policy=policy)
self.assertRaises(ConfigurationExecutionError, config.commit)
def test_ctor_no_root_factory(self):
from pyramid.interfaces import IRootFactory
config = self._makeOne()
self.assertEqual(config.registry.queryUtility(IRootFactory), None)
config.commit()
self.assertEqual(config.registry.queryUtility(IRootFactory), None)
def test_ctor_with_root_factory(self):
from pyramid.interfaces import IRootFactory
factory = object()
config = self._makeOne(root_factory=factory)
self.assertEqual(config.registry.queryUtility(IRootFactory), None)
config.commit()
self.assertEqual(config.registry.queryUtility(IRootFactory), factory)
def test_ctor_alternate_renderers(self):
from pyramid.interfaces import IRendererFactory
renderer = object()
config = self._makeOne(renderers=[('yeah', renderer)])
config.commit()
self.assertEqual(
config.registry.getUtility(IRendererFactory, 'yeah'), renderer
)
def test_ctor_default_renderers(self):
from pyramid.interfaces import IRendererFactory
from pyramid.renderers import json_renderer_factory
config = self._makeOne()
self.assertEqual(
config.registry.getUtility(IRendererFactory, 'json'),
json_renderer_factory,
)
def test_ctor_default_permission(self):
from pyramid.interfaces import IDefaultPermission
config = self._makeOne(default_permission='view')
config.commit()
self.assertEqual(
config.registry.getUtility(IDefaultPermission), 'view'
)
def test_ctor_session_factory(self):
from pyramid.interfaces import ISessionFactory
factory = object()
config = self._makeOne(session_factory=factory)
self.assertEqual(config.registry.queryUtility(ISessionFactory), None)
config.commit()
self.assertEqual(config.registry.getUtility(ISessionFactory), factory)
def test_ctor_default_view_mapper(self):
from pyramid.interfaces import IViewMapperFactory
mapper = object()
config = self._makeOne(default_view_mapper=mapper)
config.commit()
self.assertEqual(
config.registry.getUtility(IViewMapperFactory), mapper
)
def test_ctor_httpexception_view_default(self):
from pyramid.httpexceptions import default_exceptionresponse_view
from pyramid.interfaces import IExceptionResponse
config = self._makeOne()
view = self._getViewCallable(
config, ctx_iface=IExceptionResponse, request_iface=IRequest
)
self.assertTrue(view.__wraps__ is default_exceptionresponse_view)
def test_ctor_exceptionresponse_view_None(self):
from pyramid.interfaces import IExceptionResponse
config = self._makeOne(exceptionresponse_view=None)
view = self._getViewCallable(
config, ctx_iface=IExceptionResponse, request_iface=IRequest
)
self.assertTrue(view is None)
def test_ctor_exceptionresponse_view_custom(self):
from pyramid.interfaces import IExceptionResponse
def exceptionresponse_view(context, request): # pragma: no cover
pass
config = self._makeOne(exceptionresponse_view=exceptionresponse_view)
view = self._getViewCallable(
config, ctx_iface=IExceptionResponse, request_iface=IRequest
)
self.assertTrue(view.__wraps__ is exceptionresponse_view)
def test_ctor_with_introspection(self):
config = self._makeOne(introspection=False)
self.assertEqual(config.introspection, False)
def test_ctor_default_webob_response_adapter_registered(self):
from webob import Response as WebobResponse
response = WebobResponse()
from pyramid.interfaces import IResponse
config = self._makeOne(autocommit=True)
result = config.registry.queryAdapter(response, IResponse)
self.assertEqual(result, response)
def test_with_package_module(self):
from . import test_init
config = self._makeOne()
newconfig = config.with_package(test_init)
import tests.test_config
self.assertEqual(newconfig.package, tests.test_config)
def test_with_package_package(self):
from tests import test_config
config = self._makeOne()
newconfig = config.with_package(test_config)
self.assertEqual(newconfig.package, test_config)
def test_with_package(self):
import tests
config = self._makeOne()
config.basepath = 'basepath'
config.info = 'info'
config.includepath = ('spec',)
config.autocommit = True
config.route_prefix = 'prefix'
newconfig = config.with_package(tests)
self.assertEqual(newconfig.package, tests)
self.assertEqual(newconfig.registry, config.registry)
self.assertEqual(newconfig.autocommit, True)
self.assertEqual(newconfig.route_prefix, 'prefix')
self.assertEqual(newconfig.info, 'info')
self.assertEqual(newconfig.basepath, 'basepath')
self.assertEqual(newconfig.includepath, ('spec',))
def test_maybe_dotted_string_success(self):
import tests.test_config
config = self._makeOne()
result = config.maybe_dotted('tests.test_config')
self.assertEqual(result, tests.test_config)
def test_maybe_dotted_string_fail(self):
config = self._makeOne()
self.assertRaises(ImportError, config.maybe_dotted, 'cant.be.found')
def test_maybe_dotted_notstring_success(self):
import tests.test_config
config = self._makeOne()
result = config.maybe_dotted(tests.test_config)
self.assertEqual(result, tests.test_config)
def test_absolute_asset_spec_already_absolute(self):
import tests.test_config
config = self._makeOne(package=tests.test_config)
result = config.absolute_asset_spec('already:absolute')
self.assertEqual(result, 'already:absolute')
def test_absolute_asset_spec_notastring(self):
import tests.test_config
config = self._makeOne(package=tests.test_config)
result = config.absolute_asset_spec(None)
self.assertEqual(result, None)
def test_absolute_asset_spec_relative(self):
import tests.test_config
config = self._makeOne(package=tests.test_config)
result = config.absolute_asset_spec('files')
self.assertEqual(result, 'tests.test_config:files')
def test__fix_registry_has_listeners(self):
reg = DummyRegistry()
config = self._makeOne(reg)
config._fix_registry()
self.assertEqual(reg.has_listeners, True)
def test__fix_registry_notify(self):
reg = DummyRegistry()
config = self._makeOne(reg)
config._fix_registry()
self.assertEqual(reg.notify(1), None)
self.assertEqual(reg.events, (1,))
def test__fix_registry_queryAdapterOrSelf(self):
from zope.interface import Interface, implementer
class IFoo(Interface):
pass
@implementer(IFoo)
class Foo:
pass
class Bar:
pass
adaptation = ()
foo = Foo()
bar = Bar()
reg = DummyRegistry(adaptation)
config = self._makeOne(reg)
config._fix_registry()
self.assertTrue(reg.queryAdapterOrSelf(foo, IFoo) is foo)
self.assertTrue(reg.queryAdapterOrSelf(bar, IFoo) is adaptation)
def test__fix_registry_registerSelfAdapter(self):
reg = DummyRegistry()
config = self._makeOne(reg)
config._fix_registry()
reg.registerSelfAdapter('required', 'provided', name='abc')
self.assertEqual(len(reg.adapters), 1)
args, kw = reg.adapters[0]
self.assertEqual(args[0]('abc'), 'abc')
self.assertEqual(
kw,
{
'info': '',
'provided': 'provided',
'required': 'required',
'name': 'abc',
'event': True,
},
)
def test__fix_registry_adds__lock(self):
reg = DummyRegistry()
config = self._makeOne(reg)
config._fix_registry()
self.assertTrue(hasattr(reg, '_lock'))
def test__fix_registry_adds_clear_view_lookup_cache(self):
reg = DummyRegistry()
config = self._makeOne(reg)
self.assertFalse(hasattr(reg, '_clear_view_lookup_cache'))
config._fix_registry()
self.assertFalse(hasattr(reg, '_view_lookup_cache'))
reg._clear_view_lookup_cache()
self.assertEqual(reg._view_lookup_cache, {})
def test_setup_registry_calls_fix_registry(self):
reg = DummyRegistry()
config = self._makeOne(reg)
config.add_view = lambda *arg, **kw: False
config._add_tween = lambda *arg, **kw: False
config.setup_registry()
self.assertEqual(reg.has_listeners, True)
def test_setup_registry_registers_default_exceptionresponse_views(self):
from webob.exc import WSGIHTTPException
from pyramid.interfaces import IExceptionResponse
from pyramid.view import default_exceptionresponse_view
reg = DummyRegistry()
config = self._makeOne(reg)
views = []
config.add_view = lambda *arg, **kw: views.append((arg, kw))
config.add_default_view_predicates = lambda *arg: None
config._add_tween = lambda *arg, **kw: False
config.setup_registry()
self.assertEqual(
views[0],
(
(default_exceptionresponse_view,),
{'context': IExceptionResponse},
),
)
self.assertEqual(
views[1],
(
(default_exceptionresponse_view,),
{'context': WSGIHTTPException},
),
)
def test_setup_registry_registers_default_view_predicates(self):
reg = DummyRegistry()
config = self._makeOne(reg)
vp_called = []
config.add_view = lambda *arg, **kw: None
config.add_default_view_predicates = lambda *arg: vp_called.append(
True
)
config._add_tween = lambda *arg, **kw: False
config.setup_registry()
self.assertTrue(vp_called)
def test_setup_registry_registers_default_webob_iresponse_adapter(self):
from webob import Response
from pyramid.interfaces import IResponse
config = self._makeOne()
config.setup_registry()
response = Response()
self.assertTrue(
config.registry.queryAdapter(response, IResponse) is response
)
def test_setup_registry_explicit_notfound_trumps_iexceptionresponse(self):
from zope.interface import implementedBy
from pyramid.httpexceptions import HTTPNotFound
from pyramid.registry import Registry
from pyramid.renderers import null_renderer
reg = Registry()
config = self._makeOne(reg, autocommit=True)
config.setup_registry() # registers IExceptionResponse default view
def myview(context, request):
return 'OK'
config.add_view(myview, context=HTTPNotFound, renderer=null_renderer)
request = self._makeRequest(config)
view = self._getViewCallable(
config,
ctx_iface=implementedBy(HTTPNotFound),
request_iface=IRequest,
)
result = view(None, request)
self.assertEqual(result, 'OK')
def test_setup_registry_custom_settings(self):
from pyramid.interfaces import ISettings
from pyramid.registry import Registry
settings = {'reload_templates': True, 'mysetting': True}
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(settings=settings)
settings = reg.getUtility(ISettings)
self.assertEqual(settings['reload_templates'], True)
self.assertEqual(settings['debug_authorization'], False)
self.assertEqual(settings['mysetting'], True)
def test_setup_registry_debug_logger_None_default(self):
from pyramid.interfaces import IDebugLogger
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
config.setup_registry()
logger = reg.getUtility(IDebugLogger)
self.assertEqual(logger.name, 'tests.test_config')
def test_setup_registry_debug_logger_non_None(self):
from pyramid.interfaces import IDebugLogger
from pyramid.registry import Registry
logger = object()
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(debug_logger=logger)
result = reg.getUtility(IDebugLogger)
self.assertEqual(logger, result)
def test_setup_registry_debug_logger_name(self):
from pyramid.interfaces import IDebugLogger
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(debug_logger='foo')
result = reg.getUtility(IDebugLogger)
self.assertEqual(result.name, 'foo')
def test_setup_registry_authentication_policy(self):
from pyramid.interfaces import IAuthenticationPolicy
from pyramid.registry import Registry
policy = object()
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(authentication_policy=policy)
config.commit()
result = reg.getUtility(IAuthenticationPolicy)
self.assertEqual(policy, result)
def test_setup_registry_authentication_policy_dottedname(self):
from pyramid.interfaces import IAuthenticationPolicy
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(authentication_policy='tests.test_config')
config.commit()
result = reg.getUtility(IAuthenticationPolicy)
import tests.test_config
self.assertEqual(result, tests.test_config)
def test_setup_registry_authorization_policy_dottedname(self):
from pyramid.interfaces import IAuthorizationPolicy
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
dummy = object()
config.setup_registry(
authentication_policy=dummy,
authorization_policy='tests.test_config',
)
config.commit()
result = reg.getUtility(IAuthorizationPolicy)
import tests.test_config
self.assertEqual(result, tests.test_config)
def test_setup_registry_authorization_policy_only(self):
from pyramid.registry import Registry
policy = object()
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(authorization_policy=policy)
config = self.assertRaises(ConfigurationExecutionError, config.commit)
def test_setup_registry_no_default_root_factory(self):
from pyramid.interfaces import IRootFactory
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
config.setup_registry()
config.commit()
self.assertEqual(reg.queryUtility(IRootFactory), None)
def test_setup_registry_dottedname_root_factory(self):
from pyramid.interfaces import IRootFactory
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
import tests.test_config
config.setup_registry(root_factory='tests.test_config')
self.assertEqual(reg.queryUtility(IRootFactory), None)
config.commit()
self.assertEqual(reg.getUtility(IRootFactory), tests.test_config)
def test_setup_registry_locale_negotiator_dottedname(self):
from pyramid.interfaces import ILocaleNegotiator
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
import tests.test_config
config.setup_registry(locale_negotiator='tests.test_config')
self.assertEqual(reg.queryUtility(ILocaleNegotiator), None)
config.commit()
utility = reg.getUtility(ILocaleNegotiator)
self.assertEqual(utility, tests.test_config)
def test_setup_registry_locale_negotiator(self):
from pyramid.interfaces import ILocaleNegotiator
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
negotiator = object()
config.setup_registry(locale_negotiator=negotiator)
self.assertEqual(reg.queryUtility(ILocaleNegotiator), None)
config.commit()
utility = reg.getUtility(ILocaleNegotiator)
self.assertEqual(utility, negotiator)
def test_setup_registry_request_factory(self):
from pyramid.interfaces import IRequestFactory
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
factory = object()
config.setup_registry(request_factory=factory)
self.assertEqual(reg.queryUtility(IRequestFactory), None)
config.commit()
utility = reg.getUtility(IRequestFactory)
self.assertEqual(utility, factory)
def test_setup_registry_response_factory(self):
from pyramid.interfaces import IResponseFactory
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
factory = lambda r: object()
config.setup_registry(response_factory=factory)
self.assertEqual(reg.queryUtility(IResponseFactory), None)
config.commit()
utility = reg.getUtility(IResponseFactory)
self.assertEqual(utility, factory)
def test_setup_registry_request_factory_dottedname(self):
from pyramid.interfaces import IRequestFactory
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
import tests.test_config
config.setup_registry(request_factory='tests.test_config')
self.assertEqual(reg.queryUtility(IRequestFactory), None)
config.commit()
utility = reg.getUtility(IRequestFactory)
self.assertEqual(utility, tests.test_config)
def test_setup_registry_alternate_renderers(self):
from pyramid.interfaces import IRendererFactory
from pyramid.registry import Registry
renderer = object()
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(renderers=[('yeah', renderer)])
config.commit()
self.assertEqual(reg.getUtility(IRendererFactory, 'yeah'), renderer)
def test_setup_registry_default_permission(self):
from pyramid.interfaces import IDefaultPermission
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
config.setup_registry(default_permission='view')
config.commit()
self.assertEqual(reg.getUtility(IDefaultPermission), 'view')
def test_setup_registry_includes(self):
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
settings = {
'pyramid.includes': """tests.test_config.dummy_include
tests.test_config.dummy_include2"""
}
config.setup_registry(settings=settings)
self.assertTrue(reg.included)
self.assertTrue(reg.also_included)
def test_setup_registry_includes_spaces(self):
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
settings = {
'pyramid.includes': """tests.test_config.dummy_include tests.\
test_config.dummy_include2"""
}
config.setup_registry(settings=settings)
self.assertTrue(reg.included)
self.assertTrue(reg.also_included)
def test_setup_registry_tweens(self):
from pyramid.interfaces import ITweens
from pyramid.registry import Registry
reg = Registry()
config = self._makeOne(reg)
settings = {'pyramid.tweens': 'tests.test_config.dummy_tween_factory'}
config.setup_registry(settings=settings)
config.commit()
tweens = config.registry.getUtility(ITweens)
self.assertEqual(
tweens.explicit,
[('tests.test_config.dummy_tween_factory', dummy_tween_factory)],
)
def test_introspector_decorator(self):
inst = self._makeOne()
default = inst.introspector
self.assertTrue(hasattr(default, 'add'))
self.assertEqual(inst.introspector, inst.registry.introspector)
introspector = object()
inst.introspector = introspector
new = inst.introspector
self.assertTrue(new is introspector)
self.assertEqual(inst.introspector, inst.registry.introspector)
del inst.introspector
default = inst.introspector
self.assertFalse(default is new)
self.assertTrue(hasattr(default, 'add'))
def test_make_wsgi_app(self):
import pyramid.config
from pyramid.interfaces import IApplicationCreated
from pyramid.router import Router
manager = DummyThreadLocalManager()
config = self._makeOne()
subscriber = self._registerEventListener(config, IApplicationCreated)
config.manager = manager
app = config.make_wsgi_app()
self.assertEqual(app.__class__, Router)
self.assertEqual(manager.pushed['registry'], config.registry)
self.assertEqual(manager.pushed['request'], None)
self.assertTrue(manager.popped)
self.assertEqual(pyramid.config.global_registries.last, app.registry)
self.assertEqual(len(subscriber), 1)
self.assertTrue(IApplicationCreated.providedBy(subscriber[0]))
pyramid.config.global_registries.empty()
def test_include_with_dotted_name(self):
from tests import test_config
config = self._makeOne()
config.include('tests.test_config.dummy_include')
after = config.action_state
actions = after.actions
self.assertEqual(len(actions), 1)
action = after.actions[0]
self.assertEqual(action['discriminator'], 'discrim')
self.assertEqual(action['callable'], None)
self.assertEqual(action['args'], test_config)
def test_include_with_python_callable(self):
from tests import test_config
config = self._makeOne()
config.include(dummy_include)
after = config.action_state
actions = after.actions
self.assertEqual(len(actions), 1)
action = actions[0]
self.assertEqual(action['discriminator'], 'discrim')
self.assertEqual(action['callable'], None)
self.assertEqual(action['args'], test_config)
def test_include_with_module_defaults_to_includeme(self):
from tests import test_config
config = self._makeOne()
config.include('tests.test_config')
after = config.action_state
actions = after.actions
self.assertEqual(len(actions), 1)
action = actions[0]
self.assertEqual(action['discriminator'], 'discrim')
self.assertEqual(action['callable'], None)
self.assertEqual(action['args'], test_config)
def test_include_with_module_defaults_to_includeme_missing(self):
from pyramid.exceptions import ConfigurationError
config = self._makeOne()
self.assertRaises(ConfigurationError, config.include, 'tests')
def test_include_with_route_prefix(self):
root_config = self._makeOne(autocommit=True)
def dummy_subapp(config):
self.assertEqual(config.route_prefix, 'root')
root_config.include(dummy_subapp, route_prefix='root')
def test_include_with_nested_route_prefix(self):
root_config = self._makeOne(autocommit=True, route_prefix='root')
def dummy_subapp2(config):
self.assertEqual(config.route_prefix, 'root/nested')
def dummy_subapp3(config):
self.assertEqual(config.route_prefix, 'root/nested/nested2')
config.include(dummy_subapp4)
def dummy_subapp4(config):
self.assertEqual(config.route_prefix, 'root/nested/nested2')
def dummy_subapp(config):
self.assertEqual(config.route_prefix, 'root/nested')
config.include(dummy_subapp2)
config.include(dummy_subapp3, route_prefix='nested2')
root_config.include(dummy_subapp, route_prefix='nested')
def test_include_with_missing_source_file(self):
import inspect
from pyramid.exceptions import ConfigurationError
config = self._makeOne()
class DummyInspect:
def getmodule(self, c):
return inspect.getmodule(c)
def getsourcefile(self, c):
return None
config.inspect = DummyInspect()
try:
config.include('tests.test_config.dummy_include')
except ConfigurationError as e:
self.assertEqual(
e.args[0],
"No source file for module 'tests.test_config' (.py "
"file must exist, refusing to use orphan .pyc or .pyo file).",
)
else: # pragma: no cover
raise AssertionError
def test_include_constant_root_package(self):
import tests
from tests import test_config
config = self._makeOne(root_package=tests)
results = {}
def include(config):
results['package'] = config.package
results['root_package'] = config.root_package
config.include(include)
self.assertEqual(results['root_package'], tests)
self.assertEqual(results['package'], test_config)
def test_include_threadlocals_active(self):
from pyramid.threadlocal import get_current_registry
stack = []
def include(config):
stack.append(get_current_registry())
config = self._makeOne()
config.include(include)
self.assertTrue(stack[0] is config.registry)
def test_scan_integration(self):
from zope.interface import alsoProvides
from pyramid.view import render_view_to_response
from tests.test_config.pkgs import scannable as package
config = self._makeOne(autocommit=True)
config.scan(package)
ctx = DummyContext()
req = DummyRequest()
alsoProvides(req, IRequest)
req.registry = config.registry
req.method = 'GET'
result = render_view_to_response(ctx, req, '')
self.assertEqual(result, 'grokked')
req.method = 'POST'
result = render_view_to_response(ctx, req, '')
self.assertEqual(result, 'grokked_post')
result = render_view_to_response(ctx, req, 'grokked_class')
self.assertEqual(result, 'grokked_class')
result = render_view_to_response(ctx, req, 'grokked_instance')
self.assertEqual(result, 'grokked_instance')
result = render_view_to_response(ctx, req, 'oldstyle_grokked_class')
self.assertEqual(result, 'oldstyle_grokked_class')
req.method = 'GET'
result = render_view_to_response(ctx, req, 'another')
self.assertEqual(result, 'another_grokked')
req.method = 'POST'
result = render_view_to_response(ctx, req, 'another')
self.assertEqual(result, 'another_grokked_post')
result = render_view_to_response(ctx, req, 'another_grokked_class')
self.assertEqual(result, 'another_grokked_class')
result = render_view_to_response(ctx, req, 'another_grokked_instance')
self.assertEqual(result, 'another_grokked_instance')
result = render_view_to_response(
ctx, req, 'another_oldstyle_grokked_class'
)
self.assertEqual(result, 'another_oldstyle_grokked_class')
result = render_view_to_response(ctx, req, 'stacked1')
self.assertEqual(result, 'stacked')
result = render_view_to_response(ctx, req, 'stacked2')
self.assertEqual(result, 'stacked')
result = render_view_to_response(ctx, req, 'another_stacked1')
self.assertEqual(result, 'another_stacked')
result = render_view_to_response(ctx, req, 'another_stacked2')
self.assertEqual(result, 'another_stacked')
result = render_view_to_response(ctx, req, 'stacked_class1')
self.assertEqual(result, 'stacked_class')
result = render_view_to_response(ctx, req, 'stacked_class2')
self.assertEqual(result, 'stacked_class')
result = render_view_to_response(ctx, req, 'another_stacked_class1')
self.assertEqual(result, 'another_stacked_class')
result = render_view_to_response(ctx, req, 'another_stacked_class2')
self.assertEqual(result, 'another_stacked_class')
# NB: on Jython, a class without an __init__ apparently accepts
# any number of arguments without raising a TypeError, so the next
# assertion may fail there. We don't support Jython at the moment,
# this is just a note to a future self.
self.assertRaises(
TypeError, render_view_to_response, ctx, req, 'basemethod'
)
result = render_view_to_response(ctx, req, 'method1')
self.assertEqual(result, 'method1')
result = render_view_to_response(ctx, req, 'method2')
self.assertEqual(result, 'method2')
result = render_view_to_response(ctx, req, 'stacked_method1')
self.assertEqual(result, 'stacked_method')
result = render_view_to_response(ctx, req, 'stacked_method2')
self.assertEqual(result, 'stacked_method')
result = render_view_to_response(ctx, req, 'subpackage_init')
self.assertEqual(result, 'subpackage_init')
result = render_view_to_response(ctx, req, 'subpackage_notinit')
self.assertEqual(result, 'subpackage_notinit')
result = render_view_to_response(ctx, req, 'subsubpackage_init')
self.assertEqual(result, 'subsubpackage_init')
result = render_view_to_response(ctx, req, 'pod_notinit')
self.assertEqual(result, None)
def test_scan_integration_with_ignore(self):
from zope.interface import alsoProvides
from pyramid.view import render_view_to_response
from tests.test_config.pkgs import scannable as package
config = self._makeOne(autocommit=True)
config.scan(package, ignore='tests.test_config.pkgs.scannable.another')
ctx = DummyContext()
req = DummyRequest()
alsoProvides(req, IRequest)
req.registry = config.registry
req.method = 'GET'
result = render_view_to_response(ctx, req, '')
self.assertEqual(result, 'grokked')
# ignored
v = render_view_to_response(ctx, req, 'another_stacked_class2')
self.assertEqual(v, None)
def test_scan_integration_dottedname_package(self):
from zope.interface import alsoProvides
from pyramid.view import render_view_to_response
config = self._makeOne(autocommit=True)
config.scan('tests.test_config.pkgs.scannable')
ctx = DummyContext()
req = DummyRequest()
alsoProvides(req, IRequest)
req.registry = config.registry
req.method = 'GET'
result = render_view_to_response(ctx, req, '')
self.assertEqual(result, 'grokked')
def test_scan_integration_with_extra_kw(self):
config = self._makeOne(autocommit=True)
config.scan('tests.test_config.pkgs.scanextrakw', a=1, categories=None)
self.assertEqual(config.a, 1)
def test_scan_integration_with_onerror(self):
# fancy sys.path manipulation here to appease "setup.py test" which
# fails miserably when it can't import something in the package
import sys
try:
here = os.path.dirname(__file__)
path = os.path.join(here, 'path')
sys.path.append(path)
config = self._makeOne(autocommit=True)
class FooException(Exception):
pass
def onerror(name):
raise FooException
self.assertRaises(
FooException, config.scan, 'scanerror', onerror=onerror
)
finally:
sys.path.remove(path)
def test_scan_integration_conflict(self):
from pyramid.config import Configurator
from tests.test_config.pkgs import selfscan
c = Configurator()
c.scan(selfscan)
c.scan(selfscan)
try:
c.commit()
except ConfigurationConflictError as why:
def scanconflicts(e):
conflicts = e._conflicts.values()
for conflict in conflicts:
for confinst in conflict:
yield confinst.src
which = list(scanconflicts(why))
self.assertEqual(len(which), 4)
self.assertTrue("@view_config(renderer='string')" in which)
self.assertTrue(
"@view_config(name='two', renderer='string')" in which
)
def test_hook_zca(self):
from zope.component import getSiteManager
def foo():
'123'
try:
config = self._makeOne()
config.hook_zca()
config.begin()
sm = getSiteManager()
self.assertEqual(sm, config.registry)
finally:
getSiteManager.reset()
def test_unhook_zca(self):
from zope.component import getSiteManager
def foo():
'123'
try:
getSiteManager.sethook(foo)
config = self._makeOne()
config.unhook_zca()
sm = getSiteManager()
self.assertNotEqual(sm, '123')
finally:
getSiteManager.reset()
def test___getattr__missing_when_directives_exist(self):
config = self._makeOne()
directives = {}
config.registry._directives = directives
self.assertRaises(AttributeError, config.__getattr__, 'wontexist')
def test___getattr__missing_when_directives_dont_exist(self):
config = self._makeOne()
self.assertRaises(AttributeError, config.__getattr__, 'wontexist')
def test___getattr__matches(self):
config = self._makeOne()
def foo(config): # pragma: no cover
pass
directives = {'foo': (foo, True)}
config.registry._directives = directives
foo_meth = config.foo
self.assertTrue(getattr(foo_meth, '__func__').__docobj__ is foo)
def test___getattr__matches_no_action_wrap(self):
config = self._makeOne()
def foo(config): # pragma: no cover
pass
directives = {'foo': (foo, False)}
config.registry._directives = directives
foo_meth = config.foo
self.assertTrue(getattr(foo_meth, '__func__') is foo)
| ConfiguratorTests |
python | sympy__sympy | sympy/vector/coordsysrect.py | {
"start": 978,
"end": 36894
} | class ____(Basic):
"""
Represents a coordinate system in 3-D space.
"""
def __new__(cls, name, transformation=None, parent=None, location=None,
rotation_matrix=None, vector_names=None, variable_names=None):
"""
The orientation/location parameters are necessary if this system
is being defined at a certain orientation or location wrt another.
Parameters
==========
name : str
The name of the new CoordSys3D instance.
transformation : Lambda, Tuple, str
Transformation defined by transformation equations or chosen
from predefined ones.
location : Vector
The position vector of the new system's origin wrt the parent
instance.
rotation_matrix : SymPy ImmutableMatrix
The rotation matrix of the new coordinate system with respect
to the parent. In other words, the output of
new_system.rotation_matrix(parent).
parent : CoordSys3D
The coordinate system wrt which the orientation/location
(or both) is being defined.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
"""
name = str(name)
Vector = sympy.vector.Vector
Point = sympy.vector.Point
if not isinstance(name, str):
raise TypeError("name should be a string")
if transformation is not None:
if (location is not None) or (rotation_matrix is not None):
raise ValueError("specify either `transformation` or "
"`location`/`rotation_matrix`")
if isinstance(transformation, (Tuple, tuple, list)):
if isinstance(transformation[0], MatrixBase):
rotation_matrix = transformation[0]
location = transformation[1]
else:
transformation = Lambda(transformation[0],
transformation[1])
elif isinstance(transformation, Callable):
x1, x2, x3 = symbols('x1 x2 x3', cls=Dummy)
transformation = Lambda((x1, x2, x3),
transformation(x1, x2, x3))
elif isinstance(transformation, str):
transformation = Str(transformation)
elif isinstance(transformation, (Str, Lambda)):
pass
else:
raise TypeError("transformation: "
"wrong type {}".format(type(transformation)))
# If orientation information has been provided, store
# the rotation matrix accordingly
if rotation_matrix is None:
rotation_matrix = ImmutableDenseMatrix(eye(3))
else:
if not isinstance(rotation_matrix, MatrixBase):
raise TypeError("rotation_matrix should be an Immutable" +
"Matrix instance")
rotation_matrix = rotation_matrix.as_immutable()
# If location information is not given, adjust the default
# location as Vector.zero
if parent is not None:
if not isinstance(parent, CoordSys3D):
raise TypeError("parent should be a " +
"CoordSys3D/None")
if location is None:
location = Vector.zero
else:
if not isinstance(location, Vector):
raise TypeError("location should be a Vector")
# Check that location does not contain base
# scalars
for x in location.free_symbols:
if isinstance(x, BaseScalar):
raise ValueError("location should not contain" +
" BaseScalars")
origin = parent.origin.locate_new(name + '.origin',
location)
else:
location = Vector.zero
origin = Point(name + '.origin')
if transformation is None:
transformation = Tuple(rotation_matrix, location)
if isinstance(transformation, Tuple):
lambda_transformation = CoordSys3D._compose_rotation_and_translation(
transformation[0],
transformation[1],
parent
)
r, l = transformation
l = l._projections
lambda_lame = CoordSys3D._get_lame_coeff('cartesian')
lambda_inverse = lambda x, y, z: r.inv()*Matrix(
[x-l[0], y-l[1], z-l[2]])
elif isinstance(transformation, Str):
trname = transformation.name
lambda_transformation = CoordSys3D._get_transformation_lambdas(trname)
if parent is not None:
if parent.lame_coefficients() != (S.One, S.One, S.One):
raise ValueError('Parent for pre-defined coordinate '
'system should be Cartesian.')
lambda_lame = CoordSys3D._get_lame_coeff(trname)
lambda_inverse = CoordSys3D._set_inv_trans_equations(trname)
elif isinstance(transformation, Lambda):
if not CoordSys3D._check_orthogonality(transformation):
raise ValueError("The transformation equation does not "
"create orthogonal coordinate system")
lambda_transformation = transformation
lambda_lame = CoordSys3D._calculate_lame_coeff(lambda_transformation)
lambda_inverse = None
else:
lambda_transformation = lambda x, y, z: transformation(x, y, z)
lambda_lame = CoordSys3D._get_lame_coeff(transformation)
lambda_inverse = None
if variable_names is None:
if isinstance(transformation, Lambda):
variable_names = ["x1", "x2", "x3"]
elif isinstance(transformation, Str):
if transformation.name == 'spherical':
variable_names = ["r", "theta", "phi"]
elif transformation.name == 'cylindrical':
variable_names = ["r", "theta", "z"]
else:
variable_names = ["x", "y", "z"]
else:
variable_names = ["x", "y", "z"]
if vector_names is None:
vector_names = ["i", "j", "k"]
# All systems that are defined as 'roots' are unequal, unless
# they have the same name.
# Systems defined at same orientation/position wrt the same
# 'parent' are equal, irrespective of the name.
# This is true even if the same orientation is provided via
# different methods like Axis/Body/Space/Quaternion.
# However, coincident systems may be seen as unequal if
# positioned/oriented wrt different parents, even though
# they may actually be 'coincident' wrt the root system.
if parent is not None:
obj = super().__new__(
cls, Str(name), transformation, parent)
else:
obj = super().__new__(
cls, Str(name), transformation)
obj._name = name
# Initialize the base vectors
_check_strings('vector_names', vector_names)
vector_names = list(vector_names)
latex_vects = [(r'\mathbf{\hat{%s}_{%s}}' % (x, name)) for
x in vector_names]
pretty_vects = ['%s_%s' % (x, name) for x in vector_names]
obj._vector_names = vector_names
v1 = BaseVector(0, obj, pretty_vects[0], latex_vects[0])
v2 = BaseVector(1, obj, pretty_vects[1], latex_vects[1])
v3 = BaseVector(2, obj, pretty_vects[2], latex_vects[2])
obj._base_vectors = (v1, v2, v3)
# Initialize the base scalars
        _check_strings('variable_names', variable_names)
variable_names = list(variable_names)
latex_scalars = [(r"\mathbf{{%s}_{%s}}" % (x, name)) for
x in variable_names]
pretty_scalars = ['%s_%s' % (x, name) for x in variable_names]
obj._variable_names = variable_names
obj._vector_names = vector_names
x1 = BaseScalar(0, obj, pretty_scalars[0], latex_scalars[0])
x2 = BaseScalar(1, obj, pretty_scalars[1], latex_scalars[1])
x3 = BaseScalar(2, obj, pretty_scalars[2], latex_scalars[2])
obj._base_scalars = (x1, x2, x3)
obj._transformation = transformation
obj._transformation_lambda = lambda_transformation
obj._lame_coefficients = lambda_lame(x1, x2, x3)
obj._transformation_from_parent_lambda = lambda_inverse
setattr(obj, variable_names[0], x1)
setattr(obj, variable_names[1], x2)
setattr(obj, variable_names[2], x3)
setattr(obj, vector_names[0], v1)
setattr(obj, vector_names[1], v2)
setattr(obj, vector_names[2], v3)
# Assign params
obj._parent = parent
if obj._parent is not None:
obj._root = obj._parent._root
else:
obj._root = obj
obj._parent_rotation_matrix = rotation_matrix
obj._origin = origin
# Return the instance
return obj
def _sympystr(self, printer):
return self._name
def __iter__(self):
return iter(self.base_vectors())
def _eval_simplify(self, **kwargs):
return self
@staticmethod
def _check_orthogonality(equations):
"""
        Helper method for _connect_to_cartesian. It checks whether a set
        of transformation equations creates an orthogonal curvilinear
        coordinate system.
Parameters
==========
equations : Lambda
Lambda of transformation equations
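        Examples
        ========
        A small illustrative check: the standard spherical map is
        orthogonal, so this helper accepts it.
        >>> from sympy import Lambda, symbols, sin, cos
        >>> from sympy.vector import CoordSys3D
        >>> r, t, p = symbols('r t p')
        >>> CoordSys3D._check_orthogonality(Lambda((r, t, p),
        ...     (r*sin(t)*cos(p), r*sin(t)*sin(p), r*cos(t))))
        True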
"""
x1, x2, x3 = symbols("x1, x2, x3", cls=Dummy)
equations = equations(x1, x2, x3)
v1 = Matrix([diff(equations[0], x1),
diff(equations[1], x1), diff(equations[2], x1)])
v2 = Matrix([diff(equations[0], x2),
diff(equations[1], x2), diff(equations[2], x2)])
v3 = Matrix([diff(equations[0], x3),
diff(equations[1], x3), diff(equations[2], x3)])
if any(simplify(i[0] + i[1] + i[2]) == 0 for i in (v1, v2, v3)):
return False
else:
if simplify(v1.dot(v2)) == 0 and simplify(v2.dot(v3)) == 0 \
and simplify(v3.dot(v1)) == 0:
return True
else:
return False
@staticmethod
def _set_inv_trans_equations(curv_coord_name):
"""
Store information about inverse transformation equations for
pre-defined coordinate systems.
Parameters
==========
curv_coord_name : str
Name of coordinate system
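        Examples
        ========
        For example, with the illustrative Cartesian point (1, 0, 2), the
        cylindrical inverse map returns the corresponding (r, theta, z):
        >>> from sympy.vector import CoordSys3D
        >>> CoordSys3D._set_inv_trans_equations('cylindrical')(1, 0, 2)
        (1, 0, 2)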
"""
if curv_coord_name == 'cartesian':
return lambda x, y, z: (x, y, z)
if curv_coord_name == 'spherical':
return lambda x, y, z: (
sqrt(x**2 + y**2 + z**2),
acos(z/sqrt(x**2 + y**2 + z**2)),
atan2(y, x)
)
if curv_coord_name == 'cylindrical':
return lambda x, y, z: (
sqrt(x**2 + y**2),
atan2(y, x),
z
)
        raise ValueError('Wrong set of parameters.'
                         ' Type of coordinate system is not defined')
def _calculate_inv_trans_equations(self):
"""
        Helper method for set_coordinate_type. It calculates inverse
        transformation equations for the given transformation equations.
"""
x1, x2, x3 = symbols("x1, x2, x3", cls=Dummy, reals=True)
x, y, z = symbols("x, y, z", cls=Dummy)
equations = self._transformation(x1, x2, x3)
solved = solve([equations[0] - x,
equations[1] - y,
equations[2] - z], (x1, x2, x3), dict=True)[0]
solved = solved[x1], solved[x2], solved[x3]
self._transformation_from_parent_lambda = \
lambda x1, x2, x3: tuple(i.subs(list(zip((x, y, z), (x1, x2, x3)))) for i in solved)
@staticmethod
def _get_lame_coeff(curv_coord_name):
"""
Store information about Lame coefficients for pre-defined
coordinate systems.
Parameters
==========
curv_coord_name : str
Name of coordinate system
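        Examples
        ========
        For example, the spherical Lame coefficients (1, r, r*sin(theta))
        evaluated at the illustrative point r=1, theta=pi/2, phi=0:
        >>> from sympy import pi
        >>> from sympy.vector import CoordSys3D
        >>> CoordSys3D._get_lame_coeff('spherical')(1, pi/2, 0)
        (1, 1, 1)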
"""
if isinstance(curv_coord_name, str):
if curv_coord_name == 'cartesian':
return lambda x, y, z: (S.One, S.One, S.One)
if curv_coord_name == 'spherical':
return lambda r, theta, phi: (S.One, r, r*sin(theta))
if curv_coord_name == 'cylindrical':
return lambda r, theta, h: (S.One, r, S.One)
raise ValueError('Wrong set of parameters.'
' Type of coordinate system is not defined')
        return CoordSys3D._calculate_lame_coeff(curv_coord_name)
@staticmethod
def _calculate_lame_coeff(equations):
"""
        It calculates Lame coefficients
        for the given transformation equations.
Parameters
==========
equations : Lambda
Lambda of transformation equations.
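        Examples
        ========
        For an illustrative scaling map (2*u, 3*v, w), the Lame
        coefficients are simply the scale factors:
        >>> from sympy import Lambda, symbols
        >>> from sympy.vector import CoordSys3D
        >>> u, v, w = symbols('u v w')
        >>> CoordSys3D._calculate_lame_coeff(
        ...     Lambda((u, v, w), (2*u, 3*v, w)))(u, v, w)
        (2, 3, 1)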
"""
return lambda x1, x2, x3: (
sqrt(diff(equations(x1, x2, x3)[0], x1)**2 +
diff(equations(x1, x2, x3)[1], x1)**2 +
diff(equations(x1, x2, x3)[2], x1)**2),
sqrt(diff(equations(x1, x2, x3)[0], x2)**2 +
diff(equations(x1, x2, x3)[1], x2)**2 +
diff(equations(x1, x2, x3)[2], x2)**2),
sqrt(diff(equations(x1, x2, x3)[0], x3)**2 +
diff(equations(x1, x2, x3)[1], x3)**2 +
diff(equations(x1, x2, x3)[2], x3)**2)
)
def _inverse_rotation_matrix(self):
"""
Returns inverse rotation matrix.
"""
return simplify(self._parent_rotation_matrix**-1)
@staticmethod
def _get_transformation_lambdas(curv_coord_name):
"""
Store information about transformation equations for pre-defined
coordinate systems.
Parameters
==========
curv_coord_name : str
Name of coordinate system
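        Examples
        ========
        For example, mapping the illustrative cylindrical point
        (r=2, theta=pi/2, h=3) to Cartesian coordinates:
        >>> from sympy import pi
        >>> from sympy.vector import CoordSys3D
        >>> CoordSys3D._get_transformation_lambdas('cylindrical')(2, pi/2, 3)
        (0, 2, 3)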
"""
if isinstance(curv_coord_name, str):
if curv_coord_name == 'cartesian':
return lambda x, y, z: (x, y, z)
if curv_coord_name == 'spherical':
return lambda r, theta, phi: (
r*sin(theta)*cos(phi),
r*sin(theta)*sin(phi),
r*cos(theta)
)
if curv_coord_name == 'cylindrical':
return lambda r, theta, h: (
r*cos(theta),
r*sin(theta),
h
)
        raise ValueError('Wrong set of parameters.'
                         ' Type of coordinate system is not defined')
@classmethod
def _rotation_trans_equations(cls, matrix, equations):
"""
Returns the transformation equations obtained from rotation matrix.
Parameters
==========
matrix : Matrix
Rotation matrix
equations : tuple
Transformation equations
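        Examples
        ========
        For an illustrative rotation by pi/2 about the z-axis:
        >>> from sympy import Matrix, symbols
        >>> from sympy.vector import CoordSys3D
        >>> x, y, z = symbols('x y z')
        >>> CoordSys3D._rotation_trans_equations(
        ...     Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]), (x, y, z))
        (y, -x, z)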
"""
return tuple(matrix * Matrix(equations))
@property
def origin(self):
return self._origin
def base_vectors(self):
return self._base_vectors
def base_scalars(self):
return self._base_scalars
def lame_coefficients(self):
return self._lame_coefficients
def transformation_to_parent(self):
return self._transformation_lambda(*self.base_scalars())
def transformation_from_parent(self):
if self._parent is None:
raise ValueError("no parent coordinate system, use "
"`transformation_from_parent_function()`")
return self._transformation_from_parent_lambda(
*self._parent.base_scalars())
def transformation_from_parent_function(self):
return self._transformation_from_parent_lambda
def rotation_matrix(self, other):
"""
        Returns the direction cosine matrix (DCM), also known as the
'rotation matrix' of this coordinate system with respect to
another system.
If v_a is a vector defined in system 'A' (in matrix format)
and v_b is the same vector defined in system 'B', then
v_a = A.rotation_matrix(B) * v_b.
A SymPy Matrix is returned.
Parameters
==========
other : CoordSys3D
            The system with respect to which the DCM is generated.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import symbols
>>> q1 = symbols('q1')
>>> N = CoordSys3D('N')
>>> A = N.orient_new_axis('A', q1, N.i)
>>> N.rotation_matrix(A)
Matrix([
[1, 0, 0],
[0, cos(q1), -sin(q1)],
[0, sin(q1), cos(q1)]])
"""
from sympy.vector.functions import _path
if not isinstance(other, CoordSys3D):
raise TypeError(str(other) +
" is not a CoordSys3D")
# Handle special cases
if other == self:
return eye(3)
elif other == self._parent:
return self._parent_rotation_matrix
elif other._parent == self:
return other._parent_rotation_matrix.T
# Else, use tree to calculate position
rootindex, path = _path(self, other)
result = eye(3)
for i in range(rootindex):
result *= path[i]._parent_rotation_matrix
for i in range(rootindex + 1, len(path)):
result *= path[i]._parent_rotation_matrix.T
return result
@cacheit
def position_wrt(self, other):
"""
Returns the position vector of the origin of this coordinate
system with respect to another Point/CoordSys3D.
Parameters
==========
other : Point/CoordSys3D
If other is a Point, the position of this system's origin
            wrt it is returned. If it is an instance of CoordSys3D,
the position wrt its origin is returned.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> N = CoordSys3D('N')
>>> N1 = N.locate_new('N1', 10 * N.i)
>>> N.position_wrt(N1)
(-10)*N.i
"""
return self.origin.position_wrt(other)
def scalar_map(self, other):
"""
Returns a dictionary which expresses the coordinate variables
(base scalars) of this frame in terms of the variables of
        the other system.
        Parameters
        ==========
        other : CoordSys3D
The other system to map the variables to.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import Symbol
>>> A = CoordSys3D('A')
>>> q = Symbol('q')
>>> B = A.orient_new_axis('B', q, A.k)
>>> A.scalar_map(B)
{A.x: B.x*cos(q) - B.y*sin(q), A.y: B.x*sin(q) + B.y*cos(q), A.z: B.z}
"""
origin_coords = tuple(self.position_wrt(other).to_matrix(other))
relocated_scalars = [x - origin_coords[i]
for i, x in enumerate(other.base_scalars())]
vars_matrix = (self.rotation_matrix(other) *
Matrix(relocated_scalars))
return {x: trigsimp(vars_matrix[i])
for i, x in enumerate(self.base_scalars())}
def locate_new(self, name, position, vector_names=None,
variable_names=None):
"""
Returns a CoordSys3D with its origin located at the given
position wrt this coordinate system's origin.
Parameters
==========
name : str
The name of the new CoordSys3D instance.
position : Vector
The position vector of the new system's origin wrt this
one.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> A = CoordSys3D('A')
>>> B = A.locate_new('B', 10 * A.i)
>>> B.origin.position_wrt(A.origin)
10*A.i
"""
if variable_names is None:
variable_names = self._variable_names
if vector_names is None:
vector_names = self._vector_names
return CoordSys3D(name, location=position,
vector_names=vector_names,
variable_names=variable_names,
parent=self)
def orient_new(self, name, orienters, location=None,
vector_names=None, variable_names=None):
"""
Creates a new CoordSys3D oriented in the user-specified way
with respect to this system.
Please refer to the documentation of the orienter classes
for more information about the orientation procedure.
Parameters
==========
name : str
The name of the new CoordSys3D instance.
orienters : iterable/Orienter
An Orienter or an iterable of Orienters for orienting the
new coordinate system.
If an Orienter is provided, it is applied to get the new
system.
If an iterable is provided, the orienters will be applied
in the order in which they appear in the iterable.
location : Vector(optional)
The location of the new coordinate system's origin wrt this
system's origin. If not specified, the origins are taken to
be coincident.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import symbols
>>> q0, q1, q2, q3 = symbols('q0 q1 q2 q3')
>>> N = CoordSys3D('N')
Using an AxisOrienter
>>> from sympy.vector import AxisOrienter
>>> axis_orienter = AxisOrienter(q1, N.i + 2 * N.j)
>>> A = N.orient_new('A', (axis_orienter, ))
Using a BodyOrienter
>>> from sympy.vector import BodyOrienter
>>> body_orienter = BodyOrienter(q1, q2, q3, '123')
>>> B = N.orient_new('B', (body_orienter, ))
Using a SpaceOrienter
>>> from sympy.vector import SpaceOrienter
>>> space_orienter = SpaceOrienter(q1, q2, q3, '312')
>>> C = N.orient_new('C', (space_orienter, ))
Using a QuaternionOrienter
>>> from sympy.vector import QuaternionOrienter
>>> q_orienter = QuaternionOrienter(q0, q1, q2, q3)
>>> D = N.orient_new('D', (q_orienter, ))
"""
if variable_names is None:
variable_names = self._variable_names
if vector_names is None:
vector_names = self._vector_names
if isinstance(orienters, Orienter):
if isinstance(orienters, AxisOrienter):
final_matrix = orienters.rotation_matrix(self)
else:
final_matrix = orienters.rotation_matrix()
# TODO: trigsimp is needed here so that the matrix becomes
# canonical (scalar_map also calls trigsimp; without this, you can
# end up with the same CoordinateSystem that compares differently
# due to a differently formatted matrix). However, this is
# probably not so good for performance.
final_matrix = trigsimp(final_matrix)
else:
final_matrix = Matrix(eye(3))
for orienter in orienters:
if isinstance(orienter, AxisOrienter):
final_matrix *= orienter.rotation_matrix(self)
else:
final_matrix *= orienter.rotation_matrix()
return CoordSys3D(name, rotation_matrix=final_matrix,
vector_names=vector_names,
variable_names=variable_names,
location=location,
parent=self)
def orient_new_axis(self, name, angle, axis, location=None,
vector_names=None, variable_names=None):
"""
Axis rotation is a rotation about an arbitrary axis by
some angle. The angle is supplied as a SymPy expr scalar, and
the axis is supplied as a Vector.
Parameters
==========
name : string
The name of the new coordinate system
angle : Expr
The angle by which the new system is to be rotated
axis : Vector
The axis around which the rotation has to be performed
location : Vector(optional)
The location of the new coordinate system's origin wrt this
system's origin. If not specified, the origins are taken to
be coincident.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import symbols
>>> q1 = symbols('q1')
>>> N = CoordSys3D('N')
>>> B = N.orient_new_axis('B', q1, N.i + 2 * N.j)
"""
if variable_names is None:
variable_names = self._variable_names
if vector_names is None:
vector_names = self._vector_names
orienter = AxisOrienter(angle, axis)
return self.orient_new(name, orienter,
location=location,
vector_names=vector_names,
variable_names=variable_names)
def orient_new_body(self, name, angle1, angle2, angle3,
rotation_order, location=None,
vector_names=None, variable_names=None):
"""
Body orientation takes this coordinate system through three
successive simple rotations.
Body fixed rotations include both Euler Angles and
Tait-Bryan Angles, see https://en.wikipedia.org/wiki/Euler_angles.
Parameters
==========
name : string
The name of the new coordinate system
angle1, angle2, angle3 : Expr
Three successive angles to rotate the coordinate system by
rotation_order : string
String defining the order of axes for rotation
location : Vector(optional)
The location of the new coordinate system's origin wrt this
system's origin. If not specified, the origins are taken to
be coincident.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import symbols
>>> q1, q2, q3 = symbols('q1 q2 q3')
>>> N = CoordSys3D('N')
A 'Body' fixed rotation is described by three angles and
three body-fixed rotation axes. To orient a coordinate system D
with respect to N, each sequential rotation is always about
the orthogonal unit vectors fixed to D. For example, a '123'
rotation will specify rotations about N.i, then D.j, then
D.k. (Initially, D.i is the same as N.i)
Therefore,
>>> D = N.orient_new_body('D', q1, q2, q3, '123')
is the same as
>>> D = N.orient_new_axis('D', q1, N.i)
>>> D = D.orient_new_axis('D', q2, D.j)
>>> D = D.orient_new_axis('D', q3, D.k)
Acceptable rotation orders are of length 3, expressed in XYZ or
123, and cannot have a rotation about an axis twice in a row.
>>> B = N.orient_new_body('B', q1, q2, q3, '123')
>>> B = N.orient_new_body('B', q1, q2, 0, 'ZXZ')
>>> B = N.orient_new_body('B', 0, 0, 0, 'XYX')
"""
orienter = BodyOrienter(angle1, angle2, angle3, rotation_order)
return self.orient_new(name, orienter,
location=location,
vector_names=vector_names,
variable_names=variable_names)
def orient_new_space(self, name, angle1, angle2, angle3,
rotation_order, location=None,
vector_names=None, variable_names=None):
"""
Space rotation is similar to Body rotation, but the rotations
are applied in the opposite order.
Parameters
==========
name : string
The name of the new coordinate system
angle1, angle2, angle3 : Expr
Three successive angles to rotate the coordinate system by
rotation_order : string
String defining the order of axes for rotation
location : Vector(optional)
The location of the new coordinate system's origin wrt this
system's origin. If not specified, the origins are taken to
be coincident.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
See Also
========
CoordSys3D.orient_new_body : method to orient via Euler
angles
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import symbols
>>> q1, q2, q3 = symbols('q1 q2 q3')
>>> N = CoordSys3D('N')
To orient a coordinate system D with respect to N, each
sequential rotation is always about N's orthogonal unit vectors.
For example, a '123' rotation will specify rotations about
N.i, then N.j, then N.k.
Therefore,
>>> D = N.orient_new_space('D', q1, q2, q3, '312')
is the same as
>>> B = N.orient_new_axis('B', q1, N.i)
>>> C = B.orient_new_axis('C', q2, N.j)
>>> D = C.orient_new_axis('D', q3, N.k)
"""
orienter = SpaceOrienter(angle1, angle2, angle3, rotation_order)
return self.orient_new(name, orienter,
location=location,
vector_names=vector_names,
variable_names=variable_names)
def orient_new_quaternion(self, name, q0, q1, q2, q3, location=None,
vector_names=None, variable_names=None):
"""
Quaternion orientation orients the new CoordSys3D with
Quaternions, defined as a finite rotation about lambda, a unit
vector, by some amount theta.
This orientation is described by four parameters:
q0 = cos(theta/2)
q1 = lambda_x sin(theta/2)
q2 = lambda_y sin(theta/2)
q3 = lambda_z sin(theta/2)
Quaternion does not take in a rotation order.
Parameters
==========
name : string
The name of the new coordinate system
q0, q1, q2, q3 : Expr
The quaternions to rotate the coordinate system by
location : Vector(optional)
The location of the new coordinate system's origin wrt this
system's origin. If not specified, the origins are taken to
be coincident.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> from sympy import symbols
>>> q0, q1, q2, q3 = symbols('q0 q1 q2 q3')
>>> N = CoordSys3D('N')
>>> B = N.orient_new_quaternion('B', q0, q1, q2, q3)
"""
orienter = QuaternionOrienter(q0, q1, q2, q3)
return self.orient_new(name, orienter,
location=location,
vector_names=vector_names,
variable_names=variable_names)
def create_new(self, name, transformation, variable_names=None, vector_names=None):
"""
Returns a CoordSys3D which is connected to self by transformation.
Parameters
==========
name : str
The name of the new CoordSys3D instance.
transformation : Lambda, Tuple, str
Transformation defined by transformation equations or chosen
from predefined ones.
vector_names, variable_names : iterable(optional)
Iterables of 3 strings each, with custom names for base
vectors and base scalars of the new system respectively.
Used for simple str printing.
Examples
========
>>> from sympy.vector import CoordSys3D
>>> a = CoordSys3D('a')
>>> b = a.create_new('b', transformation='spherical')
>>> b.transformation_to_parent()
(b.r*sin(b.theta)*cos(b.phi), b.r*sin(b.phi)*sin(b.theta), b.r*cos(b.theta))
>>> b.transformation_from_parent()
(sqrt(a.x**2 + a.y**2 + a.z**2), acos(a.z/sqrt(a.x**2 + a.y**2 + a.z**2)), atan2(a.y, a.x))
"""
return CoordSys3D(name, parent=self, transformation=transformation,
variable_names=variable_names, vector_names=vector_names)
def __init__(self, name, location=None, rotation_matrix=None,
parent=None, vector_names=None, variable_names=None,
latex_vects=None, pretty_vects=None, latex_scalars=None,
pretty_scalars=None, transformation=None):
# Dummy initializer for setting docstring
pass
__init__.__doc__ = __new__.__doc__
@staticmethod
def _compose_rotation_and_translation(rot, translation, parent):
r = lambda x, y, z: CoordSys3D._rotation_trans_equations(rot, (x, y, z))
if parent is None:
return r
dx, dy, dz = [translation.dot(i) for i in parent.base_vectors()]
t = lambda x, y, z: (
x + dx,
y + dy,
z + dz,
)
return lambda x, y, z: t(*r(x, y, z))
def _check_strings(arg_name, arg):
errorstr = arg_name + " must be an iterable of 3 string-types"
if len(arg) != 3:
raise ValueError(errorstr)
for s in arg:
if not isinstance(s, str):
raise TypeError(errorstr)
# Delayed import to avoid cyclic import problems:
from sympy.vector.vector import BaseVector
| CoordSys3D |
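The quaternion parameters described in `orient_new_quaternion` above (q0 = cos(theta/2), qi = lambda_i * sin(theta/2)) map to a rotation matrix via the standard quaternion-to-matrix formula. A minimal pure-Python sketch; the `quaternion_rotation_matrix` helper is illustrative, not sympy's API, and this writes the active-rotation convention, whereas sympy's orienters may return the transposed direction-cosine matrix:

```python
import math

def quaternion_rotation_matrix(q0, q1, q2, q3):
    """Active rotation matrix for a unit quaternion (q0, q1, q2, q3)."""
    return [
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ]

# A 90-degree rotation about the z axis: lambda = (0, 0, 1), theta = pi/2.
theta = math.pi / 2
q0 = math.cos(theta / 2)
q3 = math.sin(theta / 2)  # lambda_z * sin(theta/2)
R = quaternion_rotation_matrix(q0, 0.0, 0.0, q3)
# R maps the x axis onto the y axis, as expected for a +90-degree z rotation.
```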
python | walkccc__LeetCode | solutions/1237. Find Positive Integer Solution for a Given Equation/1237.py | {
"start": 0,
"end": 354
} | class ____:
def findSolution(self, customfunction: 'CustomFunction', z: int) -> list[list[int]]:
ans = []
x = 1
y = 1000
while x <= 1000 and y >= 1:
f = customfunction.f(x, y)
if f < z:
x += 1
elif f > z:
y -= 1
else:
ans.append([x, y])
x += 1
y -= 1
return ans
| Solution |
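The `findSolution` snippet above is a saddleback search: because `customfunction.f` is monotonically increasing in both arguments, one pointer can sweep x upward while the other sweeps y downward, visiting O(x_max + y_max) grid cells instead of the full x_max * y_max grid. A self-contained sketch with a stand-in monotone function (the `lambda x, y: x + y` stand-in is an assumption; on LeetCode, `CustomFunction` is a hidden API):

```python
def find_solution(f, z, x_max=1000, y_max=1000):
    """Return all [x, y] in [1, x_max] x [1, y_max] with f(x, y) == z,
    assuming f is strictly increasing in both x and y."""
    ans = []
    x, y = 1, y_max
    while x <= x_max and y >= 1:
        v = f(x, y)
        if v < z:
            x += 1   # too small: only increasing x can raise f
        elif v > z:
            y -= 1   # too large: only decreasing y can lower f
        else:
            ans.append([x, y])
            x += 1
            y -= 1
    return ans

print(find_solution(lambda x, y: x + y, 5))
# -> [[1, 4], [2, 3], [3, 2], [4, 1]]
```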
python | mozilla__bleach | bleach/_vendor/html5lib/treewalkers/base.py | {
"start": 4819,
"end": 7476
} | class ____(TreeWalker):
def getNodeDetails(self, node):
raise NotImplementedError
def getFirstChild(self, node):
raise NotImplementedError
def getNextSibling(self, node):
raise NotImplementedError
def getParentNode(self, node):
raise NotImplementedError
def __iter__(self):
currentNode = self.tree
while currentNode is not None:
details = self.getNodeDetails(currentNode)
type, details = details[0], details[1:]
hasChildren = False
if type == DOCTYPE:
yield self.doctype(*details)
elif type == TEXT:
for token in self.text(*details):
yield token
elif type == ELEMENT:
namespace, name, attributes, hasChildren = details
if (not namespace or namespace == namespaces["html"]) and name in voidElements:
for token in self.emptyTag(namespace, name, attributes,
hasChildren):
yield token
hasChildren = False
else:
yield self.startTag(namespace, name, attributes)
elif type == COMMENT:
yield self.comment(details[0])
elif type == ENTITY:
yield self.entity(details[0])
elif type == DOCUMENT:
hasChildren = True
else:
yield self.unknown(details[0])
if hasChildren:
firstChild = self.getFirstChild(currentNode)
else:
firstChild = None
if firstChild is not None:
currentNode = firstChild
else:
while currentNode is not None:
details = self.getNodeDetails(currentNode)
type, details = details[0], details[1:]
if type == ELEMENT:
namespace, name, attributes, hasChildren = details
if (namespace and namespace != namespaces["html"]) or name not in voidElements:
yield self.endTag(namespace, name)
if self.tree is currentNode:
currentNode = None
break
nextSibling = self.getNextSibling(currentNode)
if nextSibling is not None:
currentNode = nextSibling
break
else:
currentNode = self.getParentNode(currentNode)
| NonRecursiveTreeWalker |
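The walker above turns a recursive tree traversal into an explicit loop: descend via `getFirstChild`, and when a subtree is exhausted, climb via `getNextSibling`/`getParentNode`, emitting end events on the way up. A minimal sketch of the same control flow over a toy tree (the `Node` class and event tuples are illustrative, not html5lib's API):

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def first_child(node):
    return node.children[0] if node.children else None

def next_sibling(node):
    if node.parent is None:
        return None
    siblings = node.parent.children
    i = siblings.index(node)
    return siblings[i + 1] if i + 1 < len(siblings) else None

def walk(root):
    """Yield ('start', name) / ('end', name) events without recursion."""
    current = root
    while current is not None:
        yield ("start", current.name)
        child = first_child(current)
        if child is not None:
            current = child
            continue
        # Leaf: close it, then climb until a sibling is found or root closes.
        while current is not None:
            yield ("end", current.name)
            if current is root:
                return
            sib = next_sibling(current)
            if sib is not None:
                current = sib
                break
            current = current.parent

tree = Node("html", [Node("head", [Node("title")]), Node("body")])
events = list(walk(tree))
# -> [('start', 'html'), ('start', 'head'), ('start', 'title'),
#     ('end', 'title'), ('end', 'head'), ('start', 'body'),
#     ('end', 'body'), ('end', 'html')]
```

html5lib's version layers token details (doctype, text, void elements) on top of this skeleton, but the descend/climb structure is the same.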