| title | diff | body | url | created_at | closed_at | merged_at | updated_at | diff_len | repo_name | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|
Don't attempt to coerce JS strings to numbers | diff --git a/test/test_utils.py b/test/test_utils.py
index 962fd8d753f..c2d1e4fb17a 100644
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -994,6 +994,12 @@ def test_js_to_json_edgecases(self):
on = js_to_json('{42:4.2e1}')
self.assertEqual(json.loads(on), {'42': 42.0})
+ on = js_to_json('{ "0x40": "0x40" }')
+ self.assertEqual(json.loads(on), {'0x40': '0x40'})
+
+ on = js_to_json('{ "040": "040" }')
+ self.assertEqual(json.loads(on), {'040': '040'})
+
def test_js_to_json_malformed(self):
self.assertEqual(js_to_json('42a1'), '42"a1"')
self.assertEqual(js_to_json('42a-1'), '42"a"-1')
diff --git a/youtube_dl/utils.py b/youtube_dl/utils.py
index 01d9c036214..737e2810e22 100644
--- a/youtube_dl/utils.py
+++ b/youtube_dl/utils.py
@@ -4088,12 +4088,12 @@ def fix_kv(m):
'\\\n': '',
'\\x': '\\u00',
}.get(m.group(0), m.group(0)), v[1:-1])
-
- for regex, base in INTEGER_TABLE:
- im = re.match(regex, v)
- if im:
- i = int(im.group(1), base)
- return '"%d":' % i if v.endswith(':') else '%d' % i
+ else:
+ for regex, base in INTEGER_TABLE:
+ im = re.match(regex, v)
+ if im:
+ i = int(im.group(1), base)
+ return '"%d":' % i if v.endswith(':') else '%d' % i
return '"%s"' % v
| ### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
The current logic in `js_to_json` tries to rewrite octal/hex numbers to decimal. However, when the logic actually happens the `"` or `'` have already been trimmed off. This causes what were originally strings, that happen to look like octal/hex numbers, to get rewritten to decimal and returned as a number rather than a string.
In practice, something like:
```js
{
"0x40": "foo",
"040": "bar",
}
```
would get rewritten as:
```json
{
64: "foo",
  32: "bar"
}
```
This is problematic since this isn't valid JSON as you cannot have non-string keys. | https://api.github.com/repos/ytdl-org/youtube-dl/pulls/26851 | 2020-10-09T23:09:34Z | 2020-10-17T17:10:42Z | 2020-10-17T17:10:42Z | 2020-12-02T16:37:31Z | 450 | ytdl-org/youtube-dl | 50,667 |
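The `js_to_json` fix described in this row can be illustrated with a small self-contained sketch (not youtube-dl's actual implementation — the regexes and quoting rules here are simplified stand-ins): the integer coercion only runs for *unquoted* tokens, so keys and values that were quoted strings in the source stay strings.

```python
import re

# Simplified stand-in for youtube-dl's INTEGER_TABLE (hypothetical regexes,
# not the real ones): hex and octal literal patterns with their bases.
INTEGER_TABLE = (
    (r'^0[xX]([0-9a-fA-F]+)$', 16),
    (r'^0+([0-7]+)$', 8),
)

def convert_token(v):
    """Convert one JS token to its JSON form, mirroring the fixed logic:
    coercion sits in the `else` branch, i.e. it only applies when the
    token was NOT already a quoted string."""
    if v.startswith('"') and v.endswith('"'):
        return v  # was quoted in the source -> always stays a string
    for regex, base in INTEGER_TABLE:
        m = re.match(regex, v)
        if m:
            return str(int(m.group(1), base))  # bare literal -> decimal
    return '"%s"' % v

assert convert_token('"0x40"') == '"0x40"'  # quoted: untouched (the bug fix)
assert convert_token('0x40') == '64'        # bare hex literal: coerced
assert convert_token('040') == '32'         # bare octal literal: coerced
```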
Fix Dockerfile.gpu to include required python3.8-tk | diff --git a/Dockerfile.gpu b/Dockerfile.gpu
index f940119f79..b18c58d447 100755
--- a/Dockerfile.gpu
+++ b/Dockerfile.gpu
@@ -7,6 +7,7 @@ RUN add-apt-repository ppa:deadsnakes/ppa -y
RUN apt-get update
RUN apt install python3.8 -y
RUN apt install python3.8-distutils -y
+RUN apt install python3.8-tk -y
RUN apt install curl -y
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.8 get-pip.py
@@ -24,4 +25,4 @@ RUN jupyter serverextension enable --py jupyter_http_over_ws
RUN alias python=python3.8
RUN echo "alias python=python3.8" >> /root/.bashrc
WORKDIR "/notebooks"
-CMD ["jupyter-notebook", "--allow-root" ,"--port=8888" ,"--no-browser" ,"--ip=0.0.0.0"]
\ No newline at end of file
+CMD ["jupyter-notebook", "--allow-root" ,"--port=8888" ,"--no-browser" ,"--ip=0.0.0.0"]
diff --git a/INSTALL.md b/INSTALL.md
index ef0836e05e..2a94d4f246 100755
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -233,7 +233,9 @@ INFO 1. Install Docker
deepfakes-gpu
1. Open a new terminal to interact with the project
- docker exec faceswap-gpu python /srv/faceswap.py gui
+ docker exec -it deepfakes-gpu /bin/bash
+ # Launch deepfakes gui (Answer 3 for NVIDIA at the prompt)
+ python3.8 /srv/faceswap.py gui
```
A successful setup log, without docker.
| Also fixed incorrect instructions in INSTALL.md documentation for docker | https://api.github.com/repos/deepfakes/faceswap/pulls/1118 | 2021-01-13T21:59:07Z | 2021-01-23T15:55:07Z | 2021-01-23T15:55:07Z | 2021-01-23T15:55:07Z | 445 | deepfakes/faceswap | 18,853 |
Fix typo in changelog. | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 30cb4d4de12..fb32ba8fd1e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -30,7 +30,7 @@ More details about these changes can be found on our GitHub repo.
* Fedora 29+ is now supported by certbot-auto. Since Python 2.x is on a deprecation
path in Fedora, certbot-auto will install and use Python 3.x on Fedora 29+.
-* CLI flag `--http-port` has been added for Nginx plugin exclusively, and replaces
+* CLI flag `--https-port` has been added for Nginx plugin exclusively, and replaces
`--tls-sni-01-port`. It defines the HTTPS port the Nginx plugin will use while
setting up a new SSL vhost. By default the HTTPS port is 443.
| Fixes a typo in the changelog pointed out at https://community.letsencrypt.org/t/certbot-0-33-0-release/90298/4 so it's correct going forward. | https://api.github.com/repos/certbot/certbot/pulls/6910 | 2019-04-03T21:47:39Z | 2019-04-03T22:16:44Z | 2019-04-03T22:16:44Z | 2019-04-03T22:17:36Z | 206 | certbot/certbot | 2,759 |
CI: make travis run the doctests | diff --git a/continuous_integration/test_script.sh b/continuous_integration/test_script.sh
index f25f45e4b222e..faf106d63f629 100644
--- a/continuous_integration/test_script.sh
+++ b/continuous_integration/test_script.sh
@@ -12,8 +12,8 @@ python -c "import scipy; print('scipy %s' % scipy.__version__)"
python setup.py build_ext --inplace
if [[ "$COVERAGE" == "true" ]]; then
- export WITH_COVERAGE="--with-coverage"
+ make test-coverage
else
- export WITH_COVERAGE=""
+ make test-code
fi
-nosetests -s -v $WITH_COVERAGE sklearn
+make test-doc
| Reuse the Makefile to avoid duplication of the list of documentation folders to test. Unfortunately there is no simple way to make nose recursively introspect non-package folders.
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/3189 | 2014-05-23T09:18:12Z | 2014-05-23T09:35:11Z | 2014-05-23T09:35:10Z | 2014-06-13T12:28:32Z | 164 | scikit-learn/scikit-learn | 46,413 |
Remove unneeded mypy dependencies | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index c2f4b1684e..3561df4f90 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -43,10 +43,8 @@ repos:
- id: mypy
exclude: ^docs/conf.py
additional_dependencies:
- - types-dataclasses >= 0.1.3
- types-PyYAML
- tomli >= 0.2.6, < 2.0.0
- - types-typed-ast >= 1.4.1
- click >= 8.1.0, != 8.1.4
- packaging >= 22.0
- platformdirs >= 2.1.0
| Black no longer uses typed-ast or the dataclasses backport, so these should both be unnecessary now! | https://api.github.com/repos/psf/black/pulls/3783 | 2023-07-11T09:11:40Z | 2023-07-11T14:21:37Z | 2023-07-11T14:21:37Z | 2023-07-11T14:21:41Z | 182 | psf/black | 24,461 |
[NFC] polish code style | diff --git a/colossalai/fx/passes/passes_for_gpt2_test.py b/colossalai/fx/passes/passes_for_gpt2_test.py
index f98fcd686ea4..abc1a089e9a9 100644
--- a/colossalai/fx/passes/passes_for_gpt2_test.py
+++ b/colossalai/fx/passes/passes_for_gpt2_test.py
@@ -1,14 +1,15 @@
+import inspect
+from typing import Any, Callable, Dict, List, Optional
+
import torch
-from torch.fx.graph_module import GraphModule
-from typing import Callable, List, Dict, Any, Optional
-from torch.fx._compatibility import compatibility
from packaging import version
+from torch.fx._compatibility import compatibility
+from torch.fx.graph_module import GraphModule
+from torch.fx.node import Node
+
+from colossalai.fx.passes.adding_split_node_pass import balanced_split_pass, pipe_split
from colossalai.fx.passes.meta_info_prop import TensorMetadata
-import inspect
-from typing import List
from colossalai.fx.passes.split_module import Partition
-from colossalai.fx.passes.adding_split_node_pass import pipe_split, balanced_split_pass
-from torch.fx.node import Node
def customized_split_pass_for_gpt2(gm: torch.fx.GraphModule, pp_size: int, partition_list: List[int]):
| ## 📌 Checklist before creating the PR
- [ ] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [ ] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3273 | 2023-03-28T02:42:46Z | 2023-03-28T02:43:28Z | 2023-03-28T02:43:28Z | 2023-03-28T04:29:25Z | 301 | hpcaitech/ColossalAI | 11,358 |
List more private and link-local IP networks | diff --git a/homeassistant/util/network.py b/homeassistant/util/network.py
index 87077a0eb0a2..7d0d6e99639c 100644
--- a/homeassistant/util/network.py
+++ b/homeassistant/util/network.py
@@ -14,14 +14,21 @@
# RFC6890 - Address allocation for Private Internets
PRIVATE_NETWORKS = (
- ip_network("fd00::/8"),
ip_network("10.0.0.0/8"),
ip_network("172.16.0.0/12"),
ip_network("192.168.0.0/16"),
+ ip_network("fd00::/8"),
+ ip_network("::ffff:10.0.0.0/104"),
+ ip_network("::ffff:172.16.0.0/108"),
+ ip_network("::ffff:192.168.0.0/112"),
)
# RFC6890 - Link local ranges
-LINK_LOCAL_NETWORK = ip_network("169.254.0.0/16")
+LINK_LOCAL_NETWORKS = (
+ ip_network("169.254.0.0/16"),
+ ip_network("fe80::/10"),
+ ip_network("::ffff:169.254.0.0/112"),
+)
def is_loopback(address: IPv4Address | IPv6Address) -> bool:
@@ -30,18 +37,18 @@ def is_loopback(address: IPv4Address | IPv6Address) -> bool:
def is_private(address: IPv4Address | IPv6Address) -> bool:
- """Check if an address is a private address."""
+ """Check if an address is a unique local non-loopback address."""
return any(address in network for network in PRIVATE_NETWORKS)
def is_link_local(address: IPv4Address | IPv6Address) -> bool:
- """Check if an address is link local."""
- return address in LINK_LOCAL_NETWORK
+ """Check if an address is link-local (local but not necessarily unique)."""
+ return any(address in network for network in LINK_LOCAL_NETWORKS)
def is_local(address: IPv4Address | IPv6Address) -> bool:
- """Check if an address is loopback or private."""
- return is_loopback(address) or is_private(address)
+ """Check if an address is on a local network."""
+ return is_loopback(address) or is_private(address) or is_link_local(address)
def is_invalid(address: IPv4Address | IPv6Address) -> bool:
diff --git a/tests/util/test_network.py b/tests/util/test_network.py
index 4f372e5e1a7d..7339b6dc51d6 100644
--- a/tests/util/test_network.py
+++ b/tests/util/test_network.py
@@ -30,7 +30,9 @@ def test_is_private():
def test_is_link_local():
"""Test link local addresses."""
assert network_util.is_link_local(ip_address("169.254.12.3"))
+ assert network_util.is_link_local(ip_address("fe80::1234:5678:abcd"))
assert not network_util.is_link_local(ip_address("127.0.0.1"))
+ assert not network_util.is_link_local(ip_address("::1"))
def test_is_invalid():
@@ -43,7 +45,13 @@ def test_is_local():
"""Test local addresses."""
assert network_util.is_local(ip_address("192.168.0.1"))
assert network_util.is_local(ip_address("127.0.0.1"))
+ assert network_util.is_local(ip_address("fd12:3456:789a:1::1"))
+ assert network_util.is_local(ip_address("fe80::1234:5678:abcd"))
+ assert network_util.is_local(ip_address("::ffff:192.168.0.1"))
assert not network_util.is_local(ip_address("208.5.4.2"))
+ assert not network_util.is_local(ip_address("198.51.100.1"))
+ assert not network_util.is_local(ip_address("2001:DB8:FA1::1"))
+ assert not network_util.is_local(ip_address("::ffff:208.5.4.2"))
def test_is_ip_address():
| <!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
The list of private networks was missing IPv4 ones mapped to IPv6, and no IPv6 addresses were recognized as link-local. This made local-only accounts unusable on IPv6-capable networks with no IPv6 DHCP server.
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [x] The code change is tested and works locally.
- [x] Local tests pass. **Your PR cannot be merged unless tests pass**
- [x] There is no commented out code in this PR.
- [x] I have followed the [development checklist][dev-checklist]
- [x] The code has been formatted using Black (`black --fast homeassistant tests`)
- [x] Tests have been added to verify that the new code works.
The integration reached or maintains the following [Integration Quality Scale][quality-scale]:
<!--
The Integration Quality Scale scores an integration on the code quality
and user experience. Each level of the quality scale consists of a list
of requirements. We highly recommend getting your integration scored!
-->
- [ ] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
| https://api.github.com/repos/home-assistant/core/pulls/74064 | 2022-06-27T18:08:50Z | 2022-06-28T05:00:44Z | 2022-06-28T05:00:44Z | 2022-06-29T07:01:52Z | 921 | home-assistant/core | 39,261 |
Modify some expressions in quickstart_en.md | diff --git a/doc/doc_en/quickstart_en.md b/doc/doc_en/quickstart_en.md
index a5c0881de3..b897f3a3f4 100644
--- a/doc/doc_en/quickstart_en.md
+++ b/doc/doc_en/quickstart_en.md
@@ -1,15 +1,15 @@
-# Quick start of Chinese OCR model
+# Quick Start of Chinese OCR Model
-## 1. Prepare for the environment
+## 1. Environment Preparation
Please refer to [quick installation](./installation_en.md) to configure the PaddleOCR operating environment.
-* Note: Support the use of PaddleOCR through whl package installation,pelease refer [PaddleOCR Package](./whl_en.md).
+* Note: Support the use of PaddleOCR through whl package installation,please refer [PaddleOCR Package](./whl_en.md).
-## 2.inference models
+## 2. Inference Models
-The detection and recognition models on the mobile and server sides are as follows. For more models (including multiple languages), please refer to [PP-OCR v2.0 series model list](../doc_ch/models_list.md)
+The detection and recognition models on the mobile and server sides are as follows. For more models (including multiple languages), please refer to [PP-OCR v2.0 series model list](../doc_en/models_list.md)
| Model introduction | Model name | Recommended scene | Detection model | Direction Classifier | Recognition model |
| ------------ | --------------- | ----------------|---- | ---------- | -------- |
@@ -62,9 +62,9 @@ After decompression, the file structure should be as follows:
└── inference.pdmodel
```
-## 3. Single image or image set prediction
+## 3. Single Image or Image Set Prediction
-* The following code implements text detection、angle class and recognition process. When performing prediction, you need to specify the path of a single image or image set through the parameter `image_dir`, the parameter `det_model_dir` specifies the path to detect the inference model, the parameter `rec_model_dir` specifies the path to identify the inference model, the parameter `use_angle_cls` specifies whether to use the direction classifier, the parameter `cls_model_dir` specifies the path to identify the direction classifier model, the parameter `use_space_char` specifies whether to predict the space char. The visual results are saved to the `./inference_results` folder by default.
+* The following code implements text detection、angle class and recognition process. When performing prediction, you need to specify the path of a single image or image set through the parameter `image_dir`, the parameter `det_model_dir` specifies the path to the detection inference model, the parameter `rec_model_dir` specifies the path to the recognition inference model, the parameter `use_angle_cls` specifies whether to use the direction classifier, the parameter `cls_model_dir` specifies the path to the direction classifier model, the parameter `use_space_char` specifies whether to predict the space char. The visual results are saved to the `./inference_results` folder by default.
@@ -93,8 +93,7 @@ python3 tools/infer/predict_system.py --image_dir="./doc/imgs/11.jpg" --det_mode
- If you want to use the recognition model which does not support space char recognition, please update the source code to the latest version and add parameters `--use_space_char=False`.
- If you do not want to use direction classifier, please update the source code to the latest version and add parameters `--use_angle_cls=False`.
-
-For more text detection and recognition tandem reasoning, please refer to the document tutorial
+For more text detection and recognition tandem inferring, please refer to the document tutorial
: [Inference with Python inference engine](./inference_en.md)。
In addition, the tutorial also provides other deployment methods for the Chinese OCR model:
| Modify some expressions in quickstart_en.md | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/2784 | 2021-05-13T06:20:07Z | 2021-05-13T11:19:16Z | 2021-05-13T11:19:16Z | 2021-05-13T11:19:16Z | 845 | PaddlePaddle/PaddleOCR | 42,510 |
AppTest `from_function` args | diff --git a/lib/streamlit/runtime/scriptrunner/script_runner.py b/lib/streamlit/runtime/scriptrunner/script_runner.py
index 203749e05fc1..c36bbd6d6548 100644
--- a/lib/streamlit/runtime/scriptrunner/script_runner.py
+++ b/lib/streamlit/runtime/scriptrunner/script_runner.py
@@ -510,7 +510,7 @@ def _run_script(self, rerun_data: RerunData) -> None:
# ...
# ```
# in their scripts.
- module = _new_module("__main__")
+ module = self._new_module("__main__")
# Install the fake module as the __main__ module. This allows
# the pickle module to work inside the user's code, since it now
@@ -624,6 +624,10 @@ def _on_script_finished(
if config.get_option("runner.postScriptGC"):
gc.collect(2)
+ def _new_module(self, name: str) -> types.ModuleType:
+ """Create a new module with the given name."""
+ return types.ModuleType(name)
+
class ScriptControlException(BaseException):
"""Base exception for ScriptRunner."""
@@ -674,11 +678,6 @@ def _clean_problem_modules() -> None:
pass
-def _new_module(name: str) -> types.ModuleType:
- """Create a new module with the given name."""
- return types.ModuleType(name)
-
-
# The reason this is not a decorator is because we want to make it clear at the
# calling location that this function is being used.
def _log_if_error(fn: Callable[[], None]) -> None:
diff --git a/lib/streamlit/testing/v1/app_test.py b/lib/streamlit/testing/v1/app_test.py
index ea1496a485c3..9743b22038fd 100644
--- a/lib/streamlit/testing/v1/app_test.py
+++ b/lib/streamlit/testing/v1/app_test.py
@@ -13,7 +13,6 @@
# limitations under the License.
from __future__ import annotations
-import ast
import hashlib
import inspect
import pathlib
@@ -142,12 +141,21 @@ class AppTest:
dict-like syntax to set ``query_params`` values for the simulated app.
"""
- def __init__(self, script_path: str, *, default_timeout: float):
+ def __init__(
+ self,
+ script_path: str,
+ *,
+ default_timeout: float,
+ args=None,
+ kwargs=None,
+ ):
self._script_path = script_path
self.default_timeout = default_timeout
self.session_state = SafeSessionState(SessionState(), lambda: None)
self.query_params: dict[str, Any] = {}
self.secrets: dict[str, Any] = {}
+ self.args = args
+ self.kwargs = kwargs
tree = ElementTree()
tree._runner = self
@@ -180,17 +188,30 @@ def from_string(cls, script: str, *, default_timeout: float = 3) -> AppTest:
executed via ``.run()``.
"""
+ return cls._from_string(script, default_timeout=default_timeout)
+
+ @classmethod
+ def _from_string(
+ cls, script: str, *, default_timeout: float = 3, args=None, kwargs=None
+ ) -> AppTest:
hasher = hashlib.md5(bytes(script, "utf-8"), **HASHLIB_KWARGS)
script_name = hasher.hexdigest()
path = pathlib.Path(TMP_DIR.name, script_name)
aligned_script = textwrap.dedent(script)
path.write_text(aligned_script)
- return AppTest(str(path), default_timeout=default_timeout)
+ return AppTest(
+ str(path), default_timeout=default_timeout, args=args, kwargs=kwargs
+ )
@classmethod
def from_function(
- cls, script: Callable[[], None], *, default_timeout: float = 3
+ cls,
+ script: Callable[..., Any],
+ *,
+ default_timeout: float = 3,
+ args=None,
+ kwargs=None,
) -> AppTest:
"""
Create an instance of ``AppTest`` to simulate an app page defined\
@@ -210,6 +231,12 @@ def from_function(
Default time in seconds before a script run is timed out. Can be
overridden for individual ``.run()`` calls.
+ args: tuple
+ An optional tuple of args to pass to the script function.
+
+ kwargs: dict
+ An optional dict of kwargs to pass to the script function.
+
Returns
-------
AppTest
@@ -217,14 +244,12 @@ def from_function(
executed via ``.run()``.
"""
- # TODO: Simplify this using `ast.unparse()` once we drop 3.8 support
source_lines, _ = inspect.getsourcelines(script)
source = textwrap.dedent("".join(source_lines))
- module = ast.parse(source)
- fn_def = module.body[0]
- body_lines = source_lines[fn_def.lineno :]
- body = textwrap.dedent("".join(body_lines))
- return cls.from_string(body, default_timeout=default_timeout)
+ module = source + f"\n{script.__name__}(*__args, **__kwargs)"
+ return cls._from_string(
+ module, default_timeout=default_timeout, args=args, kwargs=kwargs
+ )
@classmethod
def from_file(cls, script_path: str, *, default_timeout: float = 3) -> AppTest:
@@ -299,7 +324,9 @@ def _run(
new_secrets._secrets = self.secrets
st.secrets = new_secrets
- script_runner = LocalScriptRunner(self._script_path, self.session_state)
+ script_runner = LocalScriptRunner(
+ self._script_path, self.session_state, args=self.args, kwargs=self.kwargs
+ )
self._tree = script_runner.run(widget_state, self.query_params, timeout)
self._tree._runner = self
# Last event is SHUTDOWN, so the corresponding data includes query string
diff --git a/lib/streamlit/testing/v1/local_script_runner.py b/lib/streamlit/testing/v1/local_script_runner.py
index 545a6d2bc959..f2ec8ca60024 100644
--- a/lib/streamlit/testing/v1/local_script_runner.py
+++ b/lib/streamlit/testing/v1/local_script_runner.py
@@ -15,6 +15,7 @@
import os
import time
+import types
from typing import Any
from urllib import parse
@@ -37,6 +38,8 @@ def __init__(
self,
script_path: str,
session_state: SafeSessionState,
+ args=None,
+ kwargs=None,
):
"""Initializes the ScriptRunner for the given script_path."""
@@ -45,6 +48,8 @@ def __init__(
self.forward_msg_queue = ForwardMsgQueue()
self.script_path = script_path
self.session_state = session_state
+ self.args = args if args is not None else tuple()
+ self.kwargs = kwargs if kwargs is not None else dict()
super().__init__(
session_id="test session id",
@@ -135,6 +140,12 @@ def _on_script_finished(
# are marked as active.
runtime.get_instance().media_file_mgr.remove_orphaned_files()
+ def _new_module(self, name: str) -> types.ModuleType:
+ module = types.ModuleType(name)
+ module.__dict__["__args"] = self.args
+ module.__dict__["__kwargs"] = self.kwargs
+ return module
+
def require_widgets_deltas(runner: LocalScriptRunner, timeout: float = 3) -> None:
"""Wait for the given ScriptRunner to emit a completion event. If the timeout
diff --git a/lib/tests/streamlit/testing/app_test_test.py b/lib/tests/streamlit/testing/app_test_test.py
index c8866fb8481a..8727adbdc500 100644
--- a/lib/tests/streamlit/testing/app_test_test.py
+++ b/lib/tests/streamlit/testing/app_test_test.py
@@ -205,3 +205,15 @@ def button_one_clicked(cont):
assert at.markdown.len == 2
assert at.info[0].value == "Hi!"
assert at.markdown.values == ["FooBar", "BarFoo"]
+
+
+def test_from_function_kwargs():
+ def script(foo, baz):
+ import streamlit as st
+
+ st.text(foo)
+ st.text(baz)
+ return foo
+
+ at = AppTest.from_function(script, args=("bar",), kwargs={"baz": "baz"}).run()
+ assert at.text.values == ["bar", "baz"]
|
## Describe your changes
Allows `AppTest.from_function` to work with a function that takes arguments, and to pass arguments through to it.
The new implementation also lets it work for functions that return values or otherwise do things that are not valid as top-level expressions.
## GitHub Issue Link (if applicable)
#7900
## Testing Plan
Basic unit test added
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/8183 | 2024-02-20T20:41:52Z | 2024-02-21T19:28:47Z | 2024-02-21T19:28:47Z | 2024-02-21T19:59:57Z | 1,950 | streamlit/streamlit | 21,709 |
bitbank safeOrder2 | diff --git a/js/bitbank.js b/js/bitbank.js
index 7439e80d7ab8..17a348efcdc0 100644
--- a/js/bitbank.js
+++ b/js/bitbank.js
@@ -431,15 +431,15 @@ module.exports = class bitbank extends Exchange {
symbol = market['symbol'];
}
const timestamp = this.safeInteger (order, 'ordered_at');
- const price = this.safeNumber (order, 'price');
- const amount = this.safeNumber (order, 'start_amount');
- const filled = this.safeNumber (order, 'executed_amount');
- const remaining = this.safeNumber (order, 'remaining_amount');
- const average = this.safeNumber (order, 'average_price');
+ const price = this.safeString (order, 'price');
+ const amount = this.safeString (order, 'start_amount');
+ const filled = this.safeString (order, 'executed_amount');
+ const remaining = this.safeString (order, 'remaining_amount');
+ const average = this.safeString (order, 'average_price');
const status = this.parseOrderStatus (this.safeString (order, 'status'));
const type = this.safeStringLower (order, 'type');
const side = this.safeStringLower (order, 'side');
- return this.safeOrder ({
+ return this.safeOrder2 ({
'id': id,
'clientOrderId': undefined,
'datetime': this.iso8601 (timestamp),
| https://api.github.com/repos/ccxt/ccxt/pulls/10347 | 2021-10-27T09:40:19Z | 2021-10-28T10:12:05Z | 2021-10-28T10:12:05Z | 2021-10-28T10:12:05Z | 327 | ccxt/ccxt | 12,990 | |
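The diff above swaps `safeNumber` for `safeString` (and `safeOrder` for `safeOrder2`). A plausible motivation — consistent with ccxt's broader move to string-based number handling, though this snippet is illustrative rather than ccxt code — is that parsing exchange-supplied values to binary floats can lose precision that string/decimal math preserves:

```python
from decimal import Decimal

# Classic binary-float precision loss:
assert float("0.1") + float("0.2") != 0.3  # actually 0.30000000000000004
# Keeping the raw strings and doing exact math avoids it:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```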
ref(createProject): convert test from jsx to tsx | diff --git a/static/app/views/projectInstall/createProject.spec.jsx b/static/app/views/projectInstall/createProject.spec.tsx
similarity index 85%
rename from static/app/views/projectInstall/createProject.spec.jsx
rename to static/app/views/projectInstall/createProject.spec.tsx
index 09c46722ec3da0..aae6c9328e578b 100644
--- a/static/app/views/projectInstall/createProject.spec.jsx
+++ b/static/app/views/projectInstall/createProject.spec.tsx
@@ -1,4 +1,4 @@
-import {fireEvent, render, screen, userEvent} from 'sentry-test/reactTestingLibrary';
+import {render, screen, userEvent} from 'sentry-test/reactTestingLibrary';
import {openCreateTeamModal} from 'sentry/actionCreators/modal';
import TeamStore from 'sentry/stores/teamStore';
@@ -8,8 +8,13 @@ jest.mock('sentry/actionCreators/modal');
describe('CreateProject', function () {
const organization = TestStubs.Organization();
+ const teamNoAccess = TestStubs.Team({
+ slug: 'test',
+ id: '1',
+ name: 'test',
+ hasAccess: false,
+ });
- const teamNoAccess = {slug: 'test', id: '1', name: 'test', hasAccess: false};
const teamWithAccess = {...teamNoAccess, hasAccess: true};
beforeEach(() => {
@@ -29,11 +34,10 @@ describe('CreateProject', function () {
});
it('should block if you have access to no teams', function () {
- const wrapper = render(<CreateProject />, {
+ const {container} = render(<CreateProject />, {
context: TestStubs.routerContext([{organization: {id: '1', slug: 'testOrg'}}]),
});
-
- expect(wrapper.container).toSnapshot();
+ expect(container).toSnapshot();
});
it('can create a new team', async function () {
@@ -46,9 +50,9 @@ describe('CreateProject', function () {
});
it('should fill in project name if its empty when platform is chosen', async function () {
- const wrapper = render(<CreateProject />, {
- router: {location: {query: {}}},
+ const {container} = render(<CreateProject />, {
context: TestStubs.routerContext([{organization: {id: '1', slug: 'testOrg'}}]),
+ organization,
});
await userEvent.click(screen.getByTestId('platform-apple-ios'));
@@ -64,15 +68,16 @@ describe('CreateProject', function () {
await userEvent.click(screen.getByTestId('platform-apple-ios'));
expect(screen.getByPlaceholderText('project-name')).toHaveValue('another');
- expect(wrapper.container).toSnapshot();
+ expect(container).toSnapshot();
});
- describe('Issue Alerts Options', () => {
+ describe('Issue Alerts Options', function () {
beforeEach(() => {
TeamStore.loadUserTeams([teamWithAccess]);
MockApiClient.addMockResponse({
url: `/projects/${organization.slug}/rule-conditions/`,
+ // @ts-ignore TODO: fix this type
body: TestStubs.MOCK_RESP_VERBOSE,
});
});
@@ -81,7 +86,7 @@ describe('CreateProject', function () {
MockApiClient.clearMockResponses();
});
- it('should enabled the submit button if and only if all the required information has been filled', async () => {
+ it('should enabled the submit button if and only if all the required information has been filled', async function () {
render(<CreateProject />);
const createProjectButton = screen.getByRole('button', {name: 'Create Project'});
@@ -102,7 +107,7 @@ describe('CreateProject', function () {
await userEvent.type(screen.getByTestId('range-input'), '2712');
expect(createProjectButton).toBeEnabled();
- fireEvent.change(screen.getByTestId('range-input'), {target: {value: ''}});
+ await userEvent.clear(screen.getByTestId('range-input'));
expect(createProjectButton).toBeDisabled();
await userEvent.click(screen.getByText("I'll create my own alerts later"));
diff --git a/static/app/views/projectInstall/issueAlertOptions.tsx b/static/app/views/projectInstall/issueAlertOptions.tsx
index 4969ee91582e71..3dec213d46fc3d 100644
--- a/static/app/views/projectInstall/issueAlertOptions.tsx
+++ b/static/app/views/projectInstall/issueAlertOptions.tsx
@@ -296,6 +296,7 @@ class IssueAlertOptions extends AsyncComponent<Props, State> {
const issueAlertOptionsChoices = this.getIssueAlertsChoices(
this.state.conditions?.length > 0
);
+
return (
<Fragment>
<PageHeadingWithTopMargins withMargins>
| just convert the test from `.jsx` to `.tsx` | https://api.github.com/repos/getsentry/sentry/pulls/49304 | 2023-05-17T07:39:37Z | 2023-05-23T07:49:49Z | 2023-05-23T07:49:49Z | 2023-06-07T12:00:55Z | 1,064 | getsentry/sentry | 44,177 |
[Eval] Add navigation bar | diff --git a/fastchat/eval/webpage/index.html b/fastchat/eval/webpage/index.html
index 1e07d73135..c2e3cf020b 100644
--- a/fastchat/eval/webpage/index.html
+++ b/fastchat/eval/webpage/index.html
@@ -10,6 +10,26 @@
</head>
<body>
+ <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
+ <a class="navbar-brand" href="#">🏔️ Vicuna Evaluation Examples</a>
+ <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarNav" aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">
+ <span class="navbar-toggler-icon"></span>
+ </button>
+ <div class="collapse navbar-collapse" id="navbarNav">
+ <ul class="navbar-nav mr-auto">
+ <li class="nav-item">
+ <a class="nav-link" href="https://chat.lmsys.org/">Demo</a>
+ </li>
+ <li class="nav-item">
+ <a class="nav-link" href="https://vicuna.lmsys.org">Blog</a>
+ </li>
+ <li class="nav-item">
+ <a class="nav-link" href="https://github.com/lm-sys/FastChat">Github</a>
+ </li>
+ </ul>
+ </div>
+ </nav>
+
<div class="container mt-5">
<h2 class="text-center mb-5">Who's GPT-4's favorite? Battles between State-of-the-Art Chatbots</h2>
diff --git a/fastchat/eval/webpage/styles.css b/fastchat/eval/webpage/styles.css
index ae1ab31f20..7b6d6fc69b 100644
--- a/fastchat/eval/webpage/styles.css
+++ b/fastchat/eval/webpage/styles.css
@@ -3,6 +3,12 @@ body {
background-color: #f8f9fa;
}
+.navbar-dark .navbar-nav .nav-link {
+ color: #f1cf68;
+ font-size: 1.1rem;
+ padding: 0.5rem 0.6rem;
+}
+
.card-header {
font-weight: bold;
}
| https://api.github.com/repos/lm-sys/FastChat/pulls/76 | 2023-03-30T09:31:03Z | 2023-03-30T09:32:45Z | 2023-03-30T09:32:45Z | 2023-03-30T09:32:48Z | 529 | lm-sys/FastChat | 41,087 | |
Update README.md | diff --git a/README.md b/README.md
index 7f6d9ec..b5ee9d7 100644
--- a/README.md
+++ b/README.md
@@ -619,7 +619,7 @@ True
* When `id` was called, Python created a `WTF` class object and passed it to the `id` function. The `id` function takes its `id` (its memory location), and throws away the object. The object is destroyed.
* When we do this twice in succession, Python allocates the same memory location to this second object as well. Since (in CPython) `id` uses the memory location as the object id, the id of the two objects is the same.
* So, the object's id is unique only for the lifetime of the object. After the object is destroyed, or before it is created, something else can have the same id.
-* But why did the `is` operator evaluated to `False`? Let's see with this snippet.
+* But why did the `is` operator evaluate to `False`? Let's see with this snippet.
```py
class WTF(object):
def __init__(self): print("I")
| Grammatical fix | https://api.github.com/repos/satwikkansal/wtfpython/pulls/290 | 2022-05-31T14:36:45Z | 2022-06-01T03:37:07Z | 2022-06-01T03:37:07Z | 2022-06-01T03:37:07Z | 262 | satwikkansal/wtfpython | 25,774 |
[3.8] bpo-37695: Correct unget_wch error message. (GH-14986) | diff --git a/Misc/NEWS.d/next/Library/2019-07-27-20-21-03.bpo-37695.QANdvg.rst b/Misc/NEWS.d/next/Library/2019-07-27-20-21-03.bpo-37695.QANdvg.rst
new file mode 100644
index 00000000000000..ca6c11641ed6a9
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2019-07-27-20-21-03.bpo-37695.QANdvg.rst
@@ -0,0 +1 @@
+Correct :func:`curses.unget_wch` error message. Patch by Anthony Sottile.
diff --git a/Modules/_cursesmodule.c b/Modules/_cursesmodule.c
index 2435e1c1295514..8fca7fcf1c181a 100644
--- a/Modules/_cursesmodule.c
+++ b/Modules/_cursesmodule.c
@@ -4176,7 +4176,7 @@ PyCurses_ConvertToWchar_t(PyObject *obj,
wchar_t buffer[2];
if (PyUnicode_AsWideChar(obj, buffer, 2) != 1) {
PyErr_Format(PyExc_TypeError,
- "expect bytes or str of length 1, or int, "
+ "expect str of length 1 or int, "
"got a str of length %zi",
PyUnicode_GET_LENGTH(obj));
return 0;
@@ -4203,7 +4203,7 @@ PyCurses_ConvertToWchar_t(PyObject *obj,
}
else {
PyErr_Format(PyExc_TypeError,
- "expect bytes or str of length 1, or int, got %s",
+ "expect str of length 1 or int, got %s",
Py_TYPE(obj)->tp_name);
return 0;
}
| (cherry picked from commit c9345e382c630ddcc2b148b30954640e0e435c8a)
Co-authored-by: Anthony Sottile <asottile@umich.edu>
https://bugs.python.org/issue37695
| https://api.github.com/repos/python/cpython/pulls/15061 | 2019-07-31T20:24:42Z | 2019-07-31T20:45:00Z | 2019-07-31T20:45:00Z | 2019-07-31T21:03:08Z | 431 | python/cpython | 4,253 |
Issue 1629 has been closed, let's celebrate! | diff --git a/CHANGES.md b/CHANGES.md
index 0ca0b84f6d..65d037394a 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -6,7 +6,8 @@
- Fixed a rare but annoying formatting instability created by the combination of
optional trailing commas inserted by `Black` and optional parentheses looking at
- pre-existing "magic" trailing commas (#2126)
+ pre-existing "magic" trailing commas. This fixes issue #1629 and all of its many many
+ duplicates. (#2126)
- `Black` now processes one-line docstrings by stripping leading and trailing spaces,
and adding a padding space when needed to break up """". (#1740)
| https://api.github.com/repos/psf/black/pulls/2127 | 2021-04-25T20:52:47Z | 2021-04-25T21:52:23Z | 2021-04-25T21:52:23Z | 2021-04-25T21:55:37Z | 168 | psf/black | 24,129 | |
Refactoring: Delete whitespace spam at the end of the file | diff --git a/old_projects/fractal_charm.py b/old_projects/fractal_charm.py
index 7a86250815..964a049e81 100644
--- a/old_projects/fractal_charm.py
+++ b/old_projects/fractal_charm.py
@@ -108,45 +108,4 @@ class CircularFractalCreation(FractalCreation):
"max_order" : 5,
"fractal_kwargs" : {"height" : 6},
}
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
| https://api.github.com/repos/3b1b/manim/pulls/338 | 2018-11-05T12:44:27Z | 2018-11-05T18:08:55Z | 2018-11-05T18:08:55Z | 2018-11-05T18:08:55Z | 151 | 3b1b/manim | 18,279 | |
Tweak typos and configs | diff --git a/.travis.yml b/.travis.yml
index 22e60e374f107..5bc279c31ca43 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -25,8 +25,8 @@ after_script:
- bash <(curl -s https://codecov.io/bash)
deploy:
- provider: script
- script: bash scripts/deploy.sh
- on:
- tags: true
- python: "3.6"
+ provider: script
+ script: bash scripts/deploy.sh
+ on:
+ tags: true
+ python: "3.6"
diff --git a/README.md b/README.md
index 702cbdef7be4e..886045546c98a 100644
--- a/README.md
+++ b/README.md
@@ -5,8 +5,8 @@
<em>FastAPI framework, high performance, easy to learn, fast to code, ready for production</em>
</p>
<p align="center">
-<a href="https://travis-ci.org/tiangolo/fastapi" target="_blank">
- <img src="https://travis-ci.org/tiangolo/fastapi.svg?branch=master" alt="Build Status">
+<a href="https://travis-ci.com/tiangolo/fastapi" target="_blank">
+ <img src="https://travis-ci.com/tiangolo/fastapi.svg?branch=master" alt="Build Status">
</a>
<a href="https://codecov.io/gh/tiangolo/fastapi" target="_blank">
<img src="https://codecov.io/gh/tiangolo/fastapi/branch/master/graph/badge.svg" alt="Coverage">
@@ -407,7 +407,7 @@ Used by FastAPI / Starlette:
* <a href="http://www.uvicorn.org" target="_blank"><code>uvicorn</code></a> - for the server that loads and serves your application.
-You can install all of these with `pip3 install fastapi[all]`.
+You can install all of these with `pip install fastapi[all]`.
## License
diff --git a/docs/help-fastapi.md b/docs/help-fastapi.md
index 1c8f7310d7399..f029d0aa39f6d 100644
--- a/docs/help-fastapi.md
+++ b/docs/help-fastapi.md
@@ -56,7 +56,7 @@ You can:
## Tweet about **FastAPI**
-<a href="https://twitter.com/compose/tweet?text=I'm loving FastAPI because... https://github.com/tiangolo/fastapi cc @tiangolo" target="_blank">Tweet about **FastAPI**</a> and let me and others why you like it.
+<a href="https://twitter.com/compose/tweet?text=I'm loving FastAPI because... https://github.com/tiangolo/fastapi cc @tiangolo" target="_blank">Tweet about **FastAPI**</a> and let me and others know why you like it.
## Let me know how are you using **FastAPI**
diff --git a/docs/index.md b/docs/index.md
index 702cbdef7be4e..886045546c98a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -5,8 +5,8 @@
<em>FastAPI framework, high performance, easy to learn, fast to code, ready for production</em>
</p>
<p align="center">
-<a href="https://travis-ci.org/tiangolo/fastapi" target="_blank">
- <img src="https://travis-ci.org/tiangolo/fastapi.svg?branch=master" alt="Build Status">
+<a href="https://travis-ci.com/tiangolo/fastapi" target="_blank">
+ <img src="https://travis-ci.com/tiangolo/fastapi.svg?branch=master" alt="Build Status">
</a>
<a href="https://codecov.io/gh/tiangolo/fastapi" target="_blank">
<img src="https://codecov.io/gh/tiangolo/fastapi/branch/master/graph/badge.svg" alt="Coverage">
@@ -407,7 +407,7 @@ Used by FastAPI / Starlette:
* <a href="http://www.uvicorn.org" target="_blank"><code>uvicorn</code></a> - for the server that loads and serves your application.
-You can install all of these with `pip3 install fastapi[all]`.
+You can install all of these with `pip install fastapi[all]`.
## License
diff --git a/mkdocs.yml b/mkdocs.yml
index 1bb9dc08f2a72..270a73d0dc365 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -5,8 +5,8 @@ site_url: https://fastapi.tiangolo.com/
theme:
name: 'material'
palette:
- primary: 'teal'
- accent: 'amber'
+ primary: 'teal'
+ accent: 'amber'
logo: 'img/icon-white.svg'
favicon: 'img/favicon.png'
@@ -102,10 +102,27 @@ nav:
- Release Notes: release-notes.md
markdown_extensions:
- - markdown.extensions.codehilite:
- guess_lang: false
- - markdown_include.include:
- base_path: docs
- - admonition
- - codehilite
- - extra
+ - toc:
+ permalink: true
+ - markdown.extensions.codehilite:
+ guess_lang: false
+ - markdown_include.include:
+ base_path: docs
+ - admonition
+ - codehilite
+ - extra
+
+extra:
+ social:
+ - type: 'github'
+ link: 'https://github.com/tiangolo/typer'
+ - type: 'twitter'
+ link: 'https://twitter.com/tiangolo'
+ - type: 'linkedin'
+ link: 'https://www.linkedin.com/in/tiangolo'
+ - type: 'rss'
+ link: 'https://dev.to/tiangolo'
+ - type: 'medium'
+ link: 'https://medium.com/@tiangolo'
+ - type: 'globe'
+ link: 'https://tiangolo.com'
diff --git a/pyproject.toml b/pyproject.toml
index 2bba24ca215e3..92a288d4a9be9 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -8,17 +8,17 @@ author = "Sebastián Ramírez"
author-email = "tiangolo@gmail.com"
home-page = "https://github.com/tiangolo/fastapi"
classifiers = [
- 'Intended Audience :: Information Technology',
- 'Intended Audience :: System Administrators',
- 'Operating System :: OS Independent',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python',
- 'Topic :: Internet',
- 'Topic :: Software Development :: Libraries :: Application Frameworks',
- 'Topic :: Software Development :: Libraries :: Python Modules',
- 'Topic :: Software Development :: Libraries',
- 'Topic :: Software Development',
- 'Typing :: Typed',
+ "Intended Audience :: Information Technology",
+ "Intended Audience :: System Administrators",
+ "Operating System :: OS Independent",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python",
+ "Topic :: Internet",
+ "Topic :: Software Development :: Libraries :: Application Frameworks",
+ "Topic :: Software Development :: Libraries :: Python Modules",
+ "Topic :: Software Development :: Libraries",
+ "Topic :: Software Development",
+ "Typing :: Typed",
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: AsyncIO",
diff --git a/scripts/test.sh b/scripts/test.sh
index 6e08f18770236..e61c8e3b63a47 100755
--- a/scripts/test.sh
+++ b/scripts/test.sh
@@ -11,3 +11,5 @@ fi
export PYTHONPATH=./docs/src
pytest --cov=fastapi --cov=tests --cov=docs/src --cov-report=term-missing ${@}
bash ./scripts/lint.sh
+# Check README.md is up to date
+diff --brief docs/index.md README.md
| Tweak typos and configs, inherited from the process of building [Typer](https://github.com/tiangolo/typer). | https://api.github.com/repos/tiangolo/fastapi/pulls/837 | 2020-01-08T18:33:15Z | 2020-01-08T22:25:30Z | 2020-01-08T22:25:30Z | 2020-01-08T22:25:33Z | 1,900 | tiangolo/fastapi | 22,812 |
[Epidemic Sound] Add new extractor | diff --git a/youtube_dl/extractor/epidemicsound.py b/youtube_dl/extractor/epidemicsound.py
new file mode 100644
index 00000000000..1a52738aa6e
--- /dev/null
+++ b/youtube_dl/extractor/epidemicsound.py
@@ -0,0 +1,101 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+ float_or_none,
+ T,
+ traverse_obj,
+ txt_or_none,
+ unified_timestamp,
+ url_or_none,
+)
+
+
+class EpidemicSoundIE(InfoExtractor):
+ _VALID_URL = r'https?://(?:www\.)?epidemicsound\.com/track/(?P<id>[0-9a-zA-Z]+)'
+ _TESTS = [{
+ 'url': 'https://www.epidemicsound.com/track/yFfQVRpSPz/',
+ 'md5': 'd98ff2ddb49e8acab9716541cbc9dfac',
+ 'info_dict': {
+ 'id': '45014',
+ 'display_id': 'yFfQVRpSPz',
+ 'ext': 'mp3',
+ 'tags': ['foley', 'door', 'knock', 'glass', 'window', 'glass door knock'],
+ 'title': 'Door Knock Door 1',
+ 'duration': 1,
+ 'thumbnail': 'https://cdn.epidemicsound.com/curation-assets/commercial-release-cover-images/default-sfx/3000x3000.jpg',
+ 'timestamp': 1415320353,
+ 'upload_date': '20141107',
+ 'age_limit': None,
+ # check that the "best" format was found, since test file MD5 doesn't
+ # distinguish the formats
+ 'format': 'full',
+ },
+ }, {
+ 'url': 'https://www.epidemicsound.com/track/mj8GTTwsZd/',
+ 'md5': 'c82b745890f9baf18dc2f8d568ee3830',
+ 'info_dict': {
+ 'id': '148700',
+ 'display_id': 'mj8GTTwsZd',
+ 'ext': 'mp3',
+ 'tags': ['liquid drum n bass', 'energetic'],
+ 'title': 'Noplace',
+ 'duration': 237,
+ 'thumbnail': 'https://cdn.epidemicsound.com/curation-assets/commercial-release-cover-images/11138/3000x3000.jpg',
+ 'timestamp': 1694426482,
+ 'release_timestamp': 1700535606,
+ 'upload_date': '20230911',
+ 'age_limit': None,
+ 'format': 'full',
+ },
+ }]
+
+ def _real_extract(self, url):
+ video_id = self._match_id(url)
+ json_data = self._download_json('https://www.epidemicsound.com/json/track/' + video_id, video_id)
+
+ def fmt_or_none(f):
+ if not f.get('format'):
+ f['format'] = f.get('format_id')
+ elif not f.get('format_id'):
+ f['format_id'] = f['format']
+ if not (f['url'] and f['format']):
+ return
+ if f.get('format_note'):
+ f['format_note'] = 'track ID ' + f['format_note']
+ f['preference'] = -1 if f['format'] == 'full' else -2
+ return f
+
+ formats = traverse_obj(json_data, (
+ 'stems', T(dict.items), Ellipsis, {
+ 'format': (0, T(txt_or_none)),
+ 'format_note': (1, 's3TrackId', T(txt_or_none)),
+ 'format_id': (1, 'stemType', T(txt_or_none)),
+ 'url': (1, 'lqMp3Url', T(url_or_none)),
+ }, T(fmt_or_none)))
+
+ self._sort_formats(formats)
+
+ info = traverse_obj(json_data, {
+ 'id': ('id', T(txt_or_none)),
+ 'tags': ('metadataTags', Ellipsis, T(txt_or_none)),
+ 'title': ('title', T(txt_or_none)),
+ 'duration': ('length', T(float_or_none)),
+ 'timestamp': ('added', T(unified_timestamp)),
+ 'thumbnail': (('imageUrl', 'cover'), T(url_or_none)),
+ 'age_limit': ('isExplicit', T(lambda b: 18 if b else None)),
+ 'release_timestamp': ('releaseDate', T(unified_timestamp)),
+ }, get_all=False)
+
+ info.update(traverse_obj(json_data, {
+ 'categories': ('genres', Ellipsis, 'tag', T(txt_or_none)),
+ 'tags': ('metadataTags', Ellipsis, T(txt_or_none)),
+ }))
+
+ info.update({
+ 'display_id': video_id,
+ 'formats': formats,
+ })
+
+ return info
diff --git a/youtube_dl/extractor/extractors.py b/youtube_dl/extractor/extractors.py
index d9289e5bf1a..82221445fc2 100644
--- a/youtube_dl/extractor/extractors.py
+++ b/youtube_dl/extractor/extractors.py
@@ -357,6 +357,7 @@
from .elpais import ElPaisIE
from .embedly import EmbedlyIE
from .engadget import EngadgetIE
+from .epidemicsound import EpidemicSoundIE
from .eporner import EpornerIE
from .eroprofile import EroProfileIE
from .escapist import EscapistIE
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Read [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site)
- [x] Read [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) and adjusted the code to meet them
- [x] Covered the code with tests (note that PRs without tests will be REJECTED)
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [x] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
see #31462 | https://api.github.com/repos/ytdl-org/youtube-dl/pulls/32628 | 2023-11-06T11:05:16Z | 2023-12-06T01:17:57Z | 2023-12-06T01:17:57Z | 2023-12-06T01:18:25Z | 1,326 | ytdl-org/youtube-dl | 49,839 |
Fix the `dirty_unzip` rule | diff --git a/thefuck/rules/dirty_untar.py b/thefuck/rules/dirty_untar.py
index efbf1e328..25d02a640 100644
--- a/thefuck/rules/dirty_untar.py
+++ b/thefuck/rules/dirty_untar.py
@@ -19,7 +19,6 @@ def _is_tar_extract(cmd):
def _tar_file(cmd):
-
for c in cmd:
for ext in tar_extensions:
if c.endswith(ext):
diff --git a/thefuck/rules/dirty_unzip.py b/thefuck/rules/dirty_unzip.py
index 3e45ea375..8423a5ae0 100644
--- a/thefuck/rules/dirty_unzip.py
+++ b/thefuck/rules/dirty_unzip.py
@@ -5,8 +5,11 @@
def _is_bad_zip(file):
- with zipfile.ZipFile(file, 'r') as archive:
- return len(archive.namelist()) > 1
+ try:
+ with zipfile.ZipFile(file, 'r') as archive:
+ return len(archive.namelist()) > 1
+ except:
+ return False
def _zip_file(command):
@@ -24,8 +27,14 @@ def _zip_file(command):
@for_app('unzip')
def match(command):
- return ('-d' not in command.script
- and _is_bad_zip(_zip_file(command)))
+ if '-d' in command.script:
+ return False
+
+ zip_file = _zip_file(command)
+ if zip_file:
+ return _is_bad_zip(zip_file)
+ else:
+ return False
def get_new_command(command):
| https://api.github.com/repos/nvbn/thefuck/pulls/419 | 2015-12-29T17:47:29Z | 2015-12-29T21:58:30Z | 2015-12-29T21:58:30Z | 2016-01-04T11:44:46Z | 384 | nvbn/thefuck | 30,767 | |
core[patch]: add alternative_import to deprecated | diff --git a/libs/core/langchain_core/_api/deprecation.py b/libs/core/langchain_core/_api/deprecation.py
index eee686a1811cf1..b9a0c39e485fe5 100644
--- a/libs/core/langchain_core/_api/deprecation.py
+++ b/libs/core/langchain_core/_api/deprecation.py
@@ -39,6 +39,7 @@ def deprecated(
message: str = "",
name: str = "",
alternative: str = "",
+ alternative_import: str = "",
pending: bool = False,
obj_type: str = "",
addendum: str = "",
@@ -105,6 +106,7 @@ def deprecate(
_name: str = name,
_message: str = message,
_alternative: str = alternative,
+ _alternative_import: str = alternative_import,
_pending: bool = pending,
_addendum: str = addendum,
) -> T:
@@ -117,6 +119,7 @@ def emit_warning() -> None:
message=_message,
name=_name,
alternative=_alternative,
+ alternative_import=_alternative_import,
pending=_pending,
obj_type=_obj_type,
addendum=_addendum,
@@ -145,7 +148,9 @@ def warning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:
if not _obj_type:
_obj_type = "class"
wrapped = obj.__init__ # type: ignore
- _name = _name or obj.__name__
+ _name = _name or (
+ f"{obj.__module__}.{obj.__name__}" if obj.__module__ else obj.__name__
+ )
old_doc = obj.__doc__
def finalize(_: Any, new_doc: str) -> T:
@@ -271,6 +276,7 @@ def warn_deprecated(
message: str = "",
name: str = "",
alternative: str = "",
+ alternative_import: str = "",
pending: bool = False,
obj_type: str = "",
addendum: str = "",
@@ -307,6 +313,10 @@ def warn_deprecated(
"""
if pending and removal:
raise ValueError("A pending deprecation cannot have a scheduled removal")
+ if alternative and alternative_import:
+ raise ValueError("Cannot specify both alternative and alternative_import")
+ if alternative_import and "." not in alternative_import:
+ raise ValueError("alternative_import must be a fully qualified module path")
if not pending:
if not removal:
@@ -320,6 +330,7 @@ def warn_deprecated(
if not message:
message = ""
+ package = name.split(".")[0].replace("_", "-") if "." in name else "LangChain"
if obj_type:
message += f"The {obj_type} `{name}`"
@@ -329,12 +340,24 @@ def warn_deprecated(
if pending:
message += " will be deprecated in a future version"
else:
- message += f" was deprecated in LangChain {since}"
+ message += f" was deprecated in {package} {since}"
if removal:
message += f" and will be removed {removal}"
- if alternative:
+ if alternative_import:
+ alt_package = alternative_import.split(".")[0].replace("_", "-")
+ if alt_package == package:
+ message += f". Use {alternative_import} instead."
+ else:
+ alt_module, alt_name = alternative_import.rsplit(".", 1)
+ message += (
+ f". An updated version of the {obj_type} exists in the "
+ f"{alt_package} package and should be used instead. To use it run "
+ f"`pip install -U {alt_package}` and import as "
+ f"`from {alt_module} import {alt_name}`."
+ )
+ elif alternative:
message += f". Use {alternative} instead."
if addendum:
diff --git a/libs/core/tests/unit_tests/_api/test_deprecation.py b/libs/core/tests/unit_tests/_api/test_deprecation.py
index 238ba231fa36d1..dc7f40de133d8a 100644
--- a/libs/core/tests/unit_tests/_api/test_deprecation.py
+++ b/libs/core/tests/unit_tests/_api/test_deprecation.py
@@ -219,8 +219,8 @@ def deprecated_method(self) -> str:
assert len(warning_list) == 2
warning = warning_list[0].message
assert str(warning) == (
- "The class `DeprecatedClass` was deprecated in "
- "LangChain 2.0.0 and will be removed in 3.0.0"
+ "The class `tests.unit_tests._api.test_deprecation.DeprecatedClass` was "
+ "deprecated in tests 2.0.0 and will be removed in 3.0.0"
)
warning = warning_list[1].message
| https://api.github.com/repos/langchain-ai/langchain/pulls/15781 | 2024-01-09T21:54:39Z | 2024-01-09T22:45:29Z | 2024-01-09T22:45:29Z | 2024-01-09T22:55:16Z | 1,089 | langchain-ai/langchain | 43,483 | |
[Instagram] Add new [:tag] extractor, refactor [:user] | diff --git a/youtube_dl/extractor/extractors.py b/youtube_dl/extractor/extractors.py
index 3b1dfc451f2..485909328f8 100644
--- a/youtube_dl/extractor/extractors.py
+++ b/youtube_dl/extractor/extractors.py
@@ -489,7 +489,11 @@
from .inc import IncIE
from .indavideo import IndavideoEmbedIE
from .infoq import InfoQIE
-from .instagram import InstagramIE, InstagramUserIE
+from .instagram import (
+ InstagramIE,
+ InstagramUserIE,
+ InstagramTagIE,
+)
from .internazionale import InternazionaleIE
from .internetvideoarchive import InternetVideoArchiveIE
from .iprima import IPrimaIE
diff --git a/youtube_dl/extractor/instagram.py b/youtube_dl/extractor/instagram.py
index 7e0e838f05a..ffd87b55f6d 100644
--- a/youtube_dl/extractor/instagram.py
+++ b/youtube_dl/extractor/instagram.py
@@ -227,44 +227,37 @@ def get_count(key, kind):
}
-class InstagramUserIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?instagram\.com/(?P<id>[^/]{2,})/?(?:$|[?#])'
- IE_DESC = 'Instagram user profile'
- IE_NAME = 'instagram:user'
- _TEST = {
- 'url': 'https://instagram.com/porsche',
- 'info_dict': {
- 'id': 'porsche',
- 'title': 'porsche',
- },
- 'playlist_count': 5,
- 'params': {
- 'extract_flat': True,
- 'skip_download': True,
- 'playlistend': 5,
- }
- }
+class InstagramPlaylistIE(InfoExtractor):
+ # A superclass for handling any kind of query based on GraphQL which
+ # results in a playlist.
+
+ _gis_tmpl = None # used to cache GIS request type
- _gis_tmpl = None
+ def _parse_graphql(self, webpage, item_id):
+ # Reads a webpage and returns its GraphQL data.
+ return self._parse_json(
+ self._search_regex(
+ r'sharedData\s*=\s*({.+?})\s*;\s*[<\n]', webpage, 'data'),
+ item_id)
- def _entries(self, data):
+ def _extract_graphql(self, data, url):
+ # Parses GraphQL queries containing videos and generates a playlist.
def get_count(suffix):
return int_or_none(try_get(
node, lambda x: x['edge_media_' + suffix]['count']))
- uploader_id = data['entry_data']['ProfilePage'][0]['graphql']['user']['id']
+ uploader_id = self._match_id(url)
csrf_token = data['config']['csrf_token']
rhx_gis = data.get('rhx_gis') or '3c7ca9dcefcf966d11dacf1f151335e8'
- self._set_cookie('instagram.com', 'ig_pr', '1')
-
cursor = ''
for page_num in itertools.count(1):
- variables = json.dumps({
- 'id': uploader_id,
+ variables = {
'first': 12,
'after': cursor,
- })
+ }
+ variables.update(self._query_vars_for(data))
+ variables = json.dumps(variables)
if self._gis_tmpl:
gis_tmpls = [self._gis_tmpl]
@@ -276,21 +269,26 @@ def get_count(suffix):
'%s:%s:%s' % (rhx_gis, csrf_token, std_headers['User-Agent']),
]
+ # try all of the ways to generate a GIS query, and not only use the
+ # first one that works, but cache it for future requests
for gis_tmpl in gis_tmpls:
try:
- media = self._download_json(
+ json_data = self._download_json(
'https://www.instagram.com/graphql/query/', uploader_id,
'Downloading JSON page %d' % page_num, headers={
'X-Requested-With': 'XMLHttpRequest',
'X-Instagram-GIS': hashlib.md5(
('%s:%s' % (gis_tmpl, variables)).encode('utf-8')).hexdigest(),
}, query={
- 'query_hash': '42323d64886122307be10013ad2dcc44',
+ 'query_hash': self._QUERY_HASH,
'variables': variables,
- })['data']['user']['edge_owner_to_timeline_media']
+ })
+ media = self._parse_timeline_from(json_data)
self._gis_tmpl = gis_tmpl
break
except ExtractorError as e:
+ # if it's an error caused by a bad query, and there are
+ # more GIS templates to try, ignore it and keep trying
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 403:
if gis_tmpl != gis_tmpls[-1]:
continue
@@ -348,14 +346,80 @@ def get_count(suffix):
break
def _real_extract(self, url):
- username = self._match_id(url)
-
- webpage = self._download_webpage(url, username)
+ user_or_tag = self._match_id(url)
+ webpage = self._download_webpage(url, user_or_tag)
+ data = self._parse_graphql(webpage, user_or_tag)
- data = self._parse_json(
- self._search_regex(
- r'sharedData\s*=\s*({.+?})\s*;\s*[<\n]', webpage, 'data'),
- username)
+ self._set_cookie('instagram.com', 'ig_pr', '1')
return self.playlist_result(
- self._entries(data), username, username)
+ self._extract_graphql(data, url), user_or_tag, user_or_tag)
+
+
+class InstagramUserIE(InstagramPlaylistIE):
+ _VALID_URL = r'https?://(?:www\.)?instagram\.com/(?P<id>[^/]{2,})/?(?:$|[?#])'
+ IE_DESC = 'Instagram user profile'
+ IE_NAME = 'instagram:user'
+ _TEST = {
+ 'url': 'https://instagram.com/porsche',
+ 'info_dict': {
+ 'id': 'porsche',
+ 'title': 'porsche',
+ },
+ 'playlist_count': 5,
+ 'params': {
+ 'extract_flat': True,
+ 'skip_download': True,
+ 'playlistend': 5,
+ }
+ }
+
+ _QUERY_HASH = '42323d64886122307be10013ad2dcc44',
+
+ @staticmethod
+ def _parse_timeline_from(data):
+ # extracts the media timeline data from a GraphQL result
+ return data['data']['user']['edge_owner_to_timeline_media']
+
+ @staticmethod
+ def _query_vars_for(data):
+ # returns a dictionary of variables to add to the timeline query based
+ # on the GraphQL of the original page
+ return {
+ 'id': data['entry_data']['ProfilePage'][0]['graphql']['user']['id']
+ }
+
+
+class InstagramTagIE(InstagramPlaylistIE):
+ _VALID_URL = r'https?://(?:www\.)?instagram\.com/explore/tags/(?P<id>[^/]+)'
+ IE_DESC = 'Instagram hashtag search'
+ IE_NAME = 'instagram:tag'
+ _TEST = {
+ 'url': 'https://instagram.com/explore/tags/lolcats',
+ 'info_dict': {
+ 'id': 'lolcats',
+ 'title': 'lolcats',
+ },
+ 'playlist_count': 50,
+ 'params': {
+ 'extract_flat': True,
+ 'skip_download': True,
+ 'playlistend': 50,
+ }
+ }
+
+ _QUERY_HASH = 'f92f56d47dc7a55b606908374b43a314',
+
+ @staticmethod
+ def _parse_timeline_from(data):
+ # extracts the media timeline data from a GraphQL result
+ return data['data']['hashtag']['edge_hashtag_to_media']
+
+ @staticmethod
+ def _query_vars_for(data):
+ # returns a dictionary of variables to add to the timeline query based
+ # on the GraphQL of the original page
+ return {
+ 'tag_name':
+ data['entry_data']['TagPage'][0]['graphql']['hashtag']['name']
+ }
| ### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/rg3/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/rg3/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [ ] Improvement
- [X] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
This adds an extractor for Instagram videos. It downloads all videos that match a given hashtag as a playlist. It is heavily based on the Instagram user extractor.
I did consider refactoring common methods instead of copy-pasting, but decided it was more important to avoid touching the other code. This way is more difficult to maintain, but I hope it is easier to review and avoids the risk of regressions in the other extractors.
If you would prefer a refactor -- either now or in a separate PR -- I would be happy to do it. | https://api.github.com/repos/ytdl-org/youtube-dl/pulls/18757 | 2019-01-06T00:35:39Z | 2019-01-20T09:10:47Z | 2019-01-20T09:10:47Z | 2019-01-20T22:42:28Z | 1,975 | ytdl-org/youtube-dl | 50,231 |
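The copy-pasted user/tag extractors described above share a template-method shape: a common playlist base class owns the shared flow while subclasses override only the two static hooks (`_parse_timeline_from` and `_query_vars_for`). A minimal sketch of that pattern — class names and payload shapes here are illustrative, not youtube-dl's actual API:

```python
# Sketch of the template-method pattern the extractors share. The base class
# owns the shared flow; subclasses only supply the two GraphQL hooks.
# Payload shapes are illustrative, not Instagram's real responses.

class PlaylistExtractor:
    @staticmethod
    def _parse_timeline_from(data):
        raise NotImplementedError

    @staticmethod
    def _query_vars_for(data):
        raise NotImplementedError

    def extract_ids(self, page_data, timeline_data):
        # Shared flow: derive query variables from the page, then walk the timeline.
        query_vars = self._query_vars_for(page_data)
        timeline = self._parse_timeline_from(timeline_data)
        return query_vars, [edge["node"]["id"] for edge in timeline["edges"]]


class TagExtractor(PlaylistExtractor):
    @staticmethod
    def _parse_timeline_from(data):
        return data["data"]["hashtag"]["edge_hashtag_to_media"]

    @staticmethod
    def _query_vars_for(data):
        return {
            "tag_name":
                data["entry_data"]["TagPage"][0]["graphql"]["hashtag"]["name"]
        }


page = {"entry_data": {"TagPage": [{"graphql": {"hashtag": {"name": "lolcats"}}}]}}
timeline = {"data": {"hashtag": {"edge_hashtag_to_media": {
    "edges": [{"node": {"id": "1"}}, {"node": {"id": "2"}}]}}}}
print(TagExtractor().extract_ids(page, timeline))
# -> ({'tag_name': 'lolcats'}, ['1', '2'])
```

Hoisting the shared flow into a base class like this would be the refactor the author offers to do in a follow-up.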
2.7.3 prep | diff --git a/acme/setup.py b/acme/setup.py
index 2b51aaf1636..fc149889c94 100644
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -3,7 +3,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'cryptography>=3.2.1',
diff --git a/certbot-apache/setup.py b/certbot-apache/setup.py
index d6c1a884508..f95fb16c550 100644
--- a/certbot-apache/setup.py
+++ b/certbot-apache/setup.py
@@ -1,7 +1,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
# We specify the minimum acme and certbot version as the current plugin
diff --git a/certbot-compatibility-test/setup.py b/certbot-compatibility-test/setup.py
index 995645ef5a0..bb26ad54525 100644
--- a/certbot-compatibility-test/setup.py
+++ b/certbot-compatibility-test/setup.py
@@ -1,7 +1,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'certbot',
diff --git a/certbot-dns-cloudflare/setup.py b/certbot-dns-cloudflare/setup.py
index d4f72174721..eb6cabb3b4b 100644
--- a/certbot-dns-cloudflare/setup.py
+++ b/certbot-dns-cloudflare/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'cloudflare>=1.5.1',
diff --git a/certbot-dns-digitalocean/setup.py b/certbot-dns-digitalocean/setup.py
index 07f1082053a..b51250bc5a3 100644
--- a/certbot-dns-digitalocean/setup.py
+++ b/certbot-dns-digitalocean/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'python-digitalocean>=1.11', # 1.15.0 or newer is recommended for TTL support
diff --git a/certbot-dns-dnsimple/setup.py b/certbot-dns-dnsimple/setup.py
index 895b6871a90..95c732297d3 100644
--- a/certbot-dns-dnsimple/setup.py
+++ b/certbot-dns-dnsimple/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
# This version of lexicon is required to address the problem described in
diff --git a/certbot-dns-dnsmadeeasy/setup.py b/certbot-dns-dnsmadeeasy/setup.py
index 85bbbbe370b..1d6aa5f1ebb 100644
--- a/certbot-dns-dnsmadeeasy/setup.py
+++ b/certbot-dns-dnsmadeeasy/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.14.1',
diff --git a/certbot-dns-gehirn/setup.py b/certbot-dns-gehirn/setup.py
index 0c7a419fde7..207d96ff4b5 100644
--- a/certbot-dns-gehirn/setup.py
+++ b/certbot-dns-gehirn/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.14.1',
diff --git a/certbot-dns-google/setup.py b/certbot-dns-google/setup.py
index f2103916ec0..6fbd893c5db 100644
--- a/certbot-dns-google/setup.py
+++ b/certbot-dns-google/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'google-api-python-client>=1.6.5',
diff --git a/certbot-dns-linode/setup.py b/certbot-dns-linode/setup.py
index 0ce400e6fd1..bb77c2f3f55 100644
--- a/certbot-dns-linode/setup.py
+++ b/certbot-dns-linode/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.14.1',
diff --git a/certbot-dns-luadns/setup.py b/certbot-dns-luadns/setup.py
index c59bc168ed2..870e6d83903 100644
--- a/certbot-dns-luadns/setup.py
+++ b/certbot-dns-luadns/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.14.1',
diff --git a/certbot-dns-nsone/setup.py b/certbot-dns-nsone/setup.py
index 2c05e0cb135..2b231ba29a3 100644
--- a/certbot-dns-nsone/setup.py
+++ b/certbot-dns-nsone/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.14.1',
diff --git a/certbot-dns-ovh/setup.py b/certbot-dns-ovh/setup.py
index 9b86b1f1dfb..e9d9e6f6d5d 100644
--- a/certbot-dns-ovh/setup.py
+++ b/certbot-dns-ovh/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.15.1',
diff --git a/certbot-dns-rfc2136/setup.py b/certbot-dns-rfc2136/setup.py
index 007bd06387e..3013de12616 100644
--- a/certbot-dns-rfc2136/setup.py
+++ b/certbot-dns-rfc2136/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dnspython>=1.15.0',
diff --git a/certbot-dns-route53/setup.py b/certbot-dns-route53/setup.py
index 6f6e040e8d6..97a227fb627 100644
--- a/certbot-dns-route53/setup.py
+++ b/certbot-dns-route53/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'boto3>=1.15.15',
diff --git a/certbot-dns-sakuracloud/setup.py b/certbot-dns-sakuracloud/setup.py
index b8f5660a656..b032f8f5d7e 100644
--- a/certbot-dns-sakuracloud/setup.py
+++ b/certbot-dns-sakuracloud/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
'dns-lexicon>=3.14.1',
diff --git a/certbot-nginx/setup.py b/certbot-nginx/setup.py
index 4f9c2c24aa9..f7c0b7d6803 100644
--- a/certbot-nginx/setup.py
+++ b/certbot-nginx/setup.py
@@ -1,7 +1,7 @@
from setuptools import find_packages
from setuptools import setup
-version = '2.7.1'
+version = '2.7.2'
install_requires = [
# We specify the minimum acme and certbot version as the current plugin
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index 07ed9a1409b..a8fda83411c 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -2,7 +2,18 @@
Certbot adheres to [Semantic Versioning](https://semver.org/).
-## 2.7.2 - master
+## 2.7.3 - master
+
+### Fixed
+
+* Fixed a bug where arguments with contained spaces weren't being handled correctly
+* Fixed a bug that caused the ACME account to not be properly restored on
+ renewal causing problems in setups where the user had multiple accounts with
+ the same ACME server.
+
+More details about these changes can be found on our GitHub repo.
+
+## 2.7.2 - 2023-10-19
### Fixed
diff --git a/certbot/certbot/__init__.py b/certbot/certbot/__init__.py
index 1dbc91774d8..9260092cfa6 100644
--- a/certbot/certbot/__init__.py
+++ b/certbot/certbot/__init__.py
@@ -3,7 +3,7 @@
import warnings
# version number like 1.2.3a0, must have at least 2 parts, like 1.2
-__version__ = '2.7.1'
+__version__ = '2.7.2'
if sys.version_info[:2] == (3, 7):
warnings.warn(
diff --git a/certbot/certbot/_internal/cli/helpful.py b/certbot/certbot/_internal/cli/helpful.py
index 6fea1b131a4..2f3b4554cd4 100644
--- a/certbot/certbot/_internal/cli/helpful.py
+++ b/certbot/certbot/_internal/cli/helpful.py
@@ -219,6 +219,8 @@ def update_result(settings_dict: Dict[str, Tuple[configargparse.Action, str]],
if '=' in arg:
arg = arg.split('=')[0]
+ elif ' ' in arg:
+ arg = arg.split(' ')[0]
if arg.startswith('--'):
args.append(arg)
diff --git a/certbot/certbot/_internal/renewal.py b/certbot/certbot/_internal/renewal.py
index a33dd7ee5ea..62e31f8653e 100644
--- a/certbot/certbot/_internal/renewal.py
+++ b/certbot/certbot/_internal/renewal.py
@@ -194,6 +194,7 @@ def restore_required_config_elements(config: configuration.NamespaceConfig,
"""
+ updated_values = {}
required_items = itertools.chain(
(("pref_challs", _restore_pref_challs),),
zip(BOOL_CONFIG_ITEMS, itertools.repeat(_restore_bool)),
@@ -202,7 +203,9 @@ def restore_required_config_elements(config: configuration.NamespaceConfig,
for item_name, restore_func in required_items:
if item_name in renewalparams and not config.set_by_user(item_name):
value = restore_func(item_name, renewalparams[item_name])
- setattr(config, item_name, value)
+ updated_values[item_name] = value
+ for key, value in updated_values.items():
+ setattr(config, key, value)
def _remove_deprecated_config_elements(renewalparams: Mapping[str, Any]) -> Dict[str, Any]:
diff --git a/certbot/certbot/_internal/tests/cli_test.py b/certbot/certbot/_internal/tests/cli_test.py
index e4505dbee7b..629aa03b450 100644
--- a/certbot/certbot/_internal/tests/cli_test.py
+++ b/certbot/certbot/_internal/tests/cli_test.py
@@ -594,6 +594,13 @@ def test_adjacent_short_args(self):
assert_set_by_user_with_value(namespace, 'text_mode', True)
assert_set_by_user_with_value(namespace, 'verbose_count', 1)
assert_set_by_user_with_value(namespace, 'email', 'foo@example.com')
+
+ def test_arg_with_contained_spaces(self):
+ # This can happen if a user specifies an arg like "-d foo.com" enclosed
+ # in double quotes, or as its own line in a docker-compose.yml file (as
+ # in #9811)
+ namespace = self.parse(['certonly', '-d foo.com'])
+ assert_set_by_user_with_value(namespace, 'domains', ['foo.com'])
if __name__ == '__main__':
sys.exit(pytest.main(sys.argv[1:] + [__file__])) # pragma: no cover
diff --git a/certbot/certbot/_internal/tests/renewal_test.py b/certbot/certbot/_internal/tests/renewal_test.py
index edc9eea35e8..239744692c6 100644
--- a/certbot/certbot/_internal/tests/renewal_test.py
+++ b/certbot/certbot/_internal/tests/renewal_test.py
@@ -253,6 +253,18 @@ def test_ancient_server_renewal_conf(self, mock_set_by_user):
self._call(self.config, {'server': constants.V1_URI})
assert self.config.server == constants.CLI_DEFAULTS['server']
+ def test_related_values(self):
+ # certbot.configuration.NamespaceConfig.set_by_user considers some values as related to each
+ # other and considers both set by the user if either is. This test ensures all renewal
+ # parameters are restored regardless of their restoration order or relation between values.
+ # See https://github.com/certbot/certbot/issues/9805 for more info.
+ renewalparams = {
+ 'server': 'https://example.org',
+ 'account': 'somehash',
+ }
+ self._call(self.config, renewalparams)
+ self.assertEqual(self.config.account, renewalparams['account'])
+
class DescribeResultsTest(unittest.TestCase):
"""Tests for certbot._internal.renewal._renew_describe_results."""
diff --git a/certbot/docs/cli-help.txt b/certbot/docs/cli-help.txt
index e795871284b..a5d98954a26 100644
--- a/certbot/docs/cli-help.txt
+++ b/certbot/docs/cli-help.txt
@@ -36,7 +36,7 @@ manage your account:
--agree-tos Agree to the ACME server's Subscriber Agreement
-m EMAIL Email address for important account notifications
-optional arguments:
+options:
-h, --help show this help message and exit
-c CONFIG_FILE, --config CONFIG_FILE
path to config file (default: /etc/letsencrypt/cli.ini
@@ -122,7 +122,7 @@ optional arguments:
case, and to know when to deprecate support for past
Python versions and flags. If you wish to hide this
information from the Let's Encrypt server, set this to
- "". (default: CertbotACMEClient/2.7.1 (certbot;
+ "". (default: CertbotACMEClient/2.7.2 (certbot;
OS_NAME OS_VERSION) Authenticator/XXX Installer/YYY
(SUBCOMMAND; flags: FLAGS) Py/major.minor.patchlevel).
The flags encoded in the user agent are: --duplicate,
| This PR should not be squashed to preserve the signed and tagged release commit.
This PR consists of the changes from https://github.com/certbot/certbot/pull/9815 plus the cherry-picking for the 2.7.3 release. | https://api.github.com/repos/certbot/certbot/pulls/9817 | 2023-10-24T19:35:13Z | 2023-10-24T19:49:05Z | 2023-10-24T19:49:05Z | 2023-10-24T19:49:05Z | 3,841 | certbot/certbot | 2,522
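The `renewal.py` hunk above defers every `setattr` until all values have been computed, because `set_by_user` treats some keys (e.g. `server` and `account`) as related and reports both as user-set once either is written. A reduced sketch of why the two-phase loop matters — the `Config` class below is a stand-in, not certbot's real `NamespaceConfig`:

```python
# Reduced sketch of the renewal-restore fix (#9805): values are collected in
# a dict first and only applied after the loop. With a one-phase loop,
# setting "server" makes the related "account" key look user-set, so it is
# skipped. The Config class is a stand-in, not certbot's real NamespaceConfig.

RELATED = {"server": {"account"}, "account": {"server"}}


class Config:
    def __init__(self):
        self._set = set()

    def set_by_user(self, name):
        # A key counts as user-set if it or any related key has been written.
        return name in self._set or bool(RELATED.get(name, set()) & self._set)

    def __setattr__(self, name, value):
        if name != "_set":
            self._set.add(name)
        object.__setattr__(self, name, value)


def restore_required_config_elements(config, renewalparams):
    updated_values = {}
    for name, value in renewalparams.items():
        if not config.set_by_user(name):
            updated_values[name] = value      # phase 1: only collect
    for name, value in updated_values.items():
        setattr(config, name, value)          # phase 2: apply all at once


config = Config()
restore_required_config_elements(
    config, {"server": "https://example.org", "account": "somehash"})
print(config.account)  # -> somehash (a one-phase loop would have skipped it)
```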
Note about running on web server, not PC | diff --git a/README.rst b/README.rst
index 0dbe1cdefeb..d75b44a65bd 100644
--- a/README.rst
+++ b/README.rst
@@ -6,7 +6,7 @@ Anyone who has gone through the trouble of setting up a secure website knows wha
How you use Certbot depends on the configuration of your web server. The best way to get started is to use our `interactive guide <https://certbot.eff.org>`_. It generates instructions based on your configuration settings. In most cases, you’ll need `root or administrator access <https://certbot.eff.org/faq/#does-certbot-require-root-administrator-privileges>`_ to your web server to run Certbot.
-If you’re using a hosted service and don’t have direct access to your web server, you might not be able to use Certbot. Check with your hosting provider for documentation about uploading certificates or using certificates issued by Let’s Encrypt.
+Certbot is meant to be run directly on your web server, not on your personal computer. If you’re using a hosted service and don’t have direct access to your web server, you might not be able to use Certbot. Check with your hosting provider for documentation about uploading certificates or using certificates issued by Let’s Encrypt.
Certbot is a fully-featured, extensible client for the Let's
Encrypt CA (or any other CA that speaks the `ACME
diff --git a/docs/install.rst b/docs/install.rst
index f7504baa554..fc6abad7ada 100644
--- a/docs/install.rst
+++ b/docs/install.rst
@@ -9,6 +9,8 @@ Get Certbot
About Certbot
=============
+*Certbot is meant to be run directly on a web server*, normally by a system administrator. In most cases, running Certbot on your personal computer is not a useful option. The instructions below relate to installing and running Certbot on a server.
+
Certbot is packaged for many common operating systems and web servers. Check whether
``certbot`` (or ``letsencrypt``) is packaged for your web server's OS by visiting
certbot.eff.org_, where you will also find the correct installation instructions for
| Documentation change only: two reminders about where Certbot is meant to be used, to avoid any confusion caused by people installing locally on laptops.
Fixes #2224. | https://api.github.com/repos/certbot/certbot/pulls/6422 | 2018-10-16T12:09:57Z | 2018-10-17T21:09:00Z | 2018-10-17T21:09:00Z | 2018-10-17T21:09:05Z | 488 | certbot/certbot | 482 |
Added Manhattan distance algorithm | diff --git a/maths/manhattan_distance.py b/maths/manhattan_distance.py
new file mode 100644
index 000000000000..2711d4c8ccd6
--- /dev/null
+++ b/maths/manhattan_distance.py
@@ -0,0 +1,126 @@
+def manhattan_distance(point_a: list, point_b: list) -> float:
+ """
+    Expects two lists of numbers representing two points in the same
+ n-dimensional space
+
+ https://en.wikipedia.org/wiki/Taxicab_geometry
+
+ >>> manhattan_distance([1,1], [2,2])
+ 2.0
+ >>> manhattan_distance([1.5,1.5], [2,2])
+ 1.0
+ >>> manhattan_distance([1.5,1.5], [2.5,2])
+ 1.5
+ >>> manhattan_distance([-3, -3, -3], [0, 0, 0])
+ 9.0
+ >>> manhattan_distance([1,1], None)
+ Traceback (most recent call last):
+ ...
+ ValueError: Missing an input
+ >>> manhattan_distance([1,1], [2, 2, 2])
+ Traceback (most recent call last):
+ ...
+ ValueError: Both points must be in the same n-dimensional space
+ >>> manhattan_distance([1,"one"], [2, 2, 2])
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found str
+ >>> manhattan_distance(1, [2, 2, 2])
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found int
+ >>> manhattan_distance([1,1], "not_a_list")
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found str
+ """
+
+ _validate_point(point_a)
+ _validate_point(point_b)
+ if len(point_a) != len(point_b):
+ raise ValueError("Both points must be in the same n-dimensional space")
+
+ return float(sum(abs(a - b) for a, b in zip(point_a, point_b)))
+
+
+def _validate_point(point: list[float]) -> None:
+ """
+ >>> _validate_point(None)
+ Traceback (most recent call last):
+ ...
+ ValueError: Missing an input
+ >>> _validate_point([1,"one"])
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found str
+ >>> _validate_point(1)
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found int
+ >>> _validate_point("not_a_list")
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found str
+ """
+ if point:
+ if isinstance(point, list):
+ for item in point:
+ if not isinstance(item, (int, float)):
+ raise TypeError(
+ f"Expected a list of numbers as input, "
+ f"found {type(item).__name__}"
+ )
+ else:
+ raise TypeError(
+ f"Expected a list of numbers as input, found {type(point).__name__}"
+ )
+ else:
+ raise ValueError("Missing an input")
+
+
+def manhattan_distance_one_liner(point_a: list, point_b: list) -> float:
+ """
+ Version with one liner
+
+ >>> manhattan_distance_one_liner([1,1], [2,2])
+ 2.0
+ >>> manhattan_distance_one_liner([1.5,1.5], [2,2])
+ 1.0
+ >>> manhattan_distance_one_liner([1.5,1.5], [2.5,2])
+ 1.5
+ >>> manhattan_distance_one_liner([-3, -3, -3], [0, 0, 0])
+ 9.0
+ >>> manhattan_distance_one_liner([1,1], None)
+ Traceback (most recent call last):
+ ...
+ ValueError: Missing an input
+ >>> manhattan_distance_one_liner([1,1], [2, 2, 2])
+ Traceback (most recent call last):
+ ...
+ ValueError: Both points must be in the same n-dimensional space
+ >>> manhattan_distance_one_liner([1,"one"], [2, 2, 2])
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found str
+ >>> manhattan_distance_one_liner(1, [2, 2, 2])
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found int
+ >>> manhattan_distance_one_liner([1,1], "not_a_list")
+ Traceback (most recent call last):
+ ...
+ TypeError: Expected a list of numbers as input, found str
+ """
+
+ _validate_point(point_a)
+ _validate_point(point_b)
+ if len(point_a) != len(point_b):
+ raise ValueError("Both points must be in the same n-dimensional space")
+
+ return float(sum(abs(x - y) for x, y in zip(point_a, point_b)))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
| ### Describe your change:
Added an algorithm to calculate Manhattan distance.
Added the algorithm twice: one in a more traditional way and the other using a one-liner.
Fixes: #7776
* [x] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [X] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/7790 | 2022-10-28T16:38:29Z | 2022-10-30T09:00:48Z | 2022-10-30T09:00:48Z | 2022-10-30T14:10:35Z | 1,287 | TheAlgorithms/Python | 30,356 |
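As an aside, Manhattan distance is the p = 1 case of the Minkowski distance, with p = 2 giving the Euclidean distance. A short sketch of that generalization — not part of the PR:

```python
# Aside (not part of the PR): Manhattan distance is the p = 1 case of the
# Minkowski distance, and p = 2 gives the Euclidean distance.

def minkowski_distance(point_a: list, point_b: list, p: int = 1) -> float:
    if len(point_a) != len(point_b):
        raise ValueError("Both points must be in the same n-dimensional space")
    return sum(abs(a - b) ** p for a, b in zip(point_a, point_b)) ** (1 / p)


print(minkowski_distance([1, 1], [2, 2], p=1))   # -> 2.0 (Manhattan)
print(minkowski_distance([0, 0], [3, 4], p=2))   # -> 5.0 (Euclidean)
```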
ref: Remove references to deprecated settings | diff --git a/src/sentry/sentry_metrics/configuration.py b/src/sentry/sentry_metrics/configuration.py
index a885712f379d6..cf09dcc501532 100644
--- a/src/sentry/sentry_metrics/configuration.py
+++ b/src/sentry/sentry_metrics/configuration.py
@@ -47,7 +47,6 @@ class IndexerStorage(Enum):
class MetricsIngestConfiguration:
db_backend: IndexerStorage
db_backend_options: Mapping[str, Any]
- input_topic: str
output_topic: Topic
use_case_id: UseCaseKey
internal_metrics_tag: str | None
@@ -80,7 +79,6 @@ def get_ingest_config(
MetricsIngestConfiguration(
db_backend=IndexerStorage.POSTGRES,
db_backend_options={},
- input_topic=settings.KAFKA_INGEST_METRICS,
output_topic=Topic.SNUBA_METRICS,
use_case_id=UseCaseKey.RELEASE_HEALTH,
internal_metrics_tag="release-health",
@@ -97,7 +95,6 @@ def get_ingest_config(
MetricsIngestConfiguration(
db_backend=IndexerStorage.POSTGRES,
db_backend_options={},
- input_topic=settings.KAFKA_INGEST_PERFORMANCE_METRICS,
output_topic=Topic.SNUBA_GENERIC_METRICS,
use_case_id=UseCaseKey.PERFORMANCE,
internal_metrics_tag="perf",
@@ -116,8 +113,7 @@ def get_ingest_config(
MetricsIngestConfiguration(
db_backend=IndexerStorage.MOCK,
db_backend_options={},
- input_topic="topic",
- output_topic="output-topic",
+ output_topic=Topic.SNUBA_METRICS,
use_case_id=use_case_key,
internal_metrics_tag="release-health",
writes_limiter_cluster_options={},
@@ -134,8 +130,7 @@ def get_ingest_config(
MetricsIngestConfiguration(
db_backend=IndexerStorage.MOCK,
db_backend_options={},
- input_topic="topic",
- output_topic="output-topic",
+ output_topic=Topic.SNUBA_GENERIC_METRICS,
use_case_id=use_case_key,
internal_metrics_tag="perf",
writes_limiter_cluster_options={},
| settings.KAFKA_INGEST_PERFORMANCE_METRICS and settings.KAFKA_INGEST_METRICS are deprecated.
The IngestConfiguration.input_topic was never used anywhere. | https://api.github.com/repos/getsentry/sentry/pulls/66532 | 2024-03-07T18:59:36Z | 2024-03-07T22:48:14Z | 2024-03-07T22:48:14Z | 2024-03-23T00:22:42Z | 479 | getsentry/sentry | 44,043 |
Fix bug where Mixup does not work when device is CPU | diff --git a/timm/data/mixup.py b/timm/data/mixup.py
index 38477548a0..7e382c5233 100644
--- a/timm/data/mixup.py
+++ b/timm/data/mixup.py
@@ -214,7 +214,7 @@ def __call__(self, x, target):
lam = self._mix_pair(x)
else:
lam = self._mix_batch(x)
- target = mixup_target(target, self.num_classes, lam, self.label_smoothing)
+ target = mixup_target(target, self.num_classes, lam, self.label_smoothing, x.device)
return x, target
| The `timm.data.Mixup` class does not work when the device is CPU.
```python
from timm.data.mixup import Mixup
mixup_args = {
'mixup_alpha': 1.,
'cutmix_alpha': 0.,
'cutmix_minmax': None,
'prob': 1.0,
'switch_prob': 0.,
'mode': 'batch',
'label_smoothing': 0,
'num_classes': 4
}
mixup_fn = Mixup(**mixup_args)
x, labels = next(iter(loader))
x, labels = mixup_fn(x, labels)
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-22-ac462bac9f44> in <module>()
21 loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True)
22 x, labels = next(iter(loader))
---> 23 x, labels = mixup_fn(x, labels)
24
25
2 frames
/usr/local/lib/python3.7/dist-packages/timm/data/mixup.py in __call__(self, x, target)
215 else:
216 lam = self._mix_batch(x)
--> 217 target = mixup_target(target, self.num_classes, lam, self.label_smoothing)
218 return x, target
219
/usr/local/lib/python3.7/dist-packages/timm/data/mixup.py in mixup_target(target, num_classes, lam, smoothing, device)
23 off_value = smoothing / num_classes
24 on_value = 1. - smoothing + off_value
---> 25 y1 = one_hot(target, num_classes, on_value=on_value, off_value=off_value, device=device)
26 y2 = one_hot(target.flip(0), num_classes, on_value=on_value, off_value=off_value, device=device)
27 return y1 * lam + y2 * (1. - lam)
/usr/local/lib/python3.7/dist-packages/timm/data/mixup.py in one_hot(x, num_classes, on_value, off_value, device)
17 def one_hot(x, num_classes, on_value=1., off_value=0., device='cuda'):
18 x = x.long().view(-1, 1)
---> 19 return torch.full((x.size()[0], num_classes), off_value, device=device).scatter_(1, x, on_value)
20
21
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking arugment for argument index in method wrapper_scatter__value)
```
So, I fixed the `timm.data.Mixup` class. | https://api.github.com/repos/huggingface/pytorch-image-models/pulls/913 | 2021-10-12T15:00:16Z | 2021-10-12T21:09:56Z | 2021-10-12T21:09:56Z | 2021-10-12T21:09:56Z | 150 | huggingface/pytorch-image-models | 16,352
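The one-line fix above threads `x.device` into `mixup_target` instead of letting `one_hot` fall back to its hard-coded `device='cuda'` default. The failure mode, reduced to pure Python with a stand-in `Tensor` class (illustrative, not torch's API):

```python
# The bug pattern reduced to pure Python: a helper with a hard-coded device
# default breaks whenever the caller's data lives elsewhere; the fix is to
# derive the device from the input. "Tensor" is a stand-in for torch.Tensor.

class Tensor:
    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device


def one_hot(x, num_classes, device="cuda"):            # hard-coded default
    if x.device != device:
        raise RuntimeError("Expected all tensors to be on the same device")
    rows = [[1 if i == v else 0 for i in range(num_classes)] for v in x.data]
    return Tensor(rows, device)


def mixup_target_buggy(target, num_classes):
    return one_hot(target, num_classes)                # device never forwarded


def mixup_target_fixed(target, num_classes):
    return one_hot(target, num_classes, target.device)  # the one-line fix


target = Tensor([0, 2], device="cpu")
try:
    mixup_target_buggy(target, 3)
except RuntimeError as exc:
    print("buggy:", exc)
print("fixed:", mixup_target_fixed(target, 3).data)  # -> fixed: [[1, 0, 0], [0, 0, 1]]
```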
Support `st.exception` in testing framework | diff --git a/lib/streamlit/testing/element_tree.py b/lib/streamlit/testing/element_tree.py
index 404a45d67fb9..3dcd0c9d077b 100644
--- a/lib/streamlit/testing/element_tree.py
+++ b/lib/streamlit/testing/element_tree.py
@@ -26,6 +26,7 @@
from streamlit.proto.Button_pb2 import Button as ButtonProto
from streamlit.proto.Checkbox_pb2 import Checkbox as CheckboxProto
from streamlit.proto.Element_pb2 import Element as ElementProto
+from streamlit.proto.Exception_pb2 import Exception as ExceptionProto
from streamlit.proto.ForwardMsg_pb2 import ForwardMsg
from streamlit.proto.Heading_pb2 import Heading as HeadingProto
from streamlit.proto.Markdown_pb2 import Markdown as MarkdownProto
@@ -187,6 +188,30 @@ def __init__(self, proto: MarkdownProto, root: ElementTree):
self.type = "code"
+@dataclass
+class Exception(Element):
+ type: str
+ message: str
+ is_markdown: bool
+ stack_trace: list[str]
+ is_warning: bool
+
+ def __init__(self, proto: ExceptionProto, root: ElementTree):
+ self.key = None
+ self.root = root
+ self.proto = proto
+ self.type = "exception"
+
+ self.message = proto.message
+ self.is_markdown = proto.message_is_markdown
+ self.stack_trace = list(proto.stack_trace)
+ self.is_warning = proto.is_warning
+
+ @property
+ def value(self) -> str:
+ return self.message
+
+
@runtime_checkable
class Widget(Protocol):
id: str
@@ -730,6 +755,10 @@ def get(self, element_type: Literal["header"]) -> Sequence[Header]:
def get(self, element_type: Literal["subheader"]) -> Sequence[Subheader]:
...
+ @overload
+ def get(self, element_type: Literal["exception"]) -> Sequence[Exception]:
+ ...
+
@overload
def get(self, element_type: Literal["radio"]) -> Sequence[Radio[Any]]:
...
@@ -887,6 +916,8 @@ def parse_tree_from_messages(messages: list[ForwardMsg]) -> ElementTree:
new_node = Subheader(elt.heading, root=root)
else:
raise ValueError(f"Unknown heading type with tag {elt.heading.tag}")
+ elif elt.WhichOneof("type") == "exception":
+ new_node = Exception(elt.exception, root=root)
elif elt.WhichOneof("type") == "radio":
new_node = Radio(elt.radio, root=root)
elif elt.WhichOneof("type") == "checkbox":
diff --git a/lib/tests/streamlit/testing/element_tree_test.py b/lib/tests/streamlit/testing/element_tree_test.py
index eab021f392ac..1f92bcb1a78c 100644
--- a/lib/tests/streamlit/testing/element_tree_test.py
+++ b/lib/tests/streamlit/testing/element_tree_test.py
@@ -361,3 +361,31 @@ def test_value(self):
with pytest.raises(IndexError):
sr6.get("selectbox")[0].select_index(42).run()
+
+
+class ExceptionTest(InteractiveScriptTests):
+ def test_value(self):
+ script = self.script_from_string(
+ "exception.py",
+ """
+ import streamlit as st
+
+ st.exception(RuntimeError("foo"))
+ """,
+ )
+ sr = script.run()
+
+ assert sr.get("exception")[0].value == "foo"
+
+ def test_markdown(self):
+ script = self.script_from_string(
+ "exception2.py",
+ """
+ import streamlit as st
+
+ st.exception(st.errors.MarkdownFormattedException("# Oh no"))
+ """,
+ )
+ sr = script.run()
+
+ assert sr.get("exception")[0].is_markdown
|
## 📚 Context
- What kind of change does this PR introduce?
- [ ] Bugfix
- [x] Feature
- [ ] Refactoring
- [ ] Other, please describe:
## 🧠 Description of Changes
- [ ] This is a breaking API change
- [x] This is a visible (user-facing) change
## 🧪 Testing Done
- [ ] Screenshots included
- [x] Added/Updated unit tests
- [ ] Added/Updated e2e tests
| https://api.github.com/repos/streamlit/streamlit/pulls/6283 | 2023-03-09T18:37:37Z | 2023-03-10T00:07:49Z | 2023-03-10T00:07:49Z | 2023-03-10T00:07:53Z | 884 | streamlit/streamlit | 21,748 |
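The new `Exception` element follows the same pattern as the other nodes in the diff: wrap the proto in a small typed class and let the tree's `get()` filter on the node's string `type` tag. A minimal pure-Python sketch of that pattern — classes here are illustrative, not Streamlit's real testing API:

```python
# Minimal sketch of the element-tree pattern the PR extends: wrap each proto
# in a small typed node and let get() filter on the node's string "type" tag.
# Classes here are illustrative, not Streamlit's real testing API.

from dataclasses import dataclass, field


@dataclass
class ExceptionNode:
    message: str
    is_markdown: bool = False
    type: str = "exception"

    @property
    def value(self) -> str:
        return self.message


@dataclass
class ElementTree:
    nodes: list = field(default_factory=list)

    def get(self, element_type: str) -> list:
        return [n for n in self.nodes if n.type == element_type]


tree = ElementTree([ExceptionNode("foo"), ExceptionNode("# Oh no", is_markdown=True)])
print([e.value for e in tree.get("exception")])  # -> ['foo', '# Oh no']
print(tree.get("radio"))                         # -> []
```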
Fix Inconsistent definition of czstring in comments | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 817246489..8b7017b9e 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -18755,7 +18755,7 @@ Distinguishing these alternatives prevents misunderstandings and bugs.
All we know is that it is supposed to be the nullptr or point to at least one character
void f1(zstring s); // s is a C-style string or the nullptr
- void f1(czstring s); // s is a C-style string that is not the nullptr
+ void f1(czstring s); // s is a C-style string constant or the nullptr
void f1(std::byte* s); // s is a pointer to a byte (C++17)
##### Note
| Comments in sections SL.str.3 and GSL.view disagree on whether czstring may be the nullptr.
This PR fixes the comment in SL.str.3 that defines czstring as `a C-style string that is not the nullptr`.
### SL.str.3: Use zstring or czstring to refer to a C-style, zero-terminated, sequence of characters
```
void f1(zstring s); // s is a C-style string or the nullptr
void f1(czstring s); // s is a C-style string that is not the nullptr
```
### GSL.view: Views
`zstring` // a `char*` supposed to be a C-style string; that is, a zero-terminated sequence of `char` or `nullptr`
`czstring` // a `const char*` supposed to be a C-style string; that is, a zero-terminated sequence of const `char` or `nullptr` | https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/976 | 2017-07-03T16:20:53Z | 2017-07-24T18:25:19Z | 2017-07-24T18:25:19Z | 2017-07-24T18:25:19Z | 192 | isocpp/CppCoreGuidelines | 15,392 |
Fix typos | diff --git a/rich/cells.py b/rich/cells.py
index 139b949f7..9354f9e31 100644
--- a/rich/cells.py
+++ b/rich/cells.py
@@ -60,7 +60,7 @@ def _get_codepoint_cell_size(codepoint: int) -> int:
"""Get the cell size of a character.
Args:
- character (str): A single character.
+ codepoint (int): Codepoint of a character.
Returns:
int: Number of cells (0, 1 or 2) occupied by that character.
diff --git a/rich/segment.py b/rich/segment.py
index 20ccedf8a..bbc6d3ea0 100644
--- a/rich/segment.py
+++ b/rich/segment.py
@@ -727,7 +727,7 @@ def __rich_console__(
console.print(Syntax(code, "python", line_numbers=True))
console.print()
console.print(
- "When you call [b]print()[/b], Rich [i]renders[/i] the object in to the the following:\n"
+ "When you call [b]print()[/b], Rich [i]renders[/i] the object in to the following:\n"
)
fragments = list(console.render(text))
console.print(fragments)
| ## Type of changes
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [x] Other
## Checklist
- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
| https://api.github.com/repos/Textualize/rich/pulls/2631 | 2022-11-06T13:05:29Z | 2023-03-04T09:19:12Z | 2023-03-04T09:19:12Z | 2023-03-04T09:19:12Z | 303 | Textualize/rich | 48,403 |
ZeRO-1 empty grads fix + tests | diff --git a/deepspeed/runtime/zero/stage2.py b/deepspeed/runtime/zero/stage2.py
index faa6443bfef4..a3e09d50b50d 100755
--- a/deepspeed/runtime/zero/stage2.py
+++ b/deepspeed/runtime/zero/stage2.py
@@ -473,7 +473,7 @@ def initialize_optimizer_states(self):
if not self.cpu_offload:
for group in self.single_partition_of_fp32_groups:
- group.grad = None
+ group.grad = None #class init
return
@@ -497,7 +497,8 @@ def reduce_gradients(self, pipeline_parallel=False):
if not self.overlap_comm:
for i, group in enumerate(self.fp16_groups):
for param in group:
- self.reduce_ready_partitions_and_remove_grads(param, i)
+ if param.grad is not None:
+ self.reduce_ready_partitions_and_remove_grads(param, i)
# reduce any pending grads in either hook/non-hook case
self.overlapping_partition_gradients_reduce_epilogue()
@@ -974,7 +975,7 @@ def async_inplace_copy_grad_to_fp32_buffer_from_gpu(self, param):
src_tensor = param.grad.view(-1).narrow(0, source_offset, num_elements).float()
dest_tensor.copy_(src_tensor, non_blocking=True)
- param.grad = None
+ param.grad = None #offload only
def complete_grad_norm_calculation_for_cpu_offload(self, params):
total_norm = 0.0
@@ -1095,17 +1096,19 @@ def reduce_ipg_grads(self):
Multiple gradient reduction is currently not supported"
self.params_already_reduced[param_id] = True
- if not self.is_param_in_current_partition[param_id]:
- if self.overlap_comm and self.contiguous_gradients is False:
- # Clear grads of other partitions during the next reduction
- # to avoid clearing them before the reduction is complete.
- if self.previous_reduced_grads is None:
- self.previous_reduced_grads = []
- self.previous_reduced_grads.append(param)
- else:
- param.grad = None
- elif self.contiguous_gradients:
- self.copy_grads_in_partition(param)
+
+ if self.partition_gradients:
+ if not self.is_param_in_current_partition[param_id]:
+ if self.overlap_comm and self.contiguous_gradients is False:
+ # Clear grads of other partitions during the next reduction
+ # to avoid clearing them before the reduction is complete.
+ if self.previous_reduced_grads is None:
+ self.previous_reduced_grads = []
+ self.previous_reduced_grads.append(param)
+ else:
+ param.grad = None #only if self.partition_gradients
+ elif self.contiguous_gradients:
+ self.copy_grads_in_partition(param)
self.grads_in_ipg_bucket = []
self.params_in_ipg_bucket = []
@@ -1125,7 +1128,7 @@ def are_all_related_partitions_reduced(params_id):
for params_id in self.is_grad_computed[i][partition_id]:
if are_all_related_partitions_reduced(params_id):
- self.param_dict[params_id].grad = None
+ self.param_dict[params_id].grad = None # dead code
def flatten_and_print(self, message, tensors, start=0, n=5):
flatten_tensor = self.flatten(tensors)
@@ -1214,7 +1217,7 @@ def allreduce_bucket(self, bucket, allreduce_always_fp32=False, rank=None, log=N
def _clear_previous_reduced_grads(self):
if self.previous_reduced_grads is not None:
for param in self.previous_reduced_grads:
- param.grad = None
+ param.grad = None # overlap enabled
self.previous_reduced_grads = None
# if rank is specified do a reduction instead of an allreduce
@@ -1331,7 +1334,7 @@ def zero_grad(self, set_grads_to_None=True):
for group in self.fp16_groups:
for p in group:
if set_grads_to_None:
- p.grad = None
+ p.grad = None # epilogue and in step
else:
if p.grad is not None:
p.grad.detach_()
@@ -1457,7 +1460,7 @@ def get_flat_partition(self,
def free_grad_in_param_list(self, param_list):
for p in param_list:
- p.grad = None
+ p.grad = None # in step
def reset_cpu_buffers(self):
self.norm_for_param_grads = {}
@@ -1583,7 +1586,7 @@ def step(self, closure=None):
# get rid of the fp32 gradients. Not needed anymore
if not self.cpu_offload:
for group in self.single_partition_of_fp32_groups:
- group.grad = None
+ group.grad = None # in step
for fp16_partitions, fp32_partition in zip(self.parallel_partitioned_fp16_groups, self.single_partition_of_fp32_groups):
fp16_partitions[partition_id].data.copy_(fp32_partition.data)
diff --git a/tests/unit/simple_model.py b/tests/unit/simple_model.py
index 9c6062d79faa..15c40976b6a1 100755
--- a/tests/unit/simple_model.py
+++ b/tests/unit/simple_model.py
@@ -17,10 +17,7 @@ def __init__(self, hidden_dim, empty_grad=False):
def forward(self, x, y):
hidden_dim = x
- if self.empty_grad and torch.distributed.get_rank() == 0:
- hidden_dim = self.linear(hidden_dim) + self.linear2(hidden_dim)
- else:
- hidden_dim = self.linear(hidden_dim)
+ hidden_dim = self.linear(hidden_dim)
return self.cross_entropy_loss(hidden_dim, y)
diff --git a/tests/unit/test_fp16.py b/tests/unit/test_fp16.py
index b2e76f0b7b82..0c0ef3edd3a8 100755
--- a/tests/unit/test_fp16.py
+++ b/tests/unit/test_fp16.py
@@ -856,3 +856,38 @@ def _go(args):
model.step()
_go(args=args)
+
+
+@pytest.mark.parametrize('stage', [1, 2, 3])
+def test_zero_empty_grad(tmpdir, stage):
+ config_dict = {
+ "train_batch_size": 1,
+ "steps_per_print": 1,
+ "fp16": {
+ "enabled": True
+ },
+ "zero_optimization": {
+ "stage": stage
+ }
+ }
+ args = args_from_dict(tmpdir, config_dict)
+ hidden_dim = 10
+
+ model = SimpleModel(hidden_dim)
+
+ @distributed_test(world_size=[1])
+ def _go(args, model, hidden_dim):
+ optimizer = torch.optim.Adam(model.parameters())
+ model, _, _, _ = deepspeed.initialize(args=args,
+ model=model,
+ optimizer=optimizer)
+ data_loader = random_dataloader(model=model,
+ total_samples=50,
+ hidden_dim=hidden_dim,
+ device=model.device)
+ for n, batch in enumerate(data_loader):
+ loss = model(batch[0], batch[1])
+ model.backward(loss)
+ model.step()
+
+ _go(args=args, model=model, hidden_dim=hidden_dim)
| * Only reduce non-empty grads for stage 1
* Add unit test and ensure empty grads are triggered for all ranks
* Only empty grads if stage 2
* Add context comments to empty grads calls | https://api.github.com/repos/microsoft/DeepSpeed/pulls/1273 | 2021-08-02T17:29:43Z | 2021-08-02T18:00:26Z | 2021-08-02T18:00:26Z | 2021-08-02T18:00:29Z | 1,659 | microsoft/DeepSpeed | 10,587 |
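The first bullet above ("Only reduce non-empty grads for stage 1") boils down to a `grad is not None` guard in the reduction loop, as the `reduce_gradients` hunk shows. A framework-free sketch of that guard follows; the `Param` class and `reduce_fn` callback are illustrative stand-ins, not DeepSpeed APIs:

```python
# Minimal sketch of guarding a gradient-reduction loop against parameters
# whose .grad is None (e.g. parameters unused in the current forward pass).
class Param:
    def __init__(self, grad=None):
        self.grad = grad

def reduce_gradients(params, reduce_fn):
    """Apply reduce_fn only to params that actually have a gradient."""
    reduced = []
    for p in params:
        if p.grad is not None:  # skip empty grads instead of failing on them
            reduce_fn(p)
            reduced.append(p)
    return reduced

params = [Param(grad=[1.0]), Param(grad=None), Param(grad=[2.0])]
reduced = reduce_gradients(params, reduce_fn=lambda p: None)
print(len(reduced))  # 2: the param with grad=None is skipped
```

The same shape of check is what the PR's added unit test exercises end-to-end through `model.backward()` and `model.step()`.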
Update binary_exponentiation.py | diff --git a/maths/binary_exponentiation.py b/maths/binary_exponentiation.py
index 05de939d1bde..7eeca89262a9 100644
--- a/maths/binary_exponentiation.py
+++ b/maths/binary_exponentiation.py
@@ -5,6 +5,12 @@
def binary_exponentiation(a: int, n: int) -> int:
+ """
+ >>> binary_exponentiation(3, 5)
+ 243
+ >>> binary_exponentiation(10, 3)
+ 1000
+ """
if n == 0:
return 1
@@ -17,6 +23,10 @@ def binary_exponentiation(a: int, n: int) -> int:
if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
try:
BASE = int(input("Enter Base : ").strip())
POWER = int(input("Enter Power : ").strip())
| ### Describe your change:
Added doctests.
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [X] Documentation change?
### Checklist:
* [X] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [X] This pull request is all my own work -- I have not plagiarized.
* [X] I know that pull requests will not be merged if they fail the automated tests.
* [X] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [X] All new Python files are placed inside an existing directory.
* [X] All filenames are in all lowercase characters with no spaces or dashes.
* [X] All functions and variable names follow Python naming conventions.
* [X] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [X] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [X] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| https://api.github.com/repos/TheAlgorithms/Python/pulls/10253 | 2023-10-10T23:16:56Z | 2023-10-19T12:15:23Z | 2023-10-19T12:15:23Z | 2023-10-19T12:15:23Z | 225 | TheAlgorithms/Python | 29,945 |
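The file under test implements binary (fast) exponentiation; a self-contained sketch of the same algorithm, using the doctest values the PR adds:

```python
def binary_exponentiation(a: int, n: int) -> int:
    """Compute a**n using O(log n) multiplications.

    >>> binary_exponentiation(3, 5)
    243
    >>> binary_exponentiation(10, 3)
    1000
    """
    if n == 0:
        return 1
    if n % 2 == 1:  # odd exponent: peel off one factor of a
        return binary_exponentiation(a, n - 1) * a
    half = binary_exponentiation(a, n // 2)  # even exponent: square the half power
    return half * half

print(binary_exponentiation(3, 5))   # 243
print(binary_exponentiation(10, 3))  # 1000
```

Running `doctest.testmod()` on this module, as the patched `__main__` block does, verifies both examples.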
Fixed #33633 -- Skipped some test_utils tests on databases that don't support transactions. | diff --git a/tests/test_utils/test_testcase.py b/tests/test_utils/test_testcase.py
index ca142186a1950..eb6ca80036936 100644
--- a/tests/test_utils/test_testcase.py
+++ b/tests/test_utils/test_testcase.py
@@ -41,6 +41,7 @@ def test_disallowed_database_queries(self):
with self.assertRaisesMessage(DatabaseOperationForbidden, message):
Car.objects.using("other").get()
+ @skipUnlessDBFeature("supports_transactions")
def test_reset_sequences(self):
old_reset_sequences = self.reset_sequences
self.reset_sequences = True
@@ -61,6 +62,10 @@ def inner(self):
return inner
+# On databases with no transaction support (for instance, MySQL with the MyISAM
+# engine), setUpTestData() is called before each test, so there is no need to
+# clone class level test data.
+@skipUnlessDBFeature("supports_transactions")
class TestDataTests(TestCase):
# setUpTestData re-assignment are also wrapped in TestData.
jim_douglas = None
diff --git a/tests/test_utils/tests.py b/tests/test_utils/tests.py
index 6a4467fdcb963..fb19d6e464829 100644
--- a/tests/test_utils/tests.py
+++ b/tests/test_utils/tests.py
@@ -2126,6 +2126,7 @@ def test_override_staticfiles_dirs(self):
self.assertIn(expected_location, finder.locations)
+@skipUnlessDBFeature("supports_transactions")
class TestBadSetUpTestData(TestCase):
"""
An exception in setUpTestData() shouldn't leak a transaction which would
@@ -2160,6 +2161,7 @@ def test_failure_in_setUpTestData_should_rollback_transaction(self):
self.assertFalse(self._in_atomic_block)
+@skipUnlessDBFeature("supports_transactions")
class CaptureOnCommitCallbacksTests(TestCase):
databases = {"default", "other"}
callback_called = False
| The testing framework has some different behaviors for backends not supporting transactions. Mostly in `TestCase._fixture_setup` and `TestCase.setUpClass`. Skip them. | https://api.github.com/repos/django/django/pulls/15582 | 2022-04-12T08:57:25Z | 2022-04-12T13:35:51Z | 2022-04-12T13:35:51Z | 2022-04-12T14:11:51Z | 417 | django/django | 51,351 |
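The `skipUnlessDBFeature` gating used in the patch can be approximated with plain `unittest`; the `Features` class and `skip_unless_db_feature` helper below are illustrative stand-ins, not Django's actual implementation:

```python
import unittest

class Features:
    """Hypothetical stand-in for a backend's feature flags."""
    supports_transactions = False

def skip_unless_db_feature(flag_name):
    # Illustrative analogue of Django's skipUnlessDBFeature decorator:
    # skip the decorated test (or whole TestCase) when the flag is False.
    supported = getattr(Features, flag_name)
    return unittest.skipUnless(supported, f"Database doesn't support {flag_name}")

@skip_unless_db_feature("supports_transactions")
class TransactionTests(unittest.TestCase):
    def test_rollback_behaviour(self):
        pass  # would exercise transaction rollback here

print(TransactionTests.__unittest_skip__)  # True: the whole class is skipped
```

When the flag is false, `unittest` marks the class with `__unittest_skip__`, so every test in it is reported as skipped rather than run.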
Remove extra backtick in ES.23 | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 005cb5103..58cab9303 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -10717,7 +10717,7 @@ Use `={...}` if you really want an `initializer_list<T>`
`={}` gives copy initialization whereas `{}` gives direct initialization.
Like the distinction between copy-initialization and direct-initialization itself, this can lead to surprises.
-`{}` accepts `explicit` constructors; `={}` does not`. For example:
+`{}` accepts `explicit` constructors; `={}` does not. For example:
struct Z { explicit Z() {} };
| It's a minor typo, see title.
Thank you! | https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1453 | 2019-06-23T14:51:21Z | 2019-06-25T17:30:38Z | 2019-06-25T17:30:38Z | 2019-06-25T17:30:38Z | 161 | isocpp/CppCoreGuidelines | 15,288 |
gh-110014: Remove PY_TIMEOUT_MAX from limited C API | diff --git a/Doc/data/stable_abi.dat b/Doc/data/stable_abi.dat
index 07c6d514d19549..c189c78238f40f 100644
--- a/Doc/data/stable_abi.dat
+++ b/Doc/data/stable_abi.dat
@@ -1,5 +1,4 @@
role,name,added,ifdef_note,struct_abi_kind
-var,PY_TIMEOUT_MAX,3.2,,
macro,PY_VECTORCALL_ARGUMENTS_OFFSET,3.12,,
function,PyAIter_Check,3.10,,
function,PyArg_Parse,3.2,,
diff --git a/Doc/whatsnew/3.13.rst b/Doc/whatsnew/3.13.rst
index 1de5479a924375..3a1b283a75bf2e 100644
--- a/Doc/whatsnew/3.13.rst
+++ b/Doc/whatsnew/3.13.rst
@@ -1290,3 +1290,6 @@ removed, although there is currently no date scheduled for their removal.
* :c:func:`PyThread_get_key_value`: use :c:func:`PyThread_tss_get`.
* :c:func:`PyThread_delete_key_value`: use :c:func:`PyThread_tss_delete`.
* :c:func:`PyThread_ReInitTLS`: no longer needed.
+
+* Remove undocumented ``PY_TIMEOUT_MAX`` constant from the limited C API.
+ (Contributed by Victor Stinner in :gh:`110014`.)
diff --git a/Include/cpython/pythread.h b/Include/cpython/pythread.h
index cd2aab72d52df3..03f710a9f7ef2e 100644
--- a/Include/cpython/pythread.h
+++ b/Include/cpython/pythread.h
@@ -2,6 +2,14 @@
# error "this header file must not be included directly"
#endif
+// PY_TIMEOUT_MAX is the highest usable value (in microseconds) of PY_TIMEOUT_T
+// type, and depends on the system threading API.
+//
+// NOTE: this isn't the same value as `_thread.TIMEOUT_MAX`. The _thread module
+// exposes a higher-level API, with timeouts expressed in seconds and
+// floating-point numbers allowed.
+PyAPI_DATA(const long long) PY_TIMEOUT_MAX;
+
#define PYTHREAD_INVALID_THREAD_ID ((unsigned long)-1)
#ifdef HAVE_PTHREAD_H
diff --git a/Include/pythread.h b/Include/pythread.h
index 2c2fd63d724286..0784f6b2e5391f 100644
--- a/Include/pythread.h
+++ b/Include/pythread.h
@@ -33,27 +33,18 @@ PyAPI_FUNC(int) PyThread_acquire_lock(PyThread_type_lock, int);
#define WAIT_LOCK 1
#define NOWAIT_LOCK 0
-/* PY_TIMEOUT_T is the integral type used to specify timeouts when waiting
- on a lock (see PyThread_acquire_lock_timed() below).
- PY_TIMEOUT_MAX is the highest usable value (in microseconds) of that
- type, and depends on the system threading API.
-
- NOTE: this isn't the same value as `_thread.TIMEOUT_MAX`. The _thread
- module exposes a higher-level API, with timeouts expressed in seconds
- and floating-point numbers allowed.
-*/
+// PY_TIMEOUT_T is the integral type used to specify timeouts when waiting
+// on a lock (see PyThread_acquire_lock_timed() below).
#define PY_TIMEOUT_T long long
-PyAPI_DATA(const long long) PY_TIMEOUT_MAX;
-
/* If microseconds == 0, the call is non-blocking: it returns immediately
even when the lock can't be acquired.
If microseconds > 0, the call waits up to the specified duration.
If microseconds < 0, the call waits until success (or abnormal failure)
- microseconds must be less than PY_TIMEOUT_MAX. Behaviour otherwise is
- undefined.
+ If *microseconds* is greater than PY_TIMEOUT_MAX, clamp the timeout to
+ PY_TIMEOUT_MAX microseconds.
If intr_flag is true and the acquire is interrupted by a signal, then the
call will return PY_LOCK_INTR. The caller may reattempt to acquire the
diff --git a/Lib/test/test_stable_abi_ctypes.py b/Lib/test/test_stable_abi_ctypes.py
index 6e9496d40da477..94f817f8e1d159 100644
--- a/Lib/test/test_stable_abi_ctypes.py
+++ b/Lib/test/test_stable_abi_ctypes.py
@@ -35,7 +35,6 @@ def test_windows_feature_macros(self):
SYMBOL_NAMES = (
- "PY_TIMEOUT_MAX",
"PyAIter_Check",
"PyArg_Parse",
"PyArg_ParseTuple",
diff --git a/Misc/NEWS.d/next/C API/2023-10-02-13-39-57.gh-issue-110014.gfQ4jU.rst b/Misc/NEWS.d/next/C API/2023-10-02-13-39-57.gh-issue-110014.gfQ4jU.rst
new file mode 100644
index 00000000000000..3a5ff7d43bbc01
--- /dev/null
+++ b/Misc/NEWS.d/next/C API/2023-10-02-13-39-57.gh-issue-110014.gfQ4jU.rst
@@ -0,0 +1,2 @@
+Remove undocumented ``PY_TIMEOUT_MAX`` constant from the limited C API.
+Patch by Victor Stinner.
diff --git a/Misc/stable_abi.toml b/Misc/stable_abi.toml
index 46e2307614e26d..8df3f85e61eec6 100644
--- a/Misc/stable_abi.toml
+++ b/Misc/stable_abi.toml
@@ -1843,10 +1843,6 @@
[function.PyThread_start_new_thread]
added = '3.2'
-# Not mentioned in PEP 384, was implemented as a macro in Python <= 3.12
-[data.PY_TIMEOUT_MAX]
- added = '3.2'
-
# The following were added in PC/python3.def in Python 3.3:
# 7800f75827b1be557be16f3b18f5170fbf9fae08
# 9c56409d3353b8cd4cfc19e0467bbe23fd34fc92
diff --git a/PC/python3dll.c b/PC/python3dll.c
index 75728c7d8057ed..2c1cc8098ce856 100755
--- a/PC/python3dll.c
+++ b/PC/python3dll.c
@@ -768,7 +768,6 @@ EXPORT_DATA(Py_FileSystemDefaultEncodeErrors)
EXPORT_DATA(Py_FileSystemDefaultEncoding)
EXPORT_DATA(Py_GenericAliasType)
EXPORT_DATA(Py_HasFileSystemDefaultEncoding)
-EXPORT_DATA(PY_TIMEOUT_MAX)
EXPORT_DATA(Py_UTF8Mode)
EXPORT_DATA(Py_Version)
EXPORT_DATA(PyBaseObject_Type)
| If the timeout is greater than PY_TIMEOUT_MAX,
PyThread_acquire_lock_timed() uses a timeout of PY_TIMEOUT_MAX microseconds, which is around 280.6 years. This case is unlikely and limiting a timeout to 280.6 years sounds like a reasonable trade-off.
The constant PY_TIMEOUT_MAX is not used in PyPI top 5,000 projects.
<!-- gh-issue-number: gh-110014 -->
* Issue: gh-110014
<!-- /gh-issue-number -->
| https://api.github.com/repos/python/cpython/pulls/110217 | 2023-10-02T11:45:18Z | 2023-10-02T16:07:56Z | 2023-10-02T16:07:56Z | 2023-10-02T18:08:41Z | 1,599 | python/cpython | 3,865 |
Use live badge for source repos' users/stars | diff --git a/README.md b/README.md
index 246c8915..61f37309 100644
--- a/README.md
+++ b/README.md
@@ -822,13 +822,13 @@ all over the world
|Cheat sheets |Repository | Users | Creation Date |
|-----------------------|------------------------------------------------------|------------|---------------|
-|UNIX/Linux, programming|[cheat.sheets](https://github.com/chubin/cheat.sheets)| 38/223 | May 1, 2017 |
-|UNIX/Linux commands |[tldr-pages/tldr](https://github.com/tldr-pages/tldr) | 760/23158 | Dec 8, 2013 |
-|UNIX/Linux commands |[chrisallenlane/cheat](https://github.com/chrisallenlane/cheat)|131/5240|Jul 28, 2013|
-|Programming languages |[adambard/learnxinyminutes-docs](https://github.com/adambard/learnxinyminutes-docs)|1246/6748|Jun 23, 2013|
-|Go |[a8m/go-lang-cheat-sheet](https://github.com/a8m/go-lang-cheat-sheet)|31/4039|Feb 9, 2014|
-|Perl |[pkrumnis/perl1line.txt](https://github.com/pkrumins/perl1line.txt)|5/190|Nov 4, 2011|
-|Programming languages |[StackOverflow](https://stackoverflow.com)|9M |Sep 15, 2008|
+|UNIX/Linux, programming|[cheat.sheets](https://github.com/chubin/cheat.sheets)| |May 1, 2017|
+|UNIX/Linux commands |[tldr-pages/tldr](https://github.com/tldr-pages/tldr) | |Dec 8, 2013|
+|UNIX/Linux commands |[chrisallenlane/cheat](https://github.com/chrisallenlane/cheat)| |Jul 28, 2013|
+|Programming languages |[adambard/learnxinyminutes-docs](https://github.com/adambard/learnxinyminutes-docs)| |Jun 23, 2013|
+|Go |[a8m/go-lang-cheat-sheet](https://github.com/a8m/go-lang-cheat-sheet)| |Feb 9, 2014|
+|Perl |[pkrumnis/perl1line.txt](https://github.com/pkrumins/perl1line.txt)| |Nov 4, 2011|
+|Programming languages |[StackOverflow](https://stackoverflow.com)|[14M](https://stackexchange.com/leagues/1/alltime/stackoverflow)|Sep 15, 2008|
Pie diagram reflecting cheat sheets sources distribution (by number of cheat sheets on cheat.sh originating from a repository):
| Also updates Stackoverflow user count from 9m to 14m.
See https://stackexchange.com/leagues/1/alltime/stackoverflow | https://api.github.com/repos/chubin/cheat.sh/pulls/285 | 2021-02-27T21:14:24Z | 2021-02-28T07:00:50Z | 2021-02-28T07:00:50Z | 2021-02-28T12:24:49Z | 871 | chubin/cheat.sh | 15,154 |
Remove comment about renewer | diff --git a/certbot/main.py b/certbot/main.py
index 8fa6ebfc63a..089eab0c3dd 100644
--- a/certbot/main.py
+++ b/certbot/main.py
@@ -473,8 +473,7 @@ def _report_new_cert(config, cert_path, fullchain_path, key_path=None):
def _determine_account(config):
"""Determine which account to use.
- In order to make the renewer (configuration de/serialization) happy,
- if ``config.account`` is ``None``, it will be updated based on the
+ If ``config.account`` is ``None``, it will be updated based on the
user input. Same for ``config.email``.
:param config: Configuration object
| It's been a while since we had a separate renewer. | https://api.github.com/repos/certbot/certbot/pulls/6115 | 2018-06-14T22:47:17Z | 2018-06-15T08:43:49Z | 2018-06-15T08:43:49Z | 2018-07-11T22:18:20Z | 174 | certbot/certbot | 280 |
Added missing word in practices.rst | diff --git a/docs/topics/practices.rst b/docs/topics/practices.rst
index 14af1318b6f..9c7040519fc 100644
--- a/docs/topics/practices.rst
+++ b/docs/topics/practices.rst
@@ -16,7 +16,7 @@ You can use the :ref:`API <topics-api>` to run Scrapy from a script, instead of
the typical way of running Scrapy via ``scrapy crawl``.
Remember that Scrapy is built on top of the Twisted
-asynchronous networking library, so you need run it inside the Twisted reactor.
+asynchronous networking library, so you need to run it inside the Twisted reactor.
Note that you will also have to shutdown the Twisted reactor yourself after the
spider is finished. This can be achieved by connecting a handler to the
| https://api.github.com/repos/scrapy/scrapy/pulls/617 | 2014-02-27T14:53:45Z | 2014-02-27T15:03:57Z | 2014-02-27T15:03:57Z | 2014-06-16T12:03:39Z | 184 | scrapy/scrapy | 34,487 | |
DOC: contributing.rst, document fast doc building | diff --git a/doc/README.rst b/doc/README.rst
index 29c4113f90909..a1e32c7014648 100644
--- a/doc/README.rst
+++ b/doc/README.rst
@@ -133,6 +133,16 @@ If you want to do a full clean build, do::
python make.py build
+Starting with 0.13.1 you can tell ``make.py`` to compile only a single section
+of the docs, greatly reducing the turn-around time for checking your changes.
+
+ python make.py --no-api # omit autosummary and api section
+ python make.py --single indexing # compile the docs with only a single
+ # section, that which is in indexing.rst
+
+For comparison, a full doc build may take 10 minutes; a ``--no-api`` build
+may take 3 minutes and a single section may take 15 seconds.
+
Where to start?
---------------
| @jreback, don't say you didn't know about this feature later.
| https://api.github.com/repos/pandas-dev/pandas/pulls/6193 | 2014-01-30T21:54:28Z | 2014-01-30T21:54:38Z | 2014-01-30T21:54:38Z | 2014-06-26T23:16:21Z | 220 | pandas-dev/pandas | 45,117 |
[MRG+1] settings: fixing name of the pipeline template | diff --git a/scrapy/templates/project/module/settings.py.tmpl b/scrapy/templates/project/module/settings.py.tmpl
index 72f25ebefea..486df6b718e 100644
--- a/scrapy/templates/project/module/settings.py.tmpl
+++ b/scrapy/templates/project/module/settings.py.tmpl
@@ -65,7 +65,7 @@ ROBOTSTXT_OBEY = True
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
-# '$project_name.pipelines.SomePipeline': 300,
+# '$project_name.pipelines.${ProjectName}Pipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
| `scrapy startproject myproject` creates a `pipelines.py` with an example Pipeline with already the name of the project, it only feels natural that this should also be the name on `settings.py` pipelines section. | https://api.github.com/repos/scrapy/scrapy/pulls/2466 | 2016-12-24T16:57:29Z | 2017-02-02T12:16:57Z | 2017-02-02T12:16:57Z | 2017-02-02T12:16:58Z | 165 | scrapy/scrapy | 34,609 |
Do not codespell any foreign language README.*.md | diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
index 9bc619faa..c21f7a642 100644
--- a/.github/workflows/codespell.yml
+++ b/.github/workflows/codespell.yml
@@ -4,7 +4,7 @@ jobs:
codespell:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
- run: python3 -m pip install codespell
- run: codespell --ignore-words-list="ba,fo,hel,revered,womens"
- --skip="./README.de.md,./README.es.md,./README.sv.md,./README.fr.md,./README.de-ch.md,./README.hi.md,./README.pt-br.md,./README.it.md,./README.id.md,*.svg,./benchmarks/snippets.py"
+ --skip="./README.*.md,*.svg,./benchmarks/snippets.py"
| ## Type of changes
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [x] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
| https://api.github.com/repos/Textualize/rich/pulls/2177 | 2022-04-08T05:18:33Z | 2022-04-13T14:01:38Z | 2022-04-13T14:01:38Z | 2022-04-13T14:42:29Z | 233 | Textualize/rich | 48,658 |
Update README.md | diff --git a/topics/kubernetes/README.md b/topics/kubernetes/README.md
index 6c4ae039b..982a13f8b 100644
--- a/topics/kubernetes/README.md
+++ b/topics/kubernetes/README.md
@@ -314,6 +314,7 @@ Outputs the status of each of the control plane components.
<details>
<summary>What happens to running pods if you stop Kubelet on the worker nodes?</summary><br><b>
+When you stop the kubelet service on a worker node, it will no longer be able to communicate with the Kubernetes API server. As a result, the node will be marked as NotReady and the pods running on that node will be marked as Unknown. The Kubernetes control plane will then attempt to reschedule the pods to other available nodes in the cluster.
</b></details>
#### Nodes Commands
@@ -736,21 +737,29 @@ A Deployment is a declarative statement for the desired state for Pods and Repli
<details>
<summary>How to create a deployment with the image "nginx:alpine"?</code></summary><br><b>
-`kubectl create deployment my_first_deployment --image=nginx:alpine`
+`kubectl create deployment my-first-deployment --image=nginx:alpine`
OR
```
cat << EOF | kubectl create -f -
-apiVersion: v1
-kind: Pod
+apiVersion: apps/v1
+kind: Deployment
metadata:
name: nginx
spec:
- containers:
- - name: nginx
- image: nginx:alpine
-EOF
+ replicas: 1
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:alpine
```
</b></details>
| Added the missing answer to the question | https://api.github.com/repos/bregman-arie/devops-exercises/pulls/10248 | 2024-01-28T12:25:28Z | 2024-02-02T13:14:43Z | 2024-02-02T13:14:43Z | 2024-02-02T13:14:43Z | 418 | bregman-arie/devops-exercises | 17,643 |
[serve] Add initial support for @serve.deployment syntax | diff --git a/python/ray/serve/__init__.py b/python/ray/serve/__init__.py
index c39a81c4fcb84..9ace05d219dfa 100644
--- a/python/ray/serve/__init__.py
+++ b/python/ray/serve/__init__.py
@@ -2,7 +2,7 @@
accept_batch, connect, start, get_replica_context, get_handle,
shadow_traffic, set_traffic, delete_backend, list_backends, create_backend,
get_backend_config, update_backend_config, list_endpoints, delete_endpoint,
- create_endpoint, shutdown, ingress)
+ create_endpoint, shutdown, ingress, deployment)
from ray.serve.batching import batch
from ray.serve.config import BackendConfig, HTTPOptions
from ray.serve.utils import ServeRequest
@@ -18,5 +18,5 @@
"shadow_traffic", "set_traffic", "delete_backend", "list_backends",
"create_backend", "get_backend_config", "update_backend_config",
"list_endpoints", "delete_endpoint", "create_endpoint", "shutdown",
- "ingress"
+ "ingress", "deployment"
]
diff --git a/python/ray/serve/api.py b/python/ray/serve/api.py
index 81f9fd2e59fb7..0a2ed512a0369 100644
--- a/python/ray/serve/api.py
+++ b/python/ray/serve/api.py
@@ -1,3 +1,4 @@
+from abc import ABC
import asyncio
import atexit
import inspect
@@ -1062,3 +1063,129 @@ def decorator(cls):
return cls
return decorator
+
+
+class ServeDeployment(ABC):
+ @classmethod
+ def deploy(self, *init_args) -> None:
+ """Deploy this deployment.
+
+ Args:
+ *init_args (optional): the arguments to pass to the class __init__
+ method. Not valid if this deployment wraps a function.
+ """
+ # TODO(edoakes): how to avoid copy-pasting the docstrings here?
+ raise NotImplementedError()
+
+ @classmethod
+ def delete(self) -> None:
+ """Delete this deployment."""
+ raise NotImplementedError()
+
+ @classmethod
+ def get_handle(self, sync: Optional[bool] = True
+ ) -> Union[RayServeHandle, RayServeSyncHandle]:
+ raise NotImplementedError()
+
+
+def make_deployment_cls(
+ backend_def: Callable,
+ name: str,
+ version: str,
+ ray_actor_options: Optional[Dict] = None,
+ config: Optional[BackendConfig] = None) -> ServeDeployment:
+ class Deployment(ServeDeployment):
+ _backend_def = backend_def
+ _name = name
+ _version = version
+ _ray_actor_options = ray_actor_options
+ _config = config
+
+ @classmethod
+ def deploy(self, *init_args):
+ """Deploy this deployment.
+
+ Args:
+ *init_args (optional): args to pass to the class __init__
+ method. Not valid if this deployment wraps a function.
+ """
+ return _get_global_client().deploy(
+ Deployment._name,
+ Deployment._backend_def,
+ *init_args,
+ ray_actor_options=Deployment._ray_actor_options,
+ config=Deployment._config,
+ version=Deployment._version,
+ _internal=True)
+
+ @classmethod
+ def delete(self):
+ """Delete this deployment."""
+ raise NotImplementedError()
+
+ @classmethod
+ def get_handle(self, sync: Optional[bool] = True
+ ) -> Union[RayServeHandle, RayServeSyncHandle]:
+ return _get_global_client().get_handle(
+ Deployment._name, missing_ok=False, sync=sync, _internal=True)
+
+ @classmethod
+ def options(self,
+ backend_def: Optional[Callable] = None,
+ name: Optional[str] = None,
+ version: Optional[str] = None,
+ ray_actor_options: Optional[Dict] = None,
+ config: Optional[BackendConfig] = None) -> "Deployment":
+ """Return a new deployment with the specified options set."""
+ return make_deployment_cls(
+ backend_def or Deployment._backend_def,
+ name or Deployment._name,
+ version or Deployment._version,
+ ray_actor_options=ray_actor_options
+ or Deployment._ray_actor_options,
+ config=config or Deployment._config,
+ )
+
+ return Deployment
+
+
+# TODO(edoakes): better typing on the return value of the decorator.
+def deployment(name: str,
+ version: Optional[str] = None,
+ ray_actor_options: Optional[Dict] = None,
+ config: Optional[Union[BackendConfig, Dict[str, Any]]] = None
+ ) -> Callable[[Callable], ServeDeployment]:
+ """Define a Serve deployment.
+
+ Args:
+ name (str): Globally-unique name identifying this deployment.
+ version (str): Version of the deployment. This is used to indicate a
+ code change for the deployment; when it is re-deployed with a
+ version change, a rolling update of the replicas will be performed.
+ ray_actor_options (dict): Options to be passed to the Ray actor
+ constructor such as resource requirements.
+ config (dict, serve.BackendConfig, optional): Configuration options
+ for this backend. Either a BackendConfig, or a dictionary
+ mapping strings to values for the following supported options:
+ - "num_replicas": number of processes to start up that
+ will handle requests to this backend.
+ - "max_concurrent_queries": the maximum number of queries that
+ will be sent to a replica of this backend without receiving a
+ response.
+ - "user_config" (experimental): Arguments to pass to the
+ reconfigure method of the backend. The reconfigure method is
+ called if "user_config" is not None.
+
+ Example:
+ >>> @serve.deployment("deployment1", version="v1")
+ class MyDeployment:
+ pass
+
+ >>> MyDeployment.deploy(*init_args)
+ """
+
+ def decorator(backend_def):
+ return make_deployment_cls(backend_def, name, version,
+ ray_actor_options, config)
+
+ return decorator
diff --git a/python/ray/serve/tests/test_deploy.py b/python/ray/serve/tests/test_deploy.py
index 20c70b7b77e08..30862354ba98e 100644
--- a/python/ray/serve/tests/test_deploy.py
+++ b/python/ray/serve/tests/test_deploy.py
@@ -4,53 +4,54 @@
import requests
import ray
+from ray import serve
@pytest.mark.parametrize("use_handle", [True, False])
def test_deploy(serve_instance, use_handle):
- client = serve_instance
-
name = "test"
+ @serve.deployment(name, version="1")
+ def d(*args):
+ return f"1|{os.getpid()}"
+
def call():
if use_handle:
- ret = ray.get(client.get_handle(name).remote())
+ ret = ray.get(d.get_handle().remote())
else:
ret = requests.get(f"http://localhost:8000/{name}").text
return ret.split("|")[0], ret.split("|")[1]
- def v1(*args):
- return f"1|{os.getpid()}"
-
- def v2(*args):
- return f"2|{os.getpid()}"
-
- client.deploy(name, v1, version="1")
+ d.deploy()
val1, pid1 = call()
assert val1 == "1"
# Redeploying with the same version and code should do nothing.
- client.deploy(name, v1, version="1")
+ d.deploy()
val2, pid2 = call()
assert val2 == "1"
assert pid2 == pid1
# Redeploying with a new version should start a new actor.
- client.deploy(name, v1, version="2")
+ d.options(version="2").deploy()
val3, pid3 = call()
assert val3 == "1"
assert pid3 != pid2
+ @serve.deployment(name, version="2")
+ def d(*args):
+ return f"2|{os.getpid()}"
+
# Redeploying with the same version and new code should do nothing.
- client.deploy(name, v2, version="2")
+ d.deploy()
val4, pid4 = call()
assert val4 == "1"
assert pid4 == pid3
# Redeploying with new code and a new version should start a new actor
# running the new code.
- client.deploy(name, v2, version="3")
+ d.options(version="3").deploy()
val5, pid5 = call()
assert val5 == "2"
assert pid5 != pid4
@@ -58,19 +59,10 @@ def v2(*args):
@pytest.mark.parametrize("use_handle", [True, False])
def test_config_change(serve_instance, use_handle):
- client = serve_instance
-
name = "test"
- def call():
- if use_handle:
- ret = ray.get(client.get_handle(name).remote())
- else:
- ret = requests.get(f"http://localhost:8000/{name}").text
-
- return ret.split("|")[0], ret.split("|")[1]
-
- class Backend:
+ @serve.deployment(name, version="1")
+ class D:
def __init__(self):
self.ret = "1"
@@ -80,44 +72,40 @@ def reconfigure(self, d):
def __call__(self, *args):
return f"{self.ret}|{os.getpid()}"
+ def call():
+ if use_handle:
+ ret = ray.get(D.get_handle().remote())
+ else:
+ ret = requests.get(f"http://localhost:8000/{name}").text
+
+ return ret.split("|")[0], ret.split("|")[1]
+
# First deploy with no user config set.
- client.deploy(name, Backend, version="1")
+ D.deploy()
val1, pid1 = call()
assert val1 == "1"
# Now update the user config without changing versions. Actor should stay
# alive but return value should change.
- client.deploy(
- name, Backend, version="1", config={"user_config": {
- "ret": "2"
- }})
+ D.options(config={"user_config": {"ret": "2"}}).deploy()
val2, pid2 = call()
assert pid2 == pid1
assert val2 == "2"
# Update the user config without changing the version again.
- client.deploy(
- name, Backend, version="1", config={"user_config": {
- "ret": "3"
- }})
+ D.options(config={"user_config": {"ret": "3"}}).deploy()
val3, pid3 = call()
assert pid3 == pid2
assert val3 == "3"
# Update the version without changing the user config.
- client.deploy(
- name, Backend, version="2", config={"user_config": {
- "ret": "3"
- }})
+ D.options(version="2", config={"user_config": {"ret": "3"}}).deploy()
val4, pid4 = call()
assert pid4 != pid3
assert val4 == "3"
# Update the version and the user config.
- client.deploy(
- name, Backend, version="3", config={"user_config": {
- "ret": "4"
- }})
+ D.options(version="3", config={"user_config": {"ret": "4"}}).deploy()
val5, pid5 = call()
assert pid5 != pid4
assert val5 == "4"
|
## Why are these changes needed?
## Related issue number
Closes https://github.com/ray-project/ray/milestone/39
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
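The decorator-plus-`options()` pattern this diff introduces can be sketched in isolation (class and function names are illustrative, not the real Ray Serve implementation):

```python
# Illustrative sketch of the @serve.deployment pattern from the diff:
# options() returns a *new* deployment carrying the overrides, leaving
# the original deployment object untouched.
class Deployment:
    def __init__(self, backend_def, name, version=None):
        self.backend_def = backend_def
        self.name = name
        self.version = version

    def options(self, name=None, version=None):
        # Fall back to the current values for anything not overridden.
        return Deployment(self.backend_def, name or self.name,
                          version or self.version)


def deployment(name, version=None):
    def decorator(backend_def):
        return Deployment(backend_def, name, version)
    return decorator
```

This is why the tests can call `d.options(version="2").deploy()` to roll out a new version and later still redeploy `d` itself under the old one.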
| https://api.github.com/repos/ray-project/ray/pulls/14869 | 2021-03-23T14:55:05Z | 2021-03-23T20:07:44Z | 2021-03-23T20:07:44Z | 2021-03-23T20:07:44Z | 2,708 | ray-project/ray | 19,656 |
Allow non-interactive revocation without deleting certificates | diff --git a/certbot/cli.py b/certbot/cli.py
index 62246227847..f0fa7eb7ebe 100644
--- a/certbot/cli.py
+++ b/certbot/cli.py
@@ -1220,6 +1220,18 @@ def _create_subparsers(helpful):
key=constants.REVOCATION_REASONS.get)),
action=_EncodeReasonAction, default=flag_default("reason"),
help="Specify reason for revoking certificate. (default: unspecified)")
+ helpful.add("revoke",
+ "--delete-after-revoke", action="store_true",
+ default=flag_default("delete_after_revoke"),
+ help="Delete certificates after revoking them.")
+ helpful.add("revoke",
+ "--no-delete-after-revoke", action="store_false",
+ dest="delete_after_revoke",
+ default=flag_default("delete_after_revoke"),
+ help="Do not delete certificates after revoking them. This "
+ "option should be used with caution because the 'renew' "
+ "subcommand will attempt to renew undeleted revoked "
+ "certificates.")
helpful.add("rollback",
"--checkpoints", type=int, metavar="N",
default=flag_default("rollback_checkpoints"),
diff --git a/certbot/constants.py b/certbot/constants.py
index 0ac82dafe45..a6878824b58 100644
--- a/certbot/constants.py
+++ b/certbot/constants.py
@@ -71,6 +71,7 @@
user_agent_comment=None,
csr=None,
reason=0,
+ delete_after_revoke=None,
rollback_checkpoints=1,
init=False,
prepare=False,
diff --git a/certbot/main.py b/certbot/main.py
index 1c6432fd986..e25e030aadb 100644
--- a/certbot/main.py
+++ b/certbot/main.py
@@ -536,9 +536,11 @@ def _delete_if_appropriate(config): # pylint: disable=too-many-locals,too-many-b
display = zope.component.getUtility(interfaces.IDisplay)
reporter_util = zope.component.getUtility(interfaces.IReporter)
- msg = ("Would you like to delete the cert(s) you just revoked?")
- attempt_deletion = display.yesno(msg, yes_label="Yes (recommended)", no_label="No",
- force_interactive=True, default=True)
+ attempt_deletion = config.delete_after_revoke
+ if attempt_deletion is None:
+ msg = ("Would you like to delete the cert(s) you just revoked?")
+ attempt_deletion = display.yesno(msg, yes_label="Yes (recommended)", no_label="No",
+ force_interactive=True, default=True)
if not attempt_deletion:
reporter_util.add_message("Not deleting revoked certs.", reporter_util.LOW_PRIORITY)
diff --git a/certbot/tests/cli_test.py b/certbot/tests/cli_test.py
index 2fce412e230..c5935d7224a 100644
--- a/certbot/tests/cli_test.py
+++ b/certbot/tests/cli_test.py
@@ -164,6 +164,8 @@ def test_help(self):
self.assertTrue("--cert-path" in out)
self.assertTrue("--key-path" in out)
self.assertTrue("--reason" in out)
+ self.assertTrue("--delete-after-revoke" in out)
+ self.assertTrue("--no-delete-after-revoke" in out)
out = self._help_output(['-h', 'config_changes'])
self.assertTrue("--cert-path" not in out)
@@ -412,6 +414,18 @@ def test_no_directory_hooks_set(self):
def test_no_directory_hooks_unset(self):
self.assertTrue(self.parse([]).directory_hooks)
+ def test_delete_after_revoke(self):
+ namespace = self.parse(["--delete-after-revoke"])
+ self.assertTrue(namespace.delete_after_revoke)
+
+ def test_delete_after_revoke_default(self):
+ namespace = self.parse([])
+ self.assertEqual(namespace.delete_after_revoke, None)
+
+ def test_no_delete_after_revoke(self):
+ namespace = self.parse(["--no-delete-after-revoke"])
+ self.assertFalse(namespace.delete_after_revoke)
+
class DefaultTest(unittest.TestCase):
"""Tests for certbot.cli._Default."""
diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py
index 04b71dcc7fe..b1d58542f50 100644
--- a/certbot/tests/main_test.py
+++ b/certbot/tests/main_test.py
@@ -298,25 +298,29 @@ def test_revocation_with_prompt(self, mock_get_utility,
self._call()
self.assertFalse(mock_delete.called)
-class DeleteIfAppropriateTest(unittest.TestCase):
+class DeleteIfAppropriateTest(test_util.ConfigTestCase):
"""Tests for certbot.main._delete_if_appropriate """
- def setUp(self):
- self.config = mock.Mock()
- self.config.namespace = mock.Mock()
- self.config.namespace.noninteractive_mode = False
-
def _call(self, mock_config):
from certbot.main import _delete_if_appropriate
_delete_if_appropriate(mock_config)
- @mock.patch('certbot.cert_manager.delete')
+ def _test_delete_opt_out_common(self, mock_get_utility):
+ with mock.patch('certbot.cert_manager.delete') as mock_delete:
+ self._call(self.config)
+ mock_delete.assert_not_called()
+ self.assertTrue(mock_get_utility().add_message.called)
+
+ @test_util.patch_get_utility()
+ def test_delete_flag_opt_out(self, mock_get_utility):
+ self.config.delete_after_revoke = False
+ self._test_delete_opt_out_common(mock_get_utility)
+
@test_util.patch_get_utility()
- def test_delete_opt_out(self, mock_get_utility, mock_delete):
+ def test_delete_prompt_opt_out(self, mock_get_utility):
util_mock = mock_get_utility()
util_mock.yesno.return_value = False
- self._call(self.config)
- mock_delete.assert_not_called()
+ self._test_delete_opt_out_common(mock_get_utility)
# pylint: disable=too-many-arguments
@mock.patch('certbot.storage.renewal_file_for_certname')
@@ -397,6 +401,28 @@ def test_noninteractive_deletion(self, mock_get_utility, mock_delete,
self._call(config)
self.assertEqual(mock_delete.call_count, 1)
+ # pylint: disable=too-many-arguments
+ @mock.patch('certbot.storage.renewal_file_for_certname')
+ @mock.patch('certbot.cert_manager.match_and_check_overlaps')
+ @mock.patch('certbot.storage.full_archive_path')
+ @mock.patch('certbot.cert_manager.cert_path_to_lineage')
+ @mock.patch('certbot.cert_manager.delete')
+ @test_util.patch_get_utility()
+ def test_opt_in_deletion(self, mock_get_utility, mock_delete,
+ mock_cert_path_to_lineage, mock_full_archive_dir,
+ mock_match_and_check_overlaps, mock_renewal_file_for_certname):
+ # pylint: disable = unused-argument
+ config = self.config
+ config.namespace.delete_after_revoke = True
+ config.cert_path = "/some/reasonable/path"
+ config.certname = ""
+ mock_cert_path_to_lineage.return_value = "example.com"
+ mock_full_archive_dir.return_value = ""
+ mock_match_and_check_overlaps.return_value = ""
+ self._call(config)
+ self.assertEqual(mock_delete.call_count, 1)
+ self.assertFalse(mock_get_utility().yesno.called)
+
# pylint: disable=too-many-arguments
@mock.patch('certbot.storage.renewal_file_for_certname')
@mock.patch('certbot.cert_manager.match_and_check_overlaps')
diff --git a/tests/boulder-integration.sh b/tests/boulder-integration.sh
index 1e0b7754b78..e1aad43365d 100755
--- a/tests/boulder-integration.sh
+++ b/tests/boulder-integration.sh
@@ -345,9 +345,14 @@ common auth --must-staple --domains "must-staple.le.wtf"
openssl x509 -in "${root}/conf/live/must-staple.le.wtf/cert.pem" -text | grep '1.3.6.1.5.5.7.1.24'
# revoke by account key
-common revoke --cert-path "$root/conf/live/le.wtf/cert.pem"
+common revoke --cert-path "$root/conf/live/le.wtf/cert.pem" --delete-after-revoke
# revoke renewed
-common revoke --cert-path "$root/conf/live/le1.wtf/cert.pem"
+common revoke --cert-path "$root/conf/live/le1.wtf/cert.pem" --no-delete-after-revoke
+if [ ! -d "$root/conf/live/le1.wtf" ]; then
+ echo "cert deleted when --no-delete-after-revoke was used!"
+ exit 1
+fi
+common delete --cert-name le1.wtf
# revoke by cert key
common revoke --cert-path "$root/conf/live/le2.wtf/cert.pem" \
--key-path "$root/conf/live/le2.wtf/privkey.pem"
| Fixes #5323. | https://api.github.com/repos/certbot/certbot/pulls/5386 | 2018-01-08T19:11:53Z | 2018-01-09T01:02:21Z | 2018-01-09T01:02:21Z | 2018-01-09T01:02:23Z | 2,067 | certbot/certbot | 44 |
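The paired `--delete-after-revoke` / `--no-delete-after-revoke` flags rely on a tri-state default: `None` means the user expressed no preference, so `_delete_if_appropriate` still falls back to the interactive prompt. A minimal argparse sketch of that pattern (flag names taken from the diff; the parser itself is illustrative, not certbot's `helpful` wrapper):

```python
import argparse

# Tri-state flag pattern from the diff: both flags share one dest and the
# default stays None, so "not specified" is distinguishable from yes/no.
parser = argparse.ArgumentParser()
parser.add_argument("--delete-after-revoke", dest="delete_after_revoke",
                    action="store_true", default=None)
parser.add_argument("--no-delete-after-revoke", dest="delete_after_revoke",
                    action="store_false", default=None)
```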
Add -gptq-preload for 4-bit offloading | diff --git a/modules/GPTQ_loader.py b/modules/GPTQ_loader.py
index 7045a0986c..67899547b5 100644
--- a/modules/GPTQ_loader.py
+++ b/modules/GPTQ_loader.py
@@ -9,6 +9,7 @@
sys.path.insert(0, str(Path("repositories/GPTQ-for-LLaMa")))
import llama
+import llama_inference_offload
import opt
@@ -24,7 +25,10 @@ def load_quantized(model_name):
model_type = shared.args.gptq_model_type.lower()
if model_type == 'llama':
- load_quant = llama.load_quant
+ if not shared.args.gptq_pre_layer:
+ load_quant = llama.load_quant
+ else:
+ load_quant = llama_inference_offload.load_quant
elif model_type == 'opt':
load_quant = opt.load_quant
else:
@@ -53,24 +57,26 @@ def load_quantized(model_name):
print(f"Could not find {pt_model}, exiting...")
exit()
- model = load_quant(str(path_to_model), str(pt_path), shared.args.gptq_bits)
-
- # Multiple GPUs or GPU+CPU
- if shared.args.gpu_memory:
- memory_map = list(map(lambda x : x.strip(), shared.args.gpu_memory))
- max_cpu_memory = shared.args.cpu_memory.strip() if shared.args.cpu_memory is not None else '99GiB'
- max_memory = {}
- for i in range(len(memory_map)):
- max_memory[i] = f'{memory_map[i]}GiB' if not re.match('.*ib$', memory_map[i].lower()) else memory_map[i]
- max_memory['cpu'] = max_cpu_memory
+ # Using qwopqwop200's offload
+ if shared.args.gptq_pre_layer:
+ model = load_quant(str(path_to_model), str(pt_path), shared.args.gptq_bits, shared.args.gptq_pre_layer)
+ else:
+ model = load_quant(str(path_to_model), str(pt_path), shared.args.gptq_bits)
- device_map = accelerate.infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["LlamaDecoderLayer"])
- print("Using the following device map for the 4-bit model:", device_map)
- # https://huggingface.co/docs/accelerate/package_reference/big_modeling#accelerate.dispatch_model
- model = accelerate.dispatch_model(model, device_map=device_map, offload_buffers=True)
+ # Using accelerate offload (doesn't work properly)
+ if shared.args.gpu_memory:
+ memory_map = list(map(lambda x : x.strip(), shared.args.gpu_memory))
+ max_cpu_memory = shared.args.cpu_memory.strip() if shared.args.cpu_memory is not None else '99GiB'
+ max_memory = {}
+ for i in range(len(memory_map)):
+ max_memory[i] = f'{memory_map[i]}GiB' if not re.match('.*ib$', memory_map[i].lower()) else memory_map[i]
+ max_memory['cpu'] = max_cpu_memory
- # Single GPU
- elif not shared.args.cpu:
- model = model.to(torch.device('cuda:0'))
+ device_map = accelerate.infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["LlamaDecoderLayer"])
+ print("Using the following device map for the 4-bit model:", device_map)
+ # https://huggingface.co/docs/accelerate/package_reference/big_modeling#accelerate.dispatch_model
+ model = accelerate.dispatch_model(model, device_map=device_map, offload_buffers=True)
+ elif not shared.args.cpu:
+ model = model.to(torch.device('cuda:0'))
return model
diff --git a/modules/shared.py b/modules/shared.py
index 8cae1079e9..8d591f4f49 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -79,8 +79,9 @@ def str2bool(v):
parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text.')
parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision.')
parser.add_argument('--load-in-4bit', action='store_true', help='DEPRECATED: use --gptq-bits 4 instead.')
-parser.add_argument('--gptq-bits', type=int, default=0, help='Load a pre-quantized model with specified precision. 2, 3, 4 and 8bit are supported. Currently only works with LLaMA and OPT.')
-parser.add_argument('--gptq-model-type', type=str, help='Model type of pre-quantized model. Currently only LLaMa and OPT are supported.')
+parser.add_argument('--gptq-bits', type=int, default=0, help='GPTQ: Load a pre-quantized model with specified precision. 2, 3, 4 and 8bit are supported. Currently only works with LLaMA and OPT.')
+parser.add_argument('--gptq-model-type', type=str, help='GPTQ: Model type of pre-quantized model. Currently only LLaMa and OPT are supported.')
+parser.add_argument('--gptq-pre-layer', type=int, default=0, help='GPTQ: The number of layers to preload.')
parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.')
parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.')
parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.')
| This now works on a 4GB card:
```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
``` | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/460 | 2023-03-20T19:30:36Z | 2023-03-20T19:30:56Z | 2023-03-20T19:30:56Z | 2023-03-22T18:42:49Z | 1,280 | oobabooga/text-generation-webui | 26,752 |
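The loader dispatch added in `GPTQ_loader.py` reduces to a small branch on `--gptq-pre-layer`; a sketch (module names are taken from the diff, the helper function itself is illustrative):

```python
# Illustrative dispatch: a nonzero --gptq-pre-layer selects qwopqwop200's
# offloading loader for LLaMA; OPT has no offloading path in this diff.
def pick_loader(model_type, pre_layer):
    model_type = model_type.lower()
    if model_type == "llama":
        if pre_layer:
            return "llama_inference_offload.load_quant"
        return "llama.load_quant"
    if model_type == "opt":
        return "opt.load_quant"
    raise ValueError(f"Unknown pre-quantized model type: {model_type}")
```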
Update sessions.py | diff --git a/requests/sessions.py b/requests/sessions.py
index d73d700fa6..ebfdecedc3 100644
--- a/requests/sessions.py
+++ b/requests/sessions.py
@@ -201,10 +201,7 @@ def resolve_redirects(self, resp, req, stream=False, timeout=None,
prepared_request.body = None
headers = prepared_request.headers
- try:
- del headers['Cookie']
- except KeyError:
- pass
+ headers.pop('Cookie', None)
# Extract any cookies sent on the response to the cookiejar
# in the new request. Because we've mutated our copied prepared
@@ -271,7 +268,6 @@ def rebuild_auth(self, prepared_request, response):
if new_auth is not None:
prepared_request.prepare_auth(new_auth)
- return
def rebuild_proxies(self, prepared_request, proxies):
"""This method re-evaluates the proxy configuration by considering the
| -Made removing a key-value pair more modular
-Removed an unneeded return statement | https://api.github.com/repos/psf/requests/pulls/5105 | 2019-05-29T18:49:53Z | 2019-08-20T04:14:37Z | 2019-08-20T04:14:37Z | 2021-08-31T00:07:05Z | 218 | psf/requests | 32,075
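The idiom swap in this diff — `dict.pop` with a default instead of `try`/`del`/`except KeyError` — behaves identically and stays safe on repeat calls; a quick sketch (a plain dict standing in for requests' `CaseInsensitiveDict`):

```python
headers = {"Cookie": "sessionid=abc", "Accept": "*/*"}

# Old idiom: guard del against a missing key.
try:
    del headers["Cookie"]
except KeyError:
    pass

# New idiom: pop with a default never raises, even when already removed.
headers.pop("Cookie", None)
headers.pop("Cookie", None)  # still a safe no-op
```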
Hitbtc3 'has' | diff --git a/js/hitbtc3.js b/js/hitbtc3.js
index b01c2fd198fa..98710cf32b0d 100644
--- a/js/hitbtc3.js
+++ b/js/hitbtc3.js
@@ -18,7 +18,7 @@ module.exports = class hitbtc3 extends Exchange {
'has': {
'CORS': false,
'spot': true,
- 'margin': undefined, // has but not fully implemented
+ 'margin': true,
'swap': true,
'future': false,
'option': undefined,
@@ -29,18 +29,22 @@ module.exports = class hitbtc3 extends Exchange {
'createReduceOnlyOrder': true,
'editOrder': true,
'fetchBalance': true,
+ 'fetchBorrowRate': undefined,
+ 'fetchBorrowRateHistory': undefined,
+ 'fetchBorrowRateHistories': undefined,
+ 'fetchBorrowRates': undefined,
'fetchClosedOrders': true,
'fetchCurrencies': true,
'fetchDepositAddress': true,
'fetchDeposits': true,
- 'fetchFundingHistory': false,
+ 'fetchFundingHistory': undefined,
'fetchFundingRate': true,
'fetchFundingRateHistory': true,
'fetchFundingRates': false,
'fetchIndexOHLCV': true,
'fetchLeverage': true,
- 'fetchLeverageTiers': false,
- 'fetchMarketLeverageTiers': false,
+ 'fetchLeverageTiers': undefined,
+ 'fetchMarketLeverageTiers': undefined,
'fetchMarkets': true,
'fetchMarkOHLCV': true,
'fetchMyTrades': true,
| Set the remaining margin methods to undefined and set margin to true. | https://api.github.com/repos/ccxt/ccxt/pulls/12611 | 2022-04-02T23:08:03Z | 2022-04-03T00:24:19Z | 2022-04-03T00:24:19Z | 2022-04-03T00:24:20Z | 397 | ccxt/ccxt | 13,883 |
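ccxt's `has` map is tri-state — `true` (supported), `false` (not supported), `undefined` (unknown or not yet implemented) — which is why this diff distinguishes the three values rather than collapsing them to booleans. A hypothetical capability check (the `supports` helper is not part of ccxt):

```javascript
// Tri-state capability flags, mirroring the values set in this diff.
const has = { 'margin': true, 'fetchFundingRates': false, 'fetchBorrowRate': undefined };

// Only an explicit true counts as "supported"; false and undefined do not.
function supports (has, method) {
    return has[method] === true;
}
```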
Added a test to ensure empty sessions are saved. | diff --git a/tests/sessions_tests/tests.py b/tests/sessions_tests/tests.py
index 713ff9e55319c..76e625aa76fbc 100644
--- a/tests/sessions_tests/tests.py
+++ b/tests/sessions_tests/tests.py
@@ -695,6 +695,45 @@ def test_flush_empty_without_session_cookie_doesnt_set_cookie(self):
# The session is accessed so "Vary: Cookie" should be set.
self.assertEqual(response['Vary'], 'Cookie')
+ def test_empty_session_saved(self):
+ """"
+ If a session is emptied of data but still has a key, it should still
+ be updated.
+ """
+ request = RequestFactory().get('/')
+ response = HttpResponse('Session test')
+ middleware = SessionMiddleware()
+
+ # Set a session key and some data.
+ middleware.process_request(request)
+ request.session['foo'] = 'bar'
+ # Handle the response through the middleware.
+ response = middleware.process_response(request, response)
+ self.assertEqual(tuple(request.session.items()), (('foo', 'bar'),))
+ # A cookie should be set, along with Vary: Cookie.
+ self.assertIn(
+ 'Set-Cookie: sessionid=%s' % request.session.session_key,
+ str(response.cookies)
+ )
+ self.assertEqual(response['Vary'], 'Cookie')
+
+ # Empty the session data.
+ del request.session['foo']
+ # Handle the response through the middleware.
+ response = HttpResponse('Session test')
+ response = middleware.process_response(request, response)
+ self.assertEqual(dict(request.session.values()), {})
+ session = Session.objects.get(session_key=request.session.session_key)
+ self.assertEqual(session.get_decoded(), {})
+ # While the session is empty, it hasn't been flushed so a cookie should
+ # still be set, along with Vary: Cookie.
+ self.assertGreater(len(request.session.session_key), 8)
+ self.assertIn(
+ 'Set-Cookie: sessionid=%s' % request.session.session_key,
+ str(response.cookies)
+ )
+ self.assertEqual(response['Vary'], 'Cookie')
+
# Don't need DB flushing for these tests, so can use unittest.TestCase as base class
class CookieSessionTests(SessionTestsMixin, unittest.TestCase):
This test fails if the `not bool(self._session_key)` condition in `SessionBase.is_empty()` is removed (currently no other test catches that).
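The condition under test can be reproduced with a minimal stand-in for `SessionBase.is_empty()` (simplified; Django's real method is similar in spirit but also guards against missing attributes):

```python
# Simplified sketch of the is_empty() logic the test protects: a session
# that still has a key is never "empty", so emptying its data must still
# result in a save rather than a skipped write.
class SessionStub:
    def __init__(self, session_key=None, data=None):
        self._session_key = session_key
        self._session_cache = data if data is not None else {}

    def is_empty(self):
        return not bool(self._session_key) and not self._session_cache
```

Dropping the `not bool(self._session_key)` term would make a keyed-but-empty session report empty, which is exactly the regression the new test exercises.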
| https://api.github.com/repos/django/django/pulls/5153 | 2015-08-18T21:58:55Z | 2015-08-20T14:45:08Z | 2015-08-20T14:45:08Z | 2015-08-20T15:35:46Z | 506 | django/django | 51,179 |
Add TaskReply types for task hooks | diff --git a/website/src/components/Tasks/CreateTask.tsx b/website/src/components/Tasks/CreateTask.tsx
index 8d406fd755..448ad28784 100644
--- a/website/src/components/Tasks/CreateTask.tsx
+++ b/website/src/components/Tasks/CreateTask.tsx
@@ -8,6 +8,7 @@ import { TaskSurveyProps } from "src/components/Tasks/Task";
import { TaskHeader } from "src/components/Tasks/TaskHeader";
import { getTypeSafei18nKey } from "src/lib/i18n";
import { TaskType } from "src/types/Task";
+import { CreateTaskReply } from "src/types/TaskResponses";
import { CreateTaskType } from "src/types/Tasks";
export const CreateTask = ({
@@ -17,7 +18,7 @@ export const CreateTask = ({
isDisabled,
onReplyChanged,
onValidityChanged,
-}: TaskSurveyProps<CreateTaskType, { text: string }>) => {
+}: TaskSurveyProps<CreateTaskType, CreateTaskReply>) => {
const { t, i18n } = useTranslation(["tasks", "common"]);
const cardColor = useColorModeValue("gray.50", "gray.800");
const titleColor = useColorModeValue("gray.800", "gray.300");
diff --git a/website/src/components/Tasks/EvaluateTask.tsx b/website/src/components/Tasks/EvaluateTask.tsx
index e2f07749af..2f24e6f1f4 100644
--- a/website/src/components/Tasks/EvaluateTask.tsx
+++ b/website/src/components/Tasks/EvaluateTask.tsx
@@ -6,6 +6,7 @@ import { SurveyCard } from "src/components/Survey/SurveyCard";
import { TaskSurveyProps } from "src/components/Tasks/Task";
import { TaskHeader } from "src/components/Tasks/TaskHeader";
import { TaskType } from "src/types/Task";
+import { EvaluateTaskReply } from "src/types/TaskResponses";
import { RankTaskType } from "src/types/Tasks";
export const EvaluateTask = ({
@@ -15,7 +16,7 @@ export const EvaluateTask = ({
isDisabled,
onReplyChanged,
onValidityChanged,
-}: TaskSurveyProps<RankTaskType, { ranking: number[] }>) => {
+}: TaskSurveyProps<RankTaskType, EvaluateTaskReply>) => {
const cardColor = useColorModeValue("gray.50", "gray.800");
const [ranking, setRanking] = useState<number[]>(null);
diff --git a/website/src/components/Tasks/LabelTask/LabelTask.tsx b/website/src/components/Tasks/LabelTask/LabelTask.tsx
index 3336975792..9c211d624a 100644
--- a/website/src/components/Tasks/LabelTask/LabelTask.tsx
+++ b/website/src/components/Tasks/LabelTask/LabelTask.tsx
@@ -6,6 +6,7 @@ import { MessageTable } from "src/components/Messages/MessageTable";
import { TwoColumnsWithCards } from "src/components/Survey/TwoColumnsWithCards";
import { TaskSurveyProps } from "src/components/Tasks/Task";
import { TaskHeader } from "src/components/Tasks/TaskHeader";
+import { LabelTaskReply } from "src/types/TaskResponses";
import { LabelTaskType } from "src/types/Tasks";
const isRequired = (labelName: string, requiredLabels?: string[]) => {
@@ -18,7 +19,7 @@ export const LabelTask = ({
isEditable,
onReplyChanged,
onValidityChanged,
-}: TaskSurveyProps<LabelTaskType, { text: string; labels: Record<string, number>; message_id: string }>) => {
+}: TaskSurveyProps<LabelTaskType, LabelTaskReply>) => {
const { t } = useTranslation("labelling");
const [values, setValues] = useState<number[]>(new Array(task.labels.length).fill(null));
const [userInputMade, setUserInputMade] = useBoolean(false);
diff --git a/website/src/components/Tasks/Task/Task.tsx b/website/src/components/Tasks/Task/Task.tsx
index c10ab597f3..5416063643 100644
--- a/website/src/components/Tasks/Task/Task.tsx
+++ b/website/src/components/Tasks/Task/Task.tsx
@@ -53,12 +53,12 @@ interface UpdateValidity {
replyValidity: TaskReplyValidity;
}
-export interface TaskSurveyProps<TaskType extends BaseTask, T> {
+export interface TaskSurveyProps<TaskType extends BaseTask, ReplyContent> {
task: TaskType;
taskType: TaskInfo;
isEditable: boolean;
isDisabled?: boolean;
- onReplyChanged: (content: T) => void;
+ onReplyChanged: (content: ReplyContent) => void;
onValidityChanged: (validity: TaskReplyValidity) => void;
}
diff --git a/website/src/hooks/tasks/useCreateReply.ts b/website/src/hooks/tasks/useCreateReply.ts
index 23bc041d39..9bb0c48709 100644
--- a/website/src/hooks/tasks/useCreateReply.ts
+++ b/website/src/hooks/tasks/useCreateReply.ts
@@ -1,8 +1,11 @@
+import { useGenericTaskAPI } from "src/hooks/tasks/useGenericTaskAPI";
import { TaskType } from "src/types/Task";
+import { CreateTaskReply } from "src/types/TaskResponses";
import { CreateAssistantReplyTask, CreateInitialPromptTask, CreatePrompterReplyTask } from "src/types/Tasks";
-import { useGenericTaskAPI } from "./useGenericTaskAPI";
-
-export const useCreateAssistantReply = () => useGenericTaskAPI<CreateAssistantReplyTask>(TaskType.assistant_reply);
-export const useCreatePrompterReply = () => useGenericTaskAPI<CreatePrompterReplyTask>(TaskType.prompter_reply);
-export const useCreateInitialPrompt = () => useGenericTaskAPI<CreateInitialPromptTask>(TaskType.initial_prompt);
+export const useCreateAssistantReply = () =>
+ useGenericTaskAPI<CreateAssistantReplyTask, CreateTaskReply>(TaskType.assistant_reply);
+export const useCreatePrompterReply = () =>
+ useGenericTaskAPI<CreatePrompterReplyTask, CreateTaskReply>(TaskType.prompter_reply);
+export const useCreateInitialPrompt = () =>
+ useGenericTaskAPI<CreateInitialPromptTask, CreateTaskReply>(TaskType.initial_prompt);
diff --git a/website/src/hooks/tasks/useEvaluateReplies.ts b/website/src/hooks/tasks/useEvaluateReplies.ts
new file mode 100644
index 0000000000..a2e19a1c74
--- /dev/null
+++ b/website/src/hooks/tasks/useEvaluateReplies.ts
@@ -0,0 +1,13 @@
+import { useGenericTaskAPI } from "src/hooks/tasks/useGenericTaskAPI";
+import { TaskType } from "src/types/Task";
+import { EvaluateTaskReply } from "src/types/TaskResponses";
+import { RankAssistantRepliesTask, RankInitialPromptsTask, RankPrompterRepliesTask } from "src/types/Tasks";
+
+export const useRankAssistantRepliesTask = () =>
+ useGenericTaskAPI<RankAssistantRepliesTask, EvaluateTaskReply>(TaskType.rank_assistant_replies);
+
+export const useRankPrompterRepliesTask = () =>
+ useGenericTaskAPI<RankPrompterRepliesTask, EvaluateTaskReply>(TaskType.rank_prompter_replies);
+
+export const useRankInitialPromptsTask = () =>
+ useGenericTaskAPI<RankInitialPromptsTask, EvaluateTaskReply>(TaskType.rank_initial_prompts);
diff --git a/website/src/hooks/tasks/useGenericTaskAPI.tsx b/website/src/hooks/tasks/useGenericTaskAPI.tsx
index 7a7d651b93..e7630bdc1f 100644
--- a/website/src/hooks/tasks/useGenericTaskAPI.tsx
+++ b/website/src/hooks/tasks/useGenericTaskAPI.tsx
@@ -3,11 +3,11 @@ import { TaskInfos } from "src/components/Tasks/TaskTypes";
import { get, post } from "src/lib/api";
import { TaskApiHook } from "src/types/Hooks";
import { BaseTask, ServerTaskResponse, TaskResponse, TaskType as TaskTypeEnum } from "src/types/Task";
+import { AllTaskReplies } from "src/types/TaskResponses";
import useSWRImmutable from "swr/immutable";
import useSWRMutation from "swr/mutation";
-// TODO: provide type for the content reply, this will be much harder since the replies vary vastly
-export const useGenericTaskAPI = <TaskType extends BaseTask, ResponseContent = any>(
+export const useGenericTaskAPI = <TaskType extends BaseTask, ResponseContent = AllTaskReplies>(
taskType: TaskTypeEnum
): TaskApiHook<TaskType, ResponseContent> => {
const [response, setResponse] = useState<TaskResponse<TaskType>>({ taskAvailability: "AWAITING_INITIAL" });
@@ -71,5 +71,5 @@ export const useGenericTaskAPI = <TaskType extends BaseTask, ResponseContent = a
[response, sendTaskContent]
);
- return { response, isLoading, rejectTask, completeTask, skipTask };
+ return { response, isLoading, rejectTask, completeTask };
};
diff --git a/website/src/hooks/tasks/useLabelingTask.ts b/website/src/hooks/tasks/useLabelingTask.ts
index 3782c7a340..23d305d000 100644
--- a/website/src/hooks/tasks/useLabelingTask.ts
+++ b/website/src/hooks/tasks/useLabelingTask.ts
@@ -1,9 +1,11 @@
+import { useGenericTaskAPI } from "src/hooks/tasks/useGenericTaskAPI";
import { TaskType } from "src/types/Task";
+import { LabelTaskReply } from "src/types/TaskResponses";
import { LabelAssistantReplyTask, LabelInitialPromptTask, LabelPrompterReplyTask } from "src/types/Tasks";
-import { useGenericTaskAPI } from "./useGenericTaskAPI";
-
export const useLabelAssistantReplyTask = () =>
- useGenericTaskAPI<LabelAssistantReplyTask>(TaskType.label_assistant_reply);
-export const useLabelInitialPromptTask = () => useGenericTaskAPI<LabelInitialPromptTask>(TaskType.label_initial_prompt);
-export const useLabelPrompterReplyTask = () => useGenericTaskAPI<LabelPrompterReplyTask>(TaskType.label_prompter_reply);
+ useGenericTaskAPI<LabelAssistantReplyTask, LabelTaskReply>(TaskType.label_assistant_reply);
+export const useLabelInitialPromptTask = () =>
+ useGenericTaskAPI<LabelInitialPromptTask, LabelTaskReply>(TaskType.label_initial_prompt);
+export const useLabelPrompterReplyTask = () =>
+ useGenericTaskAPI<LabelPrompterReplyTask, LabelTaskReply>(TaskType.label_prompter_reply);
diff --git a/website/src/hooks/tasks/useRankReplies.ts b/website/src/hooks/tasks/useRankReplies.ts
deleted file mode 100644
index d4accda00b..0000000000
--- a/website/src/hooks/tasks/useRankReplies.ts
+++ /dev/null
@@ -1,12 +0,0 @@
-import { TaskType } from "src/types/Task";
-import { RankAssistantRepliesTask, RankInitialPromptsTask, RankPrompterRepliesTask } from "src/types/Tasks";
-
-import { useGenericTaskAPI } from "./useGenericTaskAPI";
-
-export const useRankAssistantRepliesTask = () =>
- useGenericTaskAPI<RankAssistantRepliesTask>(TaskType.rank_assistant_replies);
-
-export const useRankPrompterRepliesTask = () =>
- useGenericTaskAPI<RankPrompterRepliesTask>(TaskType.rank_prompter_replies);
-
-export const useRankInitialPromptsTask = () => useGenericTaskAPI<RankInitialPromptsTask>(TaskType.rank_initial_prompts);
diff --git a/website/src/lib/constants.ts b/website/src/lib/constants.ts
index a260fa28d6..9efe3d0337 100644
--- a/website/src/lib/constants.ts
+++ b/website/src/lib/constants.ts
@@ -3,17 +3,17 @@ import {
useCreateInitialPrompt,
useCreatePrompterReply,
} from "src/hooks/tasks/useCreateReply";
+import {
+ useRankAssistantRepliesTask,
+ useRankInitialPromptsTask,
+ useRankPrompterRepliesTask,
+} from "src/hooks/tasks/useEvaluateReplies";
import { useGenericTaskAPI } from "src/hooks/tasks/useGenericTaskAPI";
import {
useLabelAssistantReplyTask,
useLabelInitialPromptTask,
useLabelPrompterReplyTask,
} from "src/hooks/tasks/useLabelingTask";
-import {
- useRankAssistantRepliesTask,
- useRankInitialPromptsTask,
- useRankPrompterRepliesTask,
-} from "src/hooks/tasks/useRankReplies";
import { TaskApiHooks } from "src/types/Hooks";
import { TaskType } from "src/types/Task";
diff --git a/website/src/types/Hooks.ts b/website/src/types/Hooks.ts
index acdd0b589d..5225c00904 100644
--- a/website/src/types/Hooks.ts
+++ b/website/src/types/Hooks.ts
@@ -4,7 +4,6 @@ export type TaskApiHook<Task extends BaseTask, ResponseContent> = {
response: TaskResponse<Task>;
isLoading: boolean;
completeTask: (interaction: ResponseContent) => Promise<void>;
- skipTask: () => Promise<void>;
rejectTask: (reason: string) => Promise<void>;
};
diff --git a/website/src/types/TaskResponses.ts b/website/src/types/TaskResponses.ts
new file mode 100644
index 0000000000..a6a49e896d
--- /dev/null
+++ b/website/src/types/TaskResponses.ts
@@ -0,0 +1,15 @@
+export interface CreateTaskReply {
+ text: string;
+}
+
+export interface EvaluateTaskReply {
+ ranking: number[];
+}
+
+export interface LabelTaskReply {
+ text: string;
+ labels: Record<string, number>;
+ message_id: string;
+}
+
+export type AllTaskReplies = CreateTaskReply | EvaluateTaskReply | LabelTaskReply;
| With this PR, every part of the task process is now typed.
We still do a lot of casting, which is not ideal, but it works for now. | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1078 | 2023-02-03T07:31:29Z | 2023-02-03T08:33:25Z | 2023-02-03T08:33:25Z | 2023-02-03T08:33:26Z | 3,188 | LAION-AI/Open-Assistant | 37,539
[client] Fix ray client object ref releasing in wrong context. | diff --git a/python/ray/_raylet.pxd b/python/ray/_raylet.pxd
index 7715063ea8ec0..29cdc56118704 100644
--- a/python/ray/_raylet.pxd
+++ b/python/ray/_raylet.pxd
@@ -93,6 +93,7 @@ cdef class ObjectRef(BaseID):
cdef class ClientObjectRef(ObjectRef):
cdef object _mutex
cdef object _id_future
+ cdef object _client_worker_ref
cdef _set_id(self, id)
cdef inline _wait_for_id(self, timeout=None)
@@ -107,6 +108,7 @@ cdef class ActorID(BaseID):
cdef class ClientActorRef(ActorID):
cdef object _mutex
cdef object _id_future
+ cdef object _client_worker_ref
cdef _set_id(self, id)
cdef inline _wait_for_id(self, timeout=None)
diff --git a/python/ray/includes/object_ref.pxi b/python/ray/includes/object_ref.pxi
index 860f2fe2877c6..95073b86e54e8 100644
--- a/python/ray/includes/object_ref.pxi
+++ b/python/ray/includes/object_ref.pxi
@@ -5,6 +5,7 @@ import concurrent.futures
import functools
import logging
import threading
+import weakref
from typing import Callable, Any, Union
import ray
@@ -154,6 +155,9 @@ cdef class ClientObjectRef(ObjectRef):
def __init__(self, id: Union[bytes, concurrent.futures.Future]):
self.in_core_worker = False
self._mutex = threading.Lock()
+ # client worker might be cleaned up before __dealloc__ is called.
+ # so use a weakref to check whether it's alive or not.
+ self._client_worker_ref = weakref.ref(client.ray.get_context().client_worker)
if isinstance(id, bytes):
self._set_id(id)
elif isinstance(id, concurrent.futures.Future):
@@ -162,14 +166,8 @@ cdef class ClientObjectRef(ObjectRef):
raise TypeError("Unexpected type for id {}".format(id))
def __dealloc__(self):
- if client is None or client.ray is None:
- # Similar issue as mentioned in ObjectRef.__dealloc__ above. The
- # client package or client.ray object might be set
- # to None when the script exits. Should be safe to skip
- # call_release in this case, since the client should have already
- # disconnected at this point.
- return
- if client.ray.is_connected():
+ client_worker = self._client_worker_ref()
+ if client_worker is not None and client_worker.is_connected():
try:
self._wait_for_id()
# cython would suppress this exception as well, but it tries to
@@ -182,7 +180,7 @@ cdef class ClientObjectRef(ObjectRef):
"a method on the actor reference before its destructor "
"is run.")
if not self.data.IsNil():
- client.ray.call_release(self.id)
+ client_worker.call_release(self.id)
cdef CObjectID native(self):
self._wait_for_id()
@@ -251,13 +249,16 @@ cdef class ClientObjectRef(ObjectRef):
data = loads_from_server(resp.get.data)
py_callback(data)
-
- client.ray._register_callback(self, deserialize_obj)
+ client_worker = self._client_worker_ref()
+ assert client_worker is not None
+ client_worker.register_callback(self, deserialize_obj)
cdef _set_id(self, id):
check_id(id)
self.data = CObjectID.FromBinary(<c_string>id)
- client.ray.call_retain(id)
+ client_worker = self._client_worker_ref()
+ assert client_worker is not None
+ client_worker.call_retain(id)
cdef inline _wait_for_id(self, timeout=None):
if self._id_future:
diff --git a/python/ray/includes/unique_ids.pxi b/python/ray/includes/unique_ids.pxi
index 93822f570d090..e86462922f05e 100644
--- a/python/ray/includes/unique_ids.pxi
+++ b/python/ray/includes/unique_ids.pxi
@@ -325,6 +325,9 @@ cdef class ClientActorRef(ActorID):
def __init__(self, id: Union[bytes, concurrent.futures.Future]):
self._mutex = threading.Lock()
+ # client worker might be cleaned up before __dealloc__ is called.
+ # so use a weakref to check whether it's alive or not.
+ self._client_worker_ref = weakref.ref(client.ray.get_context().client_worker)
if isinstance(id, bytes):
self._set_id(id)
elif isinstance(id, Future):
@@ -333,13 +336,8 @@ cdef class ClientActorRef(ActorID):
raise TypeError("Unexpected type for id {}".format(id))
def __dealloc__(self):
- if client is None or client.ray is None:
- # The client package or client.ray object might be set
- # to None when the script exits. Should be safe to skip
- # call_release in this case, since the client should have already
- # disconnected at this point.
- return
- if client.ray.is_connected():
+ client_worker = self._client_worker_ref()
+ if client_worker is not None and client_worker.is_connected():
try:
self._wait_for_id()
# cython would suppress this exception as well, but it tries to
@@ -352,7 +350,7 @@ cdef class ClientActorRef(ActorID):
"a method on the actor reference before its destructor "
"is run.")
if not self.data.IsNil():
- client.ray.call_release(self.id)
+ client_worker.call_release(self.id)
def binary(self):
self._wait_for_id()
@@ -381,7 +379,9 @@ cdef class ClientActorRef(ActorID):
cdef _set_id(self, id):
check_id(id, CActorID.Size())
self.data = CActorID.FromBinary(<c_string>id)
- client.ray.call_retain(id)
+ client_worker = self._client_worker_ref()
+ assert client_worker is not None
+ client_worker.call_retain(id)
cdef _wait_for_id(self, timeout=None):
if self._id_future:
@@ -390,7 +390,6 @@ cdef class ClientActorRef(ActorID):
self._set_id(self._id_future.result(timeout=timeout))
self._id_future = None
-
cdef class FunctionID(UniqueID):
def __init__(self, id):
diff --git a/python/ray/tests/test_client.py b/python/ray/tests/test_client.py
index d02edeed51dcc..07ba7b9a3d1c9 100644
--- a/python/ray/tests/test_client.py
+++ b/python/ray/tests/test_client.py
@@ -758,5 +758,31 @@ def f():
ray.get(f.remote())
+@pytest.mark.parametrize(
+ "call_ray_start",
+ ["ray start --head --ray-client-server-port 25553 --num-cpus 1"],
+ indirect=True,
+)
+def test_object_ref_release(call_ray_start):
+ """This is to test the release of an object in previous session is
+ handled correctly.
+ """
+ import ray
+
+ ray.init("ray://localhost:25553")
+
+ a = ray.put("Hello")
+
+ ray.shutdown()
+ ray.init("ray://localhost:25553")
+ # a is release in the session which doesn't create it.
+ del a
+
+ with disable_client_hook():
+ # Make sure a doesn't generate a release request.
+ ref_cnt = ray.util.client.ray.get_context().client_worker.reference_count
+ assert all(v > 0 for v in ref_cnt.values())
+
+
if __name__ == "__main__":
sys.exit(pytest.main(["-v", __file__]))
diff --git a/python/ray/util/client/__init__.py b/python/ray/util/client/__init__.py
index 2f55135e6e084..174ddf2ded2f1 100644
--- a/python/ray/util/client/__init__.py
+++ b/python/ray/util/client/__init__.py
@@ -138,8 +138,10 @@ def _check_versions(self, conn_info: Dict[str, Any], ignore_version: bool) -> No
def disconnect(self):
"""Disconnect the Ray Client."""
+
if self.client_worker is not None:
self.client_worker.close()
+ self.api.worker = None
self.client_worker = None
# remote can be called outside of a connection, which is why it
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
`ClientObjectRef` is fetched and deallocated asynchronously, which can lead to it being released in an incorrect context.
This PR associates each object ref with the client context that created it and calls that context directly to avoid the issue.
Three potential cases fixed:
- ray inited with two contexts
- allocate an object ref in context 1
- switch to context 2
- delete the ref allocated in context 1
- context 1 will not release the object since the release is sent to context 2
and (tested in test case)
- ray inited with one context
- allocate an object ref
- shutdown and re-inited
- connected successfully
- delete the object ref
- it will lead to the reference counting having negative numbers
and (broken when GCS HA is enabled)
- ray inited with one context
- allocate an object ref
- shutdown and re-inited
- before connecting successfully, the object ref is deallocated.
- the ray client server will fail to start because it now receives `release` as its first message
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/22025 | 2022-02-01T04:30:26Z | 2022-02-02T06:42:40Z | 2022-02-02T06:42:40Z | 2022-02-04T04:48:35Z | 1,970 | ray-project/ray | 19,905 |
Bump certbot's acme depenency to 0.22.1 | diff --git a/setup.py b/setup.py
index 3667a6976a4..ba521ed2aed 100644
--- a/setup.py
+++ b/setup.py
@@ -34,9 +34,7 @@ def read_file(filename, encoding='utf8'):
# specified here to avoid masking the more specific request requirements in
# acme. See https://github.com/pypa/pip/issues/988 for more info.
install_requires = [
- # Remember to update local-oldest-requirements.txt when changing the
- # minimum acme version.
- 'acme>0.21.1',
+ 'acme>=0.22.1',
# We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but
# saying so here causes a runtime error against our temporary fork of 0.9.3
# in which we added 2.6 support (see #2243), so we relax the requirement.
| Fixes #5821. Also removed the comment about `local-oldest-requirements.txt` since it doesn't seem to be relevant anymore. | https://api.github.com/repos/certbot/certbot/pulls/5826 | 2018-04-04T20:04:34Z | 2018-04-11T21:36:54Z | 2018-04-11T21:36:53Z | 2018-04-11T21:36:58Z | 215 | certbot/certbot | 3,509 |
2.2.1 | diff --git a/docs/news.rst b/docs/news.rst
index 80d130e4a71..fd507c3cc55 100644
--- a/docs/news.rst
+++ b/docs/news.rst
@@ -3,6 +3,16 @@
Release notes
=============
+.. _release-2.2.1:
+
+Scrapy 2.2.1 (2020-07-17)
+-------------------------
+
+* The :command:`startproject` command no longer makes unintended changes to
+ the permissions of files in the destination folder, such as removing
+ execution permissions (:issue:`4662`, :issue:`4666`)
+
+
.. _release-2.2.0:
Scrapy 2.2.0 (2020-06-24)
diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py
index 8522819592d..eccc2a3e162 100644
--- a/scrapy/commands/startproject.py
+++ b/scrapy/commands/startproject.py
@@ -1,10 +1,10 @@
import re
import os
-import stat
import string
from importlib import import_module
from os.path import join, exists, abspath
from shutil import ignore_patterns, move, copy2, copystat
+from stat import S_IWUSR as OWNER_WRITE_PERMISSION
import scrapy
from scrapy.commands import ScrapyCommand
@@ -20,7 +20,12 @@
('${project_name}', 'middlewares.py.tmpl'),
)
-IGNORE = ignore_patterns('*.pyc', '.svn')
+IGNORE = ignore_patterns('*.pyc', '__pycache__', '.svn')
+
+
+def _make_writable(path):
+ current_permissions = os.stat(path).st_mode
+ os.chmod(path, current_permissions | OWNER_WRITE_PERMISSION)
class Command(ScrapyCommand):
@@ -78,30 +83,10 @@ def _copytree(self, src, dst):
self._copytree(srcname, dstname)
else:
copy2(srcname, dstname)
- copystat(src, dst)
- self._set_rw_permissions(dst)
+ _make_writable(dstname)
- def _set_rw_permissions(self, path):
- """
- Sets permissions of a directory tree to +rw and +rwx for folders.
- This is necessary if the start template files come without write
- permissions.
- """
- mode_rw = (stat.S_IRUSR
- | stat.S_IWUSR
- | stat.S_IRGRP
- | stat.S_IROTH)
-
- mode_x = (stat.S_IXUSR
- | stat.S_IXGRP
- | stat.S_IXOTH)
-
- os.chmod(path, mode_rw | mode_x)
- for root, dirs, files in os.walk(path):
- for dir in dirs:
- os.chmod(join(root, dir), mode_rw | mode_x)
- for file in files:
- os.chmod(join(root, file), mode_rw)
+ copystat(src, dst)
+ _make_writable(dst)
def run(self, args, opts):
if len(args) not in (1, 2):
diff --git a/tests/test_commands.py b/tests/test_commands.py
index 24a341759b7..00223782473 100644
--- a/tests/test_commands.py
+++ b/tests/test_commands.py
@@ -2,11 +2,14 @@
import json
import optparse
import os
+from stat import S_IWRITE as ANYONE_WRITE_PERMISSION
import subprocess
import sys
import tempfile
from contextlib import contextmanager
+from itertools import chain
from os.path import exists, join, abspath
+from pathlib import Path
from shutil import rmtree, copytree
from tempfile import mkdtemp
from threading import Timer
@@ -15,6 +18,7 @@
import scrapy
from scrapy.commands import ScrapyCommand
+from scrapy.commands.startproject import IGNORE
from scrapy.settings import Settings
from scrapy.utils.python import to_unicode
from scrapy.utils.test import get_testenv
@@ -119,6 +123,29 @@ def test_startproject_with_project_dir(self):
self.assertEqual(2, self.call('startproject', self.project_name, project_dir, 'another_params'))
+def get_permissions_dict(path, renamings=None, ignore=None):
+ renamings = renamings or tuple()
+ permissions_dict = {
+ '.': os.stat(path).st_mode,
+ }
+ for root, dirs, files in os.walk(path):
+ nodes = list(chain(dirs, files))
+ if ignore:
+ ignored_names = ignore(root, nodes)
+ nodes = [node for node in nodes if node not in ignored_names]
+ for node in nodes:
+ absolute_path = os.path.join(root, node)
+ relative_path = os.path.relpath(absolute_path, path)
+ for search_string, replacement in renamings:
+ relative_path = relative_path.replace(
+ search_string,
+ replacement
+ )
+ permissions = os.stat(absolute_path).st_mode
+ permissions_dict[relative_path] = permissions
+ return permissions_dict
+
+
class StartprojectTemplatesTest(ProjectTest):
def setUp(self):
@@ -139,6 +166,149 @@ def test_startproject_template_override(self):
self.assertIn(self.tmpl_proj, out)
assert exists(join(self.proj_path, 'root_template'))
+ def test_startproject_permissions_from_writable(self):
+ """Check that generated files have the right permissions when the
+ template folder has the same permissions as in the project, i.e.
+ everything is writable."""
+ scrapy_path = scrapy.__path__[0]
+ project_template = os.path.join(scrapy_path, 'templates', 'project')
+ project_name = 'startproject1'
+ renamings = (
+ ('module', project_name),
+ ('.tmpl', ''),
+ )
+ expected_permissions = get_permissions_dict(
+ project_template,
+ renamings,
+ IGNORE,
+ )
+
+ destination = mkdtemp()
+ process = subprocess.Popen(
+ (
+ sys.executable,
+ '-m',
+ 'scrapy.cmdline',
+ 'startproject',
+ project_name,
+ ),
+ cwd=destination,
+ env=self.env,
+ )
+ process.wait()
+
+ project_dir = os.path.join(destination, project_name)
+ actual_permissions = get_permissions_dict(project_dir)
+
+ self.assertEqual(actual_permissions, expected_permissions)
+
+ def test_startproject_permissions_from_read_only(self):
+ """Check that generated files have the right permissions when the
+ template folder has been made read-only, which is something that some
+ systems do.
+
+ See https://github.com/scrapy/scrapy/pull/4604
+ """
+ scrapy_path = scrapy.__path__[0]
+ templates_dir = os.path.join(scrapy_path, 'templates')
+ project_template = os.path.join(templates_dir, 'project')
+ project_name = 'startproject2'
+ renamings = (
+ ('module', project_name),
+ ('.tmpl', ''),
+ )
+ expected_permissions = get_permissions_dict(
+ project_template,
+ renamings,
+ IGNORE,
+ )
+
+ def _make_read_only(path):
+ current_permissions = os.stat(path).st_mode
+ os.chmod(path, current_permissions & ~ANYONE_WRITE_PERMISSION)
+
+ read_only_templates_dir = str(Path(mkdtemp()) / 'templates')
+ copytree(templates_dir, read_only_templates_dir)
+
+ for root, dirs, files in os.walk(read_only_templates_dir):
+ for node in chain(dirs, files):
+ _make_read_only(os.path.join(root, node))
+
+ destination = mkdtemp()
+ process = subprocess.Popen(
+ (
+ sys.executable,
+ '-m',
+ 'scrapy.cmdline',
+ 'startproject',
+ project_name,
+ '--set',
+ 'TEMPLATES_DIR={}'.format(read_only_templates_dir),
+ ),
+ cwd=destination,
+ env=self.env,
+ )
+ process.wait()
+
+ project_dir = os.path.join(destination, project_name)
+ actual_permissions = get_permissions_dict(project_dir)
+
+ self.assertEqual(actual_permissions, expected_permissions)
+
+ def test_startproject_permissions_unchanged_in_destination(self):
+ """Check that pre-existing folders and files in the destination folder
+ do not see their permissions modified."""
+ scrapy_path = scrapy.__path__[0]
+ project_template = os.path.join(scrapy_path, 'templates', 'project')
+ project_name = 'startproject3'
+ renamings = (
+ ('module', project_name),
+ ('.tmpl', ''),
+ )
+ expected_permissions = get_permissions_dict(
+ project_template,
+ renamings,
+ IGNORE,
+ )
+
+ destination = mkdtemp()
+ project_dir = os.path.join(destination, project_name)
+
+ existing_nodes = {
+ oct(permissions)[2:] + extension: permissions
+ for extension in ('', '.d')
+ for permissions in (
+ 0o444, 0o555, 0o644, 0o666, 0o755, 0o777,
+ )
+ }
+ os.mkdir(project_dir)
+ project_dir_path = Path(project_dir)
+ for node, permissions in existing_nodes.items():
+ path = project_dir_path / node
+ if node.endswith('.d'):
+ path.mkdir(mode=permissions)
+ else:
+ path.touch(mode=permissions)
+ expected_permissions[node] = path.stat().st_mode
+
+ process = subprocess.Popen(
+ (
+ sys.executable,
+ '-m',
+ 'scrapy.cmdline',
+ 'startproject',
+ project_name,
+ '.',
+ ),
+ cwd=project_dir,
+ env=self.env,
+ )
+ process.wait()
+
+ actual_permissions = get_permissions_dict(project_dir)
+
+ self.assertEqual(actual_permissions, expected_permissions)
+
class CommandTest(ProjectTest):
| https://api.github.com/repos/scrapy/scrapy/pulls/4685 | 2020-07-17T10:16:15Z | 2020-07-17T10:41:28Z | 2020-07-17T10:41:28Z | 2020-07-17T10:41:28Z | 2,266 | scrapy/scrapy | 35,131 | |
Refactor persistence logic; use single file for persistence | diff --git a/localstack/config.py b/localstack/config.py
index 35337cb614c2a..ffaf6cbaf1c66 100644
--- a/localstack/config.py
+++ b/localstack/config.py
@@ -192,9 +192,10 @@ def in_docker():
if LOCALSTACK_HOSTNAME == HOSTNAME:
DOCKER_HOST_FROM_CONTAINER = 'host.docker.internal'
# update LOCALSTACK_HOSTNAME if host.docker.internal is available
- if is_in_docker and LOCALSTACK_HOSTNAME == DOCKER_BRIDGE_IP:
+ if is_in_docker:
DOCKER_HOST_FROM_CONTAINER = socket.gethostbyname('host.docker.internal')
- LOCALSTACK_HOSTNAME = DOCKER_HOST_FROM_CONTAINER
+ if LOCALSTACK_HOSTNAME == DOCKER_BRIDGE_IP:
+ LOCALSTACK_HOSTNAME = DOCKER_HOST_FROM_CONTAINER
except socket.error:
pass
diff --git a/localstack/services/es/es_api.py b/localstack/services/es/es_api.py
index e610ac01af54b..1384bbbb93754 100644
--- a/localstack/services/es/es_api.py
+++ b/localstack/services/es/es_api.py
@@ -2,6 +2,7 @@
import time
from random import randint
from flask import Flask, jsonify, request, make_response
+from localstack.utils import persistence
from localstack.services import generic_proxy
from localstack.utils.aws import aws_stack
from localstack.constants import TEST_AWS_ACCOUNT_ID
@@ -161,7 +162,7 @@ def get_domain_status(domain_name, deleted=False):
def start_elasticsearch_instance():
# Note: keep imports here to avoid circular dependencies
from localstack.services.es import es_starter
- from localstack.services.infra import check_infra, restore_persisted_data, Plugin
+ from localstack.services.infra import check_infra, Plugin
api_name = 'elasticsearch'
plugin = Plugin(api_name, start=es_starter.start_elasticsearch, check=es_starter.check_elasticsearch)
@@ -172,7 +173,7 @@ def start_elasticsearch_instance():
# ensure that all infra components are up and running
check_infra(apis=apis, additional_checks=[es_starter.check_elasticsearch])
# restore persisted data
- restore_persisted_data(apis=apis)
+ persistence.restore_persisted_data(apis=apis)
return t1
diff --git a/localstack/services/generic_proxy.py b/localstack/services/generic_proxy.py
index 6c4f3ee94a95a..314db3a65d735 100644
--- a/localstack/services/generic_proxy.py
+++ b/localstack/services/generic_proxy.py
@@ -289,7 +289,7 @@ def is_full_url(url):
kwargs = {
'method': method,
'path': path,
- 'data': data,
+ 'data': self.data_bytes,
'headers': forward_headers,
'response': response
}
diff --git a/localstack/services/infra.py b/localstack/services/infra.py
index 3f5fb52565d57..9475c8b9584d4 100644
--- a/localstack/services/infra.py
+++ b/localstack/services/infra.py
@@ -270,11 +270,6 @@ def get_service_status(service, port=None):
return status
-def restore_persisted_data(apis):
- for api in apis:
- persistence.restore_persisted_data(api)
-
-
def register_signal_handlers():
global SIGNAL_HANDLERS_SETUP
if SIGNAL_HANDLERS_SETUP:
@@ -464,7 +459,7 @@ def start_infra(asynchronous=False, apis=None):
# ensure that all infra components are up and running
check_infra(apis=apis)
# restore persisted data
- restore_persisted_data(apis=apis)
+ persistence.restore_persisted_data(apis=apis)
print('Ready.')
sys.stdout.flush()
if not asynchronous and thread:
diff --git a/localstack/services/s3/s3_listener.py b/localstack/services/s3/s3_listener.py
index 161c27c432e36..1ffa3f2d67115 100644
--- a/localstack/services/s3/s3_listener.py
+++ b/localstack/services/s3/s3_listener.py
@@ -802,9 +802,6 @@ def forward_request(self, method, path, data, headers):
if method == 'PUT' and not headers.get('content-type'):
headers['content-type'] = 'binary/octet-stream'
- # persist this API call to disk
- persistence.record('s3', method, path, data, headers)
-
# parse query params
query = parsed_path.query
path = parsed_path.path
@@ -893,6 +890,9 @@ def return_response(self, method, path, data, headers, response):
method = to_str(method)
bucket_name = get_bucket_name(path, headers)
+ # persist this API call to disk
+ persistence.record('s3', method, path, data, headers, response)
+
# No path-name based bucket name? Try host-based
hostname_parts = headers['host'].split('.')
if (not bucket_name or len(bucket_name) == 0) and len(hostname_parts) > 1:
diff --git a/localstack/utils/bootstrap.py b/localstack/utils/bootstrap.py
index fa52e910e9d90..eaa28a0e00296 100644
--- a/localstack/utils/bootstrap.py
+++ b/localstack/utils/bootstrap.py
@@ -425,7 +425,7 @@ def stop(self, quiet=False):
def run(cmd, print_error=True, asynchronous=False, stdin=False,
stderr=subprocess.STDOUT, outfile=None, env_vars=None, inherit_cwd=False,
inherit_env=True, tty=False):
- # don't use subprocess module inn Python 2 as it is not thread-safe
+ # don't use subprocess module in Python 2 as it is not thread-safe
# http://stackoverflow.com/questions/21194380/is-subprocess-popen-not-thread-safe
if six.PY2:
import subprocess32 as subprocess
diff --git a/localstack/utils/persistence.py b/localstack/utils/persistence.py
index 333b5ccc5aeb6..693deac53af25 100644
--- a/localstack/utils/persistence.py
+++ b/localstack/utils/persistence.py
@@ -8,7 +8,12 @@
from localstack.utils.aws import aws_stack
from localstack.utils.common import to_bytes, to_str
-API_FILE_PATTERN = '{data_dir}/{api}_api_calls.json'
+USE_SINGLE_DUMP_FILE = True
+
+if USE_SINGLE_DUMP_FILE:
+ API_FILE_PATTERN = '{data_dir}/recorded_api_calls.json'
+else:
+ API_FILE_PATTERN = '{data_dir}/{api}_api_calls.json'
# Stack with flags to indicate whether we are currently re-playing API calls.
# (We should not be re-playing and recording at the same time)
@@ -18,37 +23,46 @@
API_FILE_PATHS = {}
# set up logger
-LOGGER = logging.getLogger(__name__)
+LOG = logging.getLogger(__name__)
-def should_record(api, method, path, data, headers):
+def should_record(api, method, path, data, headers, response=None):
""" Decide whether or not a given API call should be recorded (persisted to disk) """
if api == 's3':
return method in ['PUT', 'POST', 'DELETE']
return False
-def record(api, method, path, data, headers):
+def record(api, method, path, data, headers, response=None):
""" Record a given API call to a persistent file on disk """
file_path = get_file_path(api)
- if CURRENTLY_REPLAYING or not file_path or not should_record(api, method, path, data, headers):
+ should_be_recorded = should_record(api, method, path, data, headers, response=response)
+ if CURRENTLY_REPLAYING or not file_path or not should_be_recorded:
return
entry = None
try:
if isinstance(data, dict):
data = json.dumps(data)
- if data or data in [u'', b'']:
- try:
- data = to_bytes(data)
- except Exception as e:
- LOGGER.warning('Unable to call to_bytes: %s' % e)
- data = to_str(base64.b64encode(data))
+
+ def get_recordable_data(data):
+ if data or data in [u'', b'']:
+ try:
+ data = to_bytes(data)
+ except Exception as e:
+ LOG.warning('Unable to call to_bytes: %s' % e)
+ data = to_str(base64.b64encode(data))
+ return data
+
+ data = get_recordable_data(data)
+ response_data = get_recordable_data('' if response is None else response.content)
+
entry = {
'a': api,
'm': method,
'p': path,
'd': data,
- 'h': dict(headers)
+ 'h': dict(headers),
+ 'rd': response_data
}
with open(file_path, 'a') as dumpfile:
dumpfile.write('%s\n' % json.dumps(entry))
@@ -56,15 +70,19 @@ def record(api, method, path, data, headers):
print('Error recording API call to persistent file: %s %s' % (e, traceback.format_exc()))
+def prepare_replay_data(command):
+ data = command['d']
+ data = data and base64.b64decode(data)
+ return data
+
+
def replay_command(command):
function = getattr(requests, command['m'].lower())
- data = command['d']
- if data:
- data = base64.b64decode(data)
+ data = prepare_replay_data(command)
endpoint = aws_stack.get_local_service_url(command['a'])
full_url = (endpoint[:-1] if endpoint.endswith('/') else endpoint) + command['p']
- result = function(full_url, data=data, headers=command['h'], verify=False)
- return result
+ response = function(full_url, data=data, headers=command['h'], verify=False)
+ return response
def replay(api):
@@ -83,11 +101,15 @@ def replay(api):
finally:
CURRENTLY_REPLAYING.pop(0)
if count:
- LOGGER.info('Restored %s API calls from persistent file: %s' % (count, file_path))
+ LOG.info('Restored %s API calls from persistent file: %s' % (count, file_path))
-def restore_persisted_data(api):
- return replay(api)
+def restore_persisted_data(apis):
+ if USE_SINGLE_DUMP_FILE:
+ return replay('_all_')
+ apis = apis if isinstance(apis, list) else [apis]
+ for api in apis:
+ replay(apis)
# ---------------
| * Refactor persistence logic
* Use single file for persistence
* Add `response` to list of arguments for persisting API calls | https://api.github.com/repos/localstack/localstack/pulls/2011 | 2020-02-02T20:36:38Z | 2020-02-03T08:58:30Z | 2020-02-03T08:58:30Z | 2021-04-21T07:10:57Z | 2,411 | localstack/localstack | 28,851 |
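The persistence scheme in this diff appends each recorded API call as one JSON line, base64-encoding binary request/response bodies so they round-trip safely. A minimal sketch of that record/replay cycle (function names and the entry keys follow the diff; the in-memory buffer is just for illustration):

```python
import base64
import io
import json

def record(entry_file, api, method, path, data, headers, response_data=b""):
    """Append one API call as a JSON line; binary bodies are base64-encoded."""
    entry = {
        "a": api,
        "m": method,
        "p": path,
        "d": base64.b64encode(data).decode(),
        "h": dict(headers),
        "rd": base64.b64encode(response_data).decode(),
    }
    entry_file.write(json.dumps(entry) + "\n")

def replay(entry_file):
    """Yield (method, path, data) tuples decoded from the dump file."""
    for line in entry_file:
        cmd = json.loads(line)
        yield cmd["m"], cmd["p"], base64.b64decode(cmd["d"])

buf = io.StringIO()
record(buf, "s3", "PUT", "/bucket/key", b"hello",
       {"content-type": "binary/octet-stream"})
buf.seek(0)
print(list(replay(buf)))  # [('PUT', '/bucket/key', b'hello')]
```

With `USE_SINGLE_DUMP_FILE` enabled, all services share one such file, so restoring state is a single pass that re-issues each decoded call against the local endpoint.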
Add Introduction to Data-Centric AI | diff --git a/courses.md b/courses.md
index e30d6a18..b02898b4 100644
--- a/courses.md
+++ b/courses.md
@@ -53,6 +53,7 @@ The following is a list of free or paid online courses on machine learning, stat
* [Deploying a Deep Learning Model on Web and Mobile Applications Using TensorFlow](https://www.manning.com/liveproject/deploying-a-deep-learning-model-on-web-and-mobile-applications-using-tensorflow) - $ Hands-on project
* [Complete Data Science and ML Course](https://www.scaler.com/data-science-course/) - $
* [ML Observability Fundamentals](https://arize.com/ml-observability-fundamentals/) - free
+* [Introduction to Data-Centric AI (MIT)](https://dcai.csail.mit.edu/) - free
* [Data science course with placement](https://brainalyst.in/data-science-course-placement-guarantee)
* [DATA VISUALIZATION COURSE](https://brainalyst.in/data-visualization-courses-online/)
* [DATA VISUALIZATION PYTHON COURSE](https://brainalyst.in/data-visualization-python/)
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/916 | 2023-02-22T16:46:01Z | 2023-03-03T14:12:44Z | 2023-03-03T14:12:44Z | 2023-03-03T14:12:44Z | 255 | josephmisiti/awesome-machine-learning | 52,095 | |
[doc] update document of zero with chunk. | diff --git a/docs/source/en/features/zero_with_chunk.md b/docs/source/en/features/zero_with_chunk.md
index d7a99f2fbbfd..d6f6f611a64c 100644
--- a/docs/source/en/features/zero_with_chunk.md
+++ b/docs/source/en/features/zero_with_chunk.md
@@ -3,7 +3,7 @@
Author: [Hongxiu Liu](https://github.com/ver217), [Jiarui Fang](https://github.com/feifeibear), [Zijian Ye](https://github.com/ZijianYY)
**Prerequisite:**
-- [Define Your Configuration](../basics/define_your_config.md)
+- [Train with booster](../basics/booster_api.md)
**Example Code**
@@ -97,6 +97,7 @@ For simplicity, we just use randomly generated data here.
First we only need to import `GPT2LMHeadModel` from `Huggingface transformers` to define our model, which does not require users to define or modify the model, so that users can use it more conveniently.
+Define a GPT model:
```python
class GPTLMModel(nn.Module):
@@ -182,34 +183,6 @@ def split_param_col_tp1d(param: ColoParameter, pg: ProcessGroup):
split_param_single_dim_tp1d(-1, param, pg)
```
-Define a model which uses Gemini + ZeRO DDP:
-
-```python
-def gemini_zero_dpp(model: torch.nn.Module, pg: ProcessGroup, placement_policy: str = "auto"):
- cai_version = colossalai.__version__
- if version.parse(cai_version) > version.parse("0.1.10"):
- from colossalai.nn.parallel import GeminiDDP
- model = GeminiDDP(model,
- device=get_current_device(),
- placement_policy=placement_policy,
- pin_memory=True,
- search_range_mb=32)
- elif version.parse(cai_version) <= version.parse("0.1.10") and version.parse(cai_version) >= version.parse("0.1.9"):
- from colossalai.gemini import ChunkManager, GeminiManager
- chunk_size = ChunkManager.search_chunk_size(model, 64 * 1024**2, 32)
- gemini_manager = GeminiManager(placement_policy, chunk_manager)
- chunk_manager = ChunkManager(chunk_size,
- pg,
- enable_distributed_storage=True,
- init_device=GeminiManager.get_default_device(placement_policy))
- model = ZeroDDP(model, gemini_manager)
- else:
- raise NotImplemented(f"CAI version {cai_version} is not supported")
- return model
-```
-
-As we pre-train GPT in this example, we just use a simple language model loss.
-
Write a function to get random inputs:
```python
@@ -219,9 +192,15 @@ def get_data(batch_size, seq_len, vocab_size):
return input_ids, attention_mask
```
-Finally, we can define our training loop:
+Finally, we define a model which uses Gemini + ZeRO DDP and define our training loop, As we pre-train GPT in this example, we just use a simple language model loss:
```python
+from torch.optim import Adam
+
+from colossalai.booster import Booster
+from colossalai.zero import ColoInitContext
+from colossalai.booster.plugin import GeminiPlugin
+
def main():
args = parse_args()
BATCH_SIZE = 8
@@ -232,22 +211,23 @@ def main():
# build criterion
criterion = GPTLMLoss()
+ optimizer = Adam(model.parameters(), lr=0.001)
torch.manual_seed(123)
default_pg = ProcessGroup(tp_degree=args.tp_degree)
- default_dist_spec = ShardSpec([-1], [args.tp_degree]) if args.shardinit else None
+ default_dist_spec = ShardSpec([-1], [args.tp_degree])
# build GPT model
with ColoInitContext(device='cpu', default_dist_spec=default_dist_spec, default_pg=default_pg):
model = gpt2_medium(checkpoint=True)
pg = default_pg
# Tensor Parallelism (TP)
tensor_parallelize(model, pg)
+
# Gemini + ZeRO DP, Note it must be used after TP
- model = gemini_zero_dpp(model, pg, args.placement)
- # build optimizer
- optimizer = GeminiAdamOptimizer(model, lr=1e-3, initial_scale=2**5)
- numel = sum([p.numel() for p in model.parameters()])
- get_tflops_func = partial(get_tflops, numel, BATCH_SIZE, SEQ_LEN)
+ plugin = GeminiPlugin(placement_policy='cuda', max_norm=1.0, initial_scale=2**5)
+ booster = Booster(plugin=plugin)
+ model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
+
torch.cuda.synchronize()
model.train()
for n in range(NUM_STEPS):
@@ -256,10 +236,12 @@ def main():
optimizer.zero_grad()
outputs = model(input_ids, attn_mask)
loss = criterion(outputs, input_ids)
- optimizer.backward(loss)
+ booster.backward(loss, optimizer)
optimizer.step()
torch.cuda.synchronize()
```
> ⚠️ Note: If you want to use the Gemini module, please do not use the [Gradient Accumulation](../features/gradient_accumulation.md) we mentioned before。
The complete example can be found on [Train GPT with Colossal-AI](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/gpt).
+
+<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 zero_with_chunk.py -->
diff --git a/docs/source/zh-Hans/features/gradient_accumulation_with_booster.md b/docs/source/zh-Hans/features/gradient_accumulation_with_booster.md
index ab86f34f2dec..a8422060f0ea 100644
--- a/docs/source/zh-Hans/features/gradient_accumulation_with_booster.md
+++ b/docs/source/zh-Hans/features/gradient_accumulation_with_booster.md
@@ -1,4 +1,4 @@
-# 梯度累积 (最新版本)
+# 梯度累积 (新版本)
作者: [Mingyan Jiang](https://github.com/jiangmingyan)
diff --git a/docs/source/zh-Hans/features/mixed_precision_training_with_booster.md b/docs/source/zh-Hans/features/mixed_precision_training_with_booster.md
index 6954556a8e9a..187aef1a6c4a 100644
--- a/docs/source/zh-Hans/features/mixed_precision_training_with_booster.md
+++ b/docs/source/zh-Hans/features/mixed_precision_training_with_booster.md
@@ -1,4 +1,4 @@
-# 自动混合精度训练 (最新版本)
+# 自动混合精度训练 (新版本)
作者: [Mingyan Jiang](https://github.com/jiangmingyan)
diff --git a/docs/source/zh-Hans/features/zero_with_chunk.md b/docs/source/zh-Hans/features/zero_with_chunk.md
index ba57ba4e8e61..9030464ddf9a 100644
--- a/docs/source/zh-Hans/features/zero_with_chunk.md
+++ b/docs/source/zh-Hans/features/zero_with_chunk.md
@@ -4,7 +4,7 @@
**前置教程:**
-- [定义配置文件](../basics/define_your_config.md)
+- [booster使用](../basics/booster_api.md)
**示例代码**
@@ -97,6 +97,8 @@ optimizer.step()
首先我们只需要引入`Huggingface transformers` 的 `GPT2LMHeadModel`来定义我们的模型,不需要用户进行模型的定义与修改,方便用户使用。
+定义GPT模型:
+
```python
class GPTLMModel(nn.Module):
@@ -182,34 +184,6 @@ def split_param_col_tp1d(param: ColoParameter, pg: ProcessGroup):
split_param_single_dim_tp1d(-1, param, pg)
```
-定义一个使用 Gemini + ZeRO DDP 的模型:
-
-```python
-def gemini_zero_dpp(model: torch.nn.Module, pg: ProcessGroup, placement_policy: str = "auto"):
- cai_version = colossalai.__version__
- if version.parse(cai_version) > version.parse("0.1.10"):
- from colossalai.nn.parallel import GeminiDDP
- model = GeminiDDP(model,
- device=get_current_device(),
- placement_policy=placement_policy,
- pin_memory=True,
- search_range_mb=32)
- elif version.parse(cai_version) <= version.parse("0.1.10") and version.parse(cai_version) >= version.parse("0.1.9"):
- from colossalai.gemini import ChunkManager, GeminiManager
- chunk_size = ChunkManager.search_chunk_size(model, 64 * 1024**2, 32)
- gemini_manager = GeminiManager(placement_policy, chunk_manager)
- chunk_manager = ChunkManager(chunk_size,
- pg,
- enable_distributed_storage=True,
- init_device=GeminiManager.get_default_device(placement_policy))
- model = ZeroDDP(model, gemini_manager)
- else:
- raise NotImplemented(f"CAI version {cai_version} is not supported")
- return model
-```
-
-由于我们在这个例子中对GPT进行预训练,因此只使用了一个简单的语言模型损失函数。
-
写一个获得随机输入的函数:
```python
@@ -219,9 +193,16 @@ def get_data(batch_size, seq_len, vocab_size):
return input_ids, attention_mask
```
-最后,我们可以定义我们的训练循环:
+
+最后,使用booster注入 Gemini + ZeRO DDP 特性, 并定义训练循环。由于我们在这个例子中对GPT进行预训练,因此只使用了一个简单的语言模型损失函数:
```python
+from torch.optim import Adam
+
+from colossalai.booster import Booster
+from colossalai.zero import ColoInitContext
+from colossalai.booster.plugin import GeminiPlugin
+
def main():
args = parse_args()
BATCH_SIZE = 8
@@ -232,22 +213,23 @@ def main():
# build criterion
criterion = GPTLMLoss()
+ optimizer = Adam(model.parameters(), lr=0.001)
torch.manual_seed(123)
default_pg = ProcessGroup(tp_degree=args.tp_degree)
- default_dist_spec = ShardSpec([-1], [args.tp_degree]) if args.shardinit else None
+ default_dist_spec = ShardSpec([-1], [args.tp_degree])
# build GPT model
with ColoInitContext(device='cpu', default_dist_spec=default_dist_spec, default_pg=default_pg):
model = gpt2_medium(checkpoint=True)
pg = default_pg
# Tensor Parallelism (TP)
tensor_parallelize(model, pg)
+
# Gemini + ZeRO DP, Note it must be used after TP
- model = gemini_zero_dpp(model, pg, args.placement)
- # build optimizer
- optimizer = GeminiAdamOptimizer(model, lr=1e-3, initial_scale=2**5)
- numel = sum([p.numel() for p in model.parameters()])
- get_tflops_func = partial(get_tflops, numel, BATCH_SIZE, SEQ_LEN)
+ plugin = GeminiPlugin(placement_policy='cuda', max_norm=1.0, initial_scale=2**5)
+ booster = Booster(plugin=plugin)
+ model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
+
torch.cuda.synchronize()
model.train()
for n in range(NUM_STEPS):
@@ -256,10 +238,12 @@ def main():
optimizer.zero_grad()
outputs = model(input_ids, attn_mask)
loss = criterion(outputs, input_ids)
- optimizer.backward(loss)
+ booster.backward(loss, optimizer)
optimizer.step()
torch.cuda.synchronize()
```
> ⚠️ 注意:如果你使用Gemini模块的话,请不要使用我们之前提到过的[梯度累加](../features/gradient_accumulation.md)。
完整的例子代码可以在 [Train GPT with Colossal-AI](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/gpt). 获得。
+
+<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 zero_with_chunk.py -->
diff --git a/docs/source/zh-Hans/get_started/installation.md b/docs/source/zh-Hans/get_started/installation.md
index a32627db6f00..a6c88672b907 100755
--- a/docs/source/zh-Hans/get_started/installation.md
+++ b/docs/source/zh-Hans/get_started/installation.md
@@ -47,7 +47,7 @@ CUDA_EXT=1 pip install .
pip install .
```
-如果您在使用CUDA 10.2,您仍然可以从源码安装ColossalA。但是您需要手动下载cub库并将其复制到相应的目录。
+如果您在使用CUDA 10.2,您仍然可以从源码安装ColossalAI。但是您需要手动下载cub库并将其复制到相应的目录。
```bash
# clone the repository
| ## 📌 Checklist before creating the PR
- [x] I have created an issue for this PR for traceability
- [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [x] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
#3730
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
fix the title of the mixed precision doc and update the zero_with_chunk document to the booster API
## 💥 Checklist before requesting a review
- [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [x] I have performed a self-review of my code
- [x] I have added thorough tests.
- [x] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [x] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3855 | 2023-05-26T03:23:34Z | 2023-05-30T10:41:56Z | 2023-05-30T10:41:56Z | 2023-05-30T10:41:57Z | 3,009 | hpcaitech/ColossalAI | 11,219 |
tf backend supports bool variable | diff --git a/keras/backend/tensorflow_backend.py b/keras/backend/tensorflow_backend.py
index fad9b20ea30..eff4a3c122b 100644
--- a/keras/backend/tensorflow_backend.py
+++ b/keras/backend/tensorflow_backend.py
@@ -180,32 +180,6 @@ def set_session(session):
# VARIABLE MANIPULATION
-def _convert_string_dtype(dtype):
- """Get the type from a string.
-
- # Arguments
- dtype: A string representation of a type.
-
- # Returns
- The type requested.
-
- # Raises
- ValueError: if `dtype` is not supported.
- """
- mapping = {'float16': tf.float16,
- 'float32': tf.float32,
- 'float64': tf.float64,
- 'int16': tf.int16,
- 'int32': tf.int32,
- 'int64': tf.int64,
- 'uint8': tf.int8,
- 'uint16': tf.uint16}
-
- if dtype not in mapping:
- raise ValueError('Unsupported dtype:', dtype)
- return mapping[dtype]
-
-
def _to_tensor(x, dtype):
"""Convert the input `x` to a tensor of type `dtype`.
@@ -313,7 +287,7 @@ def variable(value, dtype=None, name=None, constraint=None):
v._keras_shape = sparse_coo.shape
v._uses_learning_phase = False
return v
- v = tf.Variable(value, dtype=_convert_string_dtype(dtype), name=name)
+ v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
if isinstance(value, np.ndarray):
v._keras_shape = value.shape
elif hasattr(value, 'get_shape'):
@@ -621,7 +595,7 @@ def zeros(shape, dtype=None, name=None):
"""
if dtype is None:
dtype = floatx()
- tf_dtype = _convert_string_dtype(dtype)
+ tf_dtype = tf.as_dtype(dtype)
return variable(tf.constant_initializer(0., dtype=tf_dtype)(shape),
dtype, name)
@@ -649,7 +623,7 @@ def ones(shape, dtype=None, name=None):
"""
if dtype is None:
dtype = floatx()
- tf_dtype = _convert_string_dtype(dtype)
+ tf_dtype = tf.as_dtype(dtype)
return variable(tf.constant_initializer(1., dtype=tf_dtype)(shape),
dtype, name)
@@ -769,7 +743,7 @@ def random_uniform_variable(shape, low, high, dtype=None,
"""
if dtype is None:
dtype = floatx()
- tf_dtype = _convert_string_dtype(dtype)
+ tf_dtype = tf.as_dtype(dtype)
if seed is None:
# ensure that randomness is conditioned by the Numpy RNG
seed = np.random.randint(10e8)
@@ -806,7 +780,7 @@ def random_normal_variable(shape, mean, scale, dtype=None,
"""
if dtype is None:
dtype = floatx()
- tf_dtype = _convert_string_dtype(dtype)
+ tf_dtype = tf.as_dtype(dtype)
if seed is None:
# ensure that randomness is conditioned by the Numpy RNG
seed = np.random.randint(10e8)
@@ -2154,7 +2128,7 @@ def set_value(x, value):
(of the same shape).
"""
value = np.asarray(value, dtype=dtype(x))
- tf_dtype = _convert_string_dtype(x.dtype.name.split('_')[0])
+ tf_dtype = tf.as_dtype(x.dtype.name.split('_')[0])
if hasattr(x, '_assign_placeholder'):
assign_placeholder = x._assign_placeholder
assign_op = x._assign_op
@@ -2178,7 +2152,7 @@ def batch_set_value(tuples):
feed_dict = {}
for x, value in tuples:
value = np.asarray(value, dtype=dtype(x))
- tf_dtype = _convert_string_dtype(x.dtype.name.split('_')[0])
+ tf_dtype = tf.as_dtype(x.dtype.name.split('_')[0])
if hasattr(x, '_assign_placeholder'):
assign_placeholder = x._assign_placeholder
assign_op = x._assign_op
diff --git a/tests/keras/backend/backend_test.py b/tests/keras/backend/backend_test.py
index ad4e94dca98..1104bbe5e5e 100644
--- a/tests/keras/backend/backend_test.py
+++ b/tests/keras/backend/backend_test.py
@@ -217,11 +217,6 @@ def test_random_variables(self):
mean=0., scale=1.,
shape_or_val=False, assert_value_equality=False)
- # not supported dtype
- for dtype in ['int16', 'int32', 'int64', 'uint8', 'uint16', 'double']:
- with pytest.raises(ValueError):
- ztf = KTF.random_normal_variable((2, 3), 0, 1, dtype=dtype)
-
@pytest.mark.parametrize('k', [KTF], ids=['TensorFlow'])
def test_batch_dot_shape(self, k):
x_batch = k.ones(shape=(32, 20))
@@ -1397,6 +1392,14 @@ def test_set_floatx(self):
# Restore old value
set_floatx(old_floatx)
+ def test_variable_support_bool_dtype(self):
+ # Github issue: 7819
+ if K.backend() == 'tensorflow':
+ assert K.dtype(K.variable(1, dtype='int16')) == 'int16'
+ assert K.dtype(K.variable(False, dtype='bool')) == 'bool'
+ with pytest.raises(TypeError):
+ K.variable('', dtype='unsupported')
+
if __name__ == '__main__':
pytest.main([__file__])
| use `tf.as_dtype` to convert a string to a `DType`; see #7819
### How to test
+ [x] add unit test.
+ [x] pass all tests. | https://api.github.com/repos/keras-team/keras/pulls/7832 | 2017-09-07T04:19:08Z | 2017-09-09T00:19:48Z | 2017-09-09T00:19:48Z | 2017-09-09T00:23:54Z | 1,281 | keras-team/keras | 46,952 |
timeframe 4h for bitfinex | diff --git a/js/bitfinex2.js b/js/bitfinex2.js
index ee47e8582eb9..fab2ca2fa94c 100644
--- a/js/bitfinex2.js
+++ b/js/bitfinex2.js
@@ -53,6 +53,7 @@ module.exports = class bitfinex2 extends bitfinex {
'30m': '30m',
'1h': '1h',
'3h': '3h',
+ '4h': '4h',
'6h': '6h',
'12h': '12h',
'1d': '1D',
| added: timeframe 4h for bitfinex | https://api.github.com/repos/ccxt/ccxt/pulls/8470 | 2021-02-16T16:31:09Z | 2021-02-16T17:04:22Z | 2021-02-16T17:04:22Z | 2021-02-16T17:04:22Z | 144 | ccxt/ccxt | 13,599 |
DOC Fix typo on comment about t-SNE | diff --git a/doc/modules/manifold.rst b/doc/modules/manifold.rst
index 72e8c7485df44..e6e8e842fa7fc 100644
--- a/doc/modules/manifold.rst
+++ b/doc/modules/manifold.rst
@@ -602,7 +602,7 @@ be well separated by non linear methods that focus on the local structure (e.g.
an SVM with a Gaussian RBF kernel). However, failing to visualize well
separated homogeneously labeled groups with t-SNE in 2D does not necessarily
imply that the data cannot be correctly classified by a supervised model. It
-might be the case that 2 dimensions are not low enough to accurately represents
+might be the case that 2 dimensions are not high enough to accurately represent
the internal structure of the data.
| This is just a small correction in the text.
An alternative fix could be this:
> It might be the case that 2 dimensions are too low to accurately represent the internal structure of the data. | https://api.github.com/repos/scikit-learn/scikit-learn/pulls/20009 | 2021-04-29T20:29:24Z | 2021-04-29T23:33:33Z | 2021-04-29T23:33:33Z | 2021-04-29T23:39:38Z | 179 | scikit-learn/scikit-learn | 46,516 |
Defines the RenewableCert API | diff --git a/certbot/certbot/_internal/storage.py b/certbot/certbot/_internal/storage.py
index bb36f462adc..72eb3de85e0 100644
--- a/certbot/certbot/_internal/storage.py
+++ b/certbot/certbot/_internal/storage.py
@@ -17,6 +17,7 @@
from certbot import crypto_util
from certbot._internal import error_handler
from certbot import errors
+from certbot import interfaces
from certbot import util
from certbot.compat import os
from certbot.compat import filesystem
@@ -376,7 +377,7 @@ def delete_files(config, certname):
logger.debug("Unable to remove %s", archive_path)
-class RenewableCert(object):
+class RenewableCert(interfaces.RenewableCert):
"""Renewable certificate.
Represents a lineage of certificates that is under the management of
@@ -423,7 +424,7 @@ def __init__(self, config_filename, cli_config, update_symlinks=False):
"""
self.cli_config = cli_config
- self.lineagename = lineagename_for_filename(config_filename)
+ self._lineagename = lineagename_for_filename(config_filename)
# self.configuration should be used to read parameters that
# may have been chosen based on default values from the
@@ -483,6 +484,15 @@ def fullchain_path(self):
"""Duck type for self.fullchain"""
return self.fullchain
+ @property
+ def lineagename(self):
+ """Name given to the certificate lineage.
+
+ :rtype: str
+
+ """
+ return self._lineagename
+
@property
def target_expiry(self):
"""The current target certificate's expiration datetime
@@ -858,21 +868,15 @@ def update_all_links_to(self, version):
for _, link in previous_links:
os.unlink(link)
- def names(self, version=None):
+ def names(self):
"""What are the subject names of this certificate?
- (If no version is specified, use the current version.)
-
- :param int version: the desired version number
:returns: the subject names
:rtype: `list` of `str`
:raises .CertStorageError: if could not find cert file.
"""
- if version is None:
- target = self.current_target("cert")
- else:
- target = self.version("cert", version)
+ target = self.current_target("cert")
if target is None:
raise errors.CertStorageError("could not find cert file")
with open(target) as f:
diff --git a/certbot/certbot/crypto_util.py b/certbot/certbot/crypto_util.py
index 12291af382b..5c375cc5568 100644
--- a/certbot/certbot/crypto_util.py
+++ b/certbot/certbot/crypto_util.py
@@ -213,26 +213,28 @@ def verify_renewable_cert(renewable_cert):
2. That fullchain matches cert and chain when concatenated.
3. Check that the private key matches the certificate.
- :param `.storage.RenewableCert` renewable_cert: cert to verify
+ :param renewable_cert: cert to verify
+ :type renewable_cert: certbot.interfaces.RenewableCert
:raises errors.Error: If verification fails.
"""
verify_renewable_cert_sig(renewable_cert)
verify_fullchain(renewable_cert)
- verify_cert_matches_priv_key(renewable_cert.cert, renewable_cert.privkey)
+ verify_cert_matches_priv_key(renewable_cert.cert_path, renewable_cert.key_path)
def verify_renewable_cert_sig(renewable_cert):
- """Verifies the signature of a `.storage.RenewableCert` object.
+ """Verifies the signature of a RenewableCert object.
- :param `.storage.RenewableCert` renewable_cert: cert to verify
+ :param renewable_cert: cert to verify
+ :type renewable_cert: certbot.interfaces.RenewableCert
:raises errors.Error: If signature verification fails.
"""
try:
- with open(renewable_cert.chain, 'rb') as chain_file: # type: IO[bytes]
+ with open(renewable_cert.chain_path, 'rb') as chain_file: # type: IO[bytes]
chain = x509.load_pem_x509_certificate(chain_file.read(), default_backend())
- with open(renewable_cert.cert, 'rb') as cert_file: # type: IO[bytes]
+ with open(renewable_cert.cert_path, 'rb') as cert_file: # type: IO[bytes]
cert = x509.load_pem_x509_certificate(cert_file.read(), default_backend())
pk = chain.public_key()
with warnings.catch_warnings():
@@ -240,7 +242,7 @@ def verify_renewable_cert_sig(renewable_cert):
cert.signature_hash_algorithm)
except (IOError, ValueError, InvalidSignature) as e:
error_str = "verifying the signature of the cert located at {0} has failed. \
- Details: {1}".format(renewable_cert.cert, e)
+ Details: {1}".format(renewable_cert.cert_path, e)
logger.exception(error_str)
raise errors.Error(error_str)
@@ -301,16 +303,17 @@ def verify_cert_matches_priv_key(cert_path, key_path):
def verify_fullchain(renewable_cert):
""" Verifies that fullchain is indeed cert concatenated with chain.
- :param `.storage.RenewableCert` renewable_cert: cert to verify
+ :param renewable_cert: cert to verify
+ :type renewable_cert: certbot.interfaces.RenewableCert
:raises errors.Error: If cert and chain do not combine to fullchain.
"""
try:
- with open(renewable_cert.chain) as chain_file: # type: IO[str]
+ with open(renewable_cert.chain_path) as chain_file: # type: IO[str]
chain = chain_file.read()
- with open(renewable_cert.cert) as cert_file: # type: IO[str]
+ with open(renewable_cert.cert_path) as cert_file: # type: IO[str]
cert = cert_file.read()
- with open(renewable_cert.fullchain) as fullchain_file: # type: IO[str]
+ with open(renewable_cert.fullchain_path) as fullchain_file: # type: IO[str]
fullchain = fullchain_file.read()
if (cert + chain) != fullchain:
error_str = "fullchain does not match cert + chain for {0}!"
diff --git a/certbot/certbot/interfaces.py b/certbot/certbot/interfaces.py
index edf71e63f47..cf993a55bc7 100644
--- a/certbot/certbot/interfaces.py
+++ b/certbot/certbot/interfaces.py
@@ -532,6 +532,62 @@ def print_messages(self):
"""Prints messages to the user and clears the message queue."""
+@six.add_metaclass(abc.ABCMeta)
+class RenewableCert(object):
+ """Interface to a certificate lineage."""
+
+ @abc.abstractproperty
+ def cert_path(self):
+ """Path to the certificate file.
+
+ :rtype: str
+
+ """
+
+ @abc.abstractproperty
+ def key_path(self):
+ """Path to the private key file.
+
+ :rtype: str
+
+ """
+
+ @abc.abstractproperty
+ def chain_path(self):
+ """Path to the certificate chain file.
+
+ :rtype: str
+
+ """
+
+ @abc.abstractproperty
+ def fullchain_path(self):
+ """Path to the full chain file.
+
+ The full chain is the certificate file plus the chain file.
+
+ :rtype: str
+
+ """
+
+ @abc.abstractproperty
+ def lineagename(self):
+ """Name given to the certificate lineage.
+
+ :rtype: str
+
+ """
+
+ @abc.abstractmethod
+ def names(self):
+ """What are the subject names of this certificate?
+
+ :returns: the subject names
+ :rtype: `list` of `str`
+ :raises .CertStorageError: if could not find cert file.
+
+ """
+
# Updater interfaces
#
# When "certbot renew" is run, Certbot will iterate over each lineage and check
@@ -570,7 +626,7 @@ def generic_updates(self, lineage, *args, **kwargs):
This method is called once for each lineage.
:param lineage: Certificate lineage object
- :type lineage: storage.RenewableCert
+ :type lineage: RenewableCert
"""
@@ -599,6 +655,6 @@ def renew_deploy(self, lineage, *args, **kwargs):
This method is called once for each lineage renewed
:param lineage: Certificate lineage object
- :type lineage: storage.RenewableCert
+ :type lineage: RenewableCert
"""
diff --git a/certbot/certbot/plugins/enhancements.py b/certbot/certbot/plugins/enhancements.py
index d917b0ea408..44638e91dd4 100644
--- a/certbot/certbot/plugins/enhancements.py
+++ b/certbot/certbot/plugins/enhancements.py
@@ -62,7 +62,7 @@ def enable(lineage, domains, installer, config):
Run enable method for each requested enhancement that is supported.
:param lineage: Certificate lineage object
- :type lineage: certbot._internal.storage.RenewableCert
+ :type lineage: certbot.interfaces.RenewableCert
:param domains: List of domains in certificate to enhance
:type domains: str
@@ -123,7 +123,7 @@ def update_autohsts(self, lineage, *args, **kwargs):
Implementation of this method should increase the max-age value.
:param lineage: Certificate lineage object
- :type lineage: certbot._internal.storage.RenewableCert
+ :type lineage: certbot.interfaces.RenewableCert
.. note:: prepare() method inherited from `interfaces.IPlugin` might need
to be called manually within implementation of this interface method
@@ -137,7 +137,7 @@ def deploy_autohsts(self, lineage, *args, **kwargs):
Long max-age value should be set in implementation of this method.
:param lineage: Certificate lineage object
- :type lineage: certbot._internal.storage.RenewableCert
+ :type lineage: certbot.interfaces.RenewableCert
"""
@abc.abstractmethod
@@ -148,7 +148,7 @@ def enable_autohsts(self, lineage, domains, *args, **kwargs):
over the subsequent runs of Certbot renew.
:param lineage: Certificate lineage object
- :type lineage: certbot._internal.storage.RenewableCert
+ :type lineage: certbot.interfaces.RenewableCert
:param domains: List of domains in certificate to enhance
:type domains: str
diff --git a/certbot/tests/crypto_util_test.py b/certbot/tests/crypto_util_test.py
index 666e4c082eb..7438fed5a3a 100644
--- a/certbot/tests/crypto_util_test.py
+++ b/certbot/tests/crypto_util_test.py
@@ -181,15 +181,15 @@ def setUp(self):
super(VerifyCertSetup, self).setUp()
self.renewable_cert = mock.MagicMock()
- self.renewable_cert.cert = SS_CERT_PATH
- self.renewable_cert.chain = SS_CERT_PATH
- self.renewable_cert.privkey = RSA2048_KEY_PATH
- self.renewable_cert.fullchain = test_util.vector_path('cert_fullchain_2048.pem')
+ self.renewable_cert.cert_path = SS_CERT_PATH
+ self.renewable_cert.chain_path = SS_CERT_PATH
+ self.renewable_cert.key_path = RSA2048_KEY_PATH
+ self.renewable_cert.fullchain_path = test_util.vector_path('cert_fullchain_2048.pem')
self.bad_renewable_cert = mock.MagicMock()
- self.bad_renewable_cert.chain = SS_CERT_PATH
- self.bad_renewable_cert.cert = SS_CERT_PATH
- self.bad_renewable_cert.fullchain = SS_CERT_PATH
+ self.bad_renewable_cert.chain_path = SS_CERT_PATH
+ self.bad_renewable_cert.cert_path = SS_CERT_PATH
+ self.bad_renewable_cert.fullchain_path = SS_CERT_PATH
class VerifyRenewableCertTest(VerifyCertSetup):
@@ -219,13 +219,13 @@ def test_cert_sig_match(self):
def test_cert_sig_match_ec(self):
renewable_cert = mock.MagicMock()
- renewable_cert.cert = P256_CERT_PATH
- renewable_cert.chain = P256_CERT_PATH
- renewable_cert.privkey = P256_KEY
+ renewable_cert.cert_path = P256_CERT_PATH
+ renewable_cert.chain_path = P256_CERT_PATH
+ renewable_cert.key_path = P256_KEY
self.assertEqual(None, self._call(renewable_cert))
def test_cert_sig_mismatch(self):
- self.bad_renewable_cert.cert = test_util.vector_path('cert_512_bad.pem')
+ self.bad_renewable_cert.cert_path = test_util.vector_path('cert_512_bad.pem')
self.assertRaises(errors.Error, self._call, self.bad_renewable_cert)
diff --git a/certbot/tests/storage_test.py b/certbot/tests/storage_test.py
index 35f10656ed2..06c881a8725 100644
--- a/certbot/tests/storage_test.py
+++ b/certbot/tests/storage_test.py
@@ -404,20 +404,6 @@ def test_names(self):
self.assertEqual(self.test_rc.names(),
["example.com", "www.example.com"])
- # Trying a non-current version
- self._write_out_kind("cert", 15, test_util.load_vector("cert_512.pem"))
-
- self.assertEqual(self.test_rc.names(12),
- ["example.com", "www.example.com"])
-
- # Testing common name is listed first
- self._write_out_kind(
- "cert", 12, test_util.load_vector("cert-5sans_512.pem"))
-
- self.assertEqual(
- self.test_rc.names(12),
- ["example.com"] + ["{0}.example.com".format(c) for c in "abcd"])
-
# Trying missing cert
os.unlink(self.test_rc.cert)
self.assertRaises(errors.CertStorageError, self.test_rc.names)
| This is my proposed fix for #7540. I would ideally like this to be included in our 1.0 release.
I came up with this design by adding all attributes used in our own plugins, in the 3rd party plugins listed at https://certbot.eff.org/docs/using.html#third-party-plugins, or in our public API code.
Despite me thinking that zope is unneeded nowadays, I initially tried to use it to define this interface since we have it and it gives us a way to define expected attributes, but it doesn't work because zope interface objects also have a method called `names` which conflicts with the API.
~~For defining attributes on the object, I took the approach Joona and I came up with in https://github.com/certbot/certbot/pull/7246. The `*_path` attributes could be more naturally defined as properties, which works, but I don't like doing this because I think whether it's an attribute or a property is an implementation detail.~~
@adferrand or @joohoi, are you able to review this? | https://api.github.com/repos/certbot/certbot/pulls/7603 | 2019-11-26T19:35:50Z | 2019-11-27T19:32:01Z | 2019-11-27T19:32:01Z | 2019-11-27T19:32:20Z | 3,294 | certbot/certbot | 3,096 |
Adds support for proxy authentication and repoview enablement. | diff --git a/lib/ansible/modules/packaging/os/pulp_repo.py b/lib/ansible/modules/packaging/os/pulp_repo.py
index 8b6a676a352803..5fda31ba31b692 100644
--- a/lib/ansible/modules/packaging/os/pulp_repo.py
+++ b/lib/ansible/modules/packaging/os/pulp_repo.py
@@ -40,6 +40,14 @@
the Basic authentication header upon initial request.
type: bool
default: 'no'
+ generate_sqlite:
+ description:
+ - Boolean flag to indicate whether sqlite files should be generated during
+ a repository publish.
+ required: false
+ type: bool
+ default: 'no'
+ version_added: "2.8"
importer_ssl_ca_cert:
description:
- CA certificate string used to validate the feed source SSL certificate.
@@ -65,9 +73,25 @@
description:
- Proxy url setting for the pulp repository importer. This is in the
format scheme://host.
+ required: false
+ default: null
proxy_port:
description:
- Proxy port setting for the pulp repository importer.
+ required: false
+ default: null
+ proxy_username:
+ description:
+ - Proxy username for the pulp repository importer.
+ required: false
+ default: null
+ version_added: "2.8"
+ proxy_password:
+ description:
+ - Proxy password for the pulp repository importer.
+ required: false
+ default: null
+ version_added: "2.8"
publish_distributor:
description:
- Distributor to use when state is C(publish). The default is to
@@ -84,6 +108,14 @@
description:
- Repo plugin type to use (i.e. C(rpm), C(docker)).
default: rpm
+ repoview:
+ description:
+ - Whether to generate repoview files for a published repository. Setting
+ this to "yes" automatically activates `generate_sqlite`.
+ required: false
+ type: bool
+ default: 'no'
+ version_added: "2.8"
serve_http:
description:
- Make the repo available over HTTP.
@@ -195,6 +227,9 @@ def compare_repo_distributor_config(self, repo_id, **kwargs):
for distributor in repo_config['distributors']:
for key, value in kwargs.items():
+ if key not in distributor['config'].keys():
+ return False
+
if not distributor['config'][key] == value:
return False
@@ -219,10 +254,14 @@ def create_repo(
repo_id,
relative_url,
feed=None,
+ generate_sqlite=False,
serve_http=False,
serve_https=True,
proxy_host=None,
proxy_port=None,
+ proxy_username=None,
+ proxy_password=None,
+ repoview=False,
ssl_ca_cert=None,
ssl_client_cert=None,
ssl_client_key=None,
@@ -242,6 +281,8 @@ def create_repo(
yum_distributor['distributor_config']['http'] = serve_http
yum_distributor['distributor_config']['https'] = serve_https
yum_distributor['distributor_config']['relative_url'] = relative_url
+ yum_distributor['distributor_config']['repoview'] = repoview
+ yum_distributor['distributor_config']['generate_sqlite'] = generate_sqlite or repoview
data['distributors'].append(yum_distributor)
if add_export_distributor:
@@ -253,6 +294,8 @@ def create_repo(
export_distributor['distributor_config']['http'] = serve_http
export_distributor['distributor_config']['https'] = serve_https
export_distributor['distributor_config']['relative_url'] = relative_url
+ export_distributor['distributor_config']['repoview'] = repoview
+ export_distributor['distributor_config']['generate_sqlite'] = generate_sqlite or repoview
data['distributors'].append(export_distributor)
data['importer_type_id'] = "yum_importer"
@@ -267,6 +310,12 @@ def create_repo(
if proxy_port:
data['importer_config']['proxy_port'] = proxy_port
+ if proxy_username:
+ data['importer_config']['proxy_username'] = proxy_username
+
+ if proxy_password:
+ data['importer_config']['proxy_password'] = proxy_password
+
if ssl_ca_cert:
data['importer_config']['ssl_ca_cert'] = ssl_ca_cert
@@ -479,16 +528,20 @@ def main():
argument_spec.update(
add_export_distributor=dict(default=False, type='bool'),
feed=dict(),
+ generate_sqlite=dict(default=False, type='bool'),
importer_ssl_ca_cert=dict(),
importer_ssl_client_cert=dict(),
importer_ssl_client_key=dict(),
name=dict(required=True, aliases=['repo']),
proxy_host=dict(),
proxy_port=dict(),
+ proxy_username=dict(),
+ proxy_password=dict(no_log=True),
publish_distributor=dict(),
pulp_host=dict(default="https://127.0.0.1"),
relative_url=dict(),
repo_type=dict(default="rpm"),
+ repoview=dict(default=False, type='bool'),
serve_http=dict(default=False, type='bool'),
serve_https=dict(default=True, type='bool'),
state=dict(
@@ -501,16 +554,20 @@ def main():
add_export_distributor = module.params['add_export_distributor']
feed = module.params['feed']
+ generate_sqlite = module.params['generate_sqlite']
importer_ssl_ca_cert = module.params['importer_ssl_ca_cert']
importer_ssl_client_cert = module.params['importer_ssl_client_cert']
importer_ssl_client_key = module.params['importer_ssl_client_key']
proxy_host = module.params['proxy_host']
proxy_port = module.params['proxy_port']
+ proxy_username = module.params['proxy_username']
+ proxy_password = module.params['proxy_password']
publish_distributor = module.params['publish_distributor']
pulp_host = module.params['pulp_host']
relative_url = module.params['relative_url']
repo = module.params['name']
repo_type = module.params['repo_type']
+ repoview = module.params['repoview']
serve_http = module.params['serve_http']
serve_https = module.params['serve_https']
state = module.params['state']
@@ -584,10 +641,14 @@ def main():
repo_id=repo,
relative_url=relative_url,
feed=feed,
+ generate_sqlite=generate_sqlite,
serve_http=serve_http,
serve_https=serve_https,
proxy_host=proxy_host,
proxy_port=proxy_port,
+ proxy_username=proxy_username,
+ proxy_password=proxy_password,
+ repoview=repoview,
ssl_ca_cert=importer_ssl_ca_cert,
ssl_client_cert=importer_ssl_client_cert,
ssl_client_key=importer_ssl_client_key,
@@ -604,6 +665,8 @@ def main():
feed=feed,
proxy_host=proxy_host,
proxy_port=proxy_port,
+ proxy_username=proxy_username,
+ proxy_password=proxy_password,
ssl_ca_cert=importer_ssl_ca_cert,
ssl_client_cert=importer_ssl_client_cert,
ssl_client_key=importer_ssl_client_key
@@ -614,6 +677,8 @@ def main():
feed=feed,
proxy_host=proxy_host,
proxy_port=proxy_port,
+ proxy_username=proxy_username,
+ proxy_password=proxy_password,
ssl_ca_cert=importer_ssl_ca_cert,
ssl_client_cert=importer_ssl_client_cert,
ssl_client_key=importer_ssl_client_key)
@@ -632,6 +697,18 @@ def main():
changed = True
+ if not server.compare_repo_distributor_config(repo, generate_sqlite=generate_sqlite):
+ if not module.check_mode:
+ server.update_repo_distributor_config(repo, generate_sqlite=generate_sqlite)
+
+ changed = True
+
+ if not server.compare_repo_distributor_config(repo, repoview=repoview):
+ if not module.check_mode:
+ server.update_repo_distributor_config(repo, repoview=repoview)
+
+ changed = True
+
if not server.compare_repo_distributor_config(repo, http=serve_http):
if not module.check_mode:
server.update_repo_distributor_config(repo, http=serve_http)
| ##### SUMMARY
Adds support for allowing pulp to authenticate through a proxy to download upstream content. This also adds the `repoview` option because this actually broke with the newer Pulp API.
Repeat of #32484 because I screwed up that PR
##### ISSUE TYPE
- Feature Pull Request
##### COMPONENT NAME
pulp_repo
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.5.5
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, May 16 2018, 17:50:09) [GCC 8.1.1 20180502 (Red Hat 8.1.1-1)]
``` | https://api.github.com/repos/ansible/ansible/pulls/41908 | 2018-06-25T15:16:22Z | 2018-11-24T17:54:16Z | 2018-11-24T17:54:16Z | 2019-07-22T16:00:27Z | 1,888 | ansible/ansible | 48,716 |
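The key-missing guard added at the top of the diff above (treating an option that is absent from a distributor's config as a mismatch rather than raising `KeyError`) can be sketched in isolation. The function and field names below are illustrative stand-ins, not the module's real `compare_repo_distributor_config` API.

```python
def distributors_match(repo_config, **desired):
    """Return True only if every distributor carries all desired settings.

    A key missing from a distributor's config counts as a mismatch
    instead of raising KeyError (the failure mode the guard avoids).
    """
    for distributor in repo_config["distributors"]:
        config = distributor["config"]
        for key, value in desired.items():
            if key not in config:  # new guard: absent key means "needs update"
                return False
            if config[key] != value:
                return False
    return True


repo = {"distributors": [{"config": {"http": False, "https": True}}]}
print(distributors_match(repo, https=True))     # True
print(distributors_match(repo, repoview=True))  # False: key not present yet
```

With the guard in place, a freshly introduced option such as `repoview` simply reports "not matching" on old repos, which then triggers an update instead of a crash.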
Rename Template | diff --git a/templates/chat-bot-feedback/chat_bot_feedback/__init__.py b/templates/chat-bot-feedback/chat_bot_feedback/__init__.py
index 06d4114d28ee60..e51bb07ae8b866 100644
--- a/templates/chat-bot-feedback/chat_bot_feedback/__init__.py
+++ b/templates/chat-bot-feedback/chat_bot_feedback/__init__.py
@@ -1,3 +1,3 @@
-from conversational_feedback.chain import chain
+from chat_bot_feedback.chain import chain
__all__ = ["chain"]
| To chatbot feedback. Update import | https://api.github.com/repos/langchain-ai/langchain/pulls/12649 | 2023-10-31T16:12:51Z | 2023-10-31T16:15:30Z | 2023-10-31T16:15:30Z | 2023-10-31T16:15:31Z | 114 | langchain-ai/langchain | 43,092 |
Swebench di | diff --git a/sub_swebench_dataset/readme.md b/sub_swebench_dataset/readme.md
new file mode 100644
index 000000000..77fb23b2a
--- /dev/null
+++ b/sub_swebench_dataset/readme.md
@@ -0,0 +1,71 @@
+# Dataset Description
+
+The index of sub_swebench is a subset of swebench, with two columns in total, each column containing 50 id_instance.
+
+The id_instance is a balanced subset of pass and fail samples for CognitionAI on swebench.
+Sampling list:https://github.com/CognitionAI/devin-swebench-results/tree/main/
+Original dataset:https://huggingface.co/datasets/princeton-nlp/SWE-bench/
+
+## fail dataset Description:
+
+There are a total of 491 txt files listed.
+In the original dataset, the distribution of pass case categories is:
+
+- astropy: 24
+- django: 160
+- matplotlib: 42
+- mwaskom: 4
+- pallets: 3
+- psf: 9
+- pydata: 29
+- pylint-dev: 13
+- pytest-dev: 20
+- scikit-learn: 56
+- sphinx-doc: 46
+- sympy: 85
+
+### After balanced sampling:
+
+There are a total of 50 txt files listed.
+
+- Django: 16
+- Scikit-Learn: 6
+- Sympy: 10
+- sphinx-doc:5
+- matplotlib: 4
+- pydata: 3
+- astropy: 2
+- pytest-dev: 2
+- psf: 1
+- pylint-dev: 1
+
+
+
+## pass dataset Description:
+
+
+
+There are a total of 79 txt files listed.
+In the original dataset, the distribution of pass case categories is:
+
+- astropy: 4
+- django: 38
+- matplotlib: 3
+- pydata: 3
+- pytest-dev: 6
+- scikit-learn: 12
+- sphinx-doc: 2
+- sympy: 11
+
+### After balanced sampling:
+
+There are a total of 50 txt files listed.
+
+- Django: 23
+- Scikit-Learn: 8
+- Sympy: 7
+- Pytest: 4
+- Astropy: 3
+- Xarray (pydata): 2
+- Matplotlib: 2
+- Sphinx: 1
diff --git a/sub_swebench_dataset/sub_swebench.csv b/sub_swebench_dataset/sub_swebench.csv
new file mode 100644
index 000000000..f3aa32dac
Binary files /dev/null and b/sub_swebench_dataset/sub_swebench.csv differ
|
**Explanation**
<!-- Added a sub_swebench folder. This folder contains a readme and a subset of swebench consisting of instance IDs.-->
| https://api.github.com/repos/geekan/MetaGPT/pulls/1038 | 2024-03-19T07:28:59Z | 2024-03-19T07:52:08Z | 2024-03-19T07:52:08Z | 2024-03-19T07:52:08Z | 622 | geekan/MetaGPT | 16,684 |
Make sure netrc doesn't override any authentication settings explicitly set by the client | diff --git a/requests/sessions.py b/requests/sessions.py
index 6d1000dc52..c24ed5aa4c 100644
--- a/requests/sessions.py
+++ b/requests/sessions.py
@@ -289,8 +289,8 @@ def request(self, method, url,
for (k, v) in env_proxies.items():
proxies.setdefault(k, v)
- # Set environment's basic authentication.
- if not auth:
+ # Set environment's basic authentication if not explicitly set.
+ if not auth and not self.auth:
auth = get_netrc_auth(url)
# Look for configuration.
diff --git a/test_requests.py b/test_requests.py
index 7ad10ee9b0..0758210545 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -234,6 +234,34 @@ def test_BASICAUTH_TUPLE_HTTP_200_OK_GET(self):
r = s.get(url)
self.assertEqual(r.status_code, 200)
+ def test_basicauth_with_netrc(self):
+ auth = ('user', 'pass')
+ wrong_auth = ('wronguser', 'wrongpass')
+ url = httpbin('basic-auth', 'user', 'pass')
+
+ def get_netrc_auth_mock(url):
+ return auth
+ requests.sessions.get_netrc_auth = get_netrc_auth_mock
+
+ # Should use netrc and work.
+ r = requests.get(url)
+ self.assertEqual(r.status_code, 200)
+
+ # Given auth should override and fail.
+ r = requests.get(url, auth=wrong_auth)
+ self.assertEqual(r.status_code, 401)
+
+ s = requests.session()
+
+ # Should use netrc and work.
+ r = s.get(url)
+ self.assertEqual(r.status_code, 200)
+
+ # Given auth should override and fail.
+ s.auth = wrong_auth
+ r = s.get(url)
+ self.assertEqual(r.status_code, 401)
+
def test_DIGEST_HTTP_200_OK_GET(self):
auth = HTTPDigestAuth('user', 'pass')
| This fixes issue #1438.
.netrc is still going to be used by default if it's available (that's the intended behavior based on #446), but now any explicit settings (whether they're set on the session or passed to request()) will take precedence.
| https://api.github.com/repos/psf/requests/pulls/1439 | 2013-06-27T21:24:25Z | 2013-07-15T13:22:54Z | 2013-07-15T13:22:54Z | 2021-09-08T23:06:30Z | 479 | psf/requests | 32,047 |
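The precedence rule this fix establishes, request-level auth over session-level auth, and both over `.netrc`, can be sketched as a standalone resolver. The `netrc` lookup below is a stub for illustration, not the real `get_netrc_auth` helper.

```python
def resolve_auth(request_auth, session_auth, netrc_lookup):
    """Mirror the fixed logic: consult .netrc only when neither the
    request nor the session sets auth explicitly."""
    if request_auth:
        return request_auth
    if session_auth:
        return session_auth
    return netrc_lookup()


def netrc():
    # stand-in for reading ~/.netrc
    return ("netrc_user", "netrc_pass")


print(resolve_auth(None, None, netrc))             # ('netrc_user', 'netrc_pass')
print(resolve_auth(None, ("s", "s"), netrc))       # session auth wins over .netrc
print(resolve_auth(("r", "r"), ("s", "s"), netrc)) # request auth wins over all
```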
Refs #29892 -- Replaced Selenium .submit() shim with .click() on the submit button. | diff --git a/tests/admin_widgets/tests.py b/tests/admin_widgets/tests.py
index 32f065b325ea1..f854f4dc207d7 100644
--- a/tests/admin_widgets/tests.py
+++ b/tests/admin_widgets/tests.py
@@ -941,7 +941,7 @@ def test_date_time_picker_shortcuts(self):
# Submit the form.
with self.wait_page_loaded():
- self.selenium.find_element_by_tag_name('form').submit()
+ self.selenium.find_element_by_name('_save').click()
# Make sure that "now" in javascript is within 10 seconds
# from "now" on the server side.
| There is no WebDriver submit primitive. The Selenium project implements
it as a convenience only. The geckodriver developers recommend against
using it. Replace it with a real primitive, click on the submit button.
Fixes failing Selenium test test_date_time_picker_shortcuts when using
the Firefox Selenium driver.
https://code.djangoproject.com/ticket/29892 | https://api.github.com/repos/django/django/pulls/12147 | 2019-11-26T13:03:19Z | 2019-11-28T08:24:19Z | 2019-11-28T08:24:19Z | 2019-11-28T11:20:47Z | 143 | django/django | 50,901 |
Use yield from instead of looping yield | diff --git a/requests/models.py b/requests/models.py
index 7e1522837f..3cd49f5bba 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -813,8 +813,7 @@ def generate():
# Special case for urllib3.
if hasattr(self.raw, "stream"):
try:
- for chunk in self.raw.stream(chunk_size, decode_content=True):
- yield chunk
+ yield from self.raw.stream(chunk_size, decode_content=True)
except ProtocolError as e:
raise ChunkedEncodingError(e)
except DecodeError as e:
| This merge request updates the stream yield functionality to use `yield from` instead of `yield` within a for loop. Because this is yielding from an iterable, we can use `yield from` which is not only slightly shorter but also on average 15% more performant than using `yield` inside of a loop. This is a result of some of the optimizations included as part of [PEP 380](https://peps.python.org/pep-0380/) | https://api.github.com/repos/psf/requests/pulls/6170 | 2022-06-20T13:01:10Z | 2022-06-29T02:08:26Z | 2022-06-29T02:08:26Z | 2023-06-30T00:03:08Z | 138 | psf/requests | 32,630 |
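The two forms in the diff are behaviorally equivalent when delegating to a plain iterable; a minimal side-by-side of the rewrite:

```python
def chunks_with_loop(stream):
    for chunk in stream:
        yield chunk


def chunks_with_yield_from(stream):
    # PEP 380 delegation: same elements, fewer bytecode round-trips
    yield from stream


data = [b"ab", b"cd", b"ef"]
assert list(chunks_with_loop(data)) == list(chunks_with_yield_from(data))
print(list(chunks_with_yield_from(data)))  # [b'ab', b'cd', b'ef']
```

For a simple stream this is a pure equivalence rewrite; `yield from` also forwards `send()`/`throw()` to the sub-iterator, which the loop form does not, though that distinction does not matter for urllib3's chunk stream here.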
FIX tol in SGDRegressorBenchmark | diff --git a/asv_benchmarks/benchmarks/linear_model.py b/asv_benchmarks/benchmarks/linear_model.py
index 663ceca61d063..b694a109329f0 100644
--- a/asv_benchmarks/benchmarks/linear_model.py
+++ b/asv_benchmarks/benchmarks/linear_model.py
@@ -164,7 +164,11 @@ def make_data(self, params):
return data
def make_estimator(self, params):
- estimator = SGDRegressor(max_iter=1000, tol=1e-16, random_state=0)
+ (representation,) = params
+
+ max_iter = 60 if representation == "dense" else 300
+
+ estimator = SGDRegressor(max_iter=max_iter, tol=None, random_state=0)
return estimator
| <!--
Thanks for contributing a pull request! Please ensure you have taken a look at
the contribution guidelines: https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md
-->
#### Reference Issues/PRs
<!--
Example: Fixes #1234. See also #3456.
Please use keywords (e.g., Fixes) to create link to the issues or pull requests
you resolved, so that they will automatically be closed when your pull request
is merged. See https://github.com/blog/1506-closing-issues-via-pull-requests
-->
Fixes #26095
#### What does this implement/fix? Explain your changes.
- Sets the value of tol to None for SGDRegressorBenchmark
- Catches the ConvergenceWarning and assets that the n_iter_ is equal to max_iter to clarify that the benchmark runs for max_iters.
#### Any other comments?
CC: @ogrisel @jeremiedbb @glemaitre
<!--
Please be aware that we are a loose team of volunteers so patience is
necessary; assistance handling other issues is very welcome. We value
all user contributions, no matter how minor they are. If we are slow to
review, either the pull request needs some benchmarking, tinkering,
convincing, etc. or more likely the reviewers are simply busy. In either
case, we ask for your understanding during the review process.
For more information, see our FAQ on this topic:
http://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention.
Thanks for contributing!
-->
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/26146 | 2023-04-11T12:43:10Z | 2023-04-14T10:24:17Z | 2023-04-14T10:24:17Z | 2023-04-19T11:05:27Z | 185 | scikit-learn/scikit-learn | 46,520 |
Make auto saved workflow stored per tab | diff --git a/web/scripts/api.js b/web/scripts/api.js
index 3a9bcc87a4..8c8155be66 100644
--- a/web/scripts/api.js
+++ b/web/scripts/api.js
@@ -5,6 +5,7 @@ class ComfyApi extends EventTarget {
super();
this.api_host = location.host;
this.api_base = location.pathname.split('/').slice(0, -1).join('/');
+ this.initialClientId = sessionStorage.getItem("clientId");
}
apiURL(route) {
@@ -118,7 +119,8 @@ class ComfyApi extends EventTarget {
case "status":
if (msg.data.sid) {
this.clientId = msg.data.sid;
- window.name = this.clientId;
+ window.name = this.clientId; // use window name so it isnt reused when duplicating tabs
+ sessionStorage.setItem("clientId", this.clientId); // store in session storage so duplicate tab can load correct workflow
}
this.dispatchEvent(new CustomEvent("status", { detail: msg.data.status }));
break;
diff --git a/web/scripts/app.js b/web/scripts/app.js
index 6df393ba60..b3a8489933 100644
--- a/web/scripts/app.js
+++ b/web/scripts/app.js
@@ -1499,12 +1499,17 @@ export class ComfyApp {
// Load previous workflow
let restored = false;
try {
- const json = localStorage.getItem("workflow");
- if (json) {
- const workflow = JSON.parse(json);
- await this.loadGraphData(workflow);
- restored = true;
- }
+ const loadWorkflow = async (json) => {
+ if (json) {
+ const workflow = JSON.parse(json);
+ await this.loadGraphData(workflow);
+ return true;
+ }
+ };
+ const clientId = api.initialClientId ?? api.clientId;
+ restored =
+ (clientId && (await loadWorkflow(sessionStorage.getItem(`workflow:${clientId}`)))) ||
+ (await loadWorkflow(localStorage.getItem("workflow")));
} catch (err) {
console.error("Error loading previous workflow", err);
}
@@ -1515,7 +1520,13 @@ export class ComfyApp {
}
// Save current workflow automatically
- setInterval(() => localStorage.setItem("workflow", JSON.stringify(this.graph.serialize())), 1000);
+ setInterval(() => {
+ const workflow = JSON.stringify(this.graph.serialize());
+ localStorage.setItem("workflow", workflow);
+ if (api.clientId) {
+ sessionStorage.setItem(`workflow:${api.clientId}`, workflow);
+ }
+ }, 1000);
this.#addDrawNodeHandler();
this.#addDrawGroupsHandler();
| Stores the workflow and client id in sessionStorage so when working with multiple tabs and a tab is refreshed, the correct workflow will be loaded instead of the one that happened to save last.
Also means when you duplicate a tab, you'll get the correct workflow. There appears to be a bug in Firefox where a duplicate of a duplicate etc ... at some point the sessionStorage ends up empty and you lose the ID but it works fine in Chrome. | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/2672 | 2024-01-29T18:48:23Z | 2024-01-30T00:39:08Z | 2024-01-30T00:39:08Z | 2024-01-30T00:39:08Z | 623 | comfyanonymous/ComfyUI | 17,798 |
Correct the actual unescaped character | diff --git a/docs/quickstart.rst b/docs/quickstart.rst
index 9bddbfc0a7..e07e8dbf9d 100644
--- a/docs/quickstart.rst
+++ b/docs/quickstart.rst
@@ -468,7 +468,7 @@ Here is a basic introduction to how the :class:`~markupsafe.Markup` class works:
>>> Markup.escape('<blink>hacker</blink>')
Markup('<blink>hacker</blink>')
>>> Markup('<em>Marked up</em> » HTML').striptags()
- 'Marked up \xbb HTML'
+ 'Marked up » HTML'
.. versionchanged:: 0.5
| `»` should be unescaped to `»` after `striptags`.
Ref: https://flask.palletsprojects.com/en/2.0.x/api/#flask.Markup.striptags | https://api.github.com/repos/pallets/flask/pulls/4332 | 2021-11-07T02:20:02Z | 2021-11-07T14:20:43Z | 2021-11-07T14:20:43Z | 2021-11-22T00:03:32Z | 167 | pallets/flask | 20,378 |
Disable stale for PRs | diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
index fe9b55ec99b..d8e227c77b9 100644
--- a/.github/workflows/stale.yml
+++ b/.github/workflows/stale.yml
@@ -17,9 +17,15 @@ jobs:
# Idle number of days before marking issues stale
days-before-issue-stale: 365
+ # Never mark PRs as stale
+ days-before-pr-stale: -1
+
# Idle number of days before closing stale issues
days-before-issue-close: 30
+ # Never close PRs
+ days-before-pr-close: -1
+
# Ignore issues with an assignee
exempt-all-issue-assignees: true
| I noticed our stale action is processing PRs. See https://github.com/certbot/certbot/actions/runs/4169673316/jobs/7217894512#step:2:1225 for example. I believe we only wanted to mark issues as stale.
It's not very clear in the documentation, but I think setting these values to -1 should stop it wanting to modify PRs. See https://github.com/actions/stale#days-before-stale and https://github.com/actions/stale#days-before-close.
Updating PRs may have quietly failed since we didn't give the action permissions for PRs, but this hopefully should stop it from ever trying. | https://api.github.com/repos/certbot/certbot/pulls/9577 | 2023-02-14T16:55:19Z | 2023-02-14T20:28:10Z | 2023-02-14T20:28:09Z | 2023-02-14T20:28:12Z | 175 | certbot/certbot | 863 |
ccxt.d.ts: add Exchange#fetchTime(). | diff --git a/ccxt.d.ts b/ccxt.d.ts
index 99ed70c593c1..7c08fba64485 100644
--- a/ccxt.d.ts
+++ b/ccxt.d.ts
@@ -295,6 +295,7 @@ declare module 'ccxt' {
loadMarkets (reload?: boolean): Promise<{ [symbol: string]: Market }>;
fetchTicker (symbol: string, params?: { [x: string]: any }): Promise<Ticker>;
fetchTickers (symbols?: string[], params?: { [x: string]: any }): Promise<{ [x: string]: Ticker }>;
+ fetchTime (): Promise<number>;
fetchMarkets (): Promise<Market[]>;
fetchOrderStatus (id: string, market: string): Promise<string>;
encode (str: string): string;
| https://api.github.com/repos/ccxt/ccxt/pulls/5889 | 2019-09-29T23:10:47Z | 2019-09-30T09:13:48Z | 2019-09-30T09:13:47Z | 2019-10-01T01:38:04Z | 181 | ccxt/ccxt | 13,035 | |
Editing P.9: "Don't waste time or space" Example Text | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 49a10134c..27ad3686f 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -1055,8 +1055,8 @@ There are several more performance bugs and gratuitous complication.
for (int i = 0; i < strlen(s); ++i) s[i] = tolower(s[i]);
}
-Yes, this is an example from production code.
-We leave it to the reader to figure out what's wasted.
+This is actually an example from production code.
+We can see that in our condition we have `i < strlen(s)`. This expression will be evaluated on every iteration of the loop, which means that `strlen` must walk through string every loop to discover its length. While the string contents are changing, it's assumed that `toLower` will not affect the length of the string, so it's better to cache the length outside the loop and not incur that cost each iteration.
##### Note
diff --git a/scripts/hunspell/isocpp.dic b/scripts/hunspell/isocpp.dic
index a333866cb..29bec28fb 100644
--- a/scripts/hunspell/isocpp.dic
+++ b/scripts/hunspell/isocpp.dic
@@ -564,6 +564,7 @@ tmp
TMP
tock
TODO
+toLower
toolchains
TotallyOrdered
TP
| Hi there, this example came up in a discussion with a newer coder in an #include channel. We explained what was going on with it and had a little talk about how it didn't seem helpful not to just explain it. Thus this PR was born. | https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1439 | 2019-06-08T23:14:22Z | 2019-06-20T18:12:30Z | 2019-06-20T18:12:30Z | 2019-06-20T18:12:43Z | 327 | isocpp/CppCoreGuidelines | 15,720 |
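The guideline's point, that the `strlen(s)` call in the loop condition is re-evaluated every iteration and makes the loop quadratic, has a direct analogue in any language whenever the condition calls an O(n) function. The toy `slow_len` below is deliberately artificial (Python's built-in `len` is O(1)), purely to mirror the C behavior:

```python
def slow_len(seq):
    # stands in for C's strlen: walks the whole sequence on every call
    n = 0
    for _ in seq:
        n += 1
    return n


def lower_wasteful(chars):
    i = 0
    while i < slow_len(chars):   # O(n) re-check per iteration -> O(n^2) total
        chars[i] = chars[i].lower()
        i += 1
    return chars


def lower_hoisted(chars):
    n = slow_len(chars)          # length is loop-invariant: compute it once
    for i in range(n):
        chars[i] = chars[i].lower()
    return chars


assert lower_wasteful(list("ABC")) == lower_hoisted(list("ABC")) == list("abc")
```

Both produce identical results; only the hoisted version avoids paying the length scan on every iteration.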
[MRG +1] ColumnTransformer: store evaluated function column specifier during fit | diff --git a/sklearn/compose/_column_transformer.py b/sklearn/compose/_column_transformer.py
index e09d2d09d7e43..9014623280d2e 100644
--- a/sklearn/compose/_column_transformer.py
+++ b/sklearn/compose/_column_transformer.py
@@ -211,20 +211,29 @@ def set_params(self, **kwargs):
self._set_params('_transformers', **kwargs)
return self
- def _iter(self, X=None, fitted=False, replace_strings=False):
- """Generate (name, trans, column, weight) tuples
+ def _iter(self, fitted=False, replace_strings=False):
+ """
+ Generate (name, trans, X_subset, weight, column) tuples.
+
+ If fitted=True, use the fitted transformers, else use the
+ user specified transformers updated with converted column names
+ and potentially appended with transformer for remainder.
+
"""
if fitted:
transformers = self.transformers_
else:
- transformers = self.transformers
+ # interleave the validated column specifiers
+ transformers = [
+ (name, trans, column) for (name, trans, _), column
+ in zip(self.transformers, self._columns)
+ ]
+ # add transformer tuple for remainder
if self._remainder[2] is not None:
transformers = chain(transformers, [self._remainder])
get_weight = (self.transformer_weights or {}).get
for name, trans, column in transformers:
- sub = None if X is None else _get_column(X, column)
-
if replace_strings:
# replace 'passthrough' with identity transformer and
# skip in case of 'drop'
@@ -235,7 +244,7 @@ def _iter(self, X=None, fitted=False, replace_strings=False):
elif trans == 'drop':
continue
- yield (name, trans, sub, get_weight(name))
+ yield (name, trans, column, get_weight(name))
def _validate_transformers(self):
if not self.transformers:
@@ -257,6 +266,17 @@ def _validate_transformers(self):
"specifiers. '%s' (type %s) doesn't." %
(t, type(t)))
+ def _validate_column_callables(self, X):
+ """
+ Converts callable column specifications.
+ """
+ columns = []
+ for _, _, column in self.transformers:
+ if callable(column):
+ column = column(X)
+ columns.append(column)
+ self._columns = columns
+
def _validate_remainder(self, X):
"""
Validates ``remainder`` and defines ``_remainder`` targeting
@@ -274,7 +294,7 @@ def _validate_remainder(self, X):
n_columns = X.shape[1]
cols = []
- for _, _, columns in self.transformers:
+ for columns in self._columns:
cols.extend(_get_column_indices(X, columns))
remaining_idx = sorted(list(set(range(n_columns)) - set(cols))) or None
@@ -320,27 +340,23 @@ def get_feature_names(self):
def _update_fitted_transformers(self, transformers):
# transformers are fitted; excludes 'drop' cases
- transformers = iter(transformers)
+ fitted_transformers = iter(transformers)
transformers_ = []
- transformer_iter = self.transformers
- if self._remainder[2] is not None:
- transformer_iter = chain(transformer_iter, [self._remainder])
-
- for name, old, column in transformer_iter:
+ for name, old, column, _ in self._iter():
if old == 'drop':
trans = 'drop'
elif old == 'passthrough':
# FunctionTransformer is present in list of transformers,
# so get next transformer, but save original string
- next(transformers)
+ next(fitted_transformers)
trans = 'passthrough'
else:
- trans = next(transformers)
+ trans = next(fitted_transformers)
transformers_.append((name, trans, column))
# sanity check that transformers is exhausted
- assert not list(transformers)
+ assert not list(fitted_transformers)
self.transformers_ = transformers_
def _validate_output(self, result):
@@ -348,7 +364,8 @@ def _validate_output(self, result):
Ensure that the output of each transformer is 2D. Otherwise
hstack can raise an error or produce incorrect results.
"""
- names = [name for name, _, _, _ in self._iter(replace_strings=True)]
+ names = [name for name, _, _, _ in self._iter(fitted=True,
+ replace_strings=True)]
for Xs, name in zip(result, names):
if not getattr(Xs, 'ndim', 0) == 2:
raise ValueError(
@@ -366,9 +383,9 @@ def _fit_transform(self, X, y, func, fitted=False):
try:
return Parallel(n_jobs=self.n_jobs)(
delayed(func)(clone(trans) if not fitted else trans,
- X_sel, y, weight)
- for _, trans, X_sel, weight in self._iter(
- X=X, fitted=fitted, replace_strings=True))
+ _get_column(X, column), y, weight)
+ for _, trans, column, weight in self._iter(
+ fitted=fitted, replace_strings=True))
except ValueError as e:
if "Expected 2D array, got 1D array instead" in str(e):
raise ValueError(_ERR_MSG_1DCOLUMN)
@@ -419,8 +436,9 @@ def fit_transform(self, X, y=None):
sparse matrices.
"""
- self._validate_remainder(X)
self._validate_transformers()
+ self._validate_column_callables(X)
+ self._validate_remainder(X)
result = self._fit_transform(X, y, _fit_transform_one)
@@ -545,9 +563,6 @@ def _get_column(X, key):
can use any hashable object as key).
"""
- if callable(key):
- key = key(X)
-
# check whether we have string column names or integers
if _check_key_type(key, int):
column_names = False
@@ -589,9 +604,6 @@ def _get_column_indices(X, key):
"""
n_columns = X.shape[1]
- if callable(key):
- key = key(X)
-
if _check_key_type(key, int):
if isinstance(key, int):
return [key]
diff --git a/sklearn/compose/tests/test_column_transformer.py b/sklearn/compose/tests/test_column_transformer.py
index f67806a52c543..7e5e5029fa71a 100644
--- a/sklearn/compose/tests/test_column_transformer.py
+++ b/sklearn/compose/tests/test_column_transformer.py
@@ -873,6 +873,8 @@ def func(X):
remainder='drop')
assert_array_equal(ct.fit_transform(X_array), X_res_first)
assert_array_equal(ct.fit(X_array).transform(X_array), X_res_first)
+ assert callable(ct.transformers[0][2])
+ assert ct.transformers_[0][2] == [0]
pd = pytest.importorskip('pandas')
X_df = pd.DataFrame(X_array, columns=['first', 'second'])
@@ -886,3 +888,5 @@ def func(X):
remainder='drop')
assert_array_equal(ct.fit_transform(X_df), X_res_first)
assert_array_equal(ct.fit(X_df).transform(X_df), X_res_first)
+ assert callable(ct.transformers[0][2])
+ assert ct.transformers_[0][2] == ['first']
| Fixes https://github.com/scikit-learn/scikit-learn/issues/12097 | https://api.github.com/repos/scikit-learn/scikit-learn/pulls/12107 | 2018-09-18T14:57:09Z | 2018-09-21T11:22:35Z | 2018-09-21T11:22:35Z | 2018-10-19T15:12:19Z | 1,727 | scikit-learn/scikit-learn | 46,486 |
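The core of the change, evaluating each callable column specifier once during `fit` and keeping the concrete result for reuse, can be reduced to a small pure-Python sketch. The class below is an illustration of that caching pattern, not `ColumnTransformer` itself.

```python
class TinyColumnTransformer:
    """Store (name, columns) specs; callable specs are resolved once at fit."""

    def __init__(self, transformers):
        self.transformers = transformers  # user-supplied, may contain callables

    def fit(self, X):
        # mirror _validate_column_callables: freeze callable specs into lists
        self.transformers_ = [
            (name, column(X) if callable(column) else column)
            for name, column in self.transformers
        ]
        return self

    def transform(self, X):
        # reuse the frozen columns; the callable is never invoked again
        return {name: [[row[c] for c in cols] for row in X]
                for name, cols in self.transformers_}


X = [{"first": 1, "second": 2}, {"first": 3, "second": 4}]
tct = TinyColumnTransformer([("keep_first", lambda X: ["first"])]).fit(X)
assert callable(tct.transformers[0][1])      # original spec left untouched
assert tct.transformers_[0][1] == ["first"]  # resolved copy stored at fit time
print(tct.transform(X))                      # {'keep_first': [[1], [3]]}
```

Storing the evaluated result in a fitted attribute is what lets `transform` on new data reuse the columns chosen at fit time, matching the `transformers_` assertions in the test diff above.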
Add | diff --git a/add two no b/add two no
new file mode 100644
index 0000000000..d1b6fd9e45
--- /dev/null
+++ b/add two no
@@ -0,0 +1,7 @@
+num1 = 1.5
+num2 = 6.3
+
+
+sum = num1 + num2
+
+print('The sum of {0} and {1} is {2}'.format(num1, num2, sum))
| Addition of two no | https://api.github.com/repos/geekcomputers/Python/pulls/1074 | 2020-10-06T14:48:20Z | 2020-10-10T20:39:56Z | 2020-10-10T20:39:56Z | 2020-10-10T20:39:56Z | 113 | geekcomputers/Python | 31,453 |
TST: add note about scope of base extension tests to all files | diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index 0fde1e8a2fdb8..281bbc21e3106 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -1,3 +1,18 @@
+"""
+This file contains a minimal set of tests for compliance with the extension
+array interface test suite, and should contain no other tests.
+The test suite for the full functionality of the array is located in
+`pandas/tests/arrays/`.
+
+The tests in this file are inherited from the BaseExtensionTests, and only
+minimal tweaks should be applied to get the tests passing (by overwriting a
+parent method).
+
+Additional tests should either be added to one of the BaseExtensionTests
+classes (if they are relevant for the extension interface for all dtypes), or
+be added to the array-specific tests in `pandas/tests/arrays/`.
+
+"""
import numpy as np
import pytest
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index 29790d14f93cc..1f0181eec8830 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -1,3 +1,18 @@
+"""
+This file contains a minimal set of tests for compliance with the extension
+array interface test suite, and should contain no other tests.
+The test suite for the full functionality of the array is located in
+`pandas/tests/arrays/`.
+
+The tests in this file are inherited from the BaseExtensionTests, and only
+minimal tweaks should be applied to get the tests passing (by overwriting a
+parent method).
+
+Additional tests should either be added to one of the BaseExtensionTests
+classes (if they are relevant for the extension interface for all dtypes), or
+be added to the array-specific tests in `pandas/tests/arrays/`.
+
+"""
import numpy as np
import pytest
diff --git a/pandas/tests/extension/test_period.py b/pandas/tests/extension/test_period.py
index 817881e00fa99..30dd6193846a4 100644
--- a/pandas/tests/extension/test_period.py
+++ b/pandas/tests/extension/test_period.py
@@ -1,3 +1,18 @@
+"""
+This file contains a minimal set of tests for compliance with the extension
+array interface test suite, and should contain no other tests.
+The test suite for the full functionality of the array is located in
+`pandas/tests/arrays/`.
+
+The tests in this file are inherited from the BaseExtensionTests, and only
+minimal tweaks should be applied to get the tests passing (by overwriting a
+parent method).
+
+Additional tests should either be added to one of the BaseExtensionTests
+classes (if they are relevant for the extension interface for all dtypes), or
+be added to the array-specific tests in `pandas/tests/arrays/`.
+
+"""
import numpy as np
import pytest
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index ffd56b9c23bc8..86f9080571459 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -1,3 +1,18 @@
+"""
+This file contains a minimal set of tests for compliance with the extension
+array interface test suite, and should contain no other tests.
+The test suite for the full functionality of the array is located in
+`pandas/tests/arrays/`.
+
+The tests in this file are inherited from the BaseExtensionTests, and only
+minimal tweaks should be applied to get the tests passing (by overwriting a
+parent method).
+
+Additional tests should either be added to one of the BaseExtensionTests
+classes (if they are relevant for the extension interface for all dtypes), or
+be added to the array-specific tests in `pandas/tests/arrays/`.
+
+"""
import numpy as np
import pytest
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index d49c4c5cf4889..d0a3ef17afdbc 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -1,3 +1,18 @@
+"""
+This file contains a minimal set of tests for compliance with the extension
+array interface test suite, and should contain no other tests.
+The test suite for the full functionality of the array is located in
+`pandas/tests/arrays/`.
+
+The tests in this file are inherited from the BaseExtensionTests, and only
+minimal tweaks should be applied to get the tests passing (by overwriting a
+parent method).
+
+Additional tests should either be added to one of the BaseExtensionTests
+classes (if they are relevant for the extension interface for all dtypes), or
+be added to the array-specific tests in `pandas/tests/arrays/`.
+
+"""
import string
import numpy as np
| We already had this note in about half of the files in this directory, copied it to include in the other files as well. | https://api.github.com/repos/pandas-dev/pandas/pulls/39003 | 2021-01-06T15:47:35Z | 2021-01-06T18:34:18Z | 2021-01-06T18:34:18Z | 2021-01-12T08:05:04Z | 1,154 | pandas-dev/pandas | 44,876 |
ansible-test: make the httptester for Windows more resilient around the shell chosen
index e593d8b21210e2..f0310a8d22322c 100644
--- a/test/runner/lib/executor.py
+++ b/test/runner/lib/executor.py
@@ -583,9 +583,9 @@ def forward_ssh_ports(target):
manage = ManageWindowsCI(remote)
manage.upload("test/runner/setup/windows-httptester.ps1", watcher_path)
- # need to use -Command as we cannot pass an array of values with -File
- script = "powershell.exe -NoProfile -ExecutionPolicy Bypass -Command .\\%s -Hosts %s" \
- % (watcher_path, ", ".join(HTTPTESTER_HOSTS))
+ # We cannot pass an array of string with -File so we just use a delimiter for multiple values
+ script = "powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\\%s -Hosts \"%s\"" \
+ % (watcher_path, "|".join(HTTPTESTER_HOSTS))
if args.verbosity > 3:
script += " -Verbose"
manage.ssh(script, options=ssh_options, force_pty=False)
@@ -600,7 +600,7 @@ def cleanup_ssh_ports(target):
for remote in [r for r in remotes if r.version != '2008']:
# delete the tmp file that keeps the http-tester alive
manage = ManageWindowsCI(remote)
- manage.ssh("del %s /F /Q" % watcher_path)
+ manage.ssh("cmd.exe /c \"del %s /F /Q\"" % watcher_path, force_pty=False)
watcher_path = "ansible-test-http-watcher-%s.ps1" % time.time()
pre_target = forward_ssh_ports
diff --git a/test/runner/setup/windows-httptester.ps1 b/test/runner/setup/windows-httptester.ps1
index 27abad018359e5..b854ef14b4be16 100644
--- a/test/runner/setup/windows-httptester.ps1
+++ b/test/runner/setup/windows-httptester.ps1
@@ -9,13 +9,14 @@ Run this with SSH with the -R arguments to foward ports 8080 and 8443 to the
httptester container.
.PARAMETER Hosts
-A list of hostnames to add to the Windows hosts file for the httptester
-container.
+A list of hostnames, delimited by '|', to add to the Windows hosts file for the
+httptester container, e.g. 'ansible.host.com|secondary.host.test'.
#>
[CmdletBinding()]
param(
- [Parameter(Mandatory=$true, Position=0)][String[]]$Hosts
+ [Parameter(Mandatory=$true, Position=0)][String]$Hosts
)
+$Hosts = $Hosts.Split('|')
$ProgressPreference = "SilentlyContinue"
$ErrorActionPreference = "Stop"
| ##### SUMMARY
Currently the httptester bootstrap process for Windows only works when the default shell for SSH is set to `cmd` (or not set at all). This change makes the process compatible with both `cmd` and `powershell`.
##### ISSUE TYPE
- Bugfix Pull Request
##### COMPONENT NAME
ansible-test | https://api.github.com/repos/ansible/ansible/pulls/51416 | 2019-01-29T05:14:01Z | 2019-01-29T21:50:24Z | 2019-01-29T21:50:24Z | 2019-07-25T16:37:55Z | 662 | ansible/ansible | 49,254 |
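The workaround in the diff above — flattening the host list into one delimiter-joined argument because PowerShell's `-File` mode cannot receive an array, then splitting it back apart inside the script — can be sketched in Python (the helper names below are illustrative, not part of ansible-test):

```python
def pack_hosts(hosts, delimiter="|"):
    """Flatten a host list into a single string argument.

    Mirrors '"|".join(HTTPTESTER_HOSTS)' on the caller side of the diff.
    """
    for host in hosts:
        if delimiter in host:
            raise ValueError(f"host {host!r} contains the delimiter {delimiter!r}")
    return delimiter.join(hosts)


def unpack_hosts(packed, delimiter="|"):
    """Recover the list, mirroring $Hosts.Split('|') inside the script."""
    return packed.split(delimiter)


hosts = ["ansible.http.tests", "sni1.ansible.http.tests", "fail.ansible.http.tests"]
packed = pack_hosts(hosts)
print(packed)  # ansible.http.tests|sni1.ansible.http.tests|fail.ansible.http.tests
assert unpack_hosts(packed) == hosts
```

The guard against the delimiter appearing inside a value is the usual caveat of this trick; hostnames can never contain `|`, which is why it is a safe choice here.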
hitbtc more accountsByType mapping | diff --git a/js/hitbtc.js b/js/hitbtc.js
index 5bc3b8fd7b8a..035bcd3a0cb0 100644
--- a/js/hitbtc.js
+++ b/js/hitbtc.js
@@ -177,6 +177,9 @@ module.exports = class hitbtc extends Exchange {
'bank': 'bank',
'exchange': 'exchange',
'main': 'bank', // alias of the above
+ 'funding': 'bank',
+ 'spot': 'exchange',
+ 'trade': 'exchange',
'trading': 'exchange',
},
'fetchBalanceMethod': {
| https://api.github.com/repos/ccxt/ccxt/pulls/9735 | 2021-08-03T13:10:16Z | 2021-08-03T14:00:47Z | 2021-08-03T14:00:47Z | 2021-08-03T14:00:47Z | 146 | ccxt/ccxt | 13,737 | |
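A minimal sketch of the alias-resolution pattern this mapping enables — several user-facing names funnel to one internal account id (the helper below is illustrative, not ccxt's actual API):

```python
ACCOUNTS_BY_TYPE = {
    "bank": "bank",
    "exchange": "exchange",
    "main": "bank",       # aliases resolving to the same internal id
    "funding": "bank",
    "spot": "exchange",
    "trade": "exchange",
    "trading": "exchange",
}


def resolve_account_type(requested: str) -> str:
    """Map a user-facing account name to the exchange's internal id."""
    key = requested.lower()
    if key not in ACCOUNTS_BY_TYPE:
        allowed = ", ".join(sorted(set(ACCOUNTS_BY_TYPE)))
        raise ValueError(f"unknown account type {requested!r}; expected one of: {allowed}")
    return ACCOUNTS_BY_TYPE[key]


print(resolve_account_type("funding"))   # bank
print(resolve_account_type("spot"))      # exchange
```

Adding new aliases is then a one-line dictionary change, which is exactly what the diff does.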
Enable no-else-return rule in ESLint | diff --git a/frontend/.eslintrc b/frontend/.eslintrc
index 6cb6df15b1f4..b91c8706dbd2 100644
--- a/frontend/.eslintrc
+++ b/frontend/.eslintrc
@@ -114,7 +114,8 @@
"no-relative-import-paths/no-relative-import-paths": [
"error",
{ "allowSameFolder": true, "rootDir": "src", "prefix": "src" }
- ]
+ ],
+ "no-else-return": ["error", {"allowElseIf": false}]
},
"settings": {
"react": {
| <!--
Before contributing (PLEASE READ!)
⚠️ If your contribution is more than a few lines of code, then prior to starting to code on it please post in the issue saying you want to volunteer, then wait for a positive response. And if there is no issue for it yet, create it first.
This helps make sure:
1. Two people aren't working on the same thing
2. This is something Streamlit's maintainers believe should be implemented/fixed
3. Any API, UI, or deeper architectural changes that need to be implemented have been fully thought through by Streamlit's maintainers
4. Your time is well spent!
More information in our wiki: https://github.com/streamlit/streamlit/wiki/Contributing
-->
## 📚 Context
Discussion: https://github.com/streamlit/streamlit/pull/6174#discussion_r1122843055
_Please describe the project or issue background here_
- What kind of change does this PR introduce?
- [ ] Bugfix
- [ ] Feature
- [ ] Refactoring
- [X] Other, please describe:
## 🧠 Description of Changes
- _Add bullet points summarizing your changes here_
- [ ] This is a breaking API change
- [ ] This is a visible (user-facing) change
**Revised:**
_Insert screenshot of your updated UI/code here_
**Current:**
_Insert screenshot of existing UI/code here_
## 🧪 Testing Done
- [ ] Screenshots included
- [ ] Added/Updated unit tests
- [ ] Added/Updated e2e tests
## 🌐 References
_Does this depend on other work, documents, or tickets?_
- **Issue**: Closes #XXXX
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/6205 | 2023-03-02T11:26:06Z | 2023-03-09T09:18:30Z | 2023-03-09T09:18:30Z | 2023-03-09T09:59:46Z | 148 | streamlit/streamlit | 22,004 |
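`no-else-return` (with `allowElseIf: false`) flags `else` branches that follow a `return`; the early-return shape it enforces is language-agnostic. A hedged illustration in Python of the flagged vs. preferred forms:

```python
# Flagged shape: every else is redundant because the if-branch already returns.
def clamp_flagged(value, lo, hi):
    if value < lo:
        return lo
    else:
        if value > hi:
            return hi
        else:
            return value


# Preferred shape: early returns, no else, one less level of indentation.
def clamp(value, lo, hi):
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value


assert clamp(5, 0, 10) == clamp_flagged(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(42, 0, 10) == 10
```

Both functions behave identically; the rule is purely about readability of the control flow.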
Fix typo | diff --git a/docs/docs_skeleton/docs/get_started/quickstart.mdx b/docs/docs_skeleton/docs/get_started/quickstart.mdx
index 8250083adc48d0..8af28aa2970ba2 100644
--- a/docs/docs_skeleton/docs/get_started/quickstart.mdx
+++ b/docs/docs_skeleton/docs/get_started/quickstart.mdx
@@ -138,7 +138,7 @@ The chains and agents we've looked at so far have been stateless, but for many a
The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state.
-There are a number of built-in memory systems. The simplest of these are is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.
+There are a number of built-in memory systems. The simplest of these is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.
import MemoryLLM from "@snippets/get_started/quickstart/memory_llms.mdx"
import MemoryChatModel from "@snippets/get_started/quickstart/memory_chat_models.mdx"
diff --git a/langchain/document_loaders/url.py b/langchain/document_loaders/url.py
index be14ed27c82566..3007fe9954f879 100644
--- a/langchain/document_loaders/url.py
+++ b/langchain/document_loaders/url.py
@@ -117,7 +117,7 @@ def load(self) -> List[Document]:
elements = partition_html(url=url, **self.unstructured_kwargs)
except Exception as e:
if self.continue_on_failure:
- logger.error(f"Error fetching or processing {url}, exeption: {e}")
+ logger.error(f"Error fetching or processing {url}, exception: {e}")
continue
else:
raise e
| This PR fixes a typo. | https://api.github.com/repos/langchain-ai/langchain/pulls/7023 | 2023-07-01T16:38:01Z | 2023-07-02T07:17:31Z | 2023-07-02T07:17:30Z | 2023-07-02T08:22:42Z | 443 | langchain-ai/langchain | 42,895 |
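The second hunk sits inside a common loader pattern — on a per-URL failure, either log and continue or fail fast. A hedged sketch of that control flow (names are illustrative, not LangChain's actual loader):

```python
import logging

logger = logging.getLogger(__name__)


def load_urls(urls, fetch, continue_on_failure=True):
    """Fetch each URL, either skipping failures with a log entry or failing fast."""
    results = []
    for url in urls:
        try:
            results.append(fetch(url))
        except Exception as e:
            if continue_on_failure:
                logger.error(f"Error fetching or processing {url}, exception: {e}")
                continue
            raise
    return results


def fake_fetch(url):
    if "bad" in url:
        raise RuntimeError("boom")
    return f"<{url}>"


print(load_urls(["a", "bad", "b"], fake_fetch))  # ['<a>', '<b>']
```

With `continue_on_failure=False` the first exception propagates, matching the `raise e` branch in the diff.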
Added LectServe to Calendar | diff --git a/README.md b/README.md
index 33bd438deb..918b513263 100644
--- a/README.md
+++ b/README.md
@@ -97,6 +97,7 @@ For information on contributing to this project, please see the [contributing gu
| Church Calendar | Catholic liturgical calendar | No | No | [Go!](http://calapi.inadiutorium.cz/) |
| Date and Time | Global Date and Time | No | No | [Go!](http://www.timeanddate.com/services/api/) |
| Holidays | Free API for obtaining information about holidays. | No | No | [Go!](http://holidayapi.com/) |
+| LectServe | Protestant liturgical calendar | No | No | [Go!](http://www.lectserve.com) |
| Non-Working Days | Database of ICS files for non working days | No | Yes | [Go!](https://github.com/gadael/icsdb) |
### Cloud Storage & File Sharing
| LectServe is a Protestant lectionary featuring the Revised Common
Lectionary and the ACNA Lectionary | https://api.github.com/repos/public-apis/public-apis/pulls/287 | 2017-02-15T20:27:11Z | 2017-02-15T20:58:56Z | 2017-02-15T20:58:56Z | 2017-02-15T20:58:56Z | 221 | public-apis/public-apis | 35,195 |
[RLlib contrib] Fix rllib contrib readmes | diff --git a/rllib_contrib/a3c/README.md b/rllib_contrib/a3c/README.md
new file mode 100644
index 0000000000000..897ebfea96b39
--- /dev/null
+++ b/rllib_contrib/a3c/README.md
@@ -0,0 +1,17 @@
+# A3C (Asynchronous Advantage Actor-Critic)
+
+[A3C](https://arxiv.org/abs/1602.01783) is the asynchronous version of A2C, where gradients are computed on the workers directly after trajectory rollouts, and only then shipped to a central learner to accumulate these gradients on the central model. After the central model update, parameters are broadcast back to all workers. Similar to A2C, A3C scales to 16-32+ worker processes depending on the environment.
+
+
+## Installation
+
+```
+conda create -n rllib-a3c python=3.10
+conda activate rllib-a3c
+pip install -r requirements.txt
+pip install -e '.[development]'
+```
+
+## Usage
+
+[A3C Example](examples/a3c_cartpole_v1.py)
\ No newline at end of file
diff --git a/rllib_contrib/a3c/README.rst b/rllib_contrib/a3c/README.rst
deleted file mode 100644
index df3665c1408e5..0000000000000
--- a/rllib_contrib/a3c/README.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-A3C (Asynchronous Advantage Actor-Critic)
------------------------------------------
-
-`A3C <https://arxiv.org/abs/1602.01783>` is the asynchronous version of A2C, where gradients are computed on the workers directly after trajectory rollouts, and only then shipped to a central learner to accumulate these gradients on the central model. After the central model update, parameters are broadcast back to all workers. Similar to A2C, A3C scales to 16-32+ worker processes depending on the environment.
-
-
-Installation
-------------
-
-.. code-block:: bash
-
- conda create -n rllib-a3c python=3.10
- conda activate rllib-a3c
- pip install -r requirements.txt
- pip install -e '.[development]'
-
-
-Usage
------
-
-.. literalinclude:: examples/a3c_cartpole_v1.py
\ No newline at end of file
diff --git a/rllib_contrib/maml/README.md b/rllib_contrib/maml/README.md
new file mode 100644
index 0000000000000..694ef0fcb502b
--- /dev/null
+++ b/rllib_contrib/maml/README.md
@@ -0,0 +1,23 @@
+# MAML (Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
+
+[MAML](https://arxiv.org/abs/1703.03400) is an on-policy meta RL algorithm. Unlike standard RL algorithms, which aim to maximize the sum of rewards into the future for a single task (e.g. HalfCheetah), meta RL algorithms seek to maximize the sum of rewards for *a given distribution of tasks*.
+
+On a high level, MAML seeks to learn quick adaptation across different tasks (e.g. different velocities for HalfCheetah). Quick adaptation is defined by the number of gradient steps it takes to adapt. MAML aims to maximize the RL objective for each task after `X` gradient steps. Doing this requires partitioning the algorithm into two steps. The first step is data collection. This involves collecting data for each task for each step of adaptation (from `1, 2, ..., X`). The second step is the meta-update step. This second step takes all the aggregated ddata from the first step and computes the meta-gradient.
+
+Code here is adapted from https://github.com/jonasrothfuss, which outperforms vanilla MAML and avoids computation of the higher order gradients during the meta-update step. MAML is evaluated on custom environments that are described in greater detail here.
+
+MAML uses additional metrics to measure performance; episode_reward_mean measures the agent’s returns before adaptation, episode_reward_mean_adapt_N measures the agent’s returns after N gradient steps of inner adaptation, and adaptation_delta measures the difference in performance before and after adaptation.
+
+
+## Installation
+
+```
+conda create -n rllib-maml python=3.10
+conda activate rllib-maml
+pip install -r requirements.txt
+pip install -e '.[development]'
+```
+
+## Usage
+
+[MAML Example](examples/cartpole_mass_maml.py)
\ No newline at end of file
diff --git a/rllib_contrib/maml/README.rst b/rllib_contrib/maml/README.rst
deleted file mode 100644
index 912fca39ed35c..0000000000000
--- a/rllib_contrib/maml/README.rst
+++ /dev/null
@@ -1,27 +0,0 @@
-MAML (Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
-------------------------------------------------------------------------
-
-`MAML <https://arxiv.org/abs/1703.03400>` is an on-policy meta RL algorithm. Unlike standard RL algorithms, which aim to maximize the sum of rewards into the future for a single task (e.g. HalfCheetah), meta RL algorithms seek to maximize the sum of rewards for *a given distribution of tasks*.
-
-On a high level, MAML seeks to learn quick adaptation across different tasks (e.g. different velocities for HalfCheetah). Quick adaptation is defined by the number of gradient steps it takes to adapt. MAML aims to maximize the RL objective for each task after `X` gradient steps. Doing this requires partitioning the algorithm into two steps. The first step is data collection. This involves collecting data for each task for each step of adaptation (from `1, 2, ..., X`). The second step is the meta-update step. This second step takes all the aggregated ddata from the first step and computes the meta-gradient.
-
-Code here is adapted from `https://github.com/jonasrothfuss`, which outperforms vanilla MAML and avoids computation of the higher order gradients during the meta-update step. MAML is evaluated on custom environments that are described in greater detail here.
-
-MAML uses additional metrics to measure performance; episode_reward_mean measures the agent’s returns before adaptation, episode_reward_mean_adapt_N measures the agent’s returns after N gradient steps of inner adaptation, and adaptation_delta measures the difference in performance before and after adaptation.
-
-
-Installation
-------------
-
-.. code-block:: bash
-
- conda create -n rllib-maml python=3.10
- conda activate rllib-maml
- pip install -r requirements.txt
- pip install -e '.[development]'
-
-
-Usage
------
-
-.. literalinclude:: examples/cartpole_mass_maml.py
\ No newline at end of file
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
The formatting is broken in the rllib contrib readmes. This PR fixes it by using relative links.
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a
method in Tune, I've added it in `doc/source/tune/api/` under the
corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/35347 | 2023-05-15T18:29:06Z | 2023-05-18T18:24:26Z | 2023-05-18T18:24:26Z | 2023-05-18T18:24:26Z | 1,530 | ray-project/ray | 19,538 |
Add missing newline to delimit sections in blns.txt | diff --git a/blns.txt b/blns.txt
index 6f2a848..019041b 100644
--- a/blns.txt
+++ b/blns.txt
@@ -214,6 +214,7 @@ __ロ(,_,*)
﷽
ﷺ
مُنَاقَشَةُ سُبُلِ اِسْتِخْدَامِ اللُّغَةِ فِي النُّظُمِ الْقَائِمَةِ وَفِيم يَخُصَّ التَّطْبِيقَاتُ الْحاسُوبِيَّةُ،
+
# Unicode Spaces
#
# Strings which contain unicode space characters with special properties (c.f. https://www.cs.tut.fi/~jkorpela/chars/spaces.html)
| https://api.github.com/repos/minimaxir/big-list-of-naughty-strings/pulls/119 | 2017-01-16T21:33:04Z | 2017-01-16T21:35:35Z | 2017-01-16T21:35:35Z | 2017-01-19T04:29:41Z | 177 | minimaxir/big-list-of-naughty-strings | 4,880 | |
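The blank line matters because consumers of `blns.txt` typically split sections on comment blocks that follow an empty line; without the newline, the `# Unicode Spaces` header reads as a trailing description of the previous section. A hedged parser sketch showing the effect:

```python
def parse_blns(text):
    """Group blns.txt-style lines into sections delimited by '#' comment blocks.

    The first comment line after a blank line (or file start) names the
    section; further comment lines in the same block are its description.
    """
    sections = {}
    current = None
    at_break = True          # True at file start and right after a blank line
    for line in text.splitlines():
        if not line.strip():
            at_break = True
            continue
        if line.startswith("#"):
            if at_break:     # first comment line after a break: new section
                current = line.lstrip("#").strip()
                sections[current] = []
            at_break = False
        else:
            if current is not None:
                sections[current].append(line)
            at_break = False
    return sections


good = "#\tSection A\n#\n#\tdesc\nfoo\nbar\n\n#\tSection B\nbaz\n"
print(parse_blns(good))  # {'Section A': ['foo', 'bar'], 'Section B': ['baz']}

# Without the blank line, 'Section B' is swallowed into 'Section A':
bad = "#\tSection A\nfoo\n#\tSection B\nbaz\n"
print(parse_blns(bad))   # {'Section A': ['foo', 'baz']}
```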
show host_vars/ also in --graph (#56307) | diff --git a/changelogs/fragments/show_host_vars_in_graph.yml b/changelogs/fragments/show_host_vars_in_graph.yml
new file mode 100644
index 00000000000000..fa747080a56be3
--- /dev/null
+++ b/changelogs/fragments/show_host_vars_in_graph.yml
@@ -0,0 +1,2 @@
+bugfixes:
+ - show host_vars in ansible-inventory's --graph option.
diff --git a/lib/ansible/cli/inventory.py b/lib/ansible/cli/inventory.py
index a1ee255488a2ee..fedf6fd36ceccc 100644
--- a/lib/ansible/cli/inventory.py
+++ b/lib/ansible/cli/inventory.py
@@ -130,7 +130,6 @@ def run(self):
raise AnsibleOptionsError("You must pass a single valid host to --host parameter")
myvars = self._get_host_variables(host=hosts[0])
- self._remove_internal(myvars)
# FIXME: should we template first?
results = self.dump(myvars)
@@ -222,21 +221,22 @@ def _get_group_variables(self, group):
if group.priority != 1:
res['ansible_group_priority'] = group.priority
- return res
+ return self._remove_internal(res)
def _get_host_variables(self, host):
if context.CLIARGS['export']:
+ # only get vars defined directly host
hostvars = host.get_vars()
- # FIXME: add switch to skip vars plugins
- # add vars plugin info
+ # FIXME: add switch to skip vars plugins, add vars plugin info
for inventory_dir in self.inventory._sources:
hostvars = combine_vars(hostvars, self.get_plugin_vars(inventory_dir, host))
else:
+ # get all vars flattened by host, but skip magic hostvars
hostvars = self.vm.get_vars(host=host, include_hostvars=False)
- return hostvars
+ return self._remove_internal(hostvars)
def _get_group(self, gname):
group = self.inventory.groups.get(gname)
@@ -249,6 +249,8 @@ def _remove_internal(dump):
if internal in dump:
del dump[internal]
+ return dump
+
@staticmethod
def _remove_empty(dump):
# remove empty keys
@@ -259,7 +261,6 @@ def _remove_empty(dump):
@staticmethod
def _show_vars(dump, depth):
result = []
- InventoryCLI._remove_internal(dump)
if context.CLIARGS['show_vars']:
for (name, val) in sorted(dump.items()):
result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth))
@@ -281,7 +282,7 @@ def _graph_group(self, group, depth=0):
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
result.append(self._graph_name(host.name, depth))
- result.extend(self._show_vars(host.get_vars(), depth + 1))
+ result.extend(self._show_vars(self._get_host_variables(host), depth + 1))
result.extend(self._show_vars(self._get_group_variables(group), depth))
@@ -327,7 +328,6 @@ def format_group(group):
for host in hosts:
hvars = self._get_host_variables(host)
if hvars:
- self._remove_internal(hvars)
results['_meta']['hostvars'][host.name] = hvars
return results
@@ -356,7 +356,6 @@ def format_group(group):
if h.name not in seen: # avoid defining host vars more than once
seen.append(h.name)
myvars = self._get_host_variables(host=h)
- self._remove_internal(myvars)
results[group.name]['hosts'][h.name] = myvars
if context.CLIARGS['export']:
@@ -391,7 +390,6 @@ def format_group(group):
if host.name not in seen:
seen.add(host.name)
host_vars = self._get_host_variables(host=host)
- self._remove_internal(host_vars)
else:
host_vars = {}
try:
| * show host_vars/ also in --graph
fixes #53422
(cherry picked from commit de87b25a450f08307e7f7bcb241144e1b2c52aeb)
##### ISSUE TYPE
<!--- Pick one below and delete the rest -->
- Bugfix Pull Request
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below -->
ansible-inventory
| https://api.github.com/repos/ansible/ansible/pulls/57857 | 2019-06-14T17:41:37Z | 2019-06-18T04:58:42Z | 2019-06-18T04:58:42Z | 2019-08-05T16:01:19Z | 942 | ansible/ansible | 49,550 |
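The refactor has `_remove_internal` return the mapping it just cleaned, so each variable-producing helper can end with a single `return self._remove_internal(res)` instead of mutating and returning in two steps. A hedged sketch of that pattern (the key list is illustrative, not Ansible's full internal set):

```python
INTERNAL_VARS = frozenset(
    ["ansible_facts", "ansible_version", "inventory_file", "inventory_dir", "playbook_dir"]
)


def remove_internal(dump: dict) -> dict:
    """Strip internal keys in place and return the same dict for chaining."""
    for key in INTERNAL_VARS:
        dump.pop(key, None)
    return dump


def get_host_variables(raw: dict) -> dict:
    # ...vars would be gathered from inventory sources and vars plugins here...
    return remove_internal(raw)


hostvars = get_host_variables({"http_port": 8080, "ansible_facts": {"os": "linux"}})
print(hostvars)  # {'http_port': 8080}
```

Because the same object is returned, callers that previously mutated the dict and ignored the result keep working unchanged.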
feat(gate): fix fetchOpenInterestHistory | diff --git a/ts/src/gate.ts b/ts/src/gate.ts
index 6534533f7357..1e8a9e7e0996 100644
--- a/ts/src/gate.ts
+++ b/ts/src/gate.ts
@@ -5577,8 +5577,8 @@ export default class gate extends Exchange {
*/
await this.loadMarkets ();
const market = this.market (symbol);
- if (!market['future']) {
- throw new BadRequest (this.id + ' fetchOpenInterest() supports future markets only');
+ if (!market['swap']) {
+ throw new BadRequest (this.id + ' fetchOpenInterest() supports swap markets only');
}
const request = {
'contract': market['id'],
| - fixes https://github.com/ccxt/ccxt/issues/18427
DEMO
```
p gate fetchOpenInterestHistory "BTC/USDT:USDT" "5m" None 2
Python v3.10.9
CCXT v4.0.3
gate.fetchOpenInterestHistory(BTC/USDT:USDT,5m,None,2)
[{'datetime': '2023-07-01T13:50:00.000Z',
'info': {'long_liq_amount': '0',
'long_liq_size': '0',
'long_liq_usd': '0',
'lsr_account': '1.1503805175038',
'lsr_taker': '0.42660506698284',
'mark_price': '30551.93',
'open_interest': '150508526',
'open_interest_usd': '459832595.07552',
'short_liq_amount': '0',
'short_liq_size': '0',
'short_liq_usd': '0',
'time': '1688219400',
'top_lsr_account': '1.1538461538462',
'top_lsr_size': '0.98266500683631'},
'openInterestAmount': 150508526.0,
'openInterestValue': 459832595.07552,
'symbol': 'BTC/USDT:USDT',
'timestamp': 1688219400000},
{'datetime': '2023-07-01T13:55:00.000Z',
'info': {'long_liq_amount': '0',
'long_liq_size': '0',
'long_liq_usd': '0',
'lsr_account': '1.1369156041287',
'lsr_taker': '1.3917298866723',
'mark_price': '30571.56',
'open_interest': '150153332',
'open_interest_usd': '459042159.84379',
'short_liq_amount': '0',
'short_liq_size': '0',
'short_liq_usd': '0',
'time': '1688219700',
'top_lsr_account': '1.1538461538462',
'top_lsr_size': '0.98311588029717'},
'openInterestAmount': 150153332.0,
'openInterestValue': 459042159.84379,
'symbol': 'BTC/USDT:USDT',
'timestamp': 1688219700000}]
```
```
p gate fetchOpenInterestHistory "BTC/USD:BTC" "5m" None 1
Python v3.10.9
CCXT v4.0.3
gate.fetchOpenInterestHistory(BTC/USD:BTC,5m,None,1)
[{'datetime': '2023-07-01T13:55:00.000Z',
'info': {'long_liq_amount': '0',
'long_liq_size': '0',
'long_liq_usd': '0',
'lsr_account': '1.4117647058824',
'lsr_taker': '0.48571428571429',
'mark_price': '30510.17',
'open_interest': '51699501',
'open_interest_usd': '51699501',
'short_liq_amount': '0',
'short_liq_size': '0',
'short_liq_usd': '0',
'time': '1688219700',
'top_lsr_account': '1.0181818181818',
'top_lsr_size': '0.987322909593'},
'openInterestAmount': 51699501.0,
'openInterestValue': 51699501.0,
'symbol': 'BTC/USD:BTC',
'timestamp': 1688219700000}]
```
| https://api.github.com/repos/ccxt/ccxt/pulls/18433 | 2023-07-01T13:59:37Z | 2023-07-01T14:29:57Z | 2023-07-01T14:29:57Z | 2023-07-02T12:44:35Z | 164 | ccxt/ccxt | 13,493 |
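The fix swaps the market-type guard from `future` to `swap`, since the endpoint serves perpetual contracts. A hedged Python sketch of validating the market before building the request (a standalone stand-in, not ccxt's actual method):

```python
class BadRequest(Exception):
    pass


def check_open_interest_market(market: dict) -> dict:
    """Reject non-swap markets before building the open-interest request."""
    if not market.get("swap"):
        raise BadRequest("fetchOpenInterest() supports swap markets only")
    return {"contract": market["id"]}


print(check_open_interest_market({"id": "BTC_USDT", "swap": True}))
# {'contract': 'BTC_USDT'}

try:
    check_open_interest_market({"id": "BTC_USDT_20240628", "swap": False, "future": True})
except BadRequest as e:
    print(e)  # fetchOpenInterest() supports swap markets only
```

Failing early with a typed error keeps the bad request from ever reaching the exchange, which is the point of the guard.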
Validate version of protoc and protobuff | diff --git a/.circleci/config.yml b/.circleci/config.yml
index c648ed1f4783..4e26fedfb992 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -591,7 +591,7 @@ jobs:
python -m venv venv
source venv/bin/activate
pip install --upgrade pip
- pip install pipenv mypy mypy-protobuf
+ pip install pipenv mypy mypy-protobuf 'protobuf<4'
deactivate
# Add 'activate venv' to $BASH_ENV. This means that our venv will be active
diff --git a/Makefile b/Makefile
index 2d73cd9f4280..6308affaa8a2 100644
--- a/Makefile
+++ b/Makefile
@@ -217,6 +217,19 @@ clean:
# Recompile Protobufs for Python and the frontend.
protobuf:
@# Python protobuf generation
+ if ! command -v protoc &> /dev/null ; then \
+ echo "protoc not installed."; \
+ exit 1; \
+ fi
+ protoc_version=$$(protoc --version | cut -d ' ' -f 2); \
+ protobuf_version=$$(pip show protobuf | grep Version | cut -d " " -f 2-); \
+ if [[ "$${protoc_version%.*.*}" != "$${protobuf_version%.*.*}" ]] ; then \
+ echo -e '\033[31m WARNING: Protoc and protobuf version mismatch \033[0m'; \
+ echo "To avoid compatibility issues, please ensure that the protoc version matches the protobuf version you have installed."; \
+ echo "protoc version: $${protoc_version}"; \
+ echo "protobuf version: $${protobuf_version}"; \
+ echo -n "Do you want to continue anyway? [y/N] " && read ans && [ $${ans:-N} = y ]; \
+ fi
protoc \
--proto_path=proto \
--python_out=lib \
| <!--
Before contributing (PLEASE READ!)
⚠️ If your contribution is more than a few lines of code, then prior to starting to code on it please post in the issue saying you want to volunteer, then wait for a positive response. And if there is no issue for it yet, create it first.
This helps make sure:
1. Two people aren't working on the same thing
2. This is something Streamlit's maintainers believe should be implemented/fixed
3. Any API, UI, or deeper architectural changes that need to be implemented have been fully thought through by Streamlit's maintainers
4. Your time is well spent!
More information in our wiki: https://github.com/streamlit/streamlit/wiki/Contributing
-->
## 📚 Context
We need the old version of `protoc`, because the new version of protoc is not compatible with the currently supported version of `protobuf`.
To improve DX, I add a version check to warn a developer when they is using an incompatible version of `protoc`.
_Please describe the project or issue background here_
- What kind of change does this PR introduce?
- [X] Bugfix
- [ ] Feature
- [ ] Refactoring
- [ ] Other, please describe:
## 🧠 Description of Changes
- _Add bullet points summarizing your changes here_
- [ ] This is a breaking API change
- [ ] This is a visible (user-facing) change
**Revised:**
_Insert screenshot of your updated UI/code here_
**Current:**
_Insert screenshot of existing UI/code here_
## 🧪 Testing Done
- [ ] Screenshots included
- [ ] Added/Updated unit tests
- [ ] Added/Updated e2e tests
## 🌐 References
_Does this depend on other work, documents, or tickets?_
- **Issue**: Closes #XXXX
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/5070 | 2022-08-01T15:48:13Z | 2022-08-03T10:53:45Z | 2022-08-03T10:53:45Z | 2023-10-05T19:27:52Z | 471 | streamlit/streamlit | 22,159 |
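The Makefile's `$${version%.*.*}` expansion drops the last two dot-components before comparing, so `3.20.3` is checked against `3.20.1` as `3` vs `3`. A hedged Python equivalent of that check:

```python
def strip_patch_minor(version: str) -> str:
    """Mimic the shell expansion ${version%.*.*}: drop the last two components.

    '3.20.3' -> '3'; a version with fewer than three components, such as
    '21.5', is returned unchanged, matching shell behaviour when the
    pattern finds no suffix to remove.
    """
    parts = version.split(".")
    return ".".join(parts[:-2]) if len(parts) >= 3 else version


def protoc_matches_protobuf(protoc_version: str, protobuf_version: str) -> bool:
    """True when the Makefile's guard would let 'make protobuf' proceed quietly."""
    return strip_patch_minor(protoc_version) == strip_patch_minor(protobuf_version)


print(protoc_matches_protobuf("3.19.4", "3.19.1"))  # True
print(protoc_matches_protobuf("21.5", "3.20.1"))    # False: new protoc, old protobuf
```

The interactive "continue anyway?" prompt in the Makefile then lets a developer override the warning deliberately rather than silently generating incompatible stubs.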
Create `labels` dir on labels save | diff --git a/val.py b/val.py
index 8da3ef7667a..b3d05f4305c 100644
--- a/val.py
+++ b/val.py
@@ -72,7 +72,8 @@ def save_one_json(predn, jdict, path, class_map):
def process_batch(detections, labels, iouv):
"""
- Return correct prediction matrix
+ Return correct prediction matrix.
+
Arguments:
detections (array[N, 6]), x1, y1, x2, y2, conf, class
labels (array[M, 5]), class, x1, y1, x2, y2
@@ -258,6 +259,7 @@ def run(
# Save/log
if save_txt:
+ (save_dir / 'labels').mkdir(parents=True, exist_ok=True)
save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt')
if save_json:
save_one_json(predn, jdict, path, class_map) # append to COCO-JSON dictionary
| <!--
Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started:
- Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists.
- Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented.
- Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable).
Please see our ✅ [Contributing Guide](https://docs.ultralytics.com/help/contributing) for more details.
Note that Copilot will summarize this PR below, do not modify the 'copilot:all' line.
-->
copilot:all
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Improved validation processes and outputs for YOLOv5 model.
### 📊 Key Changes
- Added a full stop to a documentation comment for consistency in code documentation.
- Enhanced the saving process by ensuring the 'labels' directory is created if it does not exist when saving output in text format.
### 🎯 Purpose & Impact
- 🧐 Enhances code readability through proper documentation.
- 📂 Ensures robustness in the file-output system by automatically handling the creation of necessary directories, improving user experience and reducing potential errors when saving data. | https://api.github.com/repos/ultralytics/yolov5/pulls/12551 | 2023-12-26T08:14:28Z | 2023-12-26T22:58:32Z | 2023-12-26T22:58:32Z | 2024-01-19T00:50:29Z | 252 | ultralytics/yolov5 | 25,017 |
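The one-line fix follows the usual lazy-directory pattern — create the output directory immediately before the first write, with `exist_ok=True` so repeated saves are harmless. A hedged standalone sketch of that behaviour:

```python
from pathlib import Path
import tempfile


def save_one_txt(lines, file: Path) -> None:
    """Write one label file, creating the parent directory on demand."""
    file.parent.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
    file.write_text("\n".join(lines) + "\n")


with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / "exp" / "labels" / "image0.txt"
    save_one_txt(["0 0.5 0.5 0.2 0.2"], out)   # 'exp/labels' did not exist yet
    save_one_txt(["1 0.1 0.1 0.3 0.3"], out)   # second call: mkdir is a no-op
    print(out.read_text().strip())             # 1 0.1 0.1 0.3 0.3
```

Creating the directory at write time (rather than unconditionally at startup) also avoids leaving an empty `labels/` folder behind when `--save-txt` is not used.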
fixed regexp to add support of unicode for strings | diff --git a/metagpt/actions/rebuild_sequence_view.py b/metagpt/actions/rebuild_sequence_view.py
index 0e67de908..2aac9bf20 100644
--- a/metagpt/actions/rebuild_sequence_view.py
+++ b/metagpt/actions/rebuild_sequence_view.py
@@ -486,7 +486,7 @@ def parse_participant(mermaid_sequence_diagram: str) -> List[str]:
Returns:
List[str]: A list of participants extracted from the sequence diagram.
"""
- pattern = r"participant ([a-zA-Z\.0-9_]+)"
+ pattern = r"participant ([\w\.]+)"
matches = re.findall(pattern, mermaid_sequence_diagram)
matches = [re.sub(r"[\\/'\"]+", "", i) for i in matches]
return matches
| **Features**
<!-- Clear and direct description of the submit features. -->
<!-- If it's a bug fix, please also paste the issue link. -->
- fixed the regexp to add Unicode support for strings
**Feature Docs**
<!-- The RFC, tutorial, or use cases about the feature if it's a pretty big update. If not, there is no need to fill. -->
**Influence**
<!-- Tell me the impact of the new feature and I'll focus on it. -->
**Result**
<!-- The screenshot/log of unittest/running result -->
**Other**
fixed the regexp to add Unicode support for strings and to improve readability | https://api.github.com/repos/geekan/MetaGPT/pulls/1079 | 2024-03-22T13:02:23Z | 2024-04-05T14:08:40Z | 2024-04-05T14:08:40Z | 2024-04-05T14:08:40Z | 180 | geekan/MetaGPT | 16,650 |
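In Python 3, `\w` matches Unicode word characters by default, which is what widens the old explicit ASCII class. A small demonstration of the difference between the two patterns from the diff:

```python
import re

ascii_pattern = re.compile(r"participant ([a-zA-Z\.0-9_]+)")
unicode_pattern = re.compile(r"participant ([\w\.]+)")

line = "participant 用户服务"

print(ascii_pattern.findall(line))    # []
print(unicode_pattern.findall(line))  # ['用户服务']

# Plain ASCII identifiers still match exactly as before.
print(unicode_pattern.findall("participant auth_service.v2"))  # ['auth_service.v2']
```

The new pattern is also shorter: `\w` already covers `a-zA-Z0-9_`, so only the literal dot needs to be added to the class.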
certbot: Update storage.get_link_target (#4750) | diff --git a/certbot/storage.py b/certbot/storage.py
index 4f167d4eaa2..d03052dae8d 100644
--- a/certbot/storage.py
+++ b/certbot/storage.py
@@ -186,8 +186,15 @@ def get_link_target(link):
:returns: Absolute path to the target of link
:rtype: str
+ :raises .CertStorageError: If link does not exists.
+
"""
- target = os.readlink(link)
+ try:
+ target = os.readlink(link)
+ except OSError:
+ raise errors.CertStorageError(
+ "Expected {0} to be a symlink".format(link))
+
if not os.path.isabs(target):
target = os.path.join(os.path.dirname(link), target)
return os.path.abspath(target)
| * The `get_link_target` function raises `errors.CertStorageError` when
link does not exist. | https://api.github.com/repos/certbot/certbot/pulls/4923 | 2017-07-12T03:18:29Z | 2017-07-13T17:14:00Z | 2017-07-13T17:14:00Z | 2017-07-13T17:14:00Z | 187 | certbot/certbot | 1,490 |
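A hedged standalone sketch of the updated helper: `os.readlink` raises `OSError` for anything that is not a symlink, which gets wrapped into a storage-specific error, and relative targets are resolved against the link's own directory:

```python
import os
import tempfile


class CertStorageError(Exception):
    pass


def get_link_target(link: str) -> str:
    """Resolve a symlink, raising CertStorageError if it is not one."""
    try:
        target = os.readlink(link)
    except OSError:
        raise CertStorageError(f"Expected {link} to be a symlink")
    if not os.path.isabs(target):
        # Relative targets are relative to the directory containing the link.
        target = os.path.join(os.path.dirname(link), target)
    return os.path.abspath(target)


with tempfile.TemporaryDirectory() as tmp:
    real = os.path.join(tmp, "cert.pem")
    open(real, "w").close()
    link = os.path.join(tmp, "live.pem")
    os.symlink("cert.pem", link)                  # relative target
    assert get_link_target(link) == os.path.abspath(real)
    try:
        get_link_target(real)                     # a plain file, not a symlink
    except CertStorageError as e:
        print(e)  # Expected .../cert.pem to be a symlink
```

Converting the low-level `OSError` into a domain error lets callers handle "broken storage layout" uniformly instead of catching filesystem exceptions everywhere.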
Fix missing bracket in dirty_cat | diff --git a/README.md b/README.md
index 8495c5a9..67bc9517 100644
--- a/README.md
+++ b/README.md
@@ -1221,7 +1221,7 @@ be
* [Shapash](https://github.com/MAIF/shapash) : Shapash is a Python library that provides several types of visualization that display explicit labels that everyone can understand.
* [Eurybia](https://github.com/MAIF/eurybia): Eurybia monitors data and model drift over time and securizes model deployment with data validation.
* [Colossal-AI](https://github.com/hpcaitech/ColossalAI): An open-source deep learning system for large-scale model training and inference with high efficiency and low cost.
-* dirty_cat](https://github.com/dirty-cat/dirty_cat) - facilitates machine-learning on dirty, non-curated categories. It provides transformers and encoders robust to morphological variants, such as typos.
+* [dirty_cat](https://github.com/dirty-cat/dirty_cat) - facilitates machine-learning on dirty, non-curated categories. It provides transformers and encoders robust to morphological variants, such as typos.
* [Upgini](https://github.com/upgini/river): Free automated data & feature enrichment library for machine learning - automatically searches through thousands of ready-to-use features from public and community shared data sources and enriches your training dataset with only the accuracy improving features.
<a name="python-data-analysis--data-visualization"></a>
| Following #867, fixes a missing bracket | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/869 | 2022-06-30T12:15:58Z | 2022-07-10T14:28:12Z | 2022-07-10T14:28:12Z | 2022-07-10T14:28:13Z | 334 | josephmisiti/awesome-machine-learning | 51,892 |
boruvka.py: A few simplifications and f-strings | diff --git a/DIRECTORY.md b/DIRECTORY.md
index adc9bb9e4699..41485f6f0ca4 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -97,6 +97,7 @@
* [Peak Signal To Noise Ratio](https://github.com/TheAlgorithms/Python/blob/master/compression/peak_signal_to_noise_ratio.py)
## Computer Vision
+ * [Cnn Classification](https://github.com/TheAlgorithms/Python/blob/master/computer_vision/cnn_classification.py)
* [Harris Corner](https://github.com/TheAlgorithms/Python/blob/master/computer_vision/harris_corner.py)
* [Mean Threshold](https://github.com/TheAlgorithms/Python/blob/master/computer_vision/mean_threshold.py)
@@ -300,6 +301,7 @@
* [Bfs Zero One Shortest Path](https://github.com/TheAlgorithms/Python/blob/master/graphs/bfs_zero_one_shortest_path.py)
* [Bidirectional A Star](https://github.com/TheAlgorithms/Python/blob/master/graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](https://github.com/TheAlgorithms/Python/blob/master/graphs/bidirectional_breadth_first_search.py)
+ * [Boruvka](https://github.com/TheAlgorithms/Python/blob/master/graphs/boruvka.py)
* [Breadth First Search](https://github.com/TheAlgorithms/Python/blob/master/graphs/breadth_first_search.py)
* [Breadth First Search 2](https://github.com/TheAlgorithms/Python/blob/master/graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](https://github.com/TheAlgorithms/Python/blob/master/graphs/breadth_first_search_shortest_path.py)
@@ -349,6 +351,7 @@
* [Djb2](https://github.com/TheAlgorithms/Python/blob/master/hashes/djb2.py)
* [Enigma Machine](https://github.com/TheAlgorithms/Python/blob/master/hashes/enigma_machine.py)
* [Hamming Code](https://github.com/TheAlgorithms/Python/blob/master/hashes/hamming_code.py)
+ * [Luhn](https://github.com/TheAlgorithms/Python/blob/master/hashes/luhn.py)
* [Md5](https://github.com/TheAlgorithms/Python/blob/master/hashes/md5.py)
* [Sdbm](https://github.com/TheAlgorithms/Python/blob/master/hashes/sdbm.py)
* [Sha1](https://github.com/TheAlgorithms/Python/blob/master/hashes/sha1.py)
@@ -421,10 +424,12 @@
* [Binomial Distribution](https://github.com/TheAlgorithms/Python/blob/master/maths/binomial_distribution.py)
* [Bisection](https://github.com/TheAlgorithms/Python/blob/master/maths/bisection.py)
* [Ceil](https://github.com/TheAlgorithms/Python/blob/master/maths/ceil.py)
+ * [Check Valid Ip Address](https://github.com/TheAlgorithms/Python/blob/master/maths/check_valid_ip_address.py)
* [Chudnovsky Algorithm](https://github.com/TheAlgorithms/Python/blob/master/maths/chudnovsky_algorithm.py)
* [Collatz Sequence](https://github.com/TheAlgorithms/Python/blob/master/maths/collatz_sequence.py)
* [Combinations](https://github.com/TheAlgorithms/Python/blob/master/maths/combinations.py)
* [Decimal Isolate](https://github.com/TheAlgorithms/Python/blob/master/maths/decimal_isolate.py)
+ * [Double Factorial Recursive](https://github.com/TheAlgorithms/Python/blob/master/maths/double_factorial_recursive.py)
* [Entropy](https://github.com/TheAlgorithms/Python/blob/master/maths/entropy.py)
* [Euclidean Distance](https://github.com/TheAlgorithms/Python/blob/master/maths/euclidean_distance.py)
* [Euclidean Gcd](https://github.com/TheAlgorithms/Python/blob/master/maths/euclidean_gcd.py)
@@ -539,6 +544,7 @@
## Other
* [Activity Selection](https://github.com/TheAlgorithms/Python/blob/master/other/activity_selection.py)
+ * [Date To Weekday](https://github.com/TheAlgorithms/Python/blob/master/other/date_to_weekday.py)
* [Davis–Putnam–Logemann–Loveland](https://github.com/TheAlgorithms/Python/blob/master/other/davis–putnam–logemann–loveland.py)
* [Dijkstra Bankers Algorithm](https://github.com/TheAlgorithms/Python/blob/master/other/dijkstra_bankers_algorithm.py)
* [Doomsday](https://github.com/TheAlgorithms/Python/blob/master/other/doomsday.py)
@@ -854,6 +860,7 @@
* [Counting Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/counting_sort.py)
* [Cycle Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/cycle_sort.py)
* [Double Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/double_sort.py)
+ * [Exchange Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/exchange_sort.py)
* [External Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/external_sort.py)
* [Gnome Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/gnome_sort.py)
* [Heap Sort](https://github.com/TheAlgorithms/Python/blob/master/sorts/heap_sort.py)
@@ -893,6 +900,7 @@
## Strings
* [Aho Corasick](https://github.com/TheAlgorithms/Python/blob/master/strings/aho_corasick.py)
+ * [Alternative String Arrange](https://github.com/TheAlgorithms/Python/blob/master/strings/alternative_string_arrange.py)
* [Anagrams](https://github.com/TheAlgorithms/Python/blob/master/strings/anagrams.py)
* [Autocomplete Using Trie](https://github.com/TheAlgorithms/Python/blob/master/strings/autocomplete_using_trie.py)
* [Boyer Moore Search](https://github.com/TheAlgorithms/Python/blob/master/strings/boyer_moore_search.py)
@@ -902,6 +910,7 @@
* [Check Pangram](https://github.com/TheAlgorithms/Python/blob/master/strings/check_pangram.py)
* [Detecting English Programmatically](https://github.com/TheAlgorithms/Python/blob/master/strings/detecting_english_programmatically.py)
* [Frequency Finder](https://github.com/TheAlgorithms/Python/blob/master/strings/frequency_finder.py)
+ * [Indian Phone Validator](https://github.com/TheAlgorithms/Python/blob/master/strings/indian_phone_validator.py)
* [Is Palindrome](https://github.com/TheAlgorithms/Python/blob/master/strings/is_palindrome.py)
* [Jaro Winkler](https://github.com/TheAlgorithms/Python/blob/master/strings/jaro_winkler.py)
* [Knuth Morris Pratt](https://github.com/TheAlgorithms/Python/blob/master/strings/knuth_morris_pratt.py)
@@ -941,6 +950,7 @@
* [Instagram Crawler](https://github.com/TheAlgorithms/Python/blob/master/web_programming/instagram_crawler.py)
* [Instagram Pic](https://github.com/TheAlgorithms/Python/blob/master/web_programming/instagram_pic.py)
* [Instagram Video](https://github.com/TheAlgorithms/Python/blob/master/web_programming/instagram_video.py)
+ * [Random Anime Character](https://github.com/TheAlgorithms/Python/blob/master/web_programming/random_anime_character.py)
* [Recaptcha Verification](https://github.com/TheAlgorithms/Python/blob/master/web_programming/recaptcha_verification.py)
* [Slack Message](https://github.com/TheAlgorithms/Python/blob/master/web_programming/slack_message.py)
* [Test Fetch Github Info](https://github.com/TheAlgorithms/Python/blob/master/web_programming/test_fetch_github_info.py)
diff --git a/graphs/boruvka.py b/graphs/boruvka.py
index b95bcc39850e..3fa5c6fd2a26 100644
--- a/graphs/boruvka.py
+++ b/graphs/boruvka.py
@@ -1,11 +1,12 @@
"""Borůvka's algorithm.
- Determines the minimum spanning tree(MST) of a graph using the Borůvka's algorithm.
+ Determines the minimum spanning tree (MST) of a graph using the Borůvka's algorithm.
Borůvka's algorithm is a greedy algorithm for finding a minimum spanning tree in a
- graph,or a minimum spanning forest in the case of a graph that is not connected.
+ connected graph, or a minimum spanning forest if a graph that is not connected.
The time complexity of this algorithm is O(ELogV), where E represents the number
of edges, while V represents the number of nodes.
+ O(number_of_edges Log number_of_nodes)
The space complexity of this algorithm is O(V + E), since we have to keep a couple
of lists whose sizes are equal to the number of nodes, as well as keep all the
@@ -19,7 +20,7 @@
doesn't need to presort the edges or maintain a priority queue in order to find the
minimum spanning tree.
Even though that doesn't help its complexity, since it still passes the edges logE
- times, it is a bit more simple to code.
+ times, it is a bit simpler to code.
Details: https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
"""
@@ -31,13 +32,13 @@ def __init__(self, num_of_nodes: int) -> None:
Arguments:
num_of_nodes - the number of nodes in the graph
Attributes:
- m_v - the number of nodes in the graph.
+ m_num_of_nodes - the number of nodes in the graph.
m_edges - the list of edges.
m_component - the dictionary which stores the index of the component which
a node belongs to.
"""
- self.m_v = num_of_nodes
+ self.m_num_of_nodes = num_of_nodes
self.m_edges = []
self.m_component = {}
@@ -57,7 +58,7 @@ def set_component(self, u_node: int) -> None:
"""Finds the component index of a given node"""
if self.m_component[u_node] != u_node:
- for k in self.m_component.keys():
+ for k in self.m_component:
self.m_component[k] = self.find_component(k)
def union(self, component_size: list, u_node: int, v_node: int) -> None:
@@ -82,22 +83,18 @@ def boruvka(self) -> None:
component_size = []
mst_weight = 0
- minimum_weight_edge = [-1] * self.m_v
+ minimum_weight_edge = [-1] * self.m_num_of_nodes
# A list of components (initialized to all of the nodes)
- for node in range(self.m_v):
+ for node in range(self.m_num_of_nodes):
self.m_component.update({node: node})
component_size.append(1)
- num_of_components = self.m_v
+ num_of_components = self.m_num_of_nodes
while num_of_components > 1:
- l_edges = len(self.m_edges)
- for i in range(l_edges):
-
- u = self.m_edges[i][0]
- v = self.m_edges[i][1]
- w = self.m_edges[i][2]
+ for edge in self.m_edges:
+ u, v, w = edge
u_component = self.m_component[u]
v_component = self.m_component[v]
@@ -113,22 +110,16 @@ def boruvka(self) -> None:
observing right now, we will assign the value of the edge
we're observing to it"""
- if (
- minimum_weight_edge[u_component] == -1
- or minimum_weight_edge[u_component][2] > w
- ):
- minimum_weight_edge[u_component] = [u, v, w]
- if (
- minimum_weight_edge[v_component] == -1
- or minimum_weight_edge[v_component][2] > w
- ):
- minimum_weight_edge[v_component] = [u, v, w]
-
- for node in range(self.m_v):
- if minimum_weight_edge[node] != -1:
- u = minimum_weight_edge[node][0]
- v = minimum_weight_edge[node][1]
- w = minimum_weight_edge[node][2]
+ for component in (u_component, v_component):
+ if (
+ minimum_weight_edge[component] == -1
+ or minimum_weight_edge[component][2] > w
+ ):
+ minimum_weight_edge[component] = [u, v, w]
+
+ for edge in minimum_weight_edge:
+ if edge != -1:
+ u, v, w = edge
u_component = self.m_component[u]
v_component = self.m_component[v]
@@ -136,36 +127,19 @@ def boruvka(self) -> None:
if u_component != v_component:
mst_weight += w
self.union(component_size, u_component, v_component)
- print(
- "Added edge ["
- + str(u)
- + " - "
- + str(v)
- + "]\n"
- + "Added weight: "
- + str(w)
- + "\n"
- )
+ print(f"Added edge [{u} - {v}]\nAdded weight: {w}\n")
num_of_components -= 1
- minimum_weight_edge = [-1] * self.m_v
- print("The total weight of the minimal spanning tree is: " + str(mst_weight))
+ minimum_weight_edge = [-1] * self.m_num_of_nodes
+ print(f"The total weight of the minimal spanning tree is: {mst_weight}")
def test_vector() -> None:
"""
- >>> g=Graph(8)
- >>> g.add_edge(0, 1, 10)
- >>> g.add_edge(0, 2, 6)
- >>> g.add_edge(0, 3, 5)
- >>> g.add_edge(1, 3, 15)
- >>> g.add_edge(2, 3, 4)
- >>> g.add_edge(3, 4, 8)
- >>> g.add_edge(4, 5, 10)
- >>> g.add_edge(4, 6, 6)
- >>> g.add_edge(4, 7, 5)
- >>> g.add_edge(5, 7, 15)
- >>> g.add_edge(6, 7, 4)
+ >>> g = Graph(8)
+ >>> for u_v_w in ((0, 1, 10), (0, 2, 6), (0, 3, 5), (1, 3, 15), (2, 3, 4),
+ ... (3, 4, 8), (4, 5, 10), (4, 6, 6), (4, 7, 5), (5, 7, 15), (6, 7, 4)):
+ ... g.add_edge(*u_v_w)
>>> g.boruvka()
Added edge [0 - 3]
Added weight: 5
| Python f-strings simplify the code and [should speed up execution](https://www.scivision.dev/python-f-string-speed).
I don’t know graphs… ~Are `components` and `nodes` the same things or different things?
If they are the same thing, then we should only use one name in the code and drop the other.~
I looked it up.
@srkchowdary2000 Your review, please.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/4660 | 2021-08-24T01:13:52Z | 2021-08-24T13:27:32Z | 2021-08-24T13:27:31Z | 2021-08-24T13:29:24Z | 3,499 | TheAlgorithms/Python | 30,284 |
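The two refactors in this diff — tuple unpacking instead of repeated indexing, and f-strings instead of chained `str()` concatenation — are behavior-preserving, which a small sketch can check (edge triples borrowed from the doctest above):

```python
edges = [(0, 3, 5), (2, 3, 4), (6, 7, 4)]  # (u, v, w) triples from the doctest

for edge in edges:
    u, v, w = edge  # replaces indexing edge[0], edge[1], edge[2]
    old_style = (
        "Added edge [" + str(u) + " - " + str(v) + "]\n"
        + "Added weight: " + str(w) + "\n"
    )
    new_style = f"Added edge [{u} - {v}]\nAdded weight: {w}\n"
    assert old_style == new_style  # identical output, less code
```

Both forms produce byte-identical strings, so the doctest output is unchanged.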
clarify where to call flask run from in tutorial | diff --git a/docs/tutorial/factory.rst b/docs/tutorial/factory.rst
index 62462e1cd7..41a8c768a6 100644
--- a/docs/tutorial/factory.rst
+++ b/docs/tutorial/factory.rst
@@ -127,7 +127,8 @@ Run The Application
Now you can run your application using the ``flask`` command. From the
terminal, tell Flask where to find your application, then run it in
-development mode.
+development mode. Remember, you should still be in the top-level
+``flask-tutorial`` directory, not the ``flaskr`` package.
Development mode shows an interactive debugger whenever a page raises an
exception, and restarts the server whenever you make changes to the
diff --git a/docs/tutorial/install.rst b/docs/tutorial/install.rst
index fff0b52ce4..06f63dea9d 100644
--- a/docs/tutorial/install.rst
+++ b/docs/tutorial/install.rst
@@ -108,6 +108,7 @@ You can observe that the project is now installed with ``pip list``.
Nothing changes from how you've been running your project so far.
``FLASK_APP`` is still set to ``flaskr`` and ``flask run`` still runs
-the application.
+the application, but you can call it from anywhere, not just the
+``flask-tutorial`` directory.
Continue to :doc:`tests`.
| closes #2967 | https://api.github.com/repos/pallets/flask/pulls/3067 | 2019-01-06T23:11:14Z | 2019-01-06T23:11:31Z | 2019-01-06T23:11:31Z | 2020-11-14T02:33:35Z | 315 | pallets/flask | 20,325 |
Added multiple animal APIs | diff --git a/README.md b/README.md
index 0dbf359f2a..60a4560a6e 100644
--- a/README.md
+++ b/README.md
@@ -60,12 +60,17 @@ Please note a passing build status indicates all listed APIs are available since
### Animals
API | Description | Auth | HTTPS | CORS | Link |
|---|---|---|---|---|---|
+| Cats | Pictures of cats from Tumblr | No | Yes | Unknown | [Go!](https://thecatapi.com/docs.html) |
| Dogs | Based on the Stanford Dogs Dataset | No | Yes | Unknown | [Go!](https://dog.ceo/dog-api/) |
| HTTPCat | Cat for every HTTP Status | No | Yes | Unknown | [Go!](https://http.cat/) |
| IUCN | IUCN Red List of Threatened Species | `apiKey` | No | Unknown | [Go!](http://apiv3.iucnredlist.org/api/v3/docs) |
| Movebank | Movement and Migration data of animals | No | Yes | Unknown | [Go!](https://github.com/movebank/movebank-api-doc) |
| Petfinder | Adoption | `apiKey` | Yes | Unknown | [Go!](https://www.petfinder.com/developers/api-docs/) |
+| RandomCat | Random pictures of cats | No | Yes | Yes | [Go!](https://aws.random.cat/meow) |
+| RandomDog | Random pictures of dogs | No | Yes | Yes | [Go!](https://random.dog/woof.json) |
+| RandomFox | Random pictures of foxes | No | Yes | Yes | [Go!](https://randomfox.ca/floof/) |
| RescueGroups | Adoption | No | Yes | Unknown | [Go!](https://userguide.rescuegroups.org/display/APIDG/API+Developers+Guide+Home) |
+| Shibe.Online | Random pictures of Shibu Inu, cats or birds | No | No | Unknown | [Go!](http://shibe.online/) |
### Anime
API | Description | Auth | HTTPS | CORS | Link |
| Added multiple APIs to fetch animal photos
Thank you for taking the time to work on a Pull Request for this project!
To ensure your PR is dealt with swiftly please check the following:
- [x] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md)
- [x] Your additions are ordered alphabetically
- [x] Your submission has a useful description
- [x] The description does not end with punctuation
- [x] Each table column should be padded with one space on either side
- [x] You have searched the repository for any relevant issues or pull requests
- [x] Any category you are creating has the minimum requirement of 3 items
| https://api.github.com/repos/public-apis/public-apis/pulls/637 | 2018-03-22T23:54:58Z | 2018-03-29T01:25:26Z | 2018-03-29T01:25:26Z | 2018-03-29T01:25:28Z | 476 | public-apis/public-apis | 35,445 |
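For the no-auth endpoints in this table, usage is a single GET returning JSON. A hedged sketch — the `url` field is what `random.dog/woof.json` generally returns, but treat the exact response shape as an assumption, so the sample body is inlined here rather than fetched:

```python
import json

# Hypothetical response body in the shape random.dog/woof.json returns
# (captured as a literal so the sketch runs offline).
sample = '{"fileSizeBytes": 78208, "url": "https://random.dog/example.jpg"}'
payload = json.loads(sample)
image_url = payload["url"]  # the picture link a client would display
assert image_url.endswith(".jpg")
```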
Fix: v0.91.0 of fastapi Cannot add middleware after an application ha… | diff --git a/requirements_versions.txt b/requirements_versions.txt
index eaa08806d32..331d0fe8651 100644
--- a/requirements_versions.txt
+++ b/requirements_versions.txt
@@ -27,3 +27,4 @@ GitPython==3.1.27
torchsde==0.2.5
safetensors==0.2.7
httpcore<=0.15
+fastapi==0.90.1
| …s started
fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7714
```
Traceback (most recent call last):
File "launch.py", line 361, in <module>
start()
File "launch.py", line 356, in start
webui.webui()
File "/content/stable-diffusion-webui/webui.py", line 232, in webui
app.add_middleware(GZipMiddleware, minimum_size=1000)
File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 135, in add_middleware
raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
```
v0.91.0 of fastapi Cannot add middleware after an application has started
https://github.com/tiangolo/fastapi/releases/tag/0.91.0 | https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/7717 | 2023-02-10T16:32:23Z | 2023-02-13T05:12:52Z | 2023-02-13T05:12:51Z | 2024-02-18T21:19:53Z | 103 | AUTOMATIC1111/stable-diffusion-webui | 40,474 |
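The pin works because package versions order numerically; a quick sketch of why `0.90.1` stays below the first release carrying the middleware restriction (the parser here is a naive stand-in for a real version library):

```python
def parse_version(version):
    # Naive numeric parse; sufficient for plain x.y.z strings like these.
    return tuple(int(part) for part in version.split("."))

pinned = "0.90.1"        # version pinned in requirements_versions.txt
first_broken = "0.91.0"  # release that forbids add_middleware after startup

assert parse_version(pinned) < parse_version(first_broken)
```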
Doc: redis memory management / automatic flushing. | diff --git a/doc/source/index.rst b/doc/source/index.rst
index da488dd7353c8..f1b205f6c10d8 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -115,6 +115,7 @@ Ray comes with libraries that accelerate deep learning and reinforcement learnin
fault-tolerance.rst
plasma-object-store.rst
resources.rst
+ redis-memory-management.rst
.. toctree::
:maxdepth: 1
diff --git a/doc/source/redis-memory-management.rst b/doc/source/redis-memory-management.rst
new file mode 100644
index 0000000000000..64d2035ed0f31
--- /dev/null
+++ b/doc/source/redis-memory-management.rst
@@ -0,0 +1,98 @@
+Redis Memory Management (EXPERIMENTAL)
+======================================
+
+Ray stores metadata associated with tasks and objects in one or more Redis
+servers, as described in `An Overview of the Internals
+<internals-overview.html>`_. Applications that are long-running or have high
+task/object generation rate could risk high memory pressure, potentially leading
+to out-of-memory (OOM) errors.
+
+Here, we describe an experimental feature that transparently flushes metadata
+entries out of Redis memory.
+
+Requirements
+------------
+
+As of early July 2018, the automatic memory management feature requires building
+Ray from source. We are planning on eliminating this step in the near future by
+releasing official wheels.
+
+Building Ray
+~~~~~~~~~~~~
+
+First, follow `instructions to build Ray from source
+<installation.html#building-ray-from-source>`__ to install prerequisites. After
+the prerequisites are installed, instead of doing the regular ``pip install`` as
+referenced in that document, pass an additional special flag,
+``RAY_USE_NEW_GCS=on``:
+
+.. code-block:: bash
+
+ git clone https://github.com/ray-project/ray.git
+ cd ray/python
+ RAY_USE_NEW_GCS=on pip install -e . --verbose # Add --user if you see a permission denied error.
+
+Running Ray applications
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+At run time the environment variables ``RAY_USE_NEW_GCS=on`` and
+``RAY_USE_XRAY=1`` are required.
+
+.. code-block:: bash
+
+ export RAY_USE_NEW_GCS=on
+ export RAY_USE_XRAY=1
+ python my_ray_script.py # Or launch python/ipython.
+
+Activate memory flushing
+------------------------
+
+After building Ray using the method above, simply add these two lines after
+``ray.init()`` to activate automatic memory flushing:
+
+.. code-block:: python
+
+ ray.init(...)
+
+ policy = ray.experimental.SimpleGcsFlushPolicy()
+ ray.experimental.set_flushing_policy(policy)
+
+ # My awesome Ray application logic follows.
+
+Paramaters of the flushing policy
+---------------------------------
+
+There are three `user-configurable parameters
+<https://github.com/ray-project/ray/blob/8190ff1fd0c4b82f73e2c1c0f21de6bda494718c/python/ray/experimental/gcs_flush_policy.py#L31>`_
+of the ``SimpleGcsFlushPolicy``:
+
+* ``flush_when_at_least_bytes``: Wait until this many bytes of memory usage
+ accumulated in the redis server before flushing kicks in.
+* ``flush_period_secs``: Issue a flush to the Redis server every this many
+ seconds.
+* ``flush_num_entries_each_time``: A hint to the system on the number of entries
+ to flush on each request.
+
+The default values should serve to be non-invasive for lightweight Ray
+applications. ``flush_when_at_least_bytes`` is set to ``(1<<31)`` or 2GB,
+``flush_period_secs`` to 10, and ``flush_num_entries_each_time`` to 10000:
+
+.. code-block:: python
+
+ # Default parameters.
+ ray.experimental.SimpleGcsFlushPolicy(
+ flush_when_at_least_bytes=(1 << 31),
+ flush_period_secs=10,
+ flush_num_entries_each_time=10000)
+
+In particular, these default values imply that
+
+1. the Redis server would accumulate memory usage up to 2GB without any entries
+being flushed, then the flushing would kick in; and
+
+2. generally, "older" metadata entries would be flushed first, and the Redis
+server would always keep the most recent window of metadata of 2GB in size.
+
+**For advanced users.** Advanced users can tune the above parameters to their
+applications' needs; note that the desired flush rate is equal to (flush
+period) * (num entries each flush).
| https://api.github.com/repos/ray-project/ray/pulls/2344 | 2018-07-04T18:10:30Z | 2018-07-06T06:44:38Z | 2018-07-06T06:44:38Z | 2018-07-06T06:44:38Z | 1,085 | ray-project/ray | 19,750 | |
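A one-line check confirms that the documented default threshold `(1 << 31)` is the 2GB figure quoted in the added docs (2 GiB, strictly speaking):

```python
flush_when_at_least_bytes = 1 << 31  # default from SimpleGcsFlushPolicy

# 2 GiB written out explicitly; matches the "(1<<31) or 2GB" wording above.
assert flush_when_at_least_bytes == 2 * 1024 ** 3
assert flush_when_at_least_bytes == 2147483648
```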
✏️ Fix typos and rewordings in `docs/en/docs/tutorial/body-nested-models.md` | diff --git a/docs/en/docs/tutorial/body-nested-models.md b/docs/en/docs/tutorial/body-nested-models.md
index 3a1052397910c..387f0de9aaed7 100644
--- a/docs/en/docs/tutorial/body-nested-models.md
+++ b/docs/en/docs/tutorial/body-nested-models.md
@@ -183,18 +183,18 @@ This would mean that **FastAPI** would expect a body similar to:
Again, doing just that declaration, with **FastAPI** you get:
-* Editor support (completion, etc), even for nested models
+* Editor support (completion, etc.), even for nested models
* Data conversion
* Data validation
* Automatic documentation
## Special types and validation
-Apart from normal singular types like `str`, `int`, `float`, etc. You can use more complex singular types that inherit from `str`.
+Apart from normal singular types like `str`, `int`, `float`, etc. you can use more complex singular types that inherit from `str`.
To see all the options you have, checkout the docs for <a href="https://pydantic-docs.helpmanual.io/usage/types/" class="external-link" target="_blank">Pydantic's exotic types</a>. You will see some examples in the next chapter.
-For example, as in the `Image` model we have a `url` field, we can declare it to be instead of a `str`, a Pydantic's `HttpUrl`:
+For example, as in the `Image` model we have a `url` field, we can declare it to be an instance of Pydantic's `HttpUrl` instead of a `str`:
=== "Python 3.10+"
@@ -218,7 +218,7 @@ The string will be checked to be a valid URL, and documented in JSON Schema / Op
## Attributes with lists of submodels
-You can also use Pydantic models as subtypes of `list`, `set`, etc:
+You can also use Pydantic models as subtypes of `list`, `set`, etc.:
=== "Python 3.10+"
@@ -238,7 +238,7 @@ You can also use Pydantic models as subtypes of `list`, `set`, etc:
{!> ../../../docs_src/body_nested_models/tutorial006.py!}
```
-This will expect (convert, validate, document, etc) a JSON body like:
+This will expect (convert, validate, document, etc.) a JSON body like:
```JSON hl_lines="11"
{
@@ -334,15 +334,15 @@ But you don't have to worry about them either, incoming dicts are converted auto
## Bodies of arbitrary `dict`s
-You can also declare a body as a `dict` with keys of some type and values of other type.
+You can also declare a body as a `dict` with keys of some type and values of some other type.
-Without having to know beforehand what are the valid field/attribute names (as would be the case with Pydantic models).
+This way, you don't have to know beforehand what the valid field/attribute names are (as would be the case with Pydantic models).
This would be useful if you want to receive keys that you don't already know.
---
-Other useful case is when you want to have keys of other type, e.g. `int`.
+Another useful case is when you want to have keys of another type (e.g., `int`).
That's what we are going to see here.
| Tiny changes related to typos and style. | https://api.github.com/repos/tiangolo/fastapi/pulls/10468 | 2023-10-18T18:17:38Z | 2023-10-20T08:58:03Z | 2023-10-20T08:58:03Z | 2023-10-20T08:58:14Z | 778 | tiangolo/fastapi | 23,084 |
Improve compatibility of wsgi_flask_app example on OS X | diff --git a/examples/simple/wsgi_flask_app.py b/examples/simple/wsgi_flask_app.py
index bbde69137e..b34fbc8378 100644
--- a/examples/simple/wsgi_flask_app.py
+++ b/examples/simple/wsgi_flask_app.py
@@ -15,9 +15,9 @@ def hello_world() -> str:
addons = [
- # Host app at the magic domain "proxapp.local" on port 80. Requests to this
+ # Host app at the magic domain "example.com" on port 80. Requests to this
# domain and port combination will now be routed to the WSGI app instance.
- wsgiapp.WSGIApp(app, "proxapp.local", 80)
+ wsgiapp.WSGIApp(app, "example.com", 80)
# SSL works too, but the magic domain needs to be resolvable from the mitmproxy machine due to mitmproxy's design.
# mitmproxy will connect to said domain and use serve its certificate (unless --no-upstream-cert is set)
# but won't send any data.
| ### Problem
`example/simple/wsgi_flask_app.py` uses a magic domain `proxapp.local` which cannot be resolved on OS X hosts. This is because Apple uses `.local` as a reserved domain for device discovery in Bonjour's mDNS.
Reference: https://support.apple.com/en-us/HT207511
### How to reproduce
Environment: OS X 10.5.2 + mitmproxy 5.1.1
1. Directly load `wsgi_flask_app.py` example on a OS X host
```
> mitmproxy --mode transparent --showhost -s examples/simple/wsgi_flask_app.py
```
2. Visit `http://proxapp.local` in Chrome. An `ERR_NAME_NOT_RESOLVED` error is shown.
It seems OS X will always route `.local` domain to Bonjour's mDNS before passing it through a mitmproxy (even if it's running in global transparent mode).
3. Open `wsgi_flask_app.py` and change the magic domain to `example.com`
4. Open chrome and visit `example.com`, it shows `Hello world!`
### Fix
Use `example.com` as the default domain. | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/3963 | 2020-04-29T03:56:41Z | 2020-04-29T09:22:55Z | 2020-04-29T09:22:55Z | 2020-04-29T15:28:45Z | 246 | mitmproxy/mitmproxy | 28,297 |
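The essence of the fix is domain choice rather than code: any magic domain ending in `.local` is claimed by Bonjour/mDNS on macOS before it can reach a transparent proxy. A sketch with a hypothetical helper (not part of mitmproxy's API):

```python
def is_mdns_reserved(domain):
    # macOS routes ".local" names to Bonjour/mDNS, so they never reach
    # mitmproxy; see https://support.apple.com/en-us/HT207511
    return domain == "local" or domain.endswith(".local")

assert is_mdns_reserved("proxapp.local")    # old magic domain: breaks on macOS
assert not is_mdns_reserved("example.com")  # new magic domain: resolvable
```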
Gate - borrow & repay for margin | diff --git a/ts/src/base/Exchange.ts b/ts/src/base/Exchange.ts
index e014415b29b7..c8188726d539 100644
--- a/ts/src/base/Exchange.ts
+++ b/ts/src/base/Exchange.ts
@@ -4565,7 +4565,7 @@ export default class Exchange {
}
}
- checkRequiredMarginArgument (methodName: string, symbol: string, marginMode: string) {
+ checkRequiredMarginArgument (methodName: string, symbol: Str, marginMode: string) {
/**
* @ignore
* @method
diff --git a/ts/src/gate.ts b/ts/src/gate.ts
index d4020a6c710e..ca71a75c3d07 100644
--- a/ts/src/gate.ts
+++ b/ts/src/gate.ts
@@ -850,6 +850,7 @@ export default class gate extends Exchange {
'AUTO_TRIGGER_PRICE_LESS_LAST': InvalidOrder, // {"label":"AUTO_TRIGGER_PRICE_LESS_LAST","message":"invalid argument: Trigger.Price must < last_price"}
'AUTO_TRIGGER_PRICE_GREATE_LAST': InvalidOrder, // {"label":"AUTO_TRIGGER_PRICE_GREATE_LAST","message":"invalid argument: Trigger.Price must > last_price"}
'POSITION_HOLDING': BadRequest,
+ 'USER_LOAN_EXCEEDED': BadRequest, // {"label":"USER_LOAN_EXCEEDED","message":"Max loan amount per user would be exceeded"}
},
'broad': {},
},
@@ -5586,6 +5587,217 @@ export default class gate extends Exchange {
return tiers;
}
+ async repayMargin (code: string, amount, symbol: Str = undefined, params = {}) {
+ /**
+ * @method
+ * @name gate#repayMargin
+ * @description repay borrowed margin and interest
+ * @see https://www.gate.io/docs/apiv4/en/#repay-cross-margin-loan
+ * @see https://www.gate.io/docs/apiv4/en/#repay-a-loan
+ * @param {string} code unified currency code of the currency to repay
+ * @param {float} amount the amount to repay
+ * @param {string} symbol unified market symbol, required for isolated margin
+ * @param {object} [params] extra parameters specific to the exchange API endpoint
+ * @param {string} [params.mode] 'all' or 'partial' payment mode, extra parameter required for isolated margin
+ * @param {string} [params.id] '34267567' loan id, extra parameter required for isolated margin
+ * @returns {object} a [margin loan structure]{@link https://docs.ccxt.com/#/?id=margin-loan-structure}
+ */
+ let marginMode = undefined;
+ [ marginMode, params ] = this.handleOptionAndParams (params, 'repayMargin', 'marginMode');
+ this.checkRequiredArgument ('repayMargin', marginMode, 'marginMode', [ 'cross', 'isolated' ]);
+ this.checkRequiredMarginArgument ('repayMargin', symbol, marginMode);
+ await this.loadMarkets ();
+ const currency = this.currency (code);
+ const request = {
+ 'currency': currency['id'].toUpperCase (),
+ 'amount': this.currencyToPrecision (code, amount),
+ };
+ let response = undefined;
+ if ((marginMode === 'cross') && (symbol === undefined)) {
+ response = await this.privateMarginPostCrossRepayments (this.extend (request, params));
+ } else if ((marginMode === 'isolated') || (symbol !== undefined)) {
+ if (symbol === undefined) {
+ throw new BadRequest (this.id + ' repayMargin() requires a symbol argument for isolated margin');
+ }
+ const market = this.market (symbol);
+ request['currency_pair'] = market['id'];
+ request['type'] = 'repay';
+ response = await this.privateMarginPostUniLoans (this.extend (request, params));
+ }
+ //
+ // Cross
+ //
+ // [
+ // {
+ // "id": "17",
+ // "create_time": 1620381696159,
+ // "update_time": 1620381696159,
+ // "currency": "EOS",
+ // "amount": "110.553635",
+ // "text": "web",
+ // "status": 2,
+ // "repaid": "110.506649705159",
+ // "repaid_interest": "0.046985294841",
+ // "unpaid_interest": "0.0000074393366667"
+ // }
+ // ]
+ //
+ // Isolated
+ //
+ // {
+ // "id": "34267567",
+ // "create_time": "1656394778",
+ // "expire_time": "1657258778",
+ // "status": "finished",
+ // "side": "borrow",
+ // "currency": "USDT",
+ // "rate": "0.0002",
+ // "amount": "100",
+ // "days": 10,
+ // "auto_renew": false,
+ // "currency_pair": "LTC_USDT",
+ // "left": "0",
+ // "repaid": "100",
+ // "paid_interest": "0.003333333333",
+ // "unpaid_interest": "0"
+ // }
+ //
+ if (marginMode === 'cross') {
+ response = response[0];
+ }
+ return this.parseMarginLoan (response, currency);
+ }
+
+ async borrowMargin (code: string, amount, symbol: Str = undefined, params = {}) {
+ /**
+ * @method
+ * @name gate#borrowMargin
+ * @description create a loan to borrow margin
+ * @see https://www.gate.io/docs/apiv4/en/#create-a-cross-margin-borrow-loan
+ * @see https://www.gate.io/docs/developers/apiv4/en/#marginuni
+ * @param {string} code unified currency code of the currency to borrow
+ * @param {float} amount the amount to borrow
+ * @param {string} symbol unified market symbol, required for isolated margin
+ * @param {object} [params] extra parameters specific to the exchange API endpoint
+ * @param {string} [params.rate] '0.0002' or '0.002' extra parameter required for isolated margin
+ * @returns {object} a [margin loan structure]{@link https://docs.ccxt.com/#/?id=margin-loan-structure}
+ */
+ let marginMode = undefined;
+ [ marginMode, params ] = this.handleOptionAndParams (params, 'borrowMargin', 'marginMode');
+ this.checkRequiredArgument ('borrowMargin', marginMode, 'marginMode', [ 'cross', 'isolated' ]);
+ this.checkRequiredMarginArgument ('borrowMargin', symbol, marginMode);
+ await this.loadMarkets ();
+ const currency = this.currency (code);
+ const request = {
+ 'currency': currency['id'].toUpperCase (),
+ 'amount': this.currencyToPrecision (code, amount),
+ };
+ let response = undefined;
+ if ((marginMode === 'cross') && (symbol === undefined)) {
+ response = await this.privateMarginPostCrossLoans (this.extend (request, params));
+ } else if ((marginMode === 'isolated') || (symbol !== undefined)) {
+ if (symbol === undefined) {
+ throw new BadRequest (this.id + ' borrowMargin() requires a symbol argument for isolated margin');
+ }
+ const market = this.market (symbol);
+ request['currency_pair'] = market['id'];
+ request['type'] = 'borrow';
+ response = await this.privateMarginPostUniLoans (this.extend (request, params));
+ }
+ //
+ // Cross
+ //
+ // {
+ // "id": "17",
+ // "create_time": 1620381696159,
+ // "update_time": 1620381696159,
+ // "currency": "EOS",
+ // "amount": "110.553635",
+ // "text": "web",
+ // "status": 2,
+ // "repaid": "110.506649705159",
+ // "repaid_interest": "0.046985294841",
+ // "unpaid_interest": "0.0000074393366667"
+ // }
+ //
+ // Isolated
+ //
+ // {
+ // "id": "34267567",
+ // "create_time": "1656394778",
+ // "expire_time": "1657258778",
+ // "status": "loaned",
+ // "side": "borrow",
+ // "currency": "USDT",
+ // "rate": "0.0002",
+ // "amount": "100",
+ // "days": 10,
+ // "auto_renew": false,
+ // "currency_pair": "LTC_USDT",
+ // "left": "0",
+ // "repaid": "0",
+ // "paid_interest": "0",
+ // "unpaid_interest": "0.003333333333"
+ // }
+ //
+ return this.parseMarginLoan (response, currency);
+ }
+
+ parseMarginLoan (info, currency: Currency = undefined) {
+ //
+ // Cross
+ //
+ // {
+ // "id": "17",
+ // "create_time": 1620381696159,
+ // "update_time": 1620381696159,
+ // "currency": "EOS",
+ // "amount": "110.553635",
+ // "text": "web",
+ // "status": 2,
+ // "repaid": "110.506649705159",
+ // "repaid_interest": "0.046985294841",
+ // "unpaid_interest": "0.0000074393366667"
+ // }
+ //
+ // Isolated
+ //
+ // {
+ // "id": "34267567",
+ // "create_time": "1656394778",
+ // "expire_time": "1657258778",
+ // "status": "loaned",
+ // "side": "borrow",
+ // "currency": "USDT",
+ // "rate": "0.0002",
+ // "amount": "100",
+ // "days": 10,
+ // "auto_renew": false,
+ // "currency_pair": "LTC_USDT",
+ // "left": "0",
+ // "repaid": "0",
+ // "paid_interest": "0",
+ // "unpaid_interest": "0.003333333333"
+ // }
+ //
+ const marginMode = this.safeString2 (this.options, 'defaultMarginMode', 'marginMode', 'cross');
+ let timestamp = this.safeInteger (info, 'create_time');
+ if (marginMode === 'isolated') {
+ timestamp = this.safeTimestamp (info, 'create_time');
+ }
+ const currencyId = this.safeString (info, 'currency');
+ const marketId = this.safeString (info, 'currency_pair');
+ return {
+ 'id': this.safeInteger (info, 'id'),
+ 'currency': this.safeCurrencyCode (currencyId, currency),
+ 'amount': this.safeNumber (info, 'amount'),
+ 'symbol': this.safeSymbol (marketId, undefined, '_', 'margin'),
+ 'timestamp': timestamp,
+ 'datetime': this.iso8601 (timestamp),
+ 'info': info,
+ };
+ }
+
sign (path, api = [], method = 'GET', params = {}, headers = undefined, body = undefined) {
const authentication = api[0]; // public, private
const type = api[1]; // spot, margin, future, delivery
diff --git a/ts/src/test/static/request/gate.json b/ts/src/test/static/request/gate.json
index 3c5e419db3ca..5839e377dfad 100644
--- a/ts/src/test/static/request/gate.json
+++ b/ts/src/test/static/request/gate.json
@@ -239,6 +239,58 @@
],
"output": "{\"currency\":\"usdt\",\"amount\":\"1\",\"from\":\"futures\",\"to\":\"spot\",\"settle\":\"usdt\"}"
}
+ ],
+ "borrowMargin": [
+ {
+ "description": "borrow cross margin",
+ "method": "borrowMargin",
+ "url": "https://api.gateio.ws/api/v4/margin/cross/loans",
+ "input": [
+ "USDT",
+ "1",
+ null,
+ {"marginMode":"cross"}
+ ],
+ "output": "{\"currency\":\"USDT\",\"amount\":\"1\"}"
+ },
+ {
+ "description": "borrow isolated margin",
+ "method": "borrowMargin",
+ "url": "https://api.gateio.ws/api/v4/margin/uni/loans",
+ "input": [
+ "USDT",
+ "1",
+ "LTC/USDT",
+ {"marginMode":"isolated"}
+ ],
+ "output": "{\"currency\":\"USDT\",\"amount\":\"1\",\"currency_pair\":\"LTC_USDT\",\"type\":\"borrow\"}"
+ }
+ ],
+ "repayMargin" : [
+ {
+ "description": "repay cross margin",
+ "method": "repayMargin",
+ "url": "https://api.gateio.ws/api/v4/margin/cross/repayments",
+ "input": [
+ "USDT",
+ "1",
+ null,
+ {"marginMode":"cross"}
+ ],
+ "output": "{\"currency\":\"USDT\",\"amount\":\"1\"}"
+ },
+ {
+ "description": "repay isolated margin",
+ "method": "repayMargin",
+ "url": "https://api.gateio.ws/api/v4/margin/uni/loans",
+ "input": [
+ "USDT",
+ "1",
+ "LTC/USDT",
+ {"marginMode":"isolated"}
+ ],
+ "output": "{\"currency\":\"USDT\",\"amount\":\"1\",\"currency_pair\":\"LTC_USDT\",\"type\":\"repay\"}"
+ }
]
}
}
| await e.borrowMargin ('USDT', 1, undefined, {marginMode: 'cross'});
```
{
id: 17835350,
currency: "USDT",
amount: 4,
symbol: undefined,
timestamp: 1700893271174,
datetime: "2023-11-25T06:21:11.174Z",
info: {
id: "17835350",
create_time: "1700893271174",
update_time: "1700898941442",
currency: "USDT",
amount: "4",
text: "apiv4",
status: "2",
repaid: "0",
repaid_interest: "0",
unpaid_interest: "0",
},
}
```
await e.repayMargin ('USDT', 2, undefined, {marginMode: 'cross'});
```
{
id: 22712625,
currency: "USDT",
amount: 1,
symbol: undefined,
timestamp: 1700893271174,
datetime: "2023-11-25T06:21:11.174Z",
info: {
id: "22712625",
create_time: "1700893271174",
update_time: "1700898887015",
currency: "USDT",
amount: "1",
text: "apiv4",
status: "3",
repaid: "2",
repaid_interest: "0",
unpaid_interest: "0",
},
}
``` | https://api.github.com/repos/ccxt/ccxt/pulls/20124 | 2023-11-25T08:49:50Z | 2023-11-27T10:16:01Z | 2023-11-27T10:16:01Z | 2023-11-27T10:16:02Z | 3,383 | ccxt/ccxt | 13,842 |
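A side note on the `parseMarginLoan` logic in this diff: the cross-margin endpoint reports `create_time` in milliseconds while the isolated-margin endpoint reports seconds, which is why the handler switches between `safeInteger` and `safeTimestamp`. A rough Python sketch of that normalization (the function name and shape here are illustrative, not part of ccxt):

```python
def parse_create_time(value, margin_mode):
    """Normalize gate's create_time field to milliseconds.

    Cross-margin responses use millisecond timestamps; isolated-margin
    responses use seconds (mirroring safeInteger vs safeTimestamp above).
    """
    ts = int(value)
    if margin_mode == 'isolated':
        ts *= 1000  # seconds -> milliseconds
    return ts

# Cross response: already milliseconds
print(parse_create_time('1620381696159', 'cross'))    # 1620381696159
# Isolated response: seconds, scaled up
print(parse_create_time('1656394778', 'isolated'))    # 1656394778000
```

The sample values are taken from the response fixtures quoted in the diff itself.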
Added a new podcast… | diff --git a/blogs.md b/blogs.md
index 5a5e284d..9a0ece2f 100644
--- a/blogs.md
+++ b/blogs.md
@@ -20,6 +20,7 @@ Podcasts
* [TWIMLAI](https://twimlai.com/shows/)
* [Machine Learning Guide](http://ocdevel.com/podcasts/machine-learning)
* [DataTalks.Club](https://anchor.fm/datatalksclub)
+* [Super Data Science Podcast with Jon Krohn](https://www.youtube.com/@SuperDataScienceWithJonKrohn)
Newsletters
-----------
| I have added a new podcast "Super Data Science Podcast with Jon Krohn" under the Podcasts section and this is a really great podcast for ML enthusiasts. | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/922 | 2023-03-17T16:51:27Z | 2023-04-12T00:39:02Z | 2023-04-12T00:39:02Z | 2023-04-12T00:39:02Z | 144 | josephmisiti/awesome-machine-learning | 52,328 |
Use explicit device name for Stookalert | diff --git a/homeassistant/components/stookalert/binary_sensor.py b/homeassistant/components/stookalert/binary_sensor.py
index d3920d3f0e42..1d074bba9c2d 100644
--- a/homeassistant/components/stookalert/binary_sensor.py
+++ b/homeassistant/components/stookalert/binary_sensor.py
@@ -36,6 +36,7 @@ class StookalertBinarySensor(BinarySensorEntity):
_attr_attribution = "Data provided by rivm.nl"
_attr_device_class = BinarySensorDeviceClass.SAFETY
_attr_has_entity_name = True
+ _attr_name = None
def __init__(self, client: stookalert.stookalert, entry: ConfigEntry) -> None:
"""Initialize a Stookalert device."""
| <!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Use explicit device name for Stookalert
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [ ] The code change is tested and works locally.
- [x] Local tests pass. **Your PR cannot be merged unless tests pass**
- [x] There is no commented out code in this PR.
- [x] I have followed the [development checklist][dev-checklist]
- [x] I have followed the [perfect PR recommendations][perfect-pr]
- [x] The code has been formatted using Black (`black --fast homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/development_checklist/
[manifest-docs]: https://developers.home-assistant.io/docs/creating_integration_manifest/
[quality-scale]: https://developers.home-assistant.io/docs/integration_quality_scale_index/
[docs-repository]: https://github.com/home-assistant/home-assistant.io
[perfect-pr]: https://developers.home-assistant.io/docs/review-process/#creating-the-perfect-pr
| https://api.github.com/repos/home-assistant/core/pulls/96755 | 2023-07-17T10:34:28Z | 2023-07-17T10:58:51Z | 2023-07-17T10:58:51Z | 2023-07-20T09:39:00Z | 175 | home-assistant/core | 39,034 |
Fix Failing test for JSON Formatter on Python 3.8 | diff --git a/airflow/utils/log/es_task_handler.py b/airflow/utils/log/es_task_handler.py
index d54cb1231f39b..47f970f6041f5 100644
--- a/airflow/utils/log/es_task_handler.py
+++ b/airflow/utils/log/es_task_handler.py
@@ -208,7 +208,6 @@ def set_context(self, ti):
if self.json_format:
self.formatter = JSONFormatter(
- self.formatter._fmt, # pylint: disable=protected-access
json_fields=self.json_fields,
extras={
'dag_id': str(ti.dag_id),
diff --git a/tests/utils/log/test_es_task_handler.py b/tests/utils/log/test_es_task_handler.py
index 408e7cf245cfb..b4e8fac344704 100644
--- a/tests/utils/log/test_es_task_handler.py
+++ b/tests/utils/log/test_es_task_handler.py
@@ -254,9 +254,8 @@ def test_set_context(self):
self.assertTrue(self.es_task_handler.mark_end_on_close)
def test_set_context_w_json_format_and_write_stdout(self):
- self.es_task_handler.formatter = mock.MagicMock()
- self.es_task_handler.formatter._fmt = mock.MagicMock()
- self.es_task_handler.formatter._fmt.find = mock.MagicMock(return_value=1)
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ self.es_task_handler.formatter = formatter
self.es_task_handler.write_stdout = True
self.es_task_handler.json_format = True
self.es_task_handler.set_context(self.ti)
| This test is failing on Master:
```
tests/utils/log/test_es_task_handler.py ..................F
_ TestElasticsearchTaskHandler.test_set_context_w_json_format_and_write_stdout _
self = <tests.utils.log.test_es_task_handler.TestElasticsearchTaskHandler testMethod=test_set_context_w_json_format_and_write_stdout>
def test_set_context_w_json_format_and_write_stdout(self):
self.es_task_handler.formatter = mock.MagicMock()
self.es_task_handler.formatter._fmt = mock.MagicMock()
self.es_task_handler.formatter._fmt.find = mock.MagicMock(return_value=1)
self.es_task_handler.write_stdout = True
self.es_task_handler.json_format = True
> self.es_task_handler.set_context(self.ti)
tests/utils/log/test_es_task_handler.py:262:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/utils/log/es_task_handler.py:210: in set_context
self.formatter = JSONFormatter(
airflow/utils/log/json_formatter.py:35: in __init__
super().__init__(fmt, datefmt, style)
/usr/local/lib/python3.8/logging/__init__.py:576: in __init__
self._style.validate()
```
Also, there is no need of passing the default `fmt` value explicitly as it is the default.
cc @andriisoldatenko
---
Make sure to mark the boxes below before creating PR: [x]
- [x] Description above provides context of the change
- [x] Unit tests coverage for changes (not needed for documentation changes)
- [x] Target Github ISSUE in description if exists
- [x] Commits follow "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)"
- [x] Relevant documentation is updated including usage instructions.
- [x] I will engage committers as explained in [Contribution Workflow Example](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#contribution-workflow-example).
---
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md).
Read the [Pull Request Guidelines](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#pull-request-guidelines) for more information.
| https://api.github.com/repos/apache/airflow/pulls/9278 | 2020-06-13T17:21:27Z | 2020-06-13T18:09:13Z | 2020-06-13T18:09:13Z | 2020-06-13T18:09:17Z | 361 | apache/airflow | 14,614 |
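The failure this entry fixes stems from Python 3.8's `logging.Formatter`, which validates its format string at construction time (via `self._style.validate()`), so the mocked `_fmt` object from the old test no longer passes. A small standalone sketch of that behavior, independent of Airflow itself:

```python
import logging

# A real %-style format string passes validation and formats normally
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
record = logging.LogRecord('demo', logging.INFO, __file__, 1, 'hello', None, None)
assert 'hello' in formatter.format(record)

# On Python 3.8+ a malformed format string is rejected up front
try:
    logging.Formatter('%(message')  # no valid %(name)s field -> rejected
    validated = False
except ValueError:
    validated = True
print(validated)  # True on Python 3.8+
```

This is why the new test builds a real `logging.Formatter` instead of wiring up `MagicMock` attributes for `_fmt`.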
Doc improvements for url_for() | diff --git a/flask/helpers.py b/flask/helpers.py
index 501a2f811c..7e20c97d47 100644
--- a/flask/helpers.py
+++ b/flask/helpers.py
@@ -113,7 +113,7 @@ def generate():
yield '!'
return Response(generate())
- Alternatively it can also be used around a specific generator:
+ Alternatively it can also be used around a specific generator::
from flask import stream_with_context, request, Response
@@ -305,7 +305,9 @@ def external_url_handler(error, endpoint, **values):
:param endpoint: the endpoint of the URL (name of the function)
:param values: the variable arguments of the URL rule
- :param _external: if set to `True`, an absolute URL is generated.
+ :param _external: if set to `True`, an absolute URL is generated. Server
+ address can be changed via `SERVER_NAME` configuration variable which
+ defaults to `localhost`.
:param _anchor: if provided this is added as anchor to the URL.
:param _method: if provided this explicitly specifies an HTTP method.
"""
| Recently I had to ask on IRC how to generate a proper full URL using `url_for()`, so I think its docs should at least mention `SERVER_NAME`.
Also fixes a formatting error in another docstring.
| https://api.github.com/repos/pallets/flask/pulls/571 | 2012-08-01T08:33:33Z | 2012-08-01T16:34:50Z | 2012-08-01T16:34:50Z | 2020-11-14T07:08:20Z | 259 | pallets/flask | 20,238 |
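To illustrate the documented behavior without running Flask: with `_external=True`, `url_for()` emits an absolute URL whose host comes from the `SERVER_NAME` config (defaulting to `localhost`). The toy function below only mimics that composition and is not Flask's implementation; all names in it are hypothetical:

```python
def url_for_sketch(path, _external=False, _anchor=None, server_name='localhost'):
    # Toy stand-in for flask.url_for(): _external prepends scheme + SERVER_NAME
    url = path
    if _external:
        url = 'http://' + server_name + url
    if _anchor is not None:
        url += '#' + _anchor
    return url

print(url_for_sketch('/users'))                                  # /users
print(url_for_sketch('/users', _external=True))                  # http://localhost/users
print(url_for_sketch('/users', _external=True,
                     server_name='example.com', _anchor='top'))  # http://example.com/users#top
```

In real Flask, `server_name` would instead be read from `app.config['SERVER_NAME']`, which is exactly what the docstring change points readers to.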
Fixed usage of deploy_template for deploying cloudformation stack | diff --git a/localstack/services/cloudformation/cloudformation_listener.py b/localstack/services/cloudformation/cloudformation_listener.py
index 80a98a1fe1664..eed091ee13dfa 100644
--- a/localstack/services/cloudformation/cloudformation_listener.py
+++ b/localstack/services/cloudformation/cloudformation_listener.py
@@ -98,7 +98,7 @@ def execute_change_set(req_data):
TemplateBody=template)
# now run the actual deployment
- template_deployer.deploy_template(template)
+ template_deployer.deploy_template(template, stack_name)
response = make_response('ExecuteChangeSet')
return response
| Fixes error when deploying cloudformation stack:
TypeError: deploy_template() takes exactly 2 arguments (1 given) | https://api.github.com/repos/localstack/localstack/pulls/374 | 2017-10-04T17:30:39Z | 2017-10-08T00:37:33Z | 2017-10-08T00:37:33Z | 2017-10-08T00:37:33Z | 137 | localstack/localstack | 29,116 |
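The one-line fix above matters because `deploy_template` takes two positional arguments; calling it with only the template reproduces the reported `TypeError`. A minimal standalone reproduction (the function body is a hypothetical stand-in for `template_deployer.deploy_template`):

```python
def deploy_template(template, stack_name):
    # Stand-in for template_deployer.deploy_template (real body omitted)
    return 'deployed %s' % stack_name

error_seen = False
try:
    deploy_template('{"Resources": {}}')          # old call site: stack_name missing
except TypeError:
    error_seen = True

result = deploy_template('{"Resources": {}}', 'my-stack')  # fixed call site
print(error_seen, result)  # True deployed my-stack
```

The quoted "takes exactly 2 arguments (1 given)" wording is Python 2's message; Python 3 phrases the same error as a missing required positional argument.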