| title (string) | diff (string) | body (string) | url (string) | created_at (string) | closed_at (string) | merged_at (string) | updated_at (string) | diff_len (float64) | repo_name (string) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|
Enable ruff N999 rule | diff --git a/DIRECTORY.md b/DIRECTORY.md
index 01667c9feee8..f6d6cb463faa 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -351,7 +351,7 @@
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
- * [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
+ * [Longest Increasing Subsequence O Nlogn](dynamic_programming/longest_increasing_subsequence_o_nlogn.py)
* [Longest Palindromic Subsequence](dynamic_programming/longest_palindromic_subsequence.py)
* [Matrix Chain Multiplication](dynamic_programming/matrix_chain_multiplication.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
@@ -465,7 +465,7 @@
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py)
* [Dinic](graphs/dinic.py)
- * [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
+ * [Directed And Undirected Weighted Graph](graphs/directed_and_undirected_weighted_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
@@ -792,7 +792,6 @@
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
- * [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* Activation Functions
* [Binary Step](neural_network/activation_functions/binary_step.py)
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
@@ -809,6 +808,7 @@
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Input Data](neural_network/input_data.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
+ * [Two Hidden Layers Neural Network](neural_network/two_hidden_layers_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
diff --git a/dynamic_programming/longest_increasing_subsequence_o(nlogn).py b/dynamic_programming/longest_increasing_subsequence_o_nlogn.py
similarity index 100%
rename from dynamic_programming/longest_increasing_subsequence_o(nlogn).py
rename to dynamic_programming/longest_increasing_subsequence_o_nlogn.py
diff --git a/graphs/directed_and_undirected_(weighted)_graph.py b/graphs/directed_and_undirected_weighted_graph.py
similarity index 100%
rename from graphs/directed_and_undirected_(weighted)_graph.py
rename to graphs/directed_and_undirected_weighted_graph.py
diff --git a/neural_network/2_hidden_layers_neural_network.py b/neural_network/two_hidden_layers_neural_network.py
similarity index 100%
rename from neural_network/2_hidden_layers_neural_network.py
rename to neural_network/two_hidden_layers_neural_network.py
diff --git a/pyproject.toml b/pyproject.toml
index 5187491e5ee7..db1e860278bf 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -8,7 +8,6 @@ lint.ignore = [ # `ruff rule S101` for a description of that rule
"G004", # Logging statement uses f-string
"ICN001", # `matplotlib.pyplot` should be imported as `plt` -- FIX ME
"INP001", # File `x/y/z.py` is part of an implicit namespace package. Add an `__init__.py`. -- FIX ME
- "N999", # Invalid module name -- FIX ME
"NPY002", # Replace legacy `np.random.choice` call with `np.random.Generator` -- FIX ME
"PGH003", # Use specific rule codes when ignoring type issues -- FIX ME
"PLC1901", # `{}` can be simplified to `{}` as an empty string is falsey
| ### Describe your change:
Enable ruff `N999` rule
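For context (not part of the PR), N999 flags module names that are not valid Python identifiers — exactly what the renamed files violated, since parentheses and a leading digit both make a module unimportable with a plain `import` statement:

```python
# a module can be imported with `import <name>` only if <name>
# is a valid Python identifier
assert "longest_increasing_subsequence_o_nlogn".isidentifier()
assert "two_hidden_layers_neural_network".isidentifier()

# parentheses and leading digits break that rule
assert not "longest_increasing_subsequence_o(nlogn)".isidentifier()
assert not "2_hidden_layers_neural_network".isidentifier()
```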
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Add or change doctests? -- Note: Please avoid changing both code and tests in a single pull request.
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| https://api.github.com/repos/TheAlgorithms/Python/pulls/11331 | 2024-03-25T21:20:10Z | 2024-03-28T17:26:41Z | 2024-03-28T17:26:41Z | 2024-03-28T17:27:39Z | 1,042 | TheAlgorithms/Python | 30,216 |
fix state value extract for issues/1067 | diff --git a/metagpt/utils/repair_llm_raw_output.py b/metagpt/utils/repair_llm_raw_output.py
index b8756e8c6..17e095c5f 100644
--- a/metagpt/utils/repair_llm_raw_output.py
+++ b/metagpt/utils/repair_llm_raw_output.py
@@ -340,7 +340,9 @@ def extract_state_value_from_output(content: str) -> str:
content (str): llm's output from `Role._think`
"""
content = content.strip() # deal the output cases like " 0", "0\n" and so on.
- pattern = r"([0-9])" # TODO find the number using a more proper method not just extract from content using pattern
+ pattern = (
+ r"(?<!-)[0-9]" # TODO find the number using a more proper method not just extract from content using pattern
+ )
matches = re.findall(pattern, content, re.DOTALL)
matches = list(set(matches))
state = matches[0] if len(matches) > 0 else "-1"
|
**Features**
<!-- Clear and direct description of the submit features. -->
<!-- If it's a bug fix, please also paste the issue link. -->
- fix state value extraction for issues/1067
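The behavioral difference introduced by the negative lookbehind can be checked in isolation (standalone sketch, not MetaGPT code):

```python
import re

# output like "-1" is the fallback state itself and must not match
content = "-1"

# old pattern: picks up the digit even when it belongs to a negative number
assert re.findall(r"([0-9])", content) == ["1"]

# new pattern: the (?<!-) negative lookbehind rejects digits preceded
# by '-', so the caller correctly falls back to state "-1"
assert re.findall(r"(?<!-)[0-9]", content) == []

# plain states still match as before
assert re.findall(r"(?<!-)[0-9]", " 0\n".strip()) == ["0"]
```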
**Feature Docs**
<!-- The RFC, tutorial, or use cases about the feature if it's a pretty big update. If not, there is no need to fill. -->
**Influence**
<!-- Tell me the impact of the new feature and I'll focus on it. -->
**Result**
<!-- The screenshot/log of unittest/running result -->
**Other**
<!-- Something else about this PR. --> | https://api.github.com/repos/geekan/MetaGPT/pulls/1069 | 2024-03-21T13:59:09Z | 2024-03-21T14:01:30Z | 2024-03-21T14:01:29Z | 2024-03-21T14:01:30Z | 250 | geekan/MetaGPT | 16,996 |
Add `tensorrt>=7.0.0` checks | diff --git a/export.py b/export.py
index 6cf1db2c45b..a0cb5fdc567 100644
--- a/export.py
+++ b/export.py
@@ -61,8 +61,8 @@
from models.yolo import Detect
from utils.activations import SiLU
from utils.datasets import LoadImages
-from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, colorstr, file_size, print_args,
- url2file)
+from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, check_version, colorstr,
+ file_size, print_args, url2file)
from utils.torch_utils import select_device
@@ -174,14 +174,14 @@ def export_engine(model, im, file, train, half, simplify, workspace=4, verbose=F
check_requirements(('tensorrt',))
import tensorrt as trt
- opset = (12, 13)[trt.__version__[0] == '8'] # test on TensorRT 7.x and 8.x
- if opset == 12: # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012
+ if trt.__version__[0] == 7: # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012
grid = model.model[-1].anchor_grid
model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid]
- export_onnx(model, im, file, opset, train, False, simplify)
+ export_onnx(model, im, file, 12, train, False, simplify) # opset 12
model.model[-1].anchor_grid = grid
else: # TensorRT >= 8
- export_onnx(model, im, file, opset, train, False, simplify)
+ check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=8.0.0
+ export_onnx(model, im, file, 13, train, False, simplify) # opset 13
onnx = file.with_suffix('.onnx')
assert onnx.exists(), f'failed to export ONNX file: {onnx}'
diff --git a/models/common.py b/models/common.py
index 284dd2bb3af..836314568f6 100644
--- a/models/common.py
+++ b/models/common.py
@@ -337,7 +337,7 @@ def __init__(self, weights='yolov5s.pt', device=None, dnn=False, data=None):
elif engine: # TensorRT
LOGGER.info(f'Loading {w} for TensorRT inference...')
import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
- check_version(trt.__version__, '8.0.0', verbose=True) # version requirement
+ check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0
Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
logger = trt.Logger(trt.Logger.INFO)
with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Upgraded TensorRT version compatibility and cleaned up export conditions.
### 📊 Key Changes
- Introduced a new utility function `check_version` to enforce version requirements.
- Simplified ONNX export logic based on TensorRT version checks.
- Lowered the minimum required TensorRT version from 8.0.0 to 7.0.0, while keeping the TensorRT 7 vs. 8 distinction for ONNX opset selection (12 vs. 13).
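The kind of version gate `check_version` provides can be sketched as follows (a simplified stand-in, not the actual implementation in `utils.general`):

```python
def check_version(current: str, minimum: str, hard: bool = False) -> bool:
    # naive numeric comparison; a real implementation also handles
    # suffixes such as "8.0.0.dev" and pre-release tags
    def to_tuple(v: str) -> tuple:
        return tuple(int(p) for p in v.split(".")[:3])

    result = to_tuple(current) >= to_tuple(minimum)
    if hard:
        # hard=True turns an unmet requirement into an error
        assert result, f"version {minimum} or newer required, found {current}"
    return result


assert check_version("8.4.1", "7.0.0")
assert not check_version("6.0.1", "7.0.0")
```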
### 🎯 Purpose & Impact
- Ensures the codebase is using appropriate functionality based on the user's installed version of TensorRT. 🔄
- Simplifies the exporting process when converting models to ONNX format, which could lead to fewer bugs and easier maintenance. 🛠️
- Broadens the compatibility to include TensorRT 7.0.0 and above, allowing users with older versions to utilize the tool without needing to upgrade their TensorRT (unless already below this threshold). 📈 | https://api.github.com/repos/ultralytics/yolov5/pulls/6193 | 2022-01-04T21:22:36Z | 2022-01-04T21:39:13Z | 2022-01-04T21:39:13Z | 2024-01-19T13:39:50Z | 749 | ultralytics/yolov5 | 25,352 |
Option to make images generated from a given manual seed consistent across CUDA and MPS devices | diff --git a/modules/devices.py b/modules/devices.py
index 52c3e7cd773..3bc86a6a9d1 100644
--- a/modules/devices.py
+++ b/modules/devices.py
@@ -92,14 +92,18 @@ def cond_cast_float(input):
def randn(seed, shape):
+ from modules.shared import opts
+
torch.manual_seed(seed)
- if device.type == 'mps':
+ if opts.use_cpu_randn or device.type == 'mps':
return torch.randn(shape, device=cpu).to(device)
return torch.randn(shape, device=device)
def randn_without_seed(shape):
- if device.type == 'mps':
+ from modules.shared import opts
+
+ if opts.use_cpu_randn or device.type == 'mps':
return torch.randn(shape, device=cpu).to(device)
return torch.randn(shape, device=device)
diff --git a/modules/sd_samplers_common.py b/modules/sd_samplers_common.py
index a1aac7cf0aa..e6a372d52f8 100644
--- a/modules/sd_samplers_common.py
+++ b/modules/sd_samplers_common.py
@@ -60,3 +60,12 @@ def store_latent(decoded):
class InterruptedException(BaseException):
pass
+
+if opts.use_cpu_randn:
+ import torchsde._brownian.brownian_interval
+
+ def torchsde_randn(size, dtype, device, seed):
+ generator = torch.Generator(devices.cpu).manual_seed(int(seed))
+ return torch.randn(size, dtype=dtype, device=devices.cpu, generator=generator).to(device)
+
+ torchsde._brownian.brownian_interval._randn = torchsde_randn
diff --git a/modules/sd_samplers_kdiffusion.py b/modules/sd_samplers_kdiffusion.py
index e9f08518fdc..13f4567aecb 100644
--- a/modules/sd_samplers_kdiffusion.py
+++ b/modules/sd_samplers_kdiffusion.py
@@ -190,7 +190,7 @@ def randn_like(self, x):
if noise.shape == x.shape:
return noise
- if x.device.type == 'mps':
+ if opts.use_cpu_randn or x.device.type == 'mps':
return torch.randn_like(x, device=devices.cpu).to(x.device)
else:
return torch.randn_like(x)
diff --git a/modules/shared.py b/modules/shared.py
index 5fd0eecbd1f..59b037d5897 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -331,6 +331,7 @@ def list_samplers():
"comma_padding_backtrack": OptionInfo(20, "Increase coherency by padding from the last comma within n tokens when using more than 75 tokens", gr.Slider, {"minimum": 0, "maximum": 74, "step": 1 }),
"CLIP_stop_at_last_layers": OptionInfo(1, "Clip skip", gr.Slider, {"minimum": 1, "maximum": 12, "step": 1}),
"upcast_attn": OptionInfo(False, "Upcast cross attention layer to float32"),
+ "use_cpu_randn": OptionInfo(False, "Use CPU for random number generation to make manual seeds generate the same image across platforms. This may change existing seeds."),
}))
options_templates.update(options_section(('compatibility', "Compatibility"), {
| **Describe what this pull request is trying to achieve.**
Implements an option that, when enabled, causes a given manual seed to generate identical or very similar images across both CUDA and MPS devices.
Fixes https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9613
**Additional notes and description of your changes**
This change set makes minor modifications to existing or historical code paths contributed by @brkirch . CPU is already used for random number generation on macOS due to the MPS device's historical inability to deterministically generate random numbers.
The proposed changes add an option, `use_cpu_randn`, that will follow those code paths when the option is set, even if the GPU device is not MPS.
Enabling this option has no impact on the images an MPS device will generate, but will change a CUDA device's output to closely match MPS.
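The principle behind the option in a few lines of PyTorch (a standalone sketch, not webui code):

```python
import torch


def seeded_noise(seed: int, shape: tuple) -> torch.Tensor:
    # generate on CPU so the values do not depend on the GPU backend
    # (CUDA, MPS, ...); the caller then moves the tensor to its device
    torch.manual_seed(seed)
    return torch.randn(shape, device="cpu")


# a given seed reproduces the same latent noise everywhere
assert torch.equal(seeded_noise(1234, (4,)), seeded_noise(1234, (4,)))
```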
**Environment this was tested in**
- OS: Windows 10, Linux (Arch), macOS 13.3.1
- Browser: Chrome, Edge, Firefox
- Graphics card: NVIDIA RTX 2080 8GB, Apple M1 Max 10cpu/32gpu
Samplers tested for similar output between CUDA and MPS: DPM++ 2M Karras, DPM++ SDE, Euler a
**Screenshots or videos of your changes**
The new option is added to the bottom of the Stable Diffusion tab in Settings.
Before addition of new option:
<img width="1152" alt="Screenshot 2023-04-19 at 01 30 48" src="https://user-images.githubusercontent.com/1689220/232975775-588031af-3ef5-4cc3-a712-0a45d348af78.png">
After addition of new option:
<img width="1153" alt="Screenshot 2023-04-19 at 01 29 40" src="https://user-images.githubusercontent.com/1689220/232975688-f98cc6b9-2929-482e-9d9a-2acdbf74e9c9.png">
(These screenshots were taken on an instance configured to use a theme from the catppuccin extension. This PR does not propose any theme-related changes.)
MPS generated image:

CUDA generated image with `use_cpu_randn` disabled:

CUDA generated image with `use_cpu_randn` enabled:

| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/9734 | 2023-04-19T05:48:02Z | 2023-04-29T08:16:06Z | 2023-04-29T08:16:06Z | 2023-04-29T08:16:06Z | 777 | AUTOMATIC1111/stable-diffusion-webui | 39,810 |
modules/api/api.py: add api endpoint to refresh embeddings list | diff --git a/modules/api/api.py b/modules/api/api.py
index b3d74e513a3..b6bb9d06ab1 100644
--- a/modules/api/api.py
+++ b/modules/api/api.py
@@ -230,6 +230,7 @@ def __init__(self, app: FastAPI, queue_lock: Lock):
self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=list[models.RealesrganItem])
self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=list[models.PromptStyleItem])
self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=models.EmbeddingsResponse)
+ self.add_api_route("/sdapi/v1/refresh-embeddings", self.refresh_embeddings, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-vae", self.refresh_vae, methods=["POST"])
self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
@@ -643,6 +644,10 @@ def convert_embeddings(embeddings):
"skipped": convert_embeddings(db.skipped_embeddings),
}
+ def refresh_embeddings(self):
+ with self.queue_lock:
+ sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
+
def refresh_checkpoints(self):
with self.queue_lock:
shared.refresh_checkpoints()
| ## Description
This PR adds a simple API endpoint to refresh/reload the loaded embeddings.
It's useful for picking up embeddings added via the filesystem without restarting or disrupting a running automatic1111 instance.
Resolves #13994
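With the endpoint in place, triggering a reload from a script is a single empty-body POST against the route below (the host/port shown are the default local webui address and may differ in your setup):

```python
from urllib.parse import urljoin

base = "http://127.0.0.1:7860"  # assumed default local webui address
url = urljoin(base, "/sdapi/v1/refresh-embeddings")
# an empty-body POST is enough, e.g.: requests.post(url)
assert url == "http://127.0.0.1:7860/sdapi/v1/refresh-embeddings"
```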
## Checklist:
- [X] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [X] I have performed a self-review of my own code
- [X] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [X] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/14715 | 2024-01-21T13:13:12Z | 2024-01-23T19:35:41Z | 2024-01-23T19:35:41Z | 2024-01-23T19:37:38Z | 364 | AUTOMATIC1111/stable-diffusion-webui | 39,696 |
is should not be used for comparing numbers | diff --git a/requests/models.py b/requests/models.py
index 6ed2b59946..2850950e86 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -575,7 +575,7 @@ def content(self):
raise RuntimeError(
'The content for this response was already consumed')
- if self.status_code is 0:
+ if self.status_code == 0:
self._content = None
else:
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
| https://api.github.com/repos/psf/requests/pulls/1276 | 2013-04-01T06:20:50Z | 2013-04-02T03:42:23Z | 2013-04-02T03:42:23Z | 2023-05-23T23:14:22Z | 126 | psf/requests | 32,811 | |
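For context on the one-character change above: `is` tests object identity while `==` tests value equality, and identity for numbers is a CPython caching detail:

```python
# `==` compares values; `is` compares object identity
zero_int, zero_float = 0, 0.0
assert zero_int == zero_float      # equal in value
assert zero_int is not zero_float  # but distinct objects

# CPython interns small ints (-5..256); larger values generally get
# fresh objects, so identity checks on numbers are never reliable
big_a, big_b = int("1000"), int("1000")
assert big_a == big_b
```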
[mypy] Fix web_programming directory | diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index f544c02b1c35..76c6357fe0ca 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -23,7 +23,7 @@ jobs:
python -m pip install mypy pytest-cov -r requirements.txt
# FIXME: #4052 fix mypy errors in the exclude directories and remove them below
- run: mypy --ignore-missing-imports
- --exclude '(arithmetic_analysis|ciphers|conversions|data_structures|digital_image_processing|dynamic_programming|graphs|hashes|linear_algebra|maths|matrix|other|project_euler|scripts|searches|strings|web_programming*)/$' .
+ --exclude '(arithmetic_analysis|ciphers|conversions|data_structures|digital_image_processing|dynamic_programming|graphs|hashes|linear_algebra|maths|matrix|other|project_euler|scripts|searches|strings*)/$' .
- name: Run tests
run: pytest --doctest-modules --ignore=project_euler/ --ignore=scripts/ --cov-report=term-missing:skip-covered --cov=. .
- if: ${{ success() }}
diff --git a/web_programming/currency_converter.py b/web_programming/currency_converter.py
index 6aed2a5578a5..447595b0b646 100644
--- a/web_programming/currency_converter.py
+++ b/web_programming/currency_converter.py
@@ -9,7 +9,7 @@
URL_BASE = "https://www.amdoren.com/api/currency.php"
TESTING = os.getenv("CI", False)
-API_KEY = os.getenv("AMDOREN_API_KEY")
+API_KEY = os.getenv("AMDOREN_API_KEY", "")
if not API_KEY and not TESTING:
raise KeyError("Please put your API key in an environment variable.")
diff --git a/web_programming/emails_from_url.py b/web_programming/emails_from_url.py
index 01dee274f015..0571ac3313a3 100644
--- a/web_programming/emails_from_url.py
+++ b/web_programming/emails_from_url.py
@@ -8,18 +8,19 @@
import re
from html.parser import HTMLParser
+from typing import Optional
from urllib import parse
import requests
class Parser(HTMLParser):
- def __init__(self, domain: str):
- HTMLParser.__init__(self)
- self.data = []
+ def __init__(self, domain: str) -> None:
+ super().__init__()
+ self.urls: list[str] = []
self.domain = domain
- def handle_starttag(self, tag: str, attrs: str) -> None:
+ def handle_starttag(self, tag: str, attrs: list[tuple[str, Optional[str]]]) -> None:
"""
This function parse html to take takes url from tags
"""
@@ -29,10 +30,10 @@ def handle_starttag(self, tag: str, attrs: str) -> None:
for name, value in attrs:
# If href is defined, and not empty nor # print it.
if name == "href" and value != "#" and value != "":
- # If not already in data.
- if value not in self.data:
+ # If not already in urls.
+ if value not in self.urls:
url = parse.urljoin(self.domain, value)
- self.data.append(url)
+ self.urls.append(url)
# Get main domain name (example.com)
@@ -59,7 +60,7 @@ def get_sub_domain_name(url: str) -> str:
return parse.urlparse(url).netloc
-def emails_from_url(url: str = "https://github.com") -> list:
+def emails_from_url(url: str = "https://github.com") -> list[str]:
"""
This function takes url and return all valid urls
"""
@@ -78,7 +79,7 @@ def emails_from_url(url: str = "https://github.com") -> list:
# Get links and loop through
valid_emails = set()
- for link in parser.data:
+ for link in parser.urls:
# open URL.
# read = requests.get(link)
try:
| A subtask of #4052
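The corrected `handle_starttag` signature matches what `HTMLParser` actually calls it with — a minimal standalone illustration (not repository code):

```python
from __future__ import annotations

from html.parser import HTMLParser
from typing import Optional


class LinkCollector(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.urls: list[str] = []

    # attrs is a list of (name, value) pairs; value may be None
    # for bare attributes such as <input disabled>
    def handle_starttag(self, tag: str, attrs: list[tuple[str, Optional[str]]]) -> None:
        for name, value in attrs:
            if tag == "a" and name == "href" and value:
                self.urls.append(value)


parser = LinkCollector()
parser.feed('<a href="https://example.com">link</a><a>no href</a>')
assert parser.urls == ["https://example.com"]
```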
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/4297 | 2021-03-28T01:58:51Z | 2021-03-31T03:18:07Z | 2021-03-31T03:18:07Z | 2021-03-31T05:21:08Z | 957 | TheAlgorithms/Python | 29,593 |
add custom tool example | diff --git a/examples/di/custom_tool.py b/examples/di/custom_tool.py
new file mode 100644
index 000000000..cbe7380c7
--- /dev/null
+++ b/examples/di/custom_tool.py
@@ -0,0 +1,36 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+@Time : 2024/3/22 10:54
+@Author : alexanderwu
+@File : custom_tool.py
+"""
+
+from metagpt.roles.di.data_interpreter import DataInterpreter
+from metagpt.tools.tool_registry import register_tool
+
+
+@register_tool()
+def magic_function(arg1: str, arg2: int) -> dict:
+ """
+ The magic function that does something.
+
+ Args:
+ arg1 (str): ...
+ arg2 (int): ...
+
+ Returns:
+ dict: ...
+ """
+ return {"arg1": arg1 * 3, "arg2": arg2 * 5}
+
+
+async def main():
+ di = DataInterpreter(tools=["magic_function"])
+ await di.run("Just call the magic function with arg1 'A' and arg2 2. Tell me the result.")
+
+
+if __name__ == "__main__":
+ import asyncio
+
+ asyncio.run(main())
| **Features**
<!-- Clear and direct description of the submit features. -->
<!-- If it's a bug fix, please also paste the issue link. -->
- add an example of using DataInterpreter with a custom tool.
**Feature Docs**
<!-- The RFC, tutorial, or use cases about the feature if it's a pretty big update. If not, there is no need to fill. -->
**Influence**
<!-- Tell me the impact of the new feature and I'll focus on it. -->
Solves https://github.com/geekan/MetaGPT/issues/1034
**Result**
<!-- The screenshot/log of unittest/running result -->

**Other**
<!-- Something else about this PR. --> | https://api.github.com/repos/geekan/MetaGPT/pulls/1071 | 2024-03-22T03:01:16Z | 2024-03-22T03:20:04Z | 2024-03-22T03:20:04Z | 2024-03-22T03:20:04Z | 307 | geekan/MetaGPT | 16,778 |
ui: Always display Wi-Fi list when navigating to Network panel | diff --git a/selfdrive/ui/qt/network/networking.cc b/selfdrive/ui/qt/network/networking.cc
index 98ecc90fe33466..090b9b578c4a9d 100644
--- a/selfdrive/ui/qt/network/networking.cc
+++ b/selfdrive/ui/qt/network/networking.cc
@@ -105,6 +105,7 @@ void Networking::showEvent(QShowEvent *event) {
}
void Networking::hideEvent(QHideEvent *event) {
+ main_layout->setCurrentWidget(wifiScreen);
wifi->stop();
}
Currently, if the user leaves the `Network` panel while its "Advanced" sub-panel is open, the "Advanced" sub-panel is still shown when they navigate back.
With this change, the `Network` panel resets to the Wi-Fi list whenever the "Advanced" sub-panel is open and the user does one of the following:
- Exiting the settings menu
- Navigating to another settings panel | https://api.github.com/repos/commaai/openpilot/pulls/30333 | 2023-10-26T02:46:58Z | 2023-10-29T17:06:35Z | 2023-10-29T17:06:35Z | 2023-10-29T17:12:05Z | 122 | commaai/openpilot | 9,603 |
add warnings and clarity to config documentation | diff --git a/docs/using.rst b/docs/using.rst
index aae8efbf2fb..87d56df608d 100644
--- a/docs/using.rst
+++ b/docs/using.rst
@@ -538,8 +538,15 @@ commands into your individual environment.
Modifying the Renewal Configuration File
----------------------------------------
+When a certificate is issued, by default Certbot creates a renewal configuration file that
+tracks the options that were selected when Certbot was run. This allows Certbot
+to use those same options again when it comes time for renewal. These renewal
+configuration files are located at ``/etc/letsencrypt/renewal/CERTNAME``.
+
For advanced certificate management tasks, it is possible to manually modify the certificate's
-renewal configuration file, located at ``/etc/letsencrypt/renewal/CERTNAME``.
+renewal configuration file, but this is discouraged since it can easily break Certbot's
+ability to renew your certificates. If you choose to modify the renewal configuration file
+we advise you to test its validity with the ``certbot renew --dry-run`` command.
.. warning:: Modifying any files in ``/etc/letsencrypt`` can damage them so Certbot can no longer properly manage its certificates, and we do not recommend doing so.
@@ -790,7 +797,12 @@ of Certbot that you would like to run.
Configuration file
==================
-It is possible to specify configuration file with
+Certbot accepts a global configuration file that applies its options to all invocations
+of Certbot. Certificate specific configuration choices should be set in the ``.conf``
+files that can be found in ``/etc/letsencrypt/renewal``.
+
+By default no cli.ini file is created, after creating one
+it is possible to specify the location of this configuration file with
``certbot-auto --config cli.ini`` (or shorter ``-c cli.ini``). An
example configuration file is shown below:
@@ -804,6 +816,13 @@ By default, the following locations are searched:
``~/.config/letsencrypt/cli.ini`` if ``$XDG_CONFIG_HOME`` is not
set).
+Since this configuration file applies to all invocations of certbot it is incorrect
+to list domains in it. Listing domains in cli.ini may prevent renewal from working.
+Additionally due to how arguments in cli.ini are parsed, options which wish to
+not be set should not be listed. Options set to false will instead be read
+as being set to true by older versions of Certbot, since they have been listed
+in the config file.
+
.. keep it up to date with constants.py
.. _log-rotation:
| In progress PR for #4152 | https://api.github.com/repos/certbot/certbot/pulls/4991 | 2017-08-03T23:13:24Z | 2017-08-21T19:30:04Z | 2017-08-21T19:30:04Z | 2017-08-21T19:30:07Z | 591 | certbot/certbot | 1,885 |
hitbtc v3: add safeTrade | diff --git a/js/hitbtc3.js b/js/hitbtc3.js
index dc00ac229371..706fbb153856 100644
--- a/js/hitbtc3.js
+++ b/js/hitbtc3.js
@@ -738,52 +738,61 @@ module.exports = class hitbtc3 extends Exchange {
}
parseTrade (trade, market = undefined) {
- // createMarketOrder
//
- // { fee: "0.0004644",
- // id: 386394956,
- // price: "0.4644",
- // quantity: "1",
- // timestamp: "2018-10-25T16:41:44.780Z" }
+ // createOrder (market)
+ //
+ // {
+ // id: '1569252895',
+ // position_id: '0',
+ // quantity: '10',
+ // price: '0.03919424',
+ // fee: '0.000979856000',
+ // timestamp: '2022-01-25T19:38:36.153Z',
+ // taker: true
+ // }
//
// fetchTrades
//
- // { id: 974786185,
- // price: '0.032462',
- // qty: '0.3673',
- // side: 'buy',
- // timestamp: '2020-10-16T12:57:39.846Z' }
+ // {
+ // id: 974786185,
+ // price: '0.032462',
+ // qty: '0.3673',
+ // side: 'buy',
+ // timestamp: '2020-10-16T12:57:39.846Z'
+ // }
//
// fetchMyTrades
//
- // { id: 277210397,
- // clientOrderId: '6e102f3e7f3f4e04aeeb1cdc95592f1a',
- // orderId: 28102855393,
- // symbol: 'ETHBTC',
- // side: 'sell',
- // quantity: '0.002',
- // price: '0.073365',
- // fee: '0.000000147',
- // timestamp: '2018-04-28T18:39:55.345Z',
- // taker: true }
+ // {
+ // id: 277210397,
+ // clientOrderId: '6e102f3e7f3f4e04aeeb1cdc95592f1a',
+ // orderId: 28102855393,
+ // symbol: 'ETHBTC',
+ // side: 'sell',
+ // quantity: '0.002',
+ // price: '0.073365',
+ // fee: '0.000000147',
+ // timestamp: '2018-04-28T18:39:55.345Z',
+ // taker: true
+ // }
//
const timestamp = this.parse8601 (trade['timestamp']);
const marketId = this.safeString (trade, 'symbol');
market = this.safeMarket (marketId, market);
const symbol = market['symbol'];
let fee = undefined;
- const feeCost = this.safeNumber (trade, 'fee');
+ const feeCostString = this.safeString (trade, 'fee');
const taker = this.safeValue (trade, 'taker');
let takerOrMaker = undefined;
if (taker !== undefined) {
takerOrMaker = taker ? 'taker' : 'maker';
}
- if (feeCost !== undefined) {
+ if (feeCostString !== undefined) {
const info = this.safeValue (market, 'info', {});
const feeCurrency = this.safeString (info, 'fee_currency');
const feeCurrencyCode = this.safeCurrencyCode (feeCurrency);
fee = {
- 'cost': feeCost,
+ 'cost': feeCostString,
'currency': feeCurrencyCode,
};
}
@@ -793,12 +802,9 @@ module.exports = class hitbtc3 extends Exchange {
const orderId = this.safeString (trade, 'clientOrderId');
const priceString = this.safeString (trade, 'price');
const amountString = this.safeString2 (trade, 'quantity', 'qty');
- const price = this.parseNumber (priceString);
- const amount = this.parseNumber (amountString);
- const cost = this.parseNumber (Precise.stringMul (priceString, amountString));
const side = this.safeString (trade, 'side');
const id = this.safeString (trade, 'id');
- return {
+ return this.safeTrade ({
'info': trade,
'id': id,
'order': orderId,
@@ -808,11 +814,11 @@ module.exports = class hitbtc3 extends Exchange {
'type': undefined,
'side': side,
'takerOrMaker': takerOrMaker,
- 'price': price,
- 'amount': amount,
- 'cost': cost,
+ 'price': priceString,
+ 'amount': amountString,
+ 'cost': undefined,
'fee': fee,
- };
+ }, market);
}
async fetchTransactionsHelper (types, code, since, limit, params) {
| https://api.github.com/repos/ccxt/ccxt/pulls/11657 | 2022-01-25T20:30:41Z | 2022-01-25T21:18:48Z | 2022-01-25T21:18:48Z | 2022-04-22T13:54:18Z | 1,261 | ccxt/ccxt | 13,597 | |
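The motivation for keeping prices and amounts as strings until the last moment, illustrated with Python's `decimal` module (ccxt's own `Precise` class works on strings to the same end, but this is not its implementation):

```python
from decimal import Decimal

price, amount = "0.073365", "0.002"  # values from the fetchMyTrades example

# binary floats can pick up rounding error in the multiplication
float_cost = float(price) * float(amount)

# exact decimal arithmetic on the original strings does not
exact_cost = Decimal(price) * Decimal(amount)
assert str(exact_cost) == "0.000146730"
```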
[Bot] Synchronize Submodule References | diff --git a/examples b/examples
index 3d902cbb8440..10ddf8a1767c 160000
--- a/examples
+++ b/examples
@@ -1 +1 @@
-Subproject commit 3d902cbb8440c7fd733672503d7a62caab707ebc
+Subproject commit 10ddf8a1767c54515f01672b53a78a276415c7d5
diff --git a/inference b/inference
index 1ee7074afa97..fa363d3c567a 160000
--- a/inference
+++ b/inference
@@ -1 +1 @@
-Subproject commit 1ee7074afa97a0094dc3e284a224f227706c4a09
+Subproject commit fa363d3c567a6157d7a422820428b31f3e43736f
| Automated PR to update submodule commits | https://api.github.com/repos/hpcaitech/ColossalAI/pulls/1159 | 2022-06-23T00:01:32Z | 2022-06-29T07:11:36Z | 2022-06-29T07:11:36Z | 2022-06-29T07:11:36Z | 203 | hpcaitech/ColossalAI | 11,513 |
Fix print_json indent type | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6a0e3e518..eb6b62623 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,6 +22,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Fix syntax lexer guessing.
- Fixed Pretty measure not respecting expand_all https://github.com/Textualize/rich/issues/1998
- Collapsed definitions for single-character spinners, to save memory and reduce import time.
+- Fix print_json indent type in __init__.py
### Changed
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 8203d0f7a..140f77f43 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -30,6 +30,7 @@ The following people have contributed to the development of Rich:
- [Clément Robert](https://github.com/neutrinoceros)
- [Brian Rutledge](https://github.com/bhrutledge)
- [Tushar Sadhwani](https://github.com/tusharsadhwani)
+- [Paul Sanders](https://github.com/sanders41)
- [Tim Savage](https://github.com/timsavage)
- [Nicolas Simonds](https://github.com/0xDEC0DE)
- [Aaron Stephens](https://github.com/aaronst)
diff --git a/rich/__init__.py b/rich/__init__.py
index ed11f5d7e..01faa6e6b 100644
--- a/rich/__init__.py
+++ b/rich/__init__.py
@@ -1,7 +1,7 @@
"""Rich text and beautiful formatting in the terminal."""
import os
-from typing import Callable, IO, TYPE_CHECKING, Any, Optional
+from typing import Callable, IO, TYPE_CHECKING, Any, Optional, Union
from ._extension import load_ipython_extension
@@ -73,7 +73,7 @@ def print_json(
json: Optional[str] = None,
*,
data: Any = None,
- indent: int = 2,
+ indent: Union[None, int, str] = 2,
highlight: bool = True,
skip_keys: bool = False,
ensure_ascii: bool = True,
| ## Type of changes
Closes #2028
- [x] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
Changes the type for `print_json` indent in the __init__.py file to `Union[None, int, str]`
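For context (an illustration, not part of the diff), the widened annotation mirrors what the standard library's `json.dumps` itself accepts for `indent` — an `int`, a `str`, or `None`:

```python
import json

data = {"name": "rich", "version": "10.13.0"}

# indent as an int: that many spaces per nesting level
two_spaces = json.dumps(data, indent=2)

# indent as a str: the given string is used for each level (e.g. tabs)
tabbed = json.dumps(data, indent="\t")

# indent=None: the compact single-line representation
compact = json.dumps(data, indent=None)

print(compact)  # -> {"name": "rich", "version": "10.13.0"}
```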
| https://api.github.com/repos/Textualize/rich/pulls/2029 | 2022-03-07T00:56:32Z | 2022-03-07T11:19:50Z | 2022-03-07T11:19:49Z | 2022-03-08T00:30:37Z | 530 | Textualize/rich | 48,588 |
fix(grouping): Shorten tag keys | diff --git a/src/sentry/eventstore/models.py b/src/sentry/eventstore/models.py
index e63fa74008a55..a9ffeab94d2c5 100644
--- a/src/sentry/eventstore/models.py
+++ b/src/sentry/eventstore/models.py
@@ -373,11 +373,9 @@ def get_hashes(self, force_config=None) -> CalculatedHashes:
)
if flat_hashes:
- sentry_sdk.set_tag("event.get_hashes.first_flat_variant_name", flat_hashes[0][0])
+ sentry_sdk.set_tag("get_hashes.flat_variant", flat_hashes[0][0])
if hierarchical_hashes:
- sentry_sdk.set_tag(
- "event.get_hashes.first_hierarchical_variant_name", hierarchical_hashes[0][0]
- )
+ sentry_sdk.set_tag("get_hashes.hierarchical_variant", hierarchical_hashes[0][0])
flat_hashes = [hash_ for _, hash_ in flat_hashes]
hierarchical_hashes = [hash_ for _, hash_ in hierarchical_hashes]
| Those keys are currently getting truncated as our maximum character
count is 32 | https://api.github.com/repos/getsentry/sentry/pulls/29878 | 2021-11-09T16:00:54Z | 2021-11-10T09:37:41Z | 2021-11-10T09:37:41Z | 2021-11-25T12:00:59Z | 226 | getsentry/sentry | 44,412 |
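As a quick sanity check on the sentry tag-key rename above (the 32-character limit is taken from the PR description; nothing here is from the Sentry codebase itself):

```python
MAX_TAG_KEY_LENGTH = 32  # limit stated in the PR description

renames = {
    "event.get_hashes.first_flat_variant_name": "get_hashes.flat_variant",
    "event.get_hashes.first_hierarchical_variant_name": "get_hashes.hierarchical_variant",
}

for old, new in renames.items():
    assert len(old) > MAX_TAG_KEY_LENGTH   # the old keys were being truncated
    assert len(new) <= MAX_TAG_KEY_LENGTH  # the renamed keys fit the limit
    print(f"{old} ({len(old)}) -> {new} ({len(new)})")
```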
Stops the generation immediately when using the "Maximum number of tokens/second" setting | diff --git a/modules/text_generation.py b/modules/text_generation.py
index a3755c10a7..db897e66b4 100644
--- a/modules/text_generation.py
+++ b/modules/text_generation.py
@@ -96,7 +96,7 @@ def _generate_reply(question, state, stopping_strings=None, is_chat=False, escap
last_update = cur_time
yield reply
- if stop_found:
+ if stop_found or (state['max_tokens_second'] > 0 and shared.stop_everything):
break
if not is_chat:
| Hello,
When using the "Maximum number of tokens/second" setting, if you start a generation and then press the "Stop" button, it doesn't actually stop immediately: the loop is still waiting for the whole generation (produced at full speed) to display all of its tokens.
This simple code fixes that and makes the Stop button an immediate stop in this situation. | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/3952 | 2023-09-16T07:32:05Z | 2023-09-18T17:27:07Z | 2023-09-18T17:27:07Z | 2023-09-18T17:27:07Z | 124 | oobabooga/text-generation-webui | 26,748 |
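The pattern in the text-generation-webui fix above — checking a shared stop flag inside the streaming loop rather than only after it — can be sketched in isolation (the `stop_everything` name follows the PR; the loop is a simplified stand-in for the real generator):

```python
import threading
import time

stop_everything = threading.Event()  # stands in for shared.stop_everything

def generate(tokens, max_tokens_second=10):
    """Produce tokens at a throttled rate, stopping as soon as the flag is set."""
    delay = 1.0 / max_tokens_second if max_tokens_second > 0 else 0.0
    produced = []
    for token in tokens:
        # Checking the flag inside the loop is what makes Stop immediate;
        # checking only after the loop would first drain every queued token.
        if stop_everything.is_set():
            break
        produced.append(token)
        time.sleep(delay)
    return produced

stop_everything.set()  # simulate pressing Stop right away
result = generate(["Hello", ",", " world"])
print(result)  # -> []
```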
adds Mastodon instances and pr0gramm.com | diff --git a/data.json b/data.json
index 30094af9e..d44ede84c 100644
--- a/data.json
+++ b/data.json
@@ -327,6 +327,14 @@
"urlMain": "https://www.championat.com/",
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
+ },
+ "chaos.social": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://chaos.social/@{}",
+ "urlMain": "https://chaos.social/",
+ "username_claimed": "rixx",
+ "username_unclaimed": "noonewouldeverusethis7"
},
"Chatujme.cz": {
"errorMsg": "Neexistujic\u00ed profil",
@@ -1045,7 +1053,39 @@
"username_claimed": "jcs",
"username_unclaimed": "noonewouldeverusethis7"
},
- "Mastodon": {
+ "mastodon.cloud": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://mastodon.cloud/@{}",
+ "urlMain": "https://mastodon.cloud/",
+ "username_claimed": "TheAdmin",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
+ "mastodon.social": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://mastodon.social/@{}",
+ "urlMain": "https://chaos.social/",
+ "username_claimed": "Gargron",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
+ "mastodon.technology": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://mastodon.technology/@{}",
+ "urlMain": "https://mastodon.xyz/",
+ "username_claimed": "ashfurrow",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
+ "mastodon.xyz": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://mastodon.xyz/@{}",
+ "urlMain": "https://mastodon.xyz/",
+ "username_claimed": "TheKinrar",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
+ "mstdn.io": {
"errorType": "status_code",
"rank": 792347,
"url": "https://mstdn.io/@{}",
@@ -1518,6 +1558,14 @@
"urlMain": "https://www.smule.com/",
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
+ },
+ "social.tchncs.de": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://social.tchncs.de/@{}",
+ "urlMain": "https://social.tchncs.de/",
+ "username_claimed": "Milan",
+ "username_unclaimed": "noonewouldeverusethis7"
},
"SoundCloud": {
"errorType": "status_code",
@@ -2390,6 +2438,14 @@
"urlMain": "https://pikabu.ru/",
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
+ },
+ "pr0gramm": {
+ "errorType": "status_code",
+ "rank": 1655,
+ "url": "https://pr0gramm.com/api/profile/info?name={}",
+ "urlMain": "https://pr0gramm.com/",
+ "username_claimed": "cha0s",
+ "username_unclaimed": "noonewouldeverusethis123123123123123123"
},
"pvpru": {
"errorType": "status_code",
@@ -2490,4 +2546,4 @@
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
}
-}
\ No newline at end of file
+}
| Adds some popular Mastodon instances (chaos.social, mastodon.social, mastodon.xyz, pawoo.net, social.tchncs.de) and changes the existing 'Mastodon' instance to its own name 'mstdn.io', because mstdn.io is not the whole Mastodon fediverse. Also adds the imageboard pr0gramm.com | https://api.github.com/repos/sherlock-project/sherlock/pulls/521 | 2020-01-14T08:08:55Z | 2020-01-16T14:14:58Z | 2020-01-16T14:14:58Z | 2020-01-16T14:14:59Z | 1,032 | sherlock-project/sherlock | 36,654 |
Add install docs for openSUSE | diff --git a/docs/install.rst b/docs/install.rst
index cf93cc58ea..b37d9c9153 100644
--- a/docs/install.rst
+++ b/docs/install.rst
@@ -110,6 +110,20 @@ libraries. This was tested on a fully patched installation of Fedora 24.
Make sure to have an up-to-date version of pip by running ``pip3 install -U pip``.
+.. _install-source-opensuse:
+
+Installation from Source on openSUSE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This was tested on a fully patched installation of openSUSE Tumbleweed.
+Please note that openSUSE Leap 42.2 only comes with Python 3.4.x, whereas mitmproxy requires Python 3.5 or above.
+You can check your Python version by running ``python3 --version``.
+
+.. code:: bash
+
+ sudo zypper install python3-pip python3-devel libffi-devel openssl-devel gcc-c++
+ sudo pip3 install mitmproxy
+
.. _install-source-windows:
| My main VM has happily run on Tumbleweed for a while now, so I can maintain this. | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2124 | 2017-03-08T16:43:03Z | 2017-03-08T19:06:35Z | 2017-03-08T19:06:35Z | 2017-03-08T19:06:38Z | 244 | mitmproxy/mitmproxy | 28,211 |
Add Pearson dictionary API | diff --git a/README.md b/README.md
index 32fe32960f..899c36e9ef 100644
--- a/README.md
+++ b/README.md
@@ -173,6 +173,7 @@ For information on contributing to this project, please see the [contributing gu
| Open Government, USA | United States Government Open Data | No |[Go!](https://www.data.gov/) |
| Open Government, Canada | Canadian Government Open Data | No |[Go!](http://open.canada.ca/en) |
| Open Government Data, India | Indian Government Open Data | `token` | [Go!](https://data.gov.in/) |
+| Pearson | Dictionary Data API | `apiKey` query string |[Go!](http://developer.pearson.com/apis/dictionaries) |
| Quandl API | Stock Market Data | No |[Go!](https://www.quandl.com/) |
| Scoop.it | Content Curation Service | `apiKey` query string |[Go!](https://www.scoop.it/dev) |
| Wikipedia | Mediawiki API | No |[Go!](https://www.mediawiki.org/wiki/API:Main_page) |
| [Pearson | Dictionary Data API](http://developer.pearson.com/apis/dictionaries)
| https://api.github.com/repos/public-apis/public-apis/pulls/261 | 2017-01-06T06:14:14Z | 2017-01-06T09:28:42Z | 2017-01-06T09:28:42Z | 2017-01-06T09:28:42Z | 254 | public-apis/public-apis | 36,059 |
[Doctests] Make TFRoberta-like meaningfull | diff --git a/src/transformers/models/roberta/modeling_tf_roberta.py b/src/transformers/models/roberta/modeling_tf_roberta.py
index 4ae381451c5fb..bbdf7ebf33011 100644
--- a/src/transformers/models/roberta/modeling_tf_roberta.py
+++ b/src/transformers/models/roberta/modeling_tf_roberta.py
@@ -1076,6 +1076,9 @@ def get_prefix_bias_name(self):
checkpoint=_CHECKPOINT_FOR_DOC,
output_type=TFMaskedLMOutput,
config_class=_CONFIG_FOR_DOC,
+ mask="<mask>",
+ expected_output="' Paris'",
+ expected_loss=0.1,
)
def call(
self,
@@ -1331,9 +1334,11 @@ def __init__(self, config, *inputs, **kwargs):
@add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
+ checkpoint="cardiffnlp/twitter-roberta-base-emotion",
output_type=TFSequenceClassifierOutput,
config_class=_CONFIG_FOR_DOC,
+ expected_output="'optimism'",
+ expected_loss=0.08,
)
def call(
self,
@@ -1543,9 +1548,11 @@ def __init__(self, config, *inputs, **kwargs):
@add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
+ checkpoint="ydshieh/roberta-large-ner-english",
output_type=TFTokenClassifierOutput,
config_class=_CONFIG_FOR_DOC,
+ expected_output="['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']",
+ expected_loss=0.01,
)
def call(
self,
@@ -1628,9 +1635,11 @@ def __init__(self, config, *inputs, **kwargs):
@add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
- checkpoint=_CHECKPOINT_FOR_DOC,
+ checkpoint="ydshieh/roberta-base-squad2",
output_type=TFQuestionAnsweringModelOutput,
config_class=_CONFIG_FOR_DOC,
+ expected_output="' puppet'",
+ expected_loss=0.86,
)
def call(
self,
diff --git a/src/transformers/utils/doc.py b/src/transformers/utils/doc.py
index 17f8adeb26307..394d2aaa2fed7 100644
--- a/src/transformers/utils/doc.py
+++ b/src/transformers/utils/doc.py
@@ -618,15 +618,26 @@ def _prepare_output_docstrings(output_type, config_class, min_indent=None):
>>> tokenizer = {processor_class}.from_pretrained("{checkpoint}")
>>> model = {model_class}.from_pretrained("{checkpoint}")
- >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
- >>> input_ids = inputs["input_ids"]
- >>> inputs["labels"] = tf.reshape(
- ... tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))
- >>> ) # Batch size 1
+ >>> inputs = tokenizer(
+ ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
+ ... )
- >>> outputs = model(inputs)
- >>> loss = outputs.loss
- >>> logits = outputs.logits
+ >>> logits = model(**inputs).logits
+ >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
+
+ >>> # Note that tokens are classified rather then input words which means that
+ >>> # there might be more predicted token classes than words.
+ >>> # Multiple token classes might account for the same word
+ >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
+ >>> predicted_tokens_classes
+ {expected_output}
+ ```
+
+ ```python
+ >>> labels = predicted_token_class_ids
+ >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
+ >>> round(float(loss), 2)
+ {expected_loss}
```
"""
@@ -641,13 +652,26 @@ def _prepare_output_docstrings(output_type, config_class, min_indent=None):
>>> model = {model_class}.from_pretrained("{checkpoint}")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
- >>> input_dict = tokenizer(question, text, return_tensors="tf")
- >>> outputs = model(input_dict)
- >>> start_logits = outputs.start_logits
- >>> end_logits = outputs.end_logits
- >>> all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
- >>> answer = " ".join(all_tokens[tf.math.argmax(start_logits, 1)[0] : tf.math.argmax(end_logits, 1)[0] + 1])
+ >>> inputs = tokenizer(question, text, return_tensors="tf")
+ >>> outputs = model(**inputs)
+
+ >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
+ >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
+
+ >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
+ >>> tokenizer.decode(predict_answer_tokens)
+ {expected_output}
+ ```
+
+ ```python
+ >>> # target is "nice puppet"
+ >>> target_start_index, target_end_index = tf.constant([14]), tf.constant([15])
+
+ >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
+ >>> loss = tf.math.reduce_mean(outputs.loss)
+ >>> round(float(loss), 2)
+ {expected_loss}
```
"""
@@ -662,11 +686,23 @@ def _prepare_output_docstrings(output_type, config_class, min_indent=None):
>>> model = {model_class}.from_pretrained("{checkpoint}")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
- >>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
- >>> outputs = model(inputs)
- >>> loss = outputs.loss
- >>> logits = outputs.logits
+ >>> logits = model(**inputs).logits
+
+ >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
+ >>> model.config.id2label[predicted_class_id]
+ {expected_output}
+ ```
+
+ ```python
+ >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
+ >>> num_labels = len(model.config.id2label)
+ >>> model = {model_class}.from_pretrained("{checkpoint}", num_labels=num_labels)
+
+ >>> labels = tf.constant(1)
+ >>> loss = model(**inputs, labels=labels).loss
+ >>> round(float(loss), 2)
+ {expected_loss}
```
"""
@@ -681,11 +717,24 @@ def _prepare_output_docstrings(output_type, config_class, min_indent=None):
>>> model = {model_class}.from_pretrained("{checkpoint}")
>>> inputs = tokenizer("The capital of France is {mask}.", return_tensors="tf")
- >>> inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
+ >>> logits = model(**inputs).logits
- >>> outputs = model(inputs)
- >>> loss = outputs.loss
- >>> logits = outputs.logits
+ >>> # retrieve index of {mask}
+ >>> mask_token_index = tf.where(inputs.input_ids == tokenizer.mask_token_id)[0][1]
+
+ >>> predicted_token_id = tf.math.argmax(logits[0, mask_token_index], axis=-1)
+ >>> tokenizer.decode(predicted_token_id)
+ {expected_output}
+ ```
+
+ ```python
+ >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
+ >>> # mask labels of non-{mask} tokens
+ >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
+
+ >>> outputs = model(**inputs, labels=labels)
+ >>> round(float(outputs.loss), 2)
+ {expected_loss}
```
"""
diff --git a/utils/documentation_tests.txt b/utils/documentation_tests.txt
index 7d31045184719..b5d9f8570cbc4 100644
--- a/utils/documentation_tests.txt
+++ b/utils/documentation_tests.txt
@@ -30,6 +30,7 @@ src/transformers/models/poolformer/modeling_poolformer.py
src/transformers/models/resnet/modeling_resnet.py
src/transformers/models/resnet/modeling_resnet.py
src/transformers/models/roberta/modeling_roberta.py
+src/transformers/models/roberta/modeling_tf_roberta.py
src/transformers/models/segformer/modeling_segformer.py
src/transformers/models/sew/modeling_sew.py
src/transformers/models/sew_d/modeling_sew_d.py
| # What does this PR do?
Similar to #16363, but for TF
I made sure the doctests run without failure, but 2 models currently use `from_pt=True`, so their TF checkpoints still need to be converted and uploaded.
@patrickvonplaten | https://api.github.com/repos/huggingface/transformers/pulls/16370 | 2022-03-23T19:20:37Z | 2022-03-24T09:26:28Z | 2022-03-24T09:26:28Z | 2022-05-05T10:30:09Z | 2,176 | huggingface/transformers | 12,498 |
MAINT adding Yao Xiao in the core contributor | diff --git a/doc/authors.rst b/doc/authors.rst
index ddad9803ee8ab..0ba69d8afa60d 100644
--- a/doc/authors.rst
+++ b/doc/authors.rst
@@ -98,6 +98,10 @@
<p>Nelle Varoquaux</p>
</div>
<div>
+ <a href='https://github.com/Charlie-XIAO'><img src='https://avatars.githubusercontent.com/u/108576690?v=4' class='avatar' /></a> <br />
+ <p>Yao Xiao</p>
+ </div>
+ <div>
<a href='https://github.com/rth'><img src='https://avatars.githubusercontent.com/u/630936?v=4' class='avatar' /></a> <br />
<p>Roman Yurchak</p>
</div>
diff --git a/doc/communication_team.rst b/doc/communication_team.rst
index 48a876bd35725..30e4f1169cfc9 100644
--- a/doc/communication_team.rst
+++ b/doc/communication_team.rst
@@ -11,6 +11,6 @@
</div>
<div>
<a href='https://github.com/francoisgoupil'><img src='https://avatars.githubusercontent.com/u/98105626?v=4' class='avatar' /></a> <br />
- <p>francoisgoupil</p>
+ <p>François Goupil</p>
</div>
</div>
diff --git a/doc/documentation_team.rst b/doc/documentation_team.rst
index 935a995a7c00e..e7f13e5fe218f 100644
--- a/doc/documentation_team.rst
+++ b/doc/documentation_team.rst
@@ -13,4 +13,8 @@
<a href='https://github.com/lucyleeow'><img src='https://avatars.githubusercontent.com/u/23182829?v=4' class='avatar' /></a> <br />
<p>Lucy Liu</p>
</div>
+ <div>
+ <a href='https://github.com/Charlie-XIAO'><img src='https://avatars.githubusercontent.com/u/108576690?v=4' class='avatar' /></a> <br />
+ <p>Yao Xiao</p>
+ </div>
</div>
| Follow-up of the announcement of adding @Charlie-XIAO as a core contributor.
@Charlie-XIAO I added you to the maintainer team, and to the documentation team as well, to acknowledge your work on the redesign of the webpage with the switch to pydata-sphinx-theme.
I checked the rendering and everything looked OK locally. | https://api.github.com/repos/scikit-learn/scikit-learn/pulls/28463 | 2024-02-19T07:51:28Z | 2024-02-19T09:20:31Z | 2024-02-19T09:20:31Z | 2024-02-19T09:26:38Z | 545 | scikit-learn/scikit-learn | 46,620 |
2021 Camry fw | diff --git a/selfdrive/car/toyota/values.py b/selfdrive/car/toyota/values.py
index d989b750cd9476..211b61133d8d1d 100644
--- a/selfdrive/car/toyota/values.py
+++ b/selfdrive/car/toyota/values.py
@@ -471,10 +471,12 @@ class CAR:
b'8965B33630\x00\x00\x00\x00\x00\x00',
],
(Ecu.esp, 0x7b0, None): [
+ b'\x01F152606370\x00\x00\x00\x00\x00\x00',
b'\x01F152606400\x00\x00\x00\x00\x00\x00',
],
(Ecu.engine, 0x700, None): [
b'\x018966306Q5000\x00\x00\x00\x00',
+ b'\x018966306T3100\x00\x00\x00\x00',
],
(Ecu.fwdRadar, 0x750, 15): [
b'\x018821F6201200\x00\x00\x00\x00',
| https://api.github.com/repos/commaai/openpilot/pulls/20110 | 2021-02-18T19:56:31Z | 2021-02-18T20:11:51Z | 2021-02-18T20:11:51Z | 2021-02-18T20:11:52Z | 258 | commaai/openpilot | 9,572 | |
Update dataset_traversal.py | diff --git a/ppocr/data/rec/dataset_traversal.py b/ppocr/data/rec/dataset_traversal.py
index 5efba512c0..ebee624ab7 100755
--- a/ppocr/data/rec/dataset_traversal.py
+++ b/ppocr/data/rec/dataset_traversal.py
@@ -237,7 +237,7 @@ def __call__(self, process_id):
def get_device_num():
if self.use_gpu:
- gpus = os.environ.get("CUDA_VISIBLE_DEVICES", 1)
+ gpus = os.environ.get("CUDA_VISIBLE_DEVICES", '1')
gpu_num = len(gpus.split(','))
return gpu_num
else:
| The default value for `CUDA_VISIBLE_DEVICES` must be a str, otherwise the subsequent `split` call raises an error | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/557 | 2020-08-18T07:09:58Z | 2020-08-19T05:52:24Z | 2020-08-19T05:52:24Z | 2020-08-19T05:52:24Z | 149 | PaddlePaddle/PaddleOCR | 41,780 |
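The failure mode behind the PaddleOCR fix above is easy to reproduce in isolation: `os.environ.get` returns its fallback unchanged, and an `int` fallback has no `.split` method:

```python
import os

os.environ.pop("CUDA_VISIBLE_DEVICES", None)  # simulate the variable being unset

# Buggy version: an int default flows through unchanged
gpus = os.environ.get("CUDA_VISIBLE_DEVICES", 1)
try:
    gpus.split(",")
except AttributeError as exc:
    print(f"int default fails: {exc}")  # 'int' object has no attribute 'split'

# Fixed version: a str default supports .split like a real env value would
gpus = os.environ.get("CUDA_VISIBLE_DEVICES", "1")
print(len(gpus.split(",")))  # -> 1
```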
Added json.dumps parameters | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5c8f4baa1..7e0012015 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,11 +5,16 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [10.12.1] - unreleased
+## [10.13.0] - unreleased
+
+### Added
+
+- Added json.dumps parameters to print_json https://github.com/willmcgugan/rich/issues/1638
### Fixed
- Fixed an edge case bug when console module try to detect if they are in a tty at the end of a pytest run
+- Fixed issue with TERM env vars that have more than one hyphen https://github.com/willmcgugan/rich/issues/1640
## [10.12.0] - 2021-10-06
diff --git a/pyproject.toml b/pyproject.toml
index 6d1d265b1..de5d21641 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,7 @@
name = "rich"
homepage = "https://github.com/willmcgugan/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
-version = "10.12.0"
+version = "10.13.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
authors = ["Will McGugan <willmcgugan@gmail.com>"]
license = "MIT"
diff --git a/rich/__init__.py b/rich/__init__.py
index 604fa04cf..ed11f5d7e 100644
--- a/rich/__init__.py
+++ b/rich/__init__.py
@@ -1,7 +1,7 @@
"""Rich text and beautiful formatting in the terminal."""
import os
-from typing import IO, TYPE_CHECKING, Any, Optional
+from typing import Callable, IO, TYPE_CHECKING, Any, Optional
from ._extension import load_ipython_extension
@@ -75,6 +75,12 @@ def print_json(
data: Any = None,
indent: int = 2,
highlight: bool = True,
+ skip_keys: bool = False,
+ ensure_ascii: bool = True,
+ check_circular: bool = True,
+ allow_nan: bool = True,
+ default: Optional[Callable[[Any], Any]] = None,
+ sort_keys: bool = False,
) -> None:
"""Pretty prints JSON. Output will be valid JSON.
@@ -83,9 +89,27 @@ def print_json(
data (Any): If json is not supplied, then encode this data.
indent (int, optional): Number of spaces to indent. Defaults to 2.
highlight (bool, optional): Enable highlighting of output: Defaults to True.
+ skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
+ ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
+ check_circular (bool, optional): Check for circular references. Defaults to True.
+ allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
+ default (Callable, optional): A callable that converts values that can not be encoded
+ in to something that can be JSON encoded. Defaults to None.
+ sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
"""
- get_console().print_json(json, data=data, indent=indent, highlight=highlight)
+ get_console().print_json(
+ json,
+ data=data,
+ indent=indent,
+ highlight=highlight,
+ skip_keys=skip_keys,
+ ensure_ascii=ensure_ascii,
+ check_circular=check_circular,
+ allow_nan=allow_nan,
+ default=default,
+ sort_keys=sort_keys,
+ )
def inspect(
diff --git a/rich/console.py b/rich/console.py
index b188dc98e..8abc382fe 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -1627,6 +1627,12 @@ def print_json(
data: Any = None,
indent: int = 2,
highlight: bool = True,
+ skip_keys: bool = False,
+ ensure_ascii: bool = True,
+ check_circular: bool = True,
+ allow_nan: bool = True,
+ default: Optional[Callable[[Any], Any]] = None,
+ sort_keys: bool = False,
) -> None:
"""Pretty prints JSON. Output will be valid JSON.
@@ -1635,17 +1641,44 @@ def print_json(
data (Any): If json is not supplied, then encode this data.
indent (int, optional): Number of spaces to indent. Defaults to 2.
highlight (bool, optional): Enable highlighting of output: Defaults to True.
+ skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
+ ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
+ check_circular (bool, optional): Check for circular references. Defaults to True.
+ allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
+ default (Callable, optional): A callable that converts values that can not be encoded
+ in to something that can be JSON encoded. Defaults to None.
+ sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
"""
from rich.json import JSON
if json is None:
- json_renderable = JSON.from_data(data, indent=indent, highlight=highlight)
+ json_renderable = JSON.from_data(
+ data,
+ indent=indent,
+ highlight=highlight,
+ skip_keys=skip_keys,
+ ensure_ascii=ensure_ascii,
+ check_circular=check_circular,
+ allow_nan=allow_nan,
+ default=default,
+ sort_keys=sort_keys,
+ )
else:
if not isinstance(json, str):
raise TypeError(
f"json must be str. Did you mean print_json(data={json!r}) ?"
)
- json_renderable = JSON(json, indent=indent, highlight=highlight)
+ json_renderable = JSON(
+ json,
+ indent=indent,
+ highlight=highlight,
+ skip_keys=skip_keys,
+ ensure_ascii=ensure_ascii,
+ check_circular=check_circular,
+ allow_nan=allow_nan,
+ default=default,
+ sort_keys=sort_keys,
+ )
self.print(json_renderable)
def update_screen(
diff --git a/rich/json.py b/rich/json.py
index a63f1754d..4f3199fd1 100644
--- a/rich/json.py
+++ b/rich/json.py
@@ -12,11 +12,38 @@ class JSON:
json (str): JSON encoded data.
indent (int, optional): Number of characters to indent by. Defaults to 2.
highlight (bool, optional): Enable highlighting. Defaults to True.
+ skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
+ ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
+ check_circular (bool, optional): Check for circular references. Defaults to True.
+ allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
+ default (Callable, optional): A callable that converts values that can not be encoded
+ in to something that can be JSON encoded. Defaults to None.
+ sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
"""
- def __init__(self, json: str, indent: int = 2, highlight: bool = True) -> None:
+ def __init__(
+ self,
+ json: str,
+ indent: int = 2,
+ highlight: bool = True,
+ skip_keys: bool = False,
+ ensure_ascii: bool = True,
+ check_circular: bool = True,
+ allow_nan: bool = True,
+ default: Optional[Callable[[Any], Any]] = None,
+ sort_keys: bool = False,
+ ) -> None:
data = loads(json)
- json = dumps(data, indent=indent)
+ json = dumps(
+ data,
+ indent=indent,
+ skipkeys=skip_keys,
+ ensure_ascii=ensure_ascii,
+ check_circular=check_circular,
+ allow_nan=allow_nan,
+ default=default,
+ sort_keys=sort_keys,
+ )
highlighter = JSONHighlighter() if highlight else NullHighlighter()
self.text = highlighter(json)
self.text.no_wrap = True
@@ -28,7 +55,12 @@ def from_data(
data: Any,
indent: int = 2,
highlight: bool = True,
+ skip_keys: bool = False,
+ ensure_ascii: bool = True,
+ check_circular: bool = True,
+ allow_nan: bool = True,
default: Optional[Callable[[Any], Any]] = None,
+ sort_keys: bool = False,
) -> "JSON":
"""Encodes a JSON object from arbitrary data.
@@ -37,12 +69,28 @@ def from_data(
indent (int, optional): Number of characters to indent by. Defaults to 2.
highlight (bool, optional): Enable highlighting. Defaults to True.
default (Callable, optional): Optional callable which will be called for objects that cannot be serialized. Defaults to None.
+ skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
+ ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
+ check_circular (bool, optional): Check for circular references. Defaults to True.
+ allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
+ default (Callable, optional): A callable that converts values that can not be encoded
+ in to something that can be JSON encoded. Defaults to None.
+ sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
Returns:
JSON: New JSON object from the given data.
"""
json_instance: "JSON" = cls.__new__(cls)
- json = dumps(data, indent=indent, default=default)
+ json = dumps(
+ data,
+ indent=indent,
+ skipkeys=skip_keys,
+ ensure_ascii=ensure_ascii,
+ check_circular=check_circular,
+ allow_nan=allow_nan,
+ default=default,
+ sort_keys=sort_keys,
+ )
highlighter = JSONHighlighter() if highlight else NullHighlighter()
json_instance.text = highlighter(json)
json_instance.text.no_wrap = True
diff --git a/tests/test_console.py b/tests/test_console.py
index fe3ce5b0d..6660ec9d1 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -141,6 +141,15 @@ def test_print_json_data():
assert result == expected
+def test_print_json_ensure_ascii():
+ console = Console(file=io.StringIO(), color_system="truecolor")
+ console.print_json(data={"foo": "💩"}, ensure_ascii=False)
+ result = console.file.getvalue()
+ print(repr(result))
+ expected = '\x1b[1m{\x1b[0m\n \x1b[1;34m"foo"\x1b[0m: \x1b[32m"💩"\x1b[0m\n\x1b[1m}\x1b[0m\n'
+ assert result == expected
+
+
def test_log():
console = Console(
file=io.StringIO(),
@@ -714,4 +723,4 @@ def _mock_isatty():
@pytest.mark.skipif(sys.platform == "win32", reason="not relevant on Windows")
def test_detect_color_system():
console = Console(_environ={"TERM": "rxvt-unicode-256color"}, force_terminal=True)
- assert console._detect_color_system() == ColorSystem.EIGHT_BIT
\ No newline at end of file
+ assert console._detect_color_system() == ColorSystem.EIGHT_BIT
| Closes https://github.com/willmcgugan/rich/issues/1638 | https://api.github.com/repos/Textualize/rich/pulls/1644 | 2021-11-05T10:55:40Z | 2021-11-05T11:13:51Z | 2021-11-05T11:13:51Z | 2021-11-05T11:13:54Z | 2,812 | Textualize/rich | 48,397 |
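The `ensure_ascii` behavior exercised by the new test in the rich PR above comes straight from the standard library's `json.dumps`; a minimal demonstration (not part of the diff):

```python
import json

escaped = json.dumps({"foo": "💩"})                      # default: ensure_ascii=True
literal = json.dumps({"foo": "💩"}, ensure_ascii=False)  # keep non-ASCII characters

print(escaped)  # -> {"foo": "\ud83d\udca9"}  (surrogate-pair escapes)
print(literal)  # -> {"foo": "💩"}
```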
Rename --renew-by-default to --force-renewal | diff --git a/docs/using.rst b/docs/using.rst
index ebc3ef6acb4..9ee16dffd22 100644
--- a/docs/using.rst
+++ b/docs/using.rst
@@ -167,18 +167,19 @@ interested, you can also :ref:`write your own plugin <dev-plugin>`.
Renewal
=======
-.. note:: Let's Encrypt CA issues short lived certificates (90
+.. note:: Let's Encrypt CA issues short-lived certificates (90
days). Make sure you renew the certificates at least once in 3
months.
In order to renew certificates simply call the ``letsencrypt`` (or
-letsencrypt-auto_) again, and use the same values when prompted. You
-can automate it slightly by passing necessary flags on the CLI (see
-`--help all`), or even further using the :ref:`config-file`. The
-``--renew-by-default`` flag may be helpful for automating renewal. If
-you're sure that UI doesn't prompt for any details you can add the
-command to ``crontab`` (make it less than every 90 days to avoid
-problems, say every month).
+letsencrypt-auto_) again, and use the same values when prompted. You can
+automate it slightly by passing necessary flags on the CLI (see `--help
+all`), or even further using the :ref:`config-file`. The ``--force-renew``
+flag may be helpful for automating renewal; it causes the expiration time
+of the certificate(s) to be ignored when considering renewal. If you're
+sure that UI doesn't prompt for any details you can add the command to
+``crontab`` (make it less than every 90 days to avoid problems, say
+every month).
Please note that the CA will send notification emails to the address
you provide if you do not renew certificates that are about to expire.
diff --git a/letsencrypt/cli.py b/letsencrypt/cli.py
index 00d45b700a5..e0a07a94b4e 100644
--- a/letsencrypt/cli.py
+++ b/letsencrypt/cli.py
@@ -282,7 +282,7 @@ def _treat_as_renewal(config, domains):
def _should_renew(config, lineage):
"Return true if any of the circumstances for automatic renewal apply."
if config.renew_by_default:
- logger.info("Auto-renewal forced with --renew-by-default...")
+ logger.info("Auto-renewal forced with --force-renewal...")
return True
if lineage.should_autorenew(interactive=True):
logger.info("Cert is due for renewal, auto-renewing...")
@@ -1401,10 +1401,12 @@ def prepare_and_parse_args(plugins, args):
version="%(prog)s {0}".format(letsencrypt.__version__),
help="show program's version number and exit")
helpful.add(
- "automation", "--renew-by-default", action="store_true",
- help="Select renewal by default when domains are a superset of a "
- "previously attained cert (often --keep-until-expiring is "
- "more appropriate). Implies --expand.")
+ "automation", "--force-renewal", "--renew-by-default",
+ action="store_true", dest="renew_by_default", help="If a certificate "
+ "already exists for the requested domains, renew it now, "
+ "regardless of whether it is near expiry. (Often "
+ "--keep-until-expiring is more appropriate). Also implies "
+ "--expand.")
helpful.add(
"automation", "--agree-tos", dest="tos", action="store_true",
help="Agree to the Let's Encrypt Subscriber Agreement")
| https://api.github.com/repos/certbot/certbot/pulls/2278 | 2016-01-26T01:18:07Z | 2016-02-10T00:37:12Z | 2016-02-10T00:37:12Z | 2016-05-06T19:22:16Z | 833 | certbot/certbot | 2,580 | |
Correct purpose of "When is this MAC address used" | diff --git a/README.md b/README.md
index af63d356a..85dca4ce0 100644
--- a/README.md
+++ b/README.md
@@ -116,7 +116,7 @@ Packets that are sent on the ethernet are always coming from a MAC address and s
<details>
<summary>When is this MAC address used?: ff:ff:ff:ff:ff:ff</summary><br><b>
-When a device sends a packet to the broadcast MAC address (FF:FF:FF:FF:FF:FF), it is delivered to all stations on the local network. It needs to be used in order for all devices to receive your packet at the datalink layer.
+When a device sends a packet to the broadcast MAC address (FF:FF:FF:FF:FF:FF), it is delivered to all stations on the local network. Ethernet broadcasts are used to resolve IP addresses to MAC addresses (by ARP) at the datalink layer .
</b></details>
<details>
| The second sentence was in fact a tautology and did not disclose the purpose of use. | https://api.github.com/repos/bregman-arie/devops-exercises/pulls/179 | 2021-11-16T10:14:20Z | 2021-11-16T13:42:01Z | 2021-11-16T13:42:01Z | 2021-11-16T13:42:01Z | 226 | bregman-arie/devops-exercises | 17,434 |
Minor filename correction in graphs folder | diff --git a/graphs/graph.py b/graphs/graph.py
deleted file mode 100644
index 0c981c39d320..000000000000
--- a/graphs/graph.py
+++ /dev/null
@@ -1,44 +0,0 @@
-#!/usr/bin/python
-# encoding=utf8
-
-from __future__ import print_function
-# Author: OMKAR PATHAK
-
-# We can use Python's dictionary for constructing the graph.
-
-class AdjacencyList(object):
- def __init__(self):
- self.List = {}
-
- def addEdge(self, fromVertex, toVertex):
- # check if vertex is already present
- if fromVertex in self.List.keys():
- self.List[fromVertex].append(toVertex)
- else:
- self.List[fromVertex] = [toVertex]
-
- def printList(self):
- for i in self.List:
- print((i,'->',' -> '.join([str(j) for j in self.List[i]])))
-
-if __name__ == '__main__':
- al = AdjacencyList()
- al.addEdge(0, 1)
- al.addEdge(0, 4)
- al.addEdge(4, 1)
- al.addEdge(4, 3)
- al.addEdge(1, 0)
- al.addEdge(1, 4)
- al.addEdge(1, 3)
- al.addEdge(1, 2)
- al.addEdge(2, 3)
- al.addEdge(3, 4)
-
- al.printList()
-
- # OUTPUT:
- # 0 -> 1 -> 4
- # 1 -> 0 -> 4 -> 3 -> 2
- # 2 -> 3
- # 3 -> 4
- # 4 -> 1 -> 3
diff --git a/graphs/graph_list.py b/graphs/graph_list.py
index d67bc96c4a81..0c981c39d320 100644
--- a/graphs/graph_list.py
+++ b/graphs/graph_list.py
@@ -1,31 +1,44 @@
-from __future__ import print_function
-
-
-class Graph:
- def __init__(self, vertex):
- self.vertex = vertex
- self.graph = [[0] for i in range(vertex)]
-
- def add_edge(self, u, v):
- self.graph[u - 1].append(v - 1)
-
- def show(self):
- for i in range(self.vertex):
- print('%d: '% (i + 1), end=' ')
- for j in self.graph[i]:
- print('%d-> '% (j + 1), end=' ')
- print(' ')
-
-
-
-g = Graph(100)
-
-g.add_edge(1,3)
-g.add_edge(2,3)
-g.add_edge(3,4)
-g.add_edge(3,5)
-g.add_edge(4,5)
-
-
-g.show()
+#!/usr/bin/python
+# encoding=utf8
+from __future__ import print_function
+# Author: OMKAR PATHAK
+
+# We can use Python's dictionary for constructing the graph.
+
+class AdjacencyList(object):
+ def __init__(self):
+ self.List = {}
+
+ def addEdge(self, fromVertex, toVertex):
+ # check if vertex is already present
+ if fromVertex in self.List.keys():
+ self.List[fromVertex].append(toVertex)
+ else:
+ self.List[fromVertex] = [toVertex]
+
+ def printList(self):
+ for i in self.List:
+ print((i,'->',' -> '.join([str(j) for j in self.List[i]])))
+
+if __name__ == '__main__':
+ al = AdjacencyList()
+ al.addEdge(0, 1)
+ al.addEdge(0, 4)
+ al.addEdge(4, 1)
+ al.addEdge(4, 3)
+ al.addEdge(1, 0)
+ al.addEdge(1, 4)
+ al.addEdge(1, 3)
+ al.addEdge(1, 2)
+ al.addEdge(2, 3)
+ al.addEdge(3, 4)
+
+ al.printList()
+
+ # OUTPUT:
+ # 0 -> 1 -> 4
+ # 1 -> 0 -> 4 -> 3 -> 2
+ # 2 -> 3
+ # 3 -> 4
+ # 4 -> 1 -> 3
| Removed the (incorrectly named) redundant file graph_list.py and renamed graph.py to graph_list.py | https://api.github.com/repos/TheAlgorithms/Python/pulls/820 | 2019-05-18T15:14:54Z | 2019-05-21T06:06:06Z | 2019-05-21T06:06:06Z | 2019-05-21T06:06:06Z | 1,043 | TheAlgorithms/Python | 29,942 |
Updated RijksMuseum Doc Link | diff --git a/README.md b/README.md
index b2fcab633e..f07a0a8940 100644
--- a/README.md
+++ b/README.md
@@ -193,7 +193,7 @@ API | Description | Auth | HTTPS | CORS |
| [Noun Project](http://api.thenounproject.com/index.html) | Icons | `OAuth` | No | Unknown |
| [PHP-Noise](https://php-noise.com/) | Noise Background Image Generator | No | Yes | Yes |
| [Pixel Encounter](https://pixelencounter.com/api) | SVG Icon Generator | No | Yes | No |
-| [Rijksmuseum](https://www.rijksmuseum.nl/en/api) | Art | `apiKey` | Yes | Unknown |
+| [Rijksmuseum](https://data.rijksmuseum.nl/object-metadata/api/) | RijksMuseum Data | `apiKey` | Yes | Unknown |
| [Word Cloud](https://wordcloudapi.com/) | Easily create word clouds | `apiKey` | Yes | Unknown |
**[⬆ Back to Index](#index)**
| The previous link redirected to the museum's website, which had no documentation or reference to the APIs.
<!-- Thank you for taking the time to work on a Pull Request for this project! -->
<!-- To ensure your PR is dealt with swiftly please check the following: -->
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not have more than 100 characters
- [x] The description does not end with punctuation
- [ ] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [ ] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/2782 | 2021-10-29T08:48:08Z | 2021-10-30T00:20:32Z | 2021-10-30T00:20:32Z | 2021-10-30T00:20:32Z | 249 | public-apis/public-apis | 35,283 |
Improve package import compatibility | diff --git a/paddleocr.py b/paddleocr.py
index af0145b48b..ba707d6651 100644
--- a/paddleocr.py
+++ b/paddleocr.py
@@ -27,9 +27,17 @@
import numpy as np
from pathlib import Path
-tools = importlib.import_module('.', 'tools')
-ppocr = importlib.import_module('.', 'ppocr')
-ppstructure = importlib.import_module('.', 'ppstructure')
+def _import_file(module_name, file_path, make_importable=False):
+ spec = importlib.util.spec_from_file_location(module_name, file_path)
+ module = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(module)
+ if make_importable:
+ sys.modules[module_name] = module
+ return module
+
+tools = _import_file('tools', os.path.join(__dir__, 'tools/__init__.py'), make_importable=True)
+ppocr = importlib.import_module('ppocr', 'paddleocr')
+ppstructure = importlib.import_module('ppstructure', 'paddleocr')
from tools.infer import predict_system
from ppocr.utils.logging import get_logger
| This update fixes a PaddleOCR package import compatibility issue.
For instance, at the moment, users cannot import both paddleocr and detectron2 at the same time due to conflicting imports.
With this enhancement, paddleocr can be successfully imported alongside detectron2. | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/10052 | 2023-05-29T09:02:41Z | 2023-05-29T10:34:48Z | 2023-05-29T10:34:48Z | 2023-05-31T04:23:52Z | 258 | PaddlePaddle/PaddleOCR | 42,531 |
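The `_import_file` helper introduced in the diff follows the standard `importlib.util` recipe for loading a module from an explicit file path rather than from `sys.path`. A self-contained sketch of the same pattern — the throwaway module written here is purely for illustration, not part of PaddleOCR:

```python
import importlib.util
import os
import sys
import tempfile

def import_file(module_name, file_path, make_importable=False):
    # Build a module spec from an explicit path, independent of sys.path.
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    if make_importable:
        # Registering in sys.modules lets later `import module_name`
        # statements resolve to this object instead of searching sys.path,
        # which is how the PR avoids clashes with other packages' modules.
        sys.modules[module_name] = module
    return module

# Demo: write a throwaway module and load it by path.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_mod.py")
    with open(path, "w") as f:
        f.write("ANSWER = 42\n")
    mod = import_file("demo_mod", path, make_importable=True)

print(mod.ANSWER)  # 42
```

The module stays usable after the temporary file is deleted, since its code has already been executed into the module object.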
Removes duplicate code block | diff --git a/docs/helm-chart/manage-dags-files.rst b/docs/helm-chart/manage-dags-files.rst
index a30d79ec8a04a..eab9c2eac3346 100644
--- a/docs/helm-chart/manage-dags-files.rst
+++ b/docs/helm-chart/manage-dags-files.rst
@@ -88,15 +88,6 @@ seconds. The other pods will read the synced DAGs. Not all volume plugins have s
Refer `Persistent Volume Access Modes <https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes>`__
for details.
-.. code-block:: bash
-
- helm upgrade --install airflow apache-airflow/airflow \
- --set dags.persistence.enabled=true \
- --set dags.gitSync.enabled=true
- # you can also override the other persistence or gitSync values
- # by setting the dags.persistence.* and dags.gitSync.* values
- # Please refer to values.yaml for details
-
.. code-block:: bash
helm upgrade --install airflow apache-airflow/airflow \
| There are two code blocks with identical text in the helm-chart docs. This commit removes one of them.
<!--
Thank you for contributing! Please make sure that your code changes
are covered with tests. And in case of new features or big changes
remember to adjust the documentation.
Feel free to ping committers for the review!
In case of existing issue, reference it using one of the following:
closes: #ISSUE
related: #ISSUE
How to write a good git commit message:
http://chris.beams.io/posts/git-commit/
-->
---
**^ Add meaningful description above**
Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information.
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in a newsfragement file, named `{pr_number}.significant.rst`, in [newsfragments](https://github.com/apache/airflow/tree/main/newsfragments).
| https://api.github.com/repos/apache/airflow/pulls/23952 | 2022-05-26T23:53:46Z | 2022-05-27T01:56:32Z | 2022-05-27T01:56:32Z | 2022-08-05T23:22:37Z | 244 | apache/airflow | 14,484 |
example for binance margin sell | diff --git a/examples/js/margin-borrow-buy-sell-repay-example.js b/examples/js/margin-borrow-buy-sell-repay-example.js
new file mode 100644
index 000000000000..6644c93c0542
--- /dev/null
+++ b/examples/js/margin-borrow-buy-sell-repay-example.js
@@ -0,0 +1,53 @@
+'use strict';
+const ccxt = require ('../../ccxt.js');
+
+// AUTO-TRANSPILE //
+
+// Note: examples or implementations are subject to possible change in future. Please subscribe to CCXT-announcements telegram/discord channel to be informed of related & important updates.
+
+async function example () {
+ const exchange = new ccxt['binance']({'apiKey': 'xxx', 'secret': 'xxx'});
+ // set target symbol
+ const symbol = 'BUSD/USDT';
+ // which asset you want to use for margin-borrow collateral
+ const collateral_coin = 'USDT';
+ // which coin to borrow
+ const borrow_coin = 'BUSD';
+ // how many coins to sell
+ const amount_to_trade = 20;
+ // at what limit-price you want to sell (set undefined/null/None if market-order)
+ const limit_price = 0.98;
+ // what is the target margin. This might be obtainable from exchange automatically, using available endpoints, but for example purposes, we set here manually
+ const margin_magnitude = 10;
+ // for example purposes, let's also check available balance at first
+ const balance_margin = await exchange.fetchBalance ({'defaultType': 'margin', 'marginMode': 'isolated'}); // 'isolated' or 'cross'.
+ // if we don't have enought coins, then we have to borrow at first
+ if (balance_margin[symbol][borrow_coin]['free'] < amount_to_trade) {
+ console.log ('hmm, I dont have enough margin balance, I should borrow at first.');
+ // To initate a borrow, at first, check if we have enough collateral
+ const needed_collateral_amount = amount_to_trade/ (margin_magnitude - 1); // as we sell-short, we need '-1' to keep for collateral currency
+ // if we don't have any collateral, then at first, we need to transfer it from spot
+ if (balance_margin[symbol][collateral_coin]['free'] < needed_collateral_amount) {
+ console.log ('hmm, at first I should transfer some collateral from spot');
+ // let's check if we have spot balance at all
+ const balance_spot = await exchange.fetchBalance ({'type': 'spot'});
+ if (balance_spot[collateral_coin]['free'] < needed_collateral_amount) {
+ console.log ('hmm, I neither do have enough balance on spot');
+ return;
+ } else {
+ console.log ('Transferring some ' + collateral_coin + ' to margin account');
+ await exchange.transfer (collateral_coin, needed_collateral_amount, 'spot', 'isolated', {'symbol': symbol});
+ console.log ('Transfer complete.');
+ }
+ }
+ // now, as we have enough margin collateral, initiate borrow
+ console.log ('Initiating margin borrow');
+ const borrowResult = await exchange.borrowMargin ('BUSD', amount_to_trade, 'BUSD/USDT', {'marginMode': 'isolated'});
+ console.log ('Borrow complete.');
+ }
+ console.log ("Submitting order.");
+ const order = await exchange.createOrder (symbol, 'market', 'sell', amount_to_trade, limit_price, {'type': 'margin', 'marginMode': 'isolated'});
+ console.log ("Order was submitted !");
+}
+
+example();
\ No newline at end of file
| fix #14825
(ideally, if another PR that transpiles examples is merged, this example will be transpiled into multiple languages) | https://api.github.com/repos/ccxt/ccxt/pulls/14843 | 2022-09-01T13:50:32Z | 2022-09-02T18:17:24Z | 2022-09-02T18:17:24Z | 2022-09-02T21:29:31Z | 846 | ccxt/ccxt | 13,850 |
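The collateral arithmetic in the example script above is worth spelling out. The script assumes a fixed target margin magnitude and derives the required collateral from it — the formula is the example author's simplification, not a value returned by the exchange API. A quick check of the numbers used in the example:

```python
amount_to_trade = 20    # BUSD to sell short (value from the example)
margin_magnitude = 10   # target leverage assumed in the example

# For a short sale, one "unit" of the magnitude is kept in the
# collateral currency itself, hence the (margin_magnitude - 1) divisor.
needed_collateral = amount_to_trade / (margin_magnitude - 1)

print(round(needed_collateral, 4))  # 2.2222
```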
Added random task and moved other tasks to /tasks/all | diff --git a/website/src/components/Dashboard/TaskOption.tsx b/website/src/components/Dashboard/TaskOption.tsx
index 1c070e1708..43bcabeaab 100644
--- a/website/src/components/Dashboard/TaskOption.tsx
+++ b/website/src/components/Dashboard/TaskOption.tsx
@@ -1,11 +1,9 @@
import { Box, Flex, GridItem, Heading, SimpleGrid, Text, useColorModeValue } from "@chakra-ui/react";
import Link from "next/link";
-import { TaskCategory, TaskTypes } from "../Tasks/TaskTypes";
+import { TaskTypes } from "../Tasks/TaskTypes";
-const displayTaskCategories = [TaskCategory.Create, TaskCategory.Evaluate, TaskCategory.Label];
-
-export const TaskOption = () => {
+export const TaskOption = ({ displayTaskCategories }) => {
const backgroundColor = useColorModeValue("white", "gray.700");
return (
diff --git a/website/src/components/Tasks/LabelTask.tsx b/website/src/components/Tasks/LabelTask.tsx
index 9c08932cd2..04a9c1d304 100644
--- a/website/src/components/Tasks/LabelTask.tsx
+++ b/website/src/components/Tasks/LabelTask.tsx
@@ -24,7 +24,10 @@ export const LabelTask = ({
const onSliderChange = (values: number[]) => {
console.assert(valid_labels.length === sliderValues.length);
const labels = Object.fromEntries(valid_labels.map((label, i) => [label, sliderValues[i]]));
- onReplyChanged({ content: { labels, text: task.reply, message_id: task.message_id }, state: "VALID" });
+ onReplyChanged({
+ content: { labels, text: task.reply || task.prompt, message_id: task.message_id },
+ state: "VALID",
+ });
setSliderValues(values);
};
diff --git a/website/src/components/Tasks/TaskTypes.tsx b/website/src/components/Tasks/TaskTypes.tsx
index f2e00df5c0..fd37536ea1 100644
--- a/website/src/components/Tasks/TaskTypes.tsx
+++ b/website/src/components/Tasks/TaskTypes.tsx
@@ -1,4 +1,5 @@
export enum TaskCategory {
+ Tasks = "Tasks",
Create = "Create",
Evaluate = "Evaluate",
Label = "Label",
@@ -18,6 +19,15 @@ export interface TaskInfo {
}
export const TaskTypes: TaskInfo[] = [
+ // general/random
+ {
+ label: "Grab a task",
+ desc: "Help us improve Open Assistant by grabbing a task ...",
+ category: TaskCategory.Tasks,
+ pathname: "/tasks/random",
+ type: "random",
+ update_type: "random",
+ },
// create
{
label: "Create Initial Prompts",
diff --git a/website/src/pages/dashboard.tsx b/website/src/pages/dashboard.tsx
index ece9521230..d296592dd4 100644
--- a/website/src/pages/dashboard.tsx
+++ b/website/src/pages/dashboard.tsx
@@ -1,6 +1,7 @@
import Head from "next/head";
import { LeaderboardTable, TaskOption } from "src/components/Dashboard";
import { getDashboardLayout } from "src/components/Layout";
+import { TaskCategory } from "src/components/Tasks/TaskTypes";
const Dashboard = () => {
return (
@@ -9,7 +10,7 @@ const Dashboard = () => {
<title>Dashboard - Open Assistant</title>
<meta name="description" content="Chat with Open Assistant and provide feedback." />
</Head>
- <TaskOption />
+ <TaskOption displayTaskCategories={[TaskCategory.Tasks]} />
<LeaderboardTable />
</>
);
diff --git a/website/src/pages/tasks/all.tsx b/website/src/pages/tasks/all.tsx
index 6e4e926b3c..ed0659c8bb 100644
--- a/website/src/pages/tasks/all.tsx
+++ b/website/src/pages/tasks/all.tsx
@@ -1,6 +1,7 @@
import Head from "next/head";
import { TaskOption } from "src/components/Dashboard";
import { getDashboardLayout } from "src/components/Layout";
+import { TaskCategory } from "src/components/Tasks/TaskTypes";
const AllTasks = () => {
return (
@@ -9,7 +10,7 @@ const AllTasks = () => {
<title>All Tasks - Open Assistant</title>
<meta name="description" content="All tasks for Open Assistant." />
</Head>
- <TaskOption />
+ <TaskOption displayTaskCategories={[TaskCategory.Create, TaskCategory.Evaluate, TaskCategory.Label]} />
</>
);
};
diff --git a/website/src/pages/tasks/random.tsx b/website/src/pages/tasks/random.tsx
new file mode 100644
index 0000000000..e61fe3253c
--- /dev/null
+++ b/website/src/pages/tasks/random.tsx
@@ -0,0 +1,36 @@
+import { Container } from "@chakra-ui/react";
+import Head from "next/head";
+import { useEffect } from "react";
+import { LoadingScreen } from "src/components/Loading/LoadingScreen";
+import { Task } from "src/components/Tasks/Task";
+import { useGenericTaskAPI } from "src/hooks/tasks/useGenericTaskAPI";
+
+const RandomTask = () => {
+ const { tasks, isLoading, trigger, reset } = useGenericTaskAPI("random");
+
+ useEffect(() => {
+ if (tasks.length == 0) {
+ reset();
+ }
+ }, [tasks]);
+
+ if (isLoading) {
+ return <LoadingScreen text="Loading..." />;
+ }
+
+ if (tasks.length == 0) {
+ return <Container className="p-6 text-center text-gray-800">No tasks found...</Container>;
+ }
+
+ return (
+ <>
+ <Head>
+ <title>Random Task</title>
+ <meta name="description" content="Random Task." />
+ </Head>
+ <Task key={tasks[0].task.id} frontendId={tasks[0].id} task={tasks[0].task} trigger={trigger} mutate={reset} />
+ </>
+ );
+};
+
+export default RandomTask;
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/621 | 2023-01-11T09:32:37Z | 2023-01-11T09:41:54Z | 2023-01-11T09:41:54Z | 2023-01-11T15:23:07Z | 1,433 | LAION-AI/Open-Assistant | 37,701 | |
commands: verify command function signatures before call | diff --git a/mitmproxy/command.py b/mitmproxy/command.py
index c9776bc325..eae3d80cb2 100644
--- a/mitmproxy/command.py
+++ b/mitmproxy/command.py
@@ -190,10 +190,19 @@ def parsearg(manager: CommandManager, spec: str, argtype: type) -> typing.Any:
raise exceptions.CommandError("Unsupported argument type: %s" % argtype)
+def verify_arg_signature(f: typing.Callable, args: list, kwargs: dict) -> None:
+ sig = inspect.signature(f)
+ try:
+ sig.bind(*args, **kwargs)
+ except TypeError as v:
+ raise exceptions.CommandError("Argument mismatch: %s" % v.args[0])
+
+
def command(path):
def decorator(function):
@functools.wraps(function)
def wrapper(*args, **kwargs):
+ verify_arg_signature(function, args, kwargs)
return function(*args, **kwargs)
wrapper.__dict__["command_path"] = path
return wrapper
diff --git a/test/mitmproxy/test_command.py b/test/mitmproxy/test_command.py
index 87432163a4..43b9774224 100644
--- a/test/mitmproxy/test_command.py
+++ b/test/mitmproxy/test_command.py
@@ -163,3 +163,10 @@ def test_decorator():
with taddons.context() as tctx:
tctx.master.addons.add(a)
assert tctx.master.commands.call("cmd1 bar") == "ret bar"
+
+
+def test_verify_arg_signature():
+ with pytest.raises(exceptions.CommandError):
+ command.verify_arg_signature(lambda: None, [1, 2], {})
+ print('hello there')
+ command.verify_arg_signature(lambda a, b: None, [1, 2], {})
\ No newline at end of file
| Fixes #2652, and many other possible crashes on user input. | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2659 | 2017-12-10T20:13:45Z | 2017-12-11T09:03:08Z | 2017-12-11T09:03:08Z | 2018-05-15T21:49:18Z | 412 | mitmproxy/mitmproxy | 27,931 |
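The `verify_arg_signature` helper in the diff relies on `inspect.Signature.bind`, which performs the same argument matching a real call would — and raises `TypeError` on a mismatch — without executing the function. A minimal standalone sketch of the same check, substituting a plain `ValueError` for mitmproxy's `CommandError`:

```python
import inspect

def verify_arg_signature(f, args, kwargs):
    sig = inspect.signature(f)
    try:
        # bind() matches args/kwargs against f's parameters
        # without calling f.
        sig.bind(*args, **kwargs)
    except TypeError as e:
        raise ValueError("Argument mismatch: %s" % e.args[0])

def add(a, b):
    return a + b

verify_arg_signature(add, [1, 2], {})  # OK: matches (a, b)

try:
    verify_arg_signature(add, [1, 2, 3], {})  # one positional arg too many
except ValueError as e:
    print(e)  # Argument mismatch: too many positional arguments
```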
Add hasOwnProperty | diff --git a/blns.base64.json b/blns.base64.json
index 5a58673..b37058b 100644
--- a/blns.base64.json
+++ b/blns.base64.json
@@ -12,6 +12,7 @@
"VHJ1ZQo=",
"RmFsc2UK",
"Tm9uZQo=",
+ "aGFzT3duUHJvcGVydHkK",
"XFw=",
"MAo=",
"XFxcXAo=",
diff --git a/blns.base64.txt b/blns.base64.txt
index 1162816..ba4a8c7 100644
--- a/blns.base64.txt
+++ b/blns.base64.txt
@@ -14,6 +14,7 @@ ZmFsc2UK
VHJ1ZQo=
RmFsc2UK
Tm9uZQo=
+aGFzT3duUHJvcGVydHkK
XAo=
# Numeric Strings
diff --git a/blns.json b/blns.json
index 4b48bd8..a2ac16b 100644
--- a/blns.json
+++ b/blns.json
@@ -12,6 +12,7 @@
"True",
"False",
"None",
+ "hasOwnProperty",
"\\",
"\\\\",
"0",
diff --git a/blns.txt b/blns.txt
index 4c9e81a..81a366c 100644
--- a/blns.txt
+++ b/blns.txt
@@ -14,6 +14,7 @@ false
True
False
None
+hasOwnProperty
\
\\
| It’s common in JavaScript to use input as a key in an object and check for the existence of keys using `obj.hasOwnProperty(key)` instead of `Object.prototype.hasOwnProperty.call(obj, key)`.
| https://api.github.com/repos/minimaxir/big-list-of-naughty-strings/pulls/92 | 2016-02-10T04:58:07Z | 2016-02-10T05:19:59Z | 2016-02-10T05:19:59Z | 2016-02-10T05:19:59Z | 392 | minimaxir/big-list-of-naughty-strings | 4,794 |
Refactored github action | diff --git a/.github/workflows/PR_CI.yml b/.github/workflows/PR_CI.yml
deleted file mode 100644
index 7dccfbecdc57..000000000000
--- a/.github/workflows/PR_CI.yml
+++ /dev/null
@@ -1,90 +0,0 @@
-name: Pull Request CI and Build
-
-on: pull_request
-
-jobs:
- cancel_previous_workflow:
- runs-on: ubuntu-latest
- if: github.event.pull_request.draft == false && (github.base_ref == 'main' || github.base_ref == 'develop') && github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
- steps:
- - name: Cancel Previous Runs
- uses: styfle/cancel-workflow-action@0.9.1
- with:
- access_token: ${{ github.token }}
-
- assign_reviewer:
- name: Assign Reviewer for PR
- runs-on: ubuntu-latest
- if: github.event.pull_request.draft == false && (github.base_ref == 'main' || github.base_ref == 'develop') && github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI' && toJson(github.event.pull_request.requested_reviewers) == '[]'
- steps:
- - uses: kentaro-m/auto-assign-action@v1.2.1
- with:
- configuration-path: '.github/reviewer_list.yml'
-
- build:
- name: Build and Test Colossal-AI
- if: ${{ always() }} && github.event.pull_request.draft == false && (github.base_ref == 'main' || github.base_ref == 'develop') && github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
- needs: [cancel_previous_workflow, assign_reviewer]
- runs-on: [self-hosted, gpu]
- container:
- image: nvcr.io/nvidia/pytorch:21.07-py3
- options: --gpus all --rm --ipc=host -v /data/scratch/cifar-10:/data/scratch/cifar-10
- timeout-minutes: 20
- steps:
- - name: Setup Environment
- run: |
- export https_proxy=http://172.17.0.1:7890 http_proxy=http://172.17.0.1:7890 all_proxy=socks5://172.17.0.1:7890
- - name: Install dependencies
- run: |
- pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
- pip install -U pip setuptools wheel --user
- pip install pytest tensorboard deepspeed apex
- - uses: actions/checkout@v2
- - name: Install Colossal-AI
- run: |
- pip install -r requirements/requirements.txt
- pip install -v --no-cache-dir .
- - name: Unit Testing
- run: |
- pytest tests
- env:
- DATA: /data/scratch/cifar-10
-
- format_check:
- name: Format Check
- if: github.event.pull_request.draft == false && github.base_ref == 'main' && github.head_ref == 'develop' && github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
- needs: [build]
- runs-on: ubuntu-latest
- steps:
- - name: Checkout repo
- uses: actions/checkout@v2
-
- - name: autoyapf
- id: autoyapf
- uses: mritunjaysharma394/autoyapf@v2
- with:
- args: --style google --recursive --in-place .
-
- - name: Check for modified files
- id: git-check
- run: echo ::set-output name=modified::$(if git diff-index --quiet HEAD --; then echo "false"; else echo "true"; fi)
-
- - name: Push changes
- if: steps.git-check.outputs.modified == 'true'
- run: |
- git config --global user.name 'github-actions'
- git config --global user.email 'github-actions@github.com'
- git remote set-url origin https://x-access-token:${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}
- git commit -am "Automated autoyapf fixes"
-
- - name: Create Pull Request
- # if: steps.format.outputs.has-changes == 'true'
- uses: peter-evans/create-pull-request@v3
- with:
- title: '[Bot] Automated PR to fix formatting errors'
- body: |
- Automated PR to fix formatting errors
- committer: GitHub <noreply@github.com>
- author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- assignees: ${{ github.actor }}
- reviewers: frankleeeee
diff --git a/.github/workflows/assign_reviewer.yml b/.github/workflows/assign_reviewer.yml
new file mode 100644
index 000000000000..871768938639
--- /dev/null
+++ b/.github/workflows/assign_reviewer.yml
@@ -0,0 +1,19 @@
+name: Assign Reviewers for Team
+
+on:
+ pull_request:
+ types: [opened]
+
+jobs:
+ assign_reviewer:
+ name: Assign Reviewer for PR
+ runs-on: ubuntu-latest
+ if: |
+ github.event.pull_request.draft == false &&
+ (github.base_ref == 'main' || github.base_ref == 'develop')
+ && github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
+ && toJson(github.event.pull_request.requested_reviewers) == '[]'
+ steps:
+ - uses: kentaro-m/auto-assign-action@v1.2.1
+ with:
+ configuration-path: '.github/reviewer_list.yml'
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
new file mode 100644
index 000000000000..08b3fc09abc2
--- /dev/null
+++ b/.github/workflows/build.yml
@@ -0,0 +1,38 @@
+name: Build
+
+on:
+ pull_request:
+ types: [synchronize, labeled]
+
+jobs:
+ build:
+ name: Build and Test Colossal-AI
+ if: |
+ github.event.pull_request.draft == false &&
+ (github.base_ref == 'main' || github.base_ref == 'develop') &&
+ github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI' &&
+ contains( github.event.pull_request.labels.*.name, 'Run Build and Test')
+ runs-on: [self-hosted, gpu]
+ container:
+ image: nvcr.io/nvidia/pytorch:21.07-py3
+ options: --gpus all --rm --ipc=host -v /data/scratch/cifar-10:/data/scratch/cifar-10
+ timeout-minutes: 20
+ steps:
+ - name: Setup Environment
+ run: |
+ export https_proxy=http://172.17.0.1:7890 http_proxy=http://172.17.0.1:7890 all_proxy=socks5://172.17.0.1:7890
+ - name: Install dependencies
+ run: |
+ pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
+ pip install -U pip setuptools wheel --user
+ pip install pytest tensorboard deepspeed apex
+ - uses: actions/checkout@v2
+ - name: Install Colossal-AI
+ run: |
+ pip install -r requirements/requirements.txt
+ pip install -v --no-cache-dir .
+ - name: Unit Testing
+ run: |
+ pytest tests
+ env:
+ DATA: /data/scratch/cifar-10
diff --git a/.github/workflows/close_inactive.yml b/.github/workflows/close_inactive.yml
index 988d9e3bc5ba..e7dec4430930 100644
--- a/.github/workflows/close_inactive.yml
+++ b/.github/workflows/close_inactive.yml
@@ -1,4 +1,5 @@
name: Close inactive issues
+
on:
schedule:
- cron: "0 0 * * *"
diff --git a/.github/workflows/compatibility_test.yml b/.github/workflows/compatibility_test.yml
new file mode 100644
index 000000000000..6b6a4304b5c5
--- /dev/null
+++ b/.github/workflows/compatibility_test.yml
@@ -0,0 +1,46 @@
+name: Compatibility Test
+
+on: workflow_dispatch
+
+jobs:
+ build:
+ name: Test for PyTorch compatibility
+ if: (github.base_ref == 'main' || github.base_ref == 'develop') && github.repository == 'hpcaitech/ColossalAI'
+ runs-on: [self-hosted, aws]
+ strategy:
+ fail-fast: false
+ matrix:
+ # 20.07: PyTorch 1.6.0a0+9907a3e + Python 3.6
+ # 20.10: PyTorch 1.7.0a0+7036e91 + Python 3.8
+ # 20.12: PyTorch 1.8.0a0+1606899 + Python 3.8
+ # 21.06: PyTorch 1.9.0a0+c3d40fd + Python 3.8
+ container: ["nvcr.io/nvidia/pytorch:20.07-py3",
+ "nvcr.io/nvidia/pytorch:20.10-py3",
+ "nvcr.io/nvidia/pytorch:20.12-py3",
+ "nvcr.io/nvidia/pytorch:21.06-py3"]
+ container:
+ image: ${{ matrix.container }}
+ options: --gpus all --rm --ipc=host -v /data/scratch/cifar-10:/data/scratch/cifar-10
+ timeout-minutes: 120
+ steps:
+ - name: Setup Environment
+ run: |
+ export https_proxy=http://172.17.0.1:7890 http_proxy=http://172.17.0.1:7890 all_proxy=socks5://172.17.0.1:7890
+ - name: Install dependencies
+ run: |
+ pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
+ pip install -U pip setuptools wheel --user
+ pip install pytest tensorboard deepspeed apex
+ - uses: actions/checkout@v2
+ - name: Install Colossal-AI
+ run: |
+ pip install -r requirements/requirements.txt
+ pip install -v --no-cache-dir .
+ - name: Unit Testing
+ run: |
+ pytest tests
+ env:
+ DATA: /data/scratch/cifar-10
+
+
+
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
new file mode 100644
index 000000000000..c274bfa73924
--- /dev/null
+++ b/.github/workflows/release.yml
@@ -0,0 +1,31 @@
+name: Publish to PyPI
+
+on: workflow_dispatch
+
+jobs:
+ build-n-publish:
+ if: (github.base_ref == 'main' || github.base_ref == 'develop') && github.repository == 'hpcaitech/ColossalAI' && contains('["FrankLeeeee", "ver217", "feifeibear", "kurisusnowdeng"]', github.actor)
+ name: Build and publish Python 🐍 distributions 📦 to PyPI or Test PyPI
+ runs-on: ubuntu-latest
+ timeout-minutes: 20
+ steps:
+ - uses: actions/checkout@v2
+ - uses: actions/setup-python@v2
+ with:
+ python-version: '3.7.4'
+ # publish to PyPI if executed on the main branch
+ # publish to Test PyPI if executed on the develop branch
+ - name: Publish package to Test PyPI
+ if: github.base_ref == 'develop'
+ uses: pypa/gh-action-pypi-publish@release/v1
+ with:
+ user: __token__
+ password: ${{ secrets.TEST_PYPI_API_TOKEN }}
+ verbose: true
+ - name: Publish package to PyPI
+ if: github.base_ref == 'main'
+ uses: pypa/gh-action-pypi-publish@release/v1
+ with:
+ user: __token__
+ password: ${{ secrets.PYPI_API_TOKEN }}
+ verbose: true
diff --git a/.github/workflows/submodule.yml b/.github/workflows/submodule.yml
index d19892c39383..2447284e8eec 100644
--- a/.github/workflows/submodule.yml
+++ b/.github/workflows/submodule.yml
@@ -1,4 +1,5 @@
name: Synchronize Submodule
+
on:
workflow_dispatch:
schedule:
@@ -12,6 +13,7 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
with:
+ ref: 'develop'
submodules: true
- name: echo
| I have refactored the GitHub Actions workflows for a better DevOps experience:
1. assign reviewers when a PR is opened (this only applies to users who have write permission to this repository, and does not work for PRs from forked repositories for security reasons)
2. build and unit testing are only run when the label 'Run Build and Test' is added; the CI is also triggered on synchronization after the label is added
3. release the package to Test PyPI (for the develop branch) and PyPI (for the main branch) automatically; this CI is triggered on manual workflow dispatch
4. compatibility checking with different torch versions; this CI is triggered on manual workflow dispatch
5. removed the formatting CI, as pre-commit is the standard way to auto-format
bpo-36370: Check for PyErr_Occurred() after PyImport_GetModule(). (GH-12504) | diff --git a/Python/ceval.c b/Python/ceval.c
index 40320bf3570304..28e923219d389c 100644
--- a/Python/ceval.c
+++ b/Python/ceval.c
@@ -4948,7 +4948,7 @@ import_from(PyObject *v, PyObject *name)
}
x = PyImport_GetModule(fullmodname);
Py_DECREF(fullmodname);
- if (x == NULL) {
+ if (x == NULL && !PyErr_Occurred()) {
goto error;
}
Py_DECREF(pkgname);
@@ -4971,7 +4971,7 @@ import_from(PyObject *v, PyObject *name)
"cannot import name %R from %R (unknown location)",
name, pkgname_or_unknown
);
- /* NULL check for errmsg done by PyErr_SetImportError. */
+ /* NULL checks for errmsg and pkgname done by PyErr_SetImportError. */
PyErr_SetImportError(errmsg, pkgname, NULL);
}
else {
@@ -4979,7 +4979,7 @@ import_from(PyObject *v, PyObject *name)
"cannot import name %R from %R (%S)",
name, pkgname_or_unknown, pkgpath
);
- /* NULL check for errmsg done by PyErr_SetImportError. */
+ /* NULL checks for errmsg and pkgname done by PyErr_SetImportError. */
PyErr_SetImportError(errmsg, pkgname, pkgpath);
}
diff --git a/Python/import.c b/Python/import.c
index bf3a99414fb860..c00c3aa640b078 100644
--- a/Python/import.c
+++ b/Python/import.c
@@ -966,11 +966,10 @@ exec_code_in_module(PyObject *name, PyObject *module_dict, PyObject *code_object
Py_DECREF(v);
m = PyImport_GetModule(name);
- if (m == NULL) {
+ if (m == NULL && !PyErr_Occurred()) {
PyErr_Format(PyExc_ImportError,
"Loaded module %R not found in sys.modules",
name);
- return NULL;
}
return m;
@@ -1735,6 +1734,10 @@ PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals,
}
mod = PyImport_GetModule(abs_name);
+ if (mod == NULL && PyErr_Occurred()) {
+ goto error;
+ }
+
if (mod != NULL && mod != Py_None) {
_Py_IDENTIFIER(__spec__);
_Py_IDENTIFIER(_lock_unlock_module);
@@ -1810,9 +1813,11 @@ PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals,
final_mod = PyImport_GetModule(to_return);
Py_DECREF(to_return);
if (final_mod == NULL) {
- PyErr_Format(PyExc_KeyError,
- "%R not in sys.modules as expected",
- to_return);
+ if (!PyErr_Occurred()) {
+ PyErr_Format(PyExc_KeyError,
+ "%R not in sys.modules as expected",
+ to_return);
+ }
goto error;
}
}
@@ -1875,6 +1880,10 @@ PyImport_ReloadModule(PyObject *m)
PyObject *reloaded_module = NULL;
PyObject *imp = _PyImport_GetModuleId(&PyId_imp);
if (imp == NULL) {
+ if (PyErr_Occurred()) {
+ return NULL;
+ }
+
imp = PyImport_ImportModule("imp");
if (imp == NULL) {
return NULL;
diff --git a/Python/pylifecycle.c b/Python/pylifecycle.c
index 994a94f1402a71..0c6d6eaf4e44cd 100644
--- a/Python/pylifecycle.c
+++ b/Python/pylifecycle.c
@@ -2145,8 +2145,10 @@ wait_for_thread_shutdown(void)
PyObject *result;
PyObject *threading = _PyImport_GetModuleId(&PyId_threading);
if (threading == NULL) {
- /* threading not imported */
- PyErr_Clear();
+ if (PyErr_Occurred()) {
+ PyErr_WriteUnraisable(NULL);
+ }
+ /* else: threading not imported */
return;
}
result = _PyObject_CallMethodId(threading, &PyId__shutdown, NULL);
diff --git a/Python/sysmodule.c b/Python/sysmodule.c
index 4351a7fb370d0d..3df4d44a7ce708 100644
--- a/Python/sysmodule.c
+++ b/Python/sysmodule.c
@@ -283,7 +283,9 @@ sys_displayhook(PyObject *module, PyObject *o)
builtins = _PyImport_GetModuleId(&PyId_builtins);
if (builtins == NULL) {
- PyErr_SetString(PyExc_RuntimeError, "lost builtins module");
+ if (!PyErr_Occurred()) {
+ PyErr_SetString(PyExc_RuntimeError, "lost builtins module");
+ }
return NULL;
}
Py_DECREF(builtins);
|
<!-- issue-number: [bpo-36370](https://bugs.python.org/issue36370) -->
https://bugs.python.org/issue36370
<!-- /issue-number -->
| https://api.github.com/repos/python/cpython/pulls/12504 | 2019-03-22T23:42:42Z | 2019-03-25T20:50:59Z | 2019-03-25T20:50:59Z | 2019-03-26T18:23:19Z | 1,142 | python/cpython | 4,394 |
Allow = symbols in variable values in host inventory | diff --git a/lib/ansible/inventory/ini.py b/lib/ansible/inventory/ini.py
index 9398a591b7ba49..0a6a8f5ea97d7d 100644
--- a/lib/ansible/inventory/ini.py
+++ b/lib/ansible/inventory/ini.py
@@ -168,7 +168,7 @@ def _parse_group_variables(self):
if line.find("=") == -1:
raise errors.AnsibleError("variables assigned to group must be in key=value form")
else:
- (k,v) = line.split("=")
+ (k,v) = line.split("=",1)
group.set_variable(k,v)
| Setting a variable in the host inventory file to an ssh authorized key causes this error:
```
.../ansible/lib/ansible/inventory/ini.py", line 171, in _parse_group_variables
(k,v) = line.split("=")
ValueError: too many values to unpack
```
Resolved by using `split("=",1)`.
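A minimal sketch of why `maxsplit` matters here (the key string is made-up example data):

```python
# A value containing '=' (e.g. an ssh authorized key) breaks a plain split:
line = "ssh_key=ssh-rsa AAAAB3NzaC1yc2E= user@host"

pieces = line.split("=")   # 3 pieces, so (k, v) = line.split("=") raises ValueError
k, v = line.split("=", 1)  # maxsplit=1: split only on the first '='
print(k)  # ssh_key
print(v)  # ssh-rsa AAAAB3NzaC1yc2E= user@host
```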
| https://api.github.com/repos/ansible/ansible/pulls/743 | 2012-08-01T03:40:05Z | 2012-08-01T03:41:31Z | 2012-08-01T03:41:31Z | 2019-04-24T17:34:20Z | 153 | ansible/ansible | 48,943 |
Change save_format from jpg to png, because jpg is a lossy format.
index 889827cb4b2..76b550d24ef 100644
--- a/keras/preprocessing/image.py
+++ b/keras/preprocessing/image.py
@@ -443,7 +443,7 @@ def __init__(self,
'Received arg: ', zoom_range)
def flow(self, x, y=None, batch_size=32, shuffle=True, seed=None,
- save_to_dir=None, save_prefix='', save_format='jpeg'):
+ save_to_dir=None, save_prefix='', save_format='png'):
return NumpyArrayIterator(
x, y, self,
batch_size=batch_size,
@@ -460,7 +460,7 @@ def flow_from_directory(self, directory,
batch_size=32, shuffle=True, seed=None,
save_to_dir=None,
save_prefix='',
- save_format='jpeg',
+ save_format='png',
follow_links=False):
return DirectoryIterator(
directory, self,
@@ -752,7 +752,7 @@ class NumpyArrayIterator(Iterator):
def __init__(self, x, y, image_data_generator,
batch_size=32, shuffle=False, seed=None,
data_format=None,
- save_to_dir=None, save_prefix='', save_format='jpeg'):
+ save_to_dir=None, save_prefix='', save_format='png'):
if y is not None and len(x) != len(y):
raise ValueError('X (images tensor) and y (labels) '
'should have the same length. '
@@ -860,7 +860,7 @@ def __init__(self, directory, image_data_generator,
classes=None, class_mode='categorical',
batch_size=32, shuffle=True, seed=None,
data_format=None,
- save_to_dir=None, save_prefix='', save_format='jpeg',
+ save_to_dir=None, save_prefix='', save_format='png',
follow_links=False):
if data_format is None:
data_format = K.image_data_format()
| https://api.github.com/repos/keras-team/keras/pulls/6638 | 2017-05-15T14:06:04Z | 2017-05-16T14:57:51Z | 2017-05-16T14:57:51Z | 2017-05-16T14:57:51Z | 444 | keras-team/keras | 47,406 | |
Fixed all the false positives | diff --git a/removed_sites.json b/removed_sites.json
index 68fa162ec..3e3b6f644 100644
--- a/removed_sites.json
+++ b/removed_sites.json
@@ -791,5 +791,31 @@
"urlMain": "https://mastodon.xyz/",
"username_claimed": "ashfurrow",
"username_unclaimed": "noonewouldeverusethis7"
+ },
+ "Arduino": {
+ "errorMsg":"<title>Arduino Cloud</title>",
+ "errorType": "message",
+ "regexCheck": "^(?![_-])[A-Za-z0-9_-]{3,}$",
+ "url": "https://projecthub.arduino.cc/{}",
+ "urlMain": "https://www.arduino.cc/",
+ "username_claimed": "blue",
+ "username_unclaimed": "noonewould"
+ },
+ "zoomit": {
+ "errorMsg": "\u0645\u062a\u0627\u0633\u0641\u0627\u0646\u0647 \u0635\u0641\u062d\u0647 \u06cc\u0627\u0641\u062a \u0646\u0634\u062f",
+ "errorType": "message",
+ "url": "https://www.zoomit.ir/user/{}",
+ "urlMain": "https://www.zoomit.ir",
+ "username_claimed": "kossher",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
+ "Facebook": {
+ "errorType": "status_code",
+ "regexCheck": "^[a-zA-Z0-9\\.]{3,49}(?<!\\.com|\\.org|\\.net)$",
+ "url": "https://www.facebook.com/{}",
+ "urlMain": "https://www.facebook.com/",
+ "urlProbe": "https://www.facebook.com/{}/videos/",
+ "username_claimed": "hackerman",
+ "username_unclaimed": "noonewouldeverusethis7"
}
}
diff --git a/removed_sites.md b/removed_sites.md
index 0fe1991c8..81e999e24 100644
--- a/removed_sites.md
+++ b/removed_sites.md
@@ -1577,4 +1577,48 @@ As of 18.12.2022, mastodon.technology has no A/AAAA records and the [website was
"username_claimed": "ashfurrow",
"username_unclaimed": "noonewouldeverusethis7"
},
+```
+
+
+## Aruino
+As of 04.02.2023, Arduino returns false positives. Finding a fix is doable but takes some time. Will be fixed later
+
+```json
+"Arduino": {
+ "errorMsg":"<title>Arduino Cloud</title>",
+ "errorType": "message",
+ "regexCheck": "^(?![_-])[A-Za-z0-9_-]{3,}$",
+ "url": "https://projecthub.arduino.cc/{}",
+ "urlMain": "https://www.arduino.cc/",
+ "username_claimed": "blue",
+ "username_unclaimed": "noonewould"
+ },
+
+```
+
+## Zoomit
+As of 04.02.2023, Zoomit return false positves. An attempt at finding a fix was made but a lot of time was used without luck. Therefore, it wont be prioritized at the moment.
+```json
+ "zoomit": {
+ "errorMsg": "\u0645\u062a\u0627\u0633\u0641\u0627\u0646\u0647 \u0635\u0641\u062d\u0647 \u06cc\u0627\u0641\u062a \u0646\u0634\u062f",
+ "errorType": "message",
+ "url": "https://www.zoomit.ir/user/{}",
+ "urlMain": "https://www.zoomit.ir",
+ "username_claimed": "kossher",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
+```
+
+## Facebook
+As of 04.02.2023, Facebook returns false positives because we get prompted with the login screen to view the data
+```json
+"Facebook": {
+ "errorType": "status_code",
+ "regexCheck": "^[a-zA-Z0-9\\.]{3,49}(?<!\\.com|\\.org|\\.net)$",
+ "url": "https://www.facebook.com/{}",
+ "urlMain": "https://www.facebook.com/",
+ "urlProbe": "https://www.facebook.com/{}/videos/",
+ "username_claimed": "hackerman",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
```
\ No newline at end of file
diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json
index d2174be80..c5b950965 100644
--- a/sherlock/resources/data.json
+++ b/sherlock/resources/data.json
@@ -119,14 +119,6 @@
"username_claimed": "blue",
"username_unclaimed": "noonewould"
},
- "Arduino": {
- "errorType": "status_code",
- "regexCheck": "^(?![_-])[A-Za-z0-9_-]{3,}$",
- "url": "https://create.arduino.cc/projecthub/{}",
- "urlMain": "https://www.arduino.cc/",
- "username_claimed": "blue",
- "username_unclaimed": "noonewould"
- },
"ArtStation": {
"errorType": "status_code",
"url": "https://www.artstation.com/{}",
@@ -652,15 +644,6 @@
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
},
- "Facebook": {
- "errorType": "status_code",
- "regexCheck": "^[a-zA-Z0-9\\.]{3,49}(?<!\\.com|\\.org|\\.net)$",
- "url": "https://www.facebook.com/{}",
- "urlMain": "https://www.facebook.com/",
- "urlProbe": "https://www.facebook.com/{}/videos/",
- "username_claimed": "hackerman",
- "username_unclaimed": "noonewouldeverusethis7"
- },
"Fameswap": {
"errorType": "status_code",
"url": "https://fameswap.com/user/{}",
@@ -1914,7 +1897,7 @@
"username_unclaimed": "noonewouldeverusethis7"
},
"Strava": {
- "errorMsg": "Strava | Run and Cycling Tracking on the Social Network for Athletes",
+ "errorMsg": "Strava | Running, Cycling & Hiking App - Train, Track & Share",
"errorType": "message",
"url": "https://www.strava.com/athletes/{}",
"urlMain": "https://www.strava.com/",
@@ -2018,8 +2001,7 @@
"username_unclaimed": "noonewouldeverusethis7"
},
"TrashboxRU": {
- "errorMsg": "\u041f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c \u043d\u0435 \u043d\u0430\u0439\u0434\u0435\u043d",
- "errorType": "message",
+ "errorType": "status_code",
"regexCheck": "^[A-Za-z0-9_-]{3,16}$",
"url": "https://trashbox.ru/users/{}",
"urlMain": "https://trashbox.ru/",
@@ -2375,9 +2357,11 @@
"username_unclaimed": "noonewouldeverusethis7"
},
"dailykos": {
- "errorType": "status_code",
+ "errorType": "message",
+ "errorMsg": "{\"result\":true,\"message\":null}",
"url": "https://www.dailykos.com/user/{}",
"urlMain": "https://www.dailykos.com",
+ "urlProbe": "https://www.dailykos.com/signup/check_nickname?nickname={}",
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
},
@@ -2562,11 +2546,10 @@
"username_unclaimed": "noonewouldeverusethis77777"
},
"koo": {
- "errorMsg": "is available",
+ "errorMsg": "This profile does not exist",
"errorType": "message",
"url": "https://www.kooapp.com/profile/{}",
"urlMain": "https://www.kooapp.com",
- "urlProbe": "https://www.kooapp.com/apiV1/users/handle/{}/valid",
"username_claimed": "john",
"username_unclaimed": "noonewouldeverusethis7"
},
@@ -2620,6 +2603,13 @@
"username_claimed": "Gargron",
"username_unclaimed": "noonewouldeverusethis7"
},
+ "mastodon.technology": {
+ "errorType": "status_code",
+ "url": "https://mastodon.technology/@{}",
+ "urlMain": "https://mastodon.xyz/",
+ "username_claimed": "ashfurrow",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
"mastodon.xyz": {
"errorType": "status_code",
"url": "https://mastodon.xyz/@{}",
@@ -2835,14 +2825,6 @@
"username_claimed": "janusz-nowak",
"username_unclaimed": "kto-by-sie-tak-nazwal-69"
},
- "zoomit": {
- "errorMsg": "\u0645\u062a\u0627\u0633\u0641\u0627\u0646\u0647 \u0635\u0641\u062d\u0647 \u06cc\u0627\u0641\u062a \u0646\u0634\u062f",
- "errorType": "message",
- "url": "https://www.zoomit.ir/user/{}",
- "urlMain": "https://www.zoomit.ir",
- "username_claimed": "kossher",
- "username_unclaimed": "noonewouldeverusethis7"
- },
"Youtube Channel": {
"errorType": "status_code",
"errorCode": 404,
| Sherlock should not return any false positives now :+1: | https://api.github.com/repos/sherlock-project/sherlock/pulls/1675 | 2023-02-04T17:14:22Z | 2023-02-04T17:16:43Z | 2023-02-04T17:16:43Z | 2023-02-04T18:20:03Z | 2,470 | sherlock-project/sherlock | 36,494 |
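For context, a hedged sketch of the detection semantics that the `errorType`/`errorMsg` fields above imply (simplified; the real matching logic has more cases, and this helper is illustrative only):

```python
def username_exists(status_code, body, error_type, error_msg=None):
    """Simplified sherlock-style detection:
    'status_code' sites treat any 2xx response as a claimed username;
    'message' sites treat the absence of errorMsg in the page as claimed."""
    if error_type == "status_code":
        return 200 <= status_code < 300
    if error_type == "message":
        return error_msg not in body
    raise ValueError(f"unsupported errorType: {error_type}")
```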
add faq 2021-05-24 | diff --git a/README_ch.md b/README_ch.md
index b3ecd6fdf3..0cdbb53a6f 100755
--- a/README_ch.md
+++ b/README_ch.md
@@ -4,11 +4,11 @@
PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力使用者训练出更好的模型,并应用落地。
## 注意
PaddleOCR同时支持动态图与静态图两种编程范式
-- 动态图版本:release/2.1(默认分支,开发分支为dygraph分支),需将paddle版本升级至2.0.0([快速安装](./doc/doc_ch/installation.md))
+- 动态图版本:release/2.1(默认分支,开发分支为dygraph分支),需将paddle版本升级至2.0.0或以上版本([快速安装](./doc/doc_ch/installation.md))
- 静态图版本:develop分支
**近期更新**
-- 2021.5.17 [FAQ](./doc/doc_ch/FAQ.md)新增5个高频问题,总数223个,每周一都会更新,欢迎大家持续关注。
+- 2021.5.24 [FAQ](./doc/doc_ch/FAQ.md)新增5个高频问题,总数228个,每周一都会更新,欢迎大家持续关注。
- PaddleOCR研发团队对最新发版内容技术深入解读,4月13日晚上19:00,[直播地址](https://live.bilibili.com/21689802)。
- 2021.4.8 release 2.1版本,新增AAAI 2021论文[端到端识别算法PGNet](./doc/doc_ch/pgnet.md)开源,[多语言模型](./doc/doc_ch/multi_languages.md)支持种类增加到80+。
- 2021.2.8 正式发布PaddleOCRv2.0(branch release/2.0)并设置为推荐用户使用的默认分支. 发布的详细内容,请参考: https://github.com/PaddlePaddle/PaddleOCR/releases/tag/v2.0.0
@@ -104,8 +104,8 @@ PaddleOCR同时支持动态图与静态图两种编程范式
- [效果展示](#效果展示)
- FAQ
- [【精选】OCR精选10个问题](./doc/doc_ch/FAQ.md)
- - [【理论篇】OCR通用43个问题](./doc/doc_ch/FAQ.md)
- - [【实战篇】PaddleOCR实战170个问题](./doc/doc_ch/FAQ.md)
+ - [【理论篇】OCR通用44个问题](./doc/doc_ch/FAQ.md)
+ - [【实战篇】PaddleOCR实战174个问题](./doc/doc_ch/FAQ.md)
- [技术交流群](#欢迎加入PaddleOCR技术交流群)
- [参考文献](./doc/doc_ch/reference.md)
- [许可证书](#许可证书)
diff --git a/doc/doc_ch/FAQ.md b/doc/doc_ch/FAQ.md
index 3d2d9c9423..5e0a297369 100755
--- a/doc/doc_ch/FAQ.md
+++ b/doc/doc_ch/FAQ.md
@@ -9,14 +9,14 @@
## PaddleOCR常见问题汇总(持续更新)
-* [近期更新(2021.5.17)](#近期更新)
+* [近期更新(2021.5.24)](#近期更新)
* [【精选】OCR精选10个问题](#OCR精选10个问题)
-* [【理论篇】OCR通用43个问题](#OCR通用问题)
+* [【理论篇】OCR通用44个问题](#OCR通用问题)
* [基础知识13题](#基础知识)
* [数据集9题](#数据集2)
- * [模型训练调优21题](#模型训练调优2)
-* [【实战篇】PaddleOCR实战170个问题](#PaddleOCR实战问题)
- * [使用咨询68题](#使用咨询)
+ * [模型训练调优22题](#模型训练调优2)
+* [【实战篇】PaddleOCR实战174个问题](#PaddleOCR实战问题)
+ * [使用咨询72题](#使用咨询)
* [数据集18题](#数据集3)
* [模型训练调优36题](#模型训练调优3)
* [预测部署48题](#预测部署3)
@@ -24,38 +24,34 @@
<a name="近期更新"></a>
## 近期更新(2021.5.17)
-### Q3.1.66: iaa里面添加的数据增强方式,是每张图像训练都会做增强还是随机的?如何添加一个数据增强方法?
+### Q2.3.22: 目前知识蒸馏有哪些主要的实践思路?
-**A**:iaa增强的训练配置参考:[链接](https://github.com/PaddlePaddle/PaddleOCR/blob/0ccc1720c252beb277b9e522a1b228eb6abffb8a/configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml#L82)
-其中{ 'type': Fliplr, 'args': { 'p': 0.5 } } p是概率。新增数据增强,可以参考这个[方法](https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.1/doc/doc_ch/add_new_algorithm.md#%E6%95%B0%E6%8D%AE%E5%8A%A0%E8%BD%BD%E5%92%8C%E5%A4%84%E7%90%86)
+**A**:知识蒸馏即利用教师模型指导学生模型的训练,目前有3种主要的蒸馏思路:
+1. 基于输出结果的蒸馏,即让学生模型学习教师模型的软标签(分类或者OCR识别等任务中)或者概率热度图(分割等任务中)。
+2. 基于特征图的蒸馏,即让学生模型学习教师模型中间层的特征图,拟合中间层的一些特征。
+3. 基于关系的蒸馏,针对不同的样本(假设个数为N),教师模型会有不同的输出,那么可以基于不同样本的输出,计算一个NxN的相关性矩阵,可以让学生模型去学习教师模型关于不同样本的相关性矩阵。
-### Q3.1.67: PGNet训练中文弯曲数据集,可视化时弯曲文本无法显示。
+当然,知识蒸馏方法日新月异,也欢迎大家提出更多的总结与建议。
-**A**: 可能是因为安装的OpenCV里,cv2.putText不能显示中文的原因,可以尝试用Pillow来添加显示中文,需要改draw_e2e_res函数里面的代码,可以参考如下代码:
-```
-box = box.astype(np.int32).reshape((-1, 1, 2))
-cv2.polylines(src_im, [box], True, color=(255, 255, 0), thickness=2)
+### Q3.1.69: 怎么加速训练过程呢?
-from PIL import ImageFont, ImageDraw, Image
-img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
-draw = ImageDraw.Draw(img)
-fontStyle = ImageFont.truetype(
-"font/msyh.ttc", 16, encoding="utf-8")
-draw.text((int(box[0, 0, 0]), int(box[0, 0, 1])), text, (0, 255, 0), font=fontStyle)
+**A**:OCR模型训练过程中一般包含大量的数据增广,这些数据增广是比较耗时的,因此可以离线生成大量增广后的图像,直接送入网络进行训练,机器资源充足的情况下,也可以使用分布式训练的方法,可以参考[分布式训练教程文档](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/distributed_training.md)。
-src_im= cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)
-```
-### Q3.1.68: 用PGNet做进行端到端训练时,数据集标注的点的个数必须都是统一一样的吗? 能不能随意标点数,只要能够按顺时针从左上角开始标这样?
-**A**: 目前代码要求标注为统一的点数。
+### Q3.1.70: 文字识别模型模型的输出矩阵需要进行解码才能得到识别的文本。代码中实现为preds_idx = preds.argmax(axis=2),也就是最佳路径解码法。这是一种贪心算法,是每一个时间步只将最大概率的字符作为当前时间步的预测输出,但得到的结果不一定是最好的。为什么不使用beam search这种方式进行解码呢?
-#### Q3.4.47: 请教如何优化检测阶段时长?
+**A**:实验发现,使用贪心的方法去做解码,识别精度影响不大,但是速度方面的优势比较明显,因此PaddleOCR中使用贪心算法去做识别的解码。
-**A**: 预测单张图会慢一点,如果批量预测,第一张图比较慢,后面就快了,因为最开始一些初始化操作比较耗时。服务部署的话,访问一次后,后面再访问就不会初始化了,推理的话每次都需要初始化的。
+### Q3.1.71: 遇到中英文识别模型不支持的字符,该如何对模型做微调?
-### Q3.4.48: paddle serving 本地启动调用失败,怎么判断是否正常工作?
+**A**:如果希望识别中英文识别模型中不支持的字符,需要更新识别的字典,并完成微调过程。比如说如果希望模型能够进一步识别罗马数字,可以按照以下步骤完成模型微调过程。
+1. 准备中英文识别数据以及罗马数字的识别数据,用于训练,同时保证罗马数字和中英文识别数字的效果;
+2. 修改默认的字典文件,在后面添加罗马数字的字符;
+3. 下载PaddleOCR提供的预训练模型,配置预训练模型和数据的路径,开始训练。
-**A**:没有打印出预测结果,说明启动失败。可以参考这篇文档重新配置下动态图的paddle serving:https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/deploy/pdserving/README_CN.md
+### Q3.1.72: 文字识别主要有CRNN和Attention两种方式,但是在我们的说明文档中,CRNN有对应的论文,但是Attention没看到,这个具体在哪里呢?
+
+**A**:文字识别主要有CTC和Attention两种方式,基于CTC的算法有CRNN、Rosetta、StarNet,基于Attention的方法有RARE、其他的算法PaddleOCR里没有提供复现代码。论文的链接可以参考:[PaddleOCR文本识别算法教程文档](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.1/doc/doc_ch/algorithm_overview.md#%E6%96%87%E6%9C%AC%E8%AF%86%E5%88%AB%E7%AE%97%E6%B3%95)
@@ -337,9 +333,19 @@ src_im= cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)
#### Q2.3.20: 如何根据不同的硬件平台选用不同的backbone?
**A**:在不同的硬件上,不同的backbone的速度优势不同,可以根据不同平台的速度-精度图来确定backbone,这里可以参考[PaddleClas模型速度-精度图](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.0/docs/zh_CN/models)。
-#### Q2.3.21: 端到端算法PGNet是否支持中文识别,速度会很慢嘛?
+#### Q2.3.21: 端到端算法PGNet是否支持中文识别,速度会很慢嘛?
**A**:目前开源的PGNet算法模型主要是用于检测英文数字,对于中文的识别需要自己训练,大家可以使用开源的端到端中文数据集,而对于复杂文本(弯曲文本)的识别,也可以自己构造一批数据集针对进行训练,对于推理速度,可以先将模型转换为inference再进行预测,速度应该会相当可观。
+
+### Q2.3.22: 目前知识蒸馏有哪些主要的实践思路?
+
+**A**:知识蒸馏即利用教师模型指导学生模型的训练,目前有3种主要的蒸馏思路:
+1. 基于输出结果的蒸馏,即让学生模型学习教师模型的软标签(分类或者OCR识别等任务中)或者概率热度图(分割等任务中)。
+2. 基于特征图的蒸馏,即让学生模型学习教师模型中间层的特征图,拟合中间层的一些特征。
+3. 基于关系的蒸馏,针对不同的样本(假设个数为N),教师模型会有不同的输出,那么可以基于不同样本的输出,计算一个NxN的相关性矩阵,可以让学生模型去学习教师模型关于不同样本的相关性矩阵。
+
+当然,知识蒸馏方法日新月异,也欢迎大家提出更多的总结与建议。
+
<a name="PaddleOCR实战问题"></a>
## 【实战篇】PaddleOCR实战问题
@@ -693,6 +699,27 @@ src_im= cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)
**A**: 目前代码要求标注为统一的点数。
+### Q3.1.69: 怎么加速训练过程呢?
+
+**A**:OCR模型训练过程中一般包含大量的数据增广,这些数据增广是比较耗时的,因此可以离线生成大量增广后的图像,直接送入网络进行训练,机器资源充足的情况下,也可以使用分布式训练的方法,可以参考[分布式训练教程文档](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/distributed_training.md)。
+
+
+### Q3.1.70: 文字识别模型模型的输出矩阵需要进行解码才能得到识别的文本。代码中实现为preds_idx = preds.argmax(axis=2),也就是最佳路径解码法。这是一种贪心算法,是每一个时间步只将最大概率的字符作为当前时间步的预测输出,但得到的结果不一定是最好的。为什么不使用beam search这种方式进行解码呢?
+
+**A**:实验发现,使用贪心的方法去做解码,识别精度影响不大,但是速度方面的优势比较明显,因此PaddleOCR中使用贪心算法去做识别的解码。
+
+### Q3.1.71: 遇到中英文识别模型不支持的字符,该如何对模型做微调?
+
+**A**:如果希望识别中英文识别模型中不支持的字符,需要更新识别的字典,并完成微调过程。比如说如果希望模型能够进一步识别罗马数字,可以按照以下步骤完成模型微调过程。
+1. 准备中英文识别数据以及罗马数字的识别数据,用于训练,同时保证罗马数字和中英文识别数字的效果;
+2. 修改默认的字典文件,在后面添加罗马数字的字符;
+3. 下载PaddleOCR提供的预训练模型,配置预训练模型和数据的路径,开始训练。
+
+
+### Q3.1.72: 文字识别主要有CRNN和Attention两种方式,但是在我们的说明文档中,CRNN有对应的论文,但是Attention没看到,这个具体在哪里呢?
+
+**A**:文字识别主要有CTC和Attention两种方式,基于CTC的算法有CRNN、Rosetta、StarNet,基于Attention的方法有RARE、其他的算法PaddleOCR里没有提供复现代码。论文的链接可以参考:[PaddleOCR文本识别算法教程文档](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.1/doc/doc_ch/algorithm_overview.md#%E6%96%87%E6%9C%AC%E8%AF%86%E5%88%AB%E7%AE%97%E6%B3%95)
+
<a name="数据集3"></a>
### 数据集
| att. | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/2882 | 2021-05-24T05:51:29Z | 2021-05-24T08:24:31Z | 2021-05-24T08:24:31Z | 2021-11-13T03:30:35Z | 3,985 | PaddlePaddle/PaddleOCR | 42,560 |
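FAQ Q3.1.70 in the diff above describes best-path (greedy) CTC decoding via `preds.argmax(axis=2)`; a dependency-free sketch of that decode (class indices and the blank id are illustrative):

```python
def ctc_greedy_decode(prob_rows, blank=0):
    """Best-path CTC decode: argmax per time step, collapse repeats, drop blanks."""
    ids = [max(range(len(row)), key=row.__getitem__) for row in prob_rows]
    decoded, prev = [], None
    for i in ids:
        if i != blank and i != prev:
            decoded.append(i)
        prev = i
    return decoded
```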
Use SQS parameter message group id for sqs targets in events rule. | diff --git a/localstack/services/events/events_listener.py b/localstack/services/events/events_listener.py
index c1bf4cb00d453..8076d20bcc904 100644
--- a/localstack/services/events/events_listener.py
+++ b/localstack/services/events/events_listener.py
@@ -60,7 +60,8 @@ def func(*args):
for target in targets:
arn = target.get('Arn')
event = json.loads(target.get('Input') or '{}')
- aws_stack.send_event_to_target(arn, event)
+ attr = aws_stack.get_events_target_attributes(target)
+ aws_stack.send_event_to_target(arn, event, target_attributes=attr)
return func
diff --git a/localstack/utils/aws/aws_stack.py b/localstack/utils/aws/aws_stack.py
index e5fa262d8c815..d97543e14ef6a 100644
--- a/localstack/utils/aws/aws_stack.py
+++ b/localstack/utils/aws/aws_stack.py
@@ -524,7 +524,7 @@ def _resource_arn(name, pattern, account_id=None, region_name=None):
return pattern % (region_name, account_id, name)
-def send_event_to_target(arn, event):
+def send_event_to_target(arn, event, target_attributes=None):
if ':lambda:' in arn:
from localstack.services.awslambda import lambda_api
lambda_api.run_lambda(event=event, context={}, func_arn=arn)
@@ -536,7 +536,10 @@ def send_event_to_target(arn, event):
elif ':sqs:' in arn:
sqs_client = connect_to_service('sqs')
queue_url = get_sqs_queue_url(arn)
- sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(event))
+
+ msg_group_id = (target_attributes or {}).get('MessageGroupId')
+ kwargs = {'MessageGroupId': msg_group_id} if msg_group_id else {}
+ sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(event), **kwargs)
elif ':states' in arn:
stepfunctions_client = connect_to_service('stepfunctions')
@@ -546,6 +549,12 @@ def send_event_to_target(arn, event):
LOG.info('Unsupported Events rule target ARN "%s"' % arn)
+def get_events_target_attributes(target):
+ # added for sqs, if needed can be moved to an if else
+ # block for multiple targets
+ return target.get('SqsParameters')
+
+
def create_sqs_queue(queue_name, env=None):
env = get_environment(env)
# queue
diff --git a/tests/integration/test_events.py b/tests/integration/test_events.py
index 53aa917bbced1..c697f98036967 100644
--- a/tests/integration/test_events.py
+++ b/tests/integration/test_events.py
@@ -349,6 +349,7 @@ def forward_request(self, method, path, data, headers):
topic_name = 'topic-{}'.format(short_uid())
queue_name = 'queue-{}'.format(short_uid())
+ fifo_queue_name = 'queue-{}.fifo'.format(short_uid())
rule_name = 'rule-{}'.format(short_uid())
endpoint = '{}://{}:{}'.format(get_service_protocol(), config.LOCALSTACK_HOSTNAME, local_port)
sm_role_arn = aws_stack.role_arn('sfn_role')
@@ -356,6 +357,7 @@ def forward_request(self, method, path, data, headers):
topic_target_id = 'target-{}'.format(short_uid())
sm_target_id = 'target-{}'.format(short_uid())
queue_target_id = 'target-{}'.format(short_uid())
+ fifo_queue_target_id = 'target-{}'.format(short_uid())
events = []
state_machine_definition = """
@@ -381,7 +383,10 @@ def forward_request(self, method, path, data, headers):
self.sns_client.subscribe(TopicArn=topic_arn, Protocol='http', Endpoint=endpoint)
queue_url = self.sqs_client.create_queue(QueueName=queue_name)['QueueUrl']
+ fifo_queue_url = self.sqs_client.create_queue(
+ QueueName=fifo_queue_name, Attributes={'FifoQueue': 'true'})['QueueUrl']
queue_arn = aws_stack.sqs_queue_arn(queue_name)
+ fifo_queue_arn = aws_stack.sqs_queue_arn(fifo_queue_name)
event = {
'env': 'testing'
@@ -409,11 +414,20 @@ def forward_request(self, method, path, data, headers):
'Id': queue_target_id,
'Arn': queue_arn,
'Input': json.dumps(event)
+ },
+ {
+ 'Id': fifo_queue_target_id,
+ 'Arn': fifo_queue_arn,
+ 'Input': json.dumps(event),
+ 'SqsParameters': {
+ 'MessageGroupId': '123'
+ }
+
}
]
)
- def received(q_url):
+ def received(q_urls):
# state machine got executed
executions = self.sfn_client.list_executions(stateMachineArn=state_machine_arn)['executions']
self.assertGreaterEqual(len(executions), 1)
@@ -427,16 +441,22 @@ def received(q_url):
execution_arn = executions[0]['executionArn']
execution_input = self.sfn_client.describe_execution(executionArn=execution_arn)['input']
+ all_msgs = []
# get message from queue
- msgs = self.sqs_client.receive_message(QueueUrl=q_url).get('Messages', [])
- self.assertGreaterEqual(len(msgs), 1)
+ for url in q_urls:
+ msgs = self.sqs_client.receive_message(QueueUrl=url).get('Messages', [])
+ self.assertGreaterEqual(len(msgs), 1)
+ all_msgs.append(msgs[0])
- return execution_input, notifications[0], msgs[0]
+ return execution_input, notifications[0], all_msgs
- execution_input, notification, msg_received = retry(received, retries=5, sleep=15, q_url=queue_url)
+ execution_input, notification, msgs_received = retry(
+ received, retries=5, sleep=15, q_urls=[queue_url, fifo_queue_url]
+ )
self.assertEqual(json.loads(notification), event)
self.assertEqual(json.loads(execution_input), event)
- self.assertEqual(json.loads(msg_received['Body']), event)
+ for msg_received in msgs_received:
+ self.assertEqual(json.loads(msg_received['Body']), event)
proxy.stop()
| - addresses https://github.com/localstack/localstack/issues/3402
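A pure-Python sketch of the conditional-kwargs pattern the diff introduces (the helper name is illustrative, not part of the PR):

```python
import json

def build_send_message_kwargs(queue_url, event, target_attributes=None):
    """Mirror of the new send path: MessageGroupId is attached only when the
    rule target carries SqsParameters, as FIFO queues require."""
    kwargs = {"QueueUrl": queue_url, "MessageBody": json.dumps(event)}
    msg_group_id = (target_attributes or {}).get("MessageGroupId")
    if msg_group_id:
        kwargs["MessageGroupId"] = msg_group_id
    return kwargs
```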
| https://api.github.com/repos/localstack/localstack/pulls/3454 | 2021-01-10T15:34:46Z | 2021-01-11T22:02:29Z | 2021-01-11T22:02:28Z | 2021-01-11T22:02:29Z | 1,447 | localstack/localstack | 28,769 |
Fixed a grammatical mistake in the docs | diff --git a/docs/guides/primer/usage_pattern.md b/docs/guides/primer/usage_pattern.md
index c92bc502db91e..eb13874eba3bb 100644
--- a/docs/guides/primer/usage_pattern.md
+++ b/docs/guides/primer/usage_pattern.md
@@ -29,7 +29,7 @@ text_list = [text1, text2, ...]
documents = [Document(t) for t in text_list]
```
-A Document represents a lightweight container around the data source. You can now to choose to proceed with one of the
+A Document represents a lightweight container around the data source. You can now choose to proceed with one of the
following steps:
1. Feed the Document object directly into the index (see section 3).
2. First convert the Document into Node objects (see section 2).
| https://api.github.com/repos/run-llama/llama_index/pulls/1201 | 2023-04-15T12:13:30Z | 2023-04-15T15:52:59Z | 2023-04-15T15:52:59Z | 2023-04-15T15:53:00Z | 189 | run-llama/llama_index | 6,415 | |
PEP8 Love: E126 Fix for #945 | diff --git a/letsencrypt/renewer.py b/letsencrypt/renewer.py
index b62e31bcead..8d8540e6f77 100644
--- a/letsencrypt/renewer.py
+++ b/letsencrypt/renewer.py
@@ -169,8 +169,8 @@ def main(config=None, cli_args=sys.argv[1:]):
# take precedence over this one.
config.merge(configobj.ConfigObj(cli_config.renewer_config_file))
# Ensure that all of the needed folders have been created before continuing
- le_util.make_or_verify_dir(
- cli_config.work_dir, constants.CONFIG_DIRS_MODE, uid)
+ le_util.make_or_verify_dir(cli_config.work_dir,
+ constants.CONFIG_DIRS_MODE, uid)
for i in os.listdir(cli_config.renewal_configs_dir):
print "Processing", i
| Addresses #959
| https://api.github.com/repos/certbot/certbot/pulls/961 | 2015-10-12T20:44:56Z | 2015-10-12T20:51:59Z | 2015-10-12T20:51:59Z | 2016-05-06T19:22:10Z | 190 | certbot/certbot | 3,391 |
Fix chunk handling when partial chunks are returned | diff --git a/fastchat/serve/openai_api_server.py b/fastchat/serve/openai_api_server.py
index d692af967d..036642cd3a 100644
--- a/fastchat/serve/openai_api_server.py
+++ b/fastchat/serve/openai_api_server.py
@@ -599,12 +599,14 @@ async def generate_completion_stream(payload: Dict[str, Any], worker_addr: str):
timeout=WORKER_API_TIMEOUT,
) as response:
# content = await response.aread()
+ buffer = b""
async for raw_chunk in response.aiter_raw():
- for chunk in raw_chunk.split(delimiter):
+ buffer += raw_chunk
+ while (chunk_end := buffer.find(delimiter)) >= 0:
+ chunk, buffer = buffer[:chunk_end], buffer[chunk_end + 1 :]
if not chunk:
continue
- data = json.loads(chunk.decode())
- yield data
+ yield json.loads(chunk.decode())
async def generate_completion(payload: Dict[str, Any], worker_addr: str):
| ## Why are these changes needed?
The OpenAI API implementation assumes that `aiter_raw` returns complete chunks. This is not the case once the response exceeds the size of the network buffers, which causes the JSON decode to fail. This change accumulates content until a full chunk is received.
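The accumulate-and-split logic from the diff can be isolated as a small pure function; this sketch assumes a single-byte delimiter, which the `buffer[chunk_end + 1 :]` arithmetic implies:

```python
def split_complete_chunks(buffer: bytes, raw: bytes, delimiter: bytes = b"\0"):
    """Append one network read to the buffer; return (complete_chunks, leftover).
    Any trailing partial chunk stays in the leftover buffer for the next read."""
    buffer += raw
    chunks = []
    while (end := buffer.find(delimiter)) >= 0:
        chunk, buffer = buffer[:end], buffer[end + 1:]
        if chunk:
            chunks.append(chunk)
    return chunks, buffer
```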
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
| https://api.github.com/repos/lm-sys/FastChat/pulls/2485 | 2023-09-27T23:06:23Z | 2023-09-29T02:56:13Z | 2023-09-29T02:56:13Z | 2023-09-29T02:56:13Z | 232 | lm-sys/FastChat | 41,264 |
Make ACMEv1 deprecation warnings scarier | diff --git a/acme/acme/client.py b/acme/acme/client.py
index e13b272c702..7ab1eb6575b 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -1,4 +1,7 @@
"""ACME client API."""
+# pylint: disable=too-many-lines
+# This pylint disable can be deleted once the deprecated ACMEv1 code is
+# removed.
import base64
import collections
import datetime
@@ -7,7 +10,9 @@
import http.client as http_client
import logging
import re
+import sys
import time
+from types import ModuleType
from typing import cast
from typing import Dict
from typing import List
@@ -250,8 +255,6 @@ def __init__(self, directory, key, alg=jose.RS256, verify_ssl=True,
URI from which the resource will be downloaded.
"""
- warnings.warn("acme.client.Client (ACMEv1) is deprecated, "
- "use acme.client.ClientV2 instead.", PendingDeprecationWarning)
self.key = key
if net is None:
net = ClientNetwork(key, alg=alg, verify_ssl=verify_ssl)
@@ -834,8 +837,6 @@ class BackwardsCompatibleClientV2:
"""
def __init__(self, net, key, server):
- warnings.warn("acme.client.BackwardsCompatibleClientV2 is deprecated, use "
- "acme.client.ClientV2 instead.", PendingDeprecationWarning)
directory = messages.Directory.from_json(net.get(server).json())
self.acme_version = self._acme_version_from_directory(directory)
self.client: Union[Client, ClientV2]
@@ -1239,3 +1240,35 @@ def _post_once(self, url, obj, content_type=JOSE_CONTENT_TYPE,
response = self._check_response(response, content_type=content_type)
self._add_nonce(response)
return response
+
+
+# This class takes a similar approach to the cryptography project to deprecate attributes
+# in public modules. See the _ModuleWithDeprecation class here:
+# https://github.com/pyca/cryptography/blob/91105952739442a74582d3e62b3d2111365b0dc7/src/cryptography/utils.py#L129
+class _ClientDeprecationModule:
+ """
+ Internal class delegating to a module, and displaying warnings when attributes
+ related to deprecated attributes in the acme.client module.
+ """
+ def __init__(self, module):
+ self.__dict__['_module'] = module
+
+ def __getattr__(self, attr):
+ if attr in ('Client', 'BackwardsCompatibleClientV2'):
+ warnings.warn('The {0} attribute in acme.client is deprecated '
+ 'and will be removed soon.'.format(attr),
+ DeprecationWarning, stacklevel=2)
+ return getattr(self._module, attr)
+
+ def __setattr__(self, attr, value): # pragma: no cover
+ setattr(self._module, attr, value)
+
+ def __delattr__(self, attr): # pragma: no cover
+ delattr(self._module, attr)
+
+ def __dir__(self): # pragma: no cover
+ return ['_module'] + dir(self._module)
+
+
+# Patching ourselves to warn about deprecation and planned removal of some elements in the module.
+sys.modules[__name__] = cast(ModuleType, _ClientDeprecationModule(sys.modules[__name__]))
diff --git a/certbot/certbot/_internal/client.py b/certbot/certbot/_internal/client.py
index 66a07e84263..a217e889d24 100644
--- a/certbot/certbot/_internal/client.py
+++ b/certbot/certbot/_internal/client.py
@@ -39,8 +39,7 @@ def acme_from_config_key(config, key, regr=None):
user_agent=determine_user_agent(config))
with warnings.catch_warnings():
- # TODO: full removal of ACMEv1 support: https://github.com/certbot/certbot/issues/6844
- warnings.simplefilter("ignore", PendingDeprecationWarning)
+ warnings.simplefilter("ignore", DeprecationWarning)
client = acme_client.BackwardsCompatibleClientV2(net, key, config.server)
if client.acme_version == 1:
diff --git a/certbot/tests/auth_handler_test.py b/certbot/tests/auth_handler_test.py
index ad09067a1cd..a94259a7991 100644
--- a/certbot/tests/auth_handler_test.py
+++ b/certbot/tests/auth_handler_test.py
@@ -79,9 +79,9 @@ def setUp(self):
self.mock_auth.perform.side_effect = gen_auth_resp
self.mock_account = mock.Mock(key=util.Key("file_path", "PEM"))
- self.mock_net = mock.MagicMock(spec=acme_client.Client)
+ self.mock_net = mock.MagicMock(spec=acme_client.ClientV2)
self.mock_net.acme_version = 1
- self.mock_net.retry_after.side_effect = acme_client.Client.retry_after
+ self.mock_net.retry_after.side_effect = acme_client.ClientV2.retry_after
self.handler = AuthHandler(
self.mock_auth, self.mock_net, self.mock_account, [])
@@ -165,9 +165,6 @@ def test_name1_http_01_1_dns_1_acme_2(self):
self.assertEqual(len(authzr), 1)
def _test_name3_http_01_3_common(self, combos):
- self.mock_net.request_domain_challenges.side_effect = functools.partial(
- gen_dom_authzr, challs=acme_util.CHALLENGES, combos=combos)
-
authzrs = [gen_dom_authzr(domain="0", challs=acme_util.CHALLENGES),
gen_dom_authzr(domain="1", challs=acme_util.CHALLENGES),
gen_dom_authzr(domain="2", challs=acme_util.CHALLENGES)]
diff --git a/certbot/tests/client_test.py b/certbot/tests/client_test.py
index 4d4c3036f0f..330fd2852f1 100644
--- a/certbot/tests/client_test.py
+++ b/certbot/tests/client_test.py
@@ -1,4 +1,5 @@
"""Tests for certbot._internal.client."""
+import contextlib
import platform
import shutil
import tempfile
@@ -92,8 +93,17 @@ def _true_mock():
def _false_mock():
return False
+ @staticmethod
+ @contextlib.contextmanager
+ def _patched_acme_client():
+ # This function is written this way to avoid deprecation warnings that
+ # are raised when BackwardsCompatibleClientV2 is accessed on the real
+ # acme.client module.
+ with mock.patch('certbot._internal.client.acme_client') as mock_acme_client:
+ yield mock_acme_client.BackwardsCompatibleClientV2
+
def test_no_tos(self):
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client.new_account_and_tos().terms_of_service = "http://tos"
mock_client().external_account_required.side_effect = self._false_mock
with mock.patch("certbot._internal.eff.prepare_subscription") as mock_prepare:
@@ -107,7 +117,7 @@ def test_no_tos(self):
@test_util.patch_display_util()
def test_it(self, unused_mock_get_utility):
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().external_account_required.side_effect = self._false_mock
with mock.patch("certbot._internal.eff.handle_subscription"):
self._call()
@@ -118,7 +128,7 @@ def test_email_retry(self, mock_get_email):
self.config.noninteractive_mode = False
msg = "DNS problem: NXDOMAIN looking up MX for example.com"
mx_err = messages.Error.with_code('invalidContact', detail=msg)
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().external_account_required.side_effect = self._false_mock
with mock.patch("certbot._internal.eff.prepare_subscription") as mock_prepare:
mock_client().new_account_and_tos.side_effect = [mx_err, mock.MagicMock()]
@@ -131,7 +141,7 @@ def test_email_invalid_noninteractive(self):
self.config.noninteractive_mode = True
msg = "DNS problem: NXDOMAIN looking up MX for example.com"
mx_err = messages.Error.with_code('invalidContact', detail=msg)
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().external_account_required.side_effect = self._false_mock
with mock.patch("certbot._internal.eff.handle_subscription"):
mock_client().new_account_and_tos.side_effect = [mx_err, mock.MagicMock()]
@@ -144,8 +154,8 @@ def test_needs_email(self):
@mock.patch("certbot._internal.client.logger")
def test_without_email(self, mock_logger):
with mock.patch("certbot._internal.eff.prepare_subscription") as mock_prepare:
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_clnt:
- mock_clnt().external_account_required.side_effect = self._false_mock
+ with self._patched_acme_client() as mock_client:
+ mock_client().external_account_required.side_effect = self._false_mock
self.config.email = None
self.config.register_unsafely_without_email = True
self.config.dry_run = False
@@ -156,7 +166,7 @@ def test_without_email(self, mock_logger):
@mock.patch("certbot._internal.client.display_ops.get_email")
def test_dry_run_no_staging_account(self, mock_get_email):
"""Tests dry-run for no staging account, expect account created with no email"""
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().external_account_required.side_effect = self._false_mock
with mock.patch("certbot._internal.eff.handle_subscription"):
self.config.dry_run = True
@@ -168,7 +178,7 @@ def test_dry_run_no_staging_account(self, mock_get_email):
@test_util.patch_display_util()
def test_with_eab_arguments(self, unused_mock_get_utility):
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().client.directory.__getitem__ = mock.Mock(
side_effect=self._new_acct_dir_mock
)
@@ -184,7 +194,7 @@ def test_with_eab_arguments(self, unused_mock_get_utility):
@test_util.patch_display_util()
def test_without_eab_arguments(self, unused_mock_get_utility):
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().external_account_required.side_effect = self._false_mock
with mock.patch("certbot._internal.eff.handle_subscription"):
target = "certbot._internal.client.messages.ExternalAccountBinding.from_data"
@@ -196,7 +206,7 @@ def test_without_eab_arguments(self, unused_mock_get_utility):
self.assertIs(mock_eab_from_data.called, False)
def test_external_account_required_without_eab_arguments(self):
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().client.net.key.public_key = mock.Mock(side_effect=self._public_key_mock)
mock_client().external_account_required.side_effect = self._true_mock
with mock.patch("certbot._internal.eff.handle_subscription"):
@@ -210,7 +220,7 @@ def test_unsupported_error(self):
from acme import messages
msg = "Test"
mx_err = messages.Error.with_code("malformed", detail=msg, title="title")
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as mock_client:
+ with self._patched_acme_client() as mock_client:
mock_client().client.directory.__getitem__ = mock.Mock(
side_effect=self._new_acct_dir_mock
)
@@ -232,9 +242,10 @@ def setUp(self):
self.account = mock.MagicMock(**{"key.pem": KEY})
from certbot._internal.client import Client
- with mock.patch("certbot._internal.client.acme_client.BackwardsCompatibleClientV2") as acme:
- self.acme_client = acme
- self.acme = acme.return_value = mock.MagicMock()
+ with mock.patch("certbot._internal.client.acme_client") as acme:
+ self.acme_client = acme.BackwardsCompatibleClientV2
+ self.acme = self.acme_client.return_value = mock.MagicMock()
+ self.client_network = acme.ClientNetwork
self.client = Client(
config=self.config, account_=self.account,
auth=None, installer=None)
@@ -255,8 +266,7 @@ def setUp(self):
csr_pem=mock.sentinel.csr_pem)
def test_init_acme_verify_ssl(self):
- net = self.acme_client.call_args[0][0]
- self.assertIs(net.verify_ssl, True)
+ self.assertIs(self.client_network.call_args[1]['verify_ssl'], True)
def _mock_obtain_certificate(self):
self.client.auth_handler = mock.MagicMock()
diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py
index de202d006d2..3fd71b1531b 100644
--- a/certbot/tests/main_test.py
+++ b/certbot/tests/main_test.py
@@ -323,12 +323,12 @@ def setUp(self):
self.tmp_cert_path = os.path.abspath(os.path.join(self.tempdir, 'cert_512.pem'))
patches = [
- mock.patch('acme.client.BackwardsCompatibleClientV2'),
+ mock.patch('certbot._internal.client.acme_client'),
mock.patch('certbot._internal.client.Client'),
mock.patch('certbot._internal.main._determine_account'),
mock.patch('certbot._internal.main.display_ops.success_revocation')
]
- self.mock_acme_client = patches[0].start()
+ self.mock_acme_client = patches[0].start().BackwardsCompatibleClientV2
patches[1].start()
self.mock_determine_account = patches[2].start()
self.mock_success_revoke = patches[3].start()
@@ -708,11 +708,10 @@ def test_noninteractive(self, _):
@mock.patch('certbot._internal.eff.handle_subscription')
@mock.patch('certbot._internal.log.post_arg_parse_setup')
@mock.patch('certbot._internal.main._report_new_cert')
- @mock.patch('certbot._internal.main.client.acme_client.Client')
@mock.patch('certbot._internal.main._determine_account')
@mock.patch('certbot._internal.main.client.Client.obtain_and_enroll_certificate')
@mock.patch('certbot._internal.main._get_and_save_cert')
- def test_user_agent(self, gsc, _obt, det, _client, _, __, ___):
+ def test_user_agent(self, gsc, _obt, det, _, __, ___):
# Normally the client is totally mocked out, but here we need more
# arguments to automate it...
args = ["--standalone", "certonly", "-m", "none@none.com",
@@ -720,7 +719,8 @@ def test_user_agent(self, gsc, _obt, det, _client, _, __, ___):
det.return_value = mock.MagicMock(), None
gsc.return_value = mock.MagicMock()
- with mock.patch('certbot._internal.main.client.acme_client.ClientNetwork') as acme_net:
+ with mock.patch('certbot._internal.main.client.acme_client') as acme_client:
+ acme_net = acme_client.ClientNetwork
self._call_no_clientmock(args)
os_ver = util.get_os_info_ua()
ua = acme_net.call_args[1]["user_agent"]
@@ -730,7 +730,8 @@ def test_user_agent(self, gsc, _obt, det, _client, _, __, ___):
if "linux" in plat.lower():
self.assertIn(util.get_os_info_ua(), ua)
- with mock.patch('certbot._internal.main.client.acme_client.ClientNetwork') as acme_net:
+ with mock.patch('certbot._internal.main.client.acme_client') as acme_client:
+ acme_net = acme_client.ClientNetwork
ua = "bandersnatch"
args += ["--user-agent", ua]
self._call_no_clientmock(args)
| Fixes https://github.com/certbot/certbot/issues/6844.
This PR does two things:
1. Changes ACMEv1 deprecation warnings from `PendingDeprecationWarning` to `DeprecationWarning`.
2. Changes the ACMEv1 deprecation warnings to be raised on references to the classes themselves. This is the approach taken in https://github.com/certbot/certbot/pull/8989, the PRs linked there, and the `cryptography` code in the code comment. I think this approach warns in more cases, and I updated our unit tests to avoid hitting these warnings. | https://api.github.com/repos/certbot/certbot/pulls/9015 | 2021-08-30T22:15:10Z | 2021-08-30T22:38:12Z | 2021-08-30T22:38:12Z | 2021-08-30T22:38:13Z | 3,924 | certbot/certbot | 3,026 |
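The module-wrapping pattern point 2 refers to can be shown in isolation; this sketch uses a made-up module and class names rather than Certbot's real ones:

```python
import sys
import types
import warnings

# A stand-in module with one attribute slated for deprecation.
_mod = types.ModuleType("demo_client")
_mod.Client = type("Client", (), {})        # deprecated
_mod.ClientV2 = type("ClientV2", (), {})    # supported

class _DeprecationModule:
    """Delegate to a real module, warning on deprecated attribute access."""
    def __init__(self, module, deprecated):
        self.__dict__["_module"] = module
        self.__dict__["_deprecated"] = deprecated

    def __getattr__(self, attr):
        if attr in self._deprecated:
            warnings.warn(f"{attr} is deprecated and will be removed soon.",
                          DeprecationWarning, stacklevel=2)
        return getattr(self._module, attr)

# Replacing the sys.modules entry makes every later access go through
# the wrapper, so a plain `demo_client.Client` reference warns.
sys.modules["demo_client"] = _DeprecationModule(_mod, {"Client"})
import demo_client

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    demo_client.Client      # warns
    demo_client.ClientV2    # silent
print(len(caught))  # 1
```

This is why the warning fires even without instantiating the class — merely touching the attribute goes through the wrapper's `__getattr__`.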
[test] Make test object store more accurate. | diff --git a/release/benchmarks/object_store/test_object_store.py b/release/benchmarks/object_store/test_object_store.py
index 2403be078a079..ef8fb55340808 100644
--- a/release/benchmarks/object_store/test_object_store.py
+++ b/release/benchmarks/object_store/test_object_store.py
@@ -28,8 +28,8 @@ class Actor:
def foo(self):
pass
- def sum(self, arr):
- return np.sum(arr)
+ def data_len(self, arr):
+ return len(arr)
actors = [Actor.remote() for _ in range(NUM_NODES)]
@@ -39,25 +39,28 @@ def sum(self, arr):
for actor in tqdm(actors, desc="Ensure all actors have started."):
ray.get(actor.foo.remote())
+ start = perf_counter()
result_refs = []
for actor in tqdm(actors, desc="Broadcasting objects"):
- result_refs.append(actor.sum.remote(ref))
+ result_refs.append(actor.data_len.remote(ref))
results = ray.get(result_refs)
+ end = perf_counter()
+
for result in results:
assert result == OBJECT_SIZE
+ return end - start
+
ray.init(address="auto")
-start = perf_counter()
-test_object_broadcast()
-end = perf_counter()
-print(f"Broadcast time: {end - start} ({OBJECT_SIZE} B x {NUM_NODES} nodes)")
+duration = test_object_broadcast()
+print(f"Broadcast time: {duration} ({OBJECT_SIZE} B x {NUM_NODES} nodes)")
if "TEST_OUTPUT_JSON" in os.environ:
out_file = open(os.environ["TEST_OUTPUT_JSON"], "w")
results = {
- "broadcast_time": end - start,
+ "broadcast_time": duration,
"object_size": OBJECT_SIZE,
"num_nodes": NUM_NODES,
"success": "1",
@@ -66,7 +69,7 @@ def sum(self, arr):
results["perf_metrics"] = [
{
"perf_metric_name": perf_metric_name,
- "perf_metric_value": end - start,
+ "perf_metric_value": duration,
"perf_metric_type": "LATENCY",
}
]
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
The test includes `ray.put` and actor start time. These should be excluded from the timing to make the benchmark more accurate.
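The measurement fix generalizes beyond Ray; this toy sketch (the sleep calls stand in for object transfer and actor startup — assumptions for illustration only) shows why the timer must bracket only the phase under test:

```python
import time

def setup():       # stand-in for ray.put() and actor startup
    time.sleep(0.05)

def broadcast():   # the phase the benchmark is meant to measure
    time.sleep(0.01)

# Before: the timer also captures setup cost.
start = time.perf_counter()
setup()
broadcast()
inflated = time.perf_counter() - start

# After: setup finishes first; only the broadcast is timed.
setup()
start = time.perf_counter()
broadcast()
accurate = time.perf_counter() - start

print(inflated > accurate)  # True
```

Moving `perf_counter()` past the setup phase is exactly what the diff does by relocating `start` to after the `ray.put` and actor-readiness loop.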
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a
method in Tune, I've added it in `doc/source/tune/api/` under the
corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/34885 | 2023-04-29T04:27:12Z | 2023-05-01T18:31:02Z | 2023-05-01T18:31:02Z | 2023-05-01T18:31:02Z | 479 | ray-project/ray | 19,695 |
Bump sphinx from 4.1.1 to 4.1.2 | diff --git a/docs/requirements.txt b/docs/requirements.txt
index ca85e46e8..67cbc0079 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -1,4 +1,4 @@
alabaster==0.7.12
-Sphinx==4.1.1
+Sphinx==4.1.2
sphinx-rtd-theme==0.5.2
sphinx-copybutton==0.4.0
| Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 4.1.1 to 4.1.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/sphinx-doc/sphinx/blob/4.x/CHANGES">sphinx's changelog</a>.</em></p>
<blockquote>
<h1>Release 4.1.2 (released Jul 27, 2021)</h1>
<h2>Incompatible changes</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9435">#9435</a>: linkcheck: Disable checking automatically generated anchors on
github.com (ex. anchors in reST/Markdown documents)</li>
</ul>
<h2>Bugs fixed</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9489">#9489</a>: autodoc: Custom types using <code>typing.NewType</code> are not displayed well
with the HEAD of 3.10</li>
<li><a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9490">#9490</a>: autodoc: Some objects under <code>typing</code> module are not displayed well
with the HEAD of 3.10</li>
<li><a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9436">#9436</a>, <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9471">#9471</a>: autodoc: crashed if <code>autodoc_class_signature = "separated"</code></li>
<li><a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9456">#9456</a>: html search: html_copy_source can't control the search summaries</li>
<li><a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9435">#9435</a>: linkcheck: Failed to check anchors in github.com</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/4ba5c21b0182557157710a457bed470d335f7571"><code>4ba5c21</code></a> Bump to 4.1.2 final</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/4dc45f0a3d7e1cd7a1df524c23e8883e7639e95a"><code>4dc45f0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9494">#9494</a> from tk0miya/9456_revert_9129</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/df562b4343aaa29163aefafe04906cb5bf7ffe83"><code>df562b4</code></a> Merge branch '4.1.x' into 9456_revert_9129</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/5d8f925e02e6ab339deb8a603a7355d46571c3e4"><code>5d8f925</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9493">#9493</a> from tk0miya/9436_autodoc_class_signature</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/ef53dec9c59a2379b69416fdd45c163606237c94"><code>ef53dec</code></a> Fix <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9456">#9456</a>: html search: html_copy_source can't control the search summaries</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/4f364a30bc6e3df96cf8e1d7dd1a0d2115c30f0b"><code>4f364a3</code></a> Fix <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9436">#9436</a>, <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9471">#9471</a>: autodoc: crashed if autodoc_class_signature = "separated"</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/9218ad4adc20297ab3d3591b0513604bf1e35631"><code>9218ad4</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9491">#9491</a> from tk0miya/9489_NewType</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/68fb54806f2ce476159e5607fc303342c29e31be"><code>68fb548</code></a> Fix <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9490">#9490</a>: autodoc: Some typing.* objects are broken</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/771507e073d46b12bcbdc56346a4ef06d2f653f5"><code>771507e</code></a> Fix <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9489">#9489</a>: autodoc: Custom types using typing.NewType are broken</li>
<li><a href="https://github.com/sphinx-doc/sphinx/commit/9ebdc987b08a52e857fee5a4099d64baadfae729"><code>9ebdc98</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sphinx-doc/sphinx/issues/9467">#9467</a> from tk0miya/9435_disable_rewrite_github_anchor</li>
<li>Additional commits viewable in <a href="https://github.com/sphinx-doc/sphinx/compare/v4.1.1...v4.1.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/Textualize/rich/pulls/1367 | 2021-07-27T13:03:17Z | 2021-07-27T18:46:08Z | 2021-07-27T18:46:08Z | 2021-07-27T18:46:13Z | 106 | Textualize/rich | 48,279 |
Add the content interface to Certbot | diff --git a/snap/snapcraft.yaml b/snap/snapcraft.yaml
index 3b5d98f2de6..07910bf0d89 100644
--- a/snap/snapcraft.yaml
+++ b/snap/snapcraft.yaml
@@ -85,3 +85,9 @@ parts:
# After certbot-apache to not rebuild duplicates (essentially sharing what was already staged,
# like zope)
after: [certbot-apache]
+
+plugs:
+ plugin:
+ interface: content
+ content: certbot-1
+ target: $SNAP/certbot-plugin
| This PR moves the setup of the content interface, which we plan to use to support external plugins, from the [snap-plugin branch](https://github.com/certbot/certbot/tree/snap-plugin) to `master`. Having this in `master` might make it easier to make progress on issues like https://github.com/certbot/certbot/issues/7667.
I think it's OK to land this in `master` now because:
1. Certbot isn't configured to consume anything shared over the interface yet. That work is tracked in https://github.com/certbot/certbot/issues/7945.
2. While I don't think we currently have any plans to change anything here, since the Certbot snap is still in its beta phase, I think we can change whatever we want here whenever we want. | https://api.github.com/repos/certbot/certbot/pulls/8009 | 2020-05-21T17:53:12Z | 2020-05-27T20:59:09Z | 2020-05-27T20:59:09Z | 2020-05-27T20:59:13Z | 142 | certbot/certbot | 1,449 |
[youtube] Add support for multifeed videos | diff --git a/youtube_dl/extractor/youtube.py b/youtube_dl/extractor/youtube.py
index 4023a6e50ba..8a5ef2e7028 100644
--- a/youtube_dl/extractor/youtube.py
+++ b/youtube_dl/extractor/youtube.py
@@ -33,9 +33,11 @@
int_or_none,
orderedSet,
parse_duration,
+ smuggle_url,
str_to_int,
unescapeHTML,
unified_strdate,
+ unsmuggle_url,
uppercase_escape,
ISO3166Utils,
)
@@ -558,6 +560,59 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
'format': '135', # bestvideo
}
},
+ {
+ # Multifeed videos (multiple cameras), URL is for Main Camera
+ 'url': 'https://www.youtube.com/watch?v=jqWvoWXjCVs',
+ 'info_dict': {
+ 'id': 'jqWvoWXjCVs',
+ 'title': 'teamPGP: Rocket League Noob Stream',
+ 'description': 'md5:dc7872fb300e143831327f1bae3af010',
+ },
+ 'playlist': [{
+ 'info_dict': {
+ 'id': 'jqWvoWXjCVs',
+ 'ext': 'mp4',
+ 'title': 'teamPGP: Rocket League Noob Stream (Main Camera)',
+ 'description': 'md5:dc7872fb300e143831327f1bae3af010',
+ 'upload_date': '20150721',
+ 'uploader': 'Beer Games Beer',
+ 'uploader_id': 'beergamesbeer',
+ },
+ }, {
+ 'info_dict': {
+ 'id': '6h8e8xoXJzg',
+ 'ext': 'mp4',
+ 'title': 'teamPGP: Rocket League Noob Stream (kreestuh)',
+ 'description': 'md5:dc7872fb300e143831327f1bae3af010',
+ 'upload_date': '20150721',
+ 'uploader': 'Beer Games Beer',
+ 'uploader_id': 'beergamesbeer',
+ },
+ }, {
+ 'info_dict': {
+ 'id': 'PUOgX5z9xZw',
+ 'ext': 'mp4',
+ 'title': 'teamPGP: Rocket League Noob Stream (grizzle)',
+ 'description': 'md5:dc7872fb300e143831327f1bae3af010',
+ 'upload_date': '20150721',
+ 'uploader': 'Beer Games Beer',
+ 'uploader_id': 'beergamesbeer',
+ },
+ }, {
+ 'info_dict': {
+ 'id': 'teuwxikvS5k',
+ 'ext': 'mp4',
+ 'title': 'teamPGP: Rocket League Noob Stream (zim)',
+ 'description': 'md5:dc7872fb300e143831327f1bae3af010',
+ 'upload_date': '20150721',
+ 'uploader': 'Beer Games Beer',
+ 'uploader_id': 'beergamesbeer',
+ },
+ }],
+ 'params': {
+ 'skip_download': True,
+ },
+ }
]
def __init__(self, *args, **kwargs):
@@ -889,6 +944,8 @@ def decrypt_sig(mobj):
return formats
def _real_extract(self, url):
+ url, smuggled_data = unsmuggle_url(url, {})
+
proto = (
'http' if self._downloader.params.get('prefer_insecure', False)
else 'https')
@@ -1005,6 +1062,55 @@ def add_dash_mpd(video_info):
'"token" parameter not in video info for unknown reason',
video_id=video_id)
+ # title
+ if 'title' in video_info:
+ video_title = video_info['title'][0]
+ else:
+ self._downloader.report_warning('Unable to extract video title')
+ video_title = '_'
+
+ # description
+ video_description = get_element_by_id("eow-description", video_webpage)
+ if video_description:
+ video_description = re.sub(r'''(?x)
+ <a\s+
+ (?:[a-zA-Z-]+="[^"]+"\s+)*?
+ title="([^"]+)"\s+
+ (?:[a-zA-Z-]+="[^"]+"\s+)*?
+ class="yt-uix-redirect-link"\s*>
+ [^<]+
+ </a>
+ ''', r'\1', video_description)
+ video_description = clean_html(video_description)
+ else:
+ fd_mobj = re.search(r'<meta name="description" content="([^"]+)"', video_webpage)
+ if fd_mobj:
+ video_description = unescapeHTML(fd_mobj.group(1))
+ else:
+ video_description = ''
+
+ if 'multifeed_metadata_list' in video_info and not smuggled_data.get('force_singlefeed', False):
+ if not self._downloader.params.get('noplaylist'):
+ entries = []
+ feed_ids = []
+ multifeed_metadata_list = compat_urllib_parse_unquote_plus(video_info['multifeed_metadata_list'][0])
+ for feed in multifeed_metadata_list.split(','):
+ feed_data = compat_parse_qs(feed)
+ entries.append({
+ '_type': 'url_transparent',
+ 'ie_key': 'Youtube',
+ 'url': smuggle_url(
+ '%s://www.youtube.com/watch?v=%s' % (proto, feed_data['id'][0]),
+ {'force_singlefeed': True}),
+ 'title': '%s (%s)' % (video_title, feed_data['title'][0]),
+ })
+ feed_ids.append(feed_data['id'][0])
+ self.to_screen(
+ 'Downloading multifeed video (%s) - add --no-playlist to just download video %s'
+ % (', '.join(feed_ids), video_id))
+ return self.playlist_result(entries, video_id, video_title, video_description)
+ self.to_screen('Downloading just video %s because of --no-playlist' % video_id)
+
if 'view_count' in video_info:
view_count = int(video_info['view_count'][0])
else:
@@ -1030,13 +1136,6 @@ def add_dash_mpd(video_info):
else:
self._downloader.report_warning('unable to extract uploader nickname')
- # title
- if 'title' in video_info:
- video_title = video_info['title'][0]
- else:
- self._downloader.report_warning('Unable to extract video title')
- video_title = '_'
-
# thumbnail image
# We try first to get a high quality image:
m_thumb = re.search(r'<span itemprop="thumbnail".*?href="(.*?)">',
@@ -1072,26 +1171,6 @@ def add_dash_mpd(video_info):
else:
video_categories = None
- # description
- video_description = get_element_by_id("eow-description", video_webpage)
- if video_description:
- video_description = re.sub(r'''(?x)
- <a\s+
- (?:[a-zA-Z-]+="[^"]+"\s+)*?
- title="([^"]+)"\s+
- (?:[a-zA-Z-]+="[^"]+"\s+)*?
- class="yt-uix-redirect-link"\s*>
- [^<]+
- </a>
- ''', r'\1', video_description)
- video_description = clean_html(video_description)
- else:
- fd_mobj = re.search(r'<meta name="description" content="([^"]+)"', video_webpage)
- if fd_mobj:
- video_description = unescapeHTML(fd_mobj.group(1))
- else:
- video_description = ''
-
def _extract_count(count_name):
return str_to_int(self._search_regex(
r'-%s-button[^>]+><span[^>]+class="yt-uix-button-content"[^>]*>([\d,]+)</span>'
| Adds support for downloading multifeed videos (from multiple cameras) as a playlist (e.g. https://www.youtube.com/watch?v=jqWvoWXjCVs)
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/6360 | 2015-07-25T15:37:49Z | 2015-07-29T19:57:05Z | 2015-07-29T19:57:05Z | 2015-07-29T19:57:32Z | 1,880 | ytdl-org/youtube-dl | 50,500 |
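The core of the extractor change is parsing `multifeed_metadata_list`, a URL-encoded, comma-separated list of query strings; a standalone sketch of that step using fabricated feed data:

```python
from urllib.parse import parse_qs, unquote_plus

def parse_multifeed(metadata_list):
    """Split one multifeed_metadata_list value into (video_id, title)
    pairs, one per camera feed."""
    entries = []
    # Percent-decode the whole value first, then split feeds on ','.
    for feed in unquote_plus(metadata_list).split(','):
        feed_data = parse_qs(feed)
        entries.append((feed_data['id'][0], feed_data['title'][0]))
    return entries

# Fabricated value resembling what the extractor receives.
raw = ('id%3DjqWvoWXjCVs%26title%3DMain+Camera,'
       'id%3D6h8e8xoXJzg%26title%3Dkreestuh')
print(parse_multifeed(raw))
# [('jqWvoWXjCVs', 'Main Camera'), ('6h8e8xoXJzg', 'kreestuh')]
```

Each resulting id is then turned into a `url_transparent` playlist entry, which is why the downloaded playlist contains one video per camera.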
[dplay] Migrate DiscoveryPlusItaly to DiscoveryPlus | diff --git a/yt_dlp/extractor/dplay.py b/yt_dlp/extractor/dplay.py
index e1f5e9dc86a..6a245c1f0d8 100644
--- a/yt_dlp/extractor/dplay.py
+++ b/yt_dlp/extractor/dplay.py
@@ -575,16 +575,19 @@ def _real_extract(self, url):
return self.playlist_result(self._entries(show_name), playlist_id=show_name)
-class DiscoveryPlusItalyIE(InfoExtractor):
+class DiscoveryPlusItalyIE(DiscoveryPlusIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.com/it/video' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://www.discoveryplus.com/it/video/i-signori-della-neve/stagione-2-episodio-1-i-preparativi',
'only_matching': True,
}]
+ _API_URL = 'eu1-prod-direct.discoveryplus.com'
+
def _real_extract(self, url):
- video_id = self._match_id(url)
- return self.url_result(f'https://discoveryplus.it/video/{video_id}', DPlayIE.ie_key(), video_id)
+ display_id = self._match_id(url)
+ return self._get_disco_api_info(
+ url, display_id, self._API_URL, 'dplay', 'it')
class DiscoveryPlusItalyShowIE(DiscoveryPlusShowBaseIE):
| ### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
DiscoveryPlus in Italy migrated to the standard discoveryplus.com, so I migrated the DiscoveryPlusItaly extractor as a subclass of DiscoveryPlus.
I also removed the old DiscoveryPlusItalyShow extractor, which no longer works.
This should fix #2138 | https://api.github.com/repos/yt-dlp/yt-dlp/pulls/2315 | 2022-01-12T18:07:46Z | 2022-01-13T17:11:27Z | 2022-01-13T17:11:27Z | 2022-01-13T17:11:27Z | 341 | yt-dlp/yt-dlp | 7,530 |
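The change above turns `DiscoveryPlusItalyIE` from a thin `url_result` redirect into a subclass that reuses the parent's extraction logic with its own API host. A toy, framework-free sketch of that refactoring pattern (class names and the default host below are illustrative, not yt-dlp's actual API):

```python
class BaseExtractor:
    _API_URL = "us1-prod-direct.example.com"  # hypothetical default host

    def get_info(self, display_id):
        # Shared extraction logic; only class-level settings vary per region.
        return {"id": display_id, "api": self._API_URL}

class ItalyExtractor(BaseExtractor):
    # Override just the host and inherit everything else (the PR's approach,
    # instead of redirecting the URL to a separate extractor).
    _API_URL = "eu1-prod-direct.example.com"

info = ItalyExtractor().get_info("stagione-2-episodio-1")
print(info["api"])  # eu1-prod-direct.example.com
```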
[hotfix] Improve tester precision by removing ZeRO on vanilla lamb | diff --git a/colossalai/shardformer/modeling/gpt2.py b/colossalai/shardformer/modeling/gpt2.py
index 407338b162df..e3bf4b782f29 100644
--- a/colossalai/shardformer/modeling/gpt2.py
+++ b/colossalai/shardformer/modeling/gpt2.py
@@ -1084,7 +1084,6 @@ def forward(
shift_logits, shift_labels, process_group=shard_config.tensor_parallel_process_group
)
-
if not shard_config.parallel_output:
lm_logits = gather_forward_split_backward(lm_logits, -1, shard_config.tensor_parallel_process_group)
diff --git a/colossalai/shardformer/policies/gpt2.py b/colossalai/shardformer/policies/gpt2.py
index 6a50d65ba1e6..4bb6c8225970 100644
--- a/colossalai/shardformer/policies/gpt2.py
+++ b/colossalai/shardformer/policies/gpt2.py
@@ -269,13 +269,17 @@ def module_policy(self):
GPT2LMHeadModel: ModulePolicyDescription(
sub_module_replacement=[
SubModuleReplacementDescription(
- suffix="lm_head", target_module=col_nn.Linear1D_Col, kwargs={"gather_output": not self.shard_config.parallel_output}
+ suffix="lm_head",
+ target_module=col_nn.Linear1D_Col,
+ kwargs={"gather_output": not self.shard_config.parallel_output},
)
],
)
}
if self.shard_config.parallel_output:
- addon_module[GPT2LMHeadModel].method_replacement={"forward": get_lm_forward_with_dist_cross_entropy(self.shard_config)}
+ addon_module[GPT2LMHeadModel].method_replacement = {
+ "forward": get_lm_forward_with_dist_cross_entropy(self.shard_config)
+ }
module_policy.update(addon_module)
if self.pipeline_stage_manager is not None:
diff --git a/colossalai/shardformer/policies/llama.py b/colossalai/shardformer/policies/llama.py
index 4c454ac7f2cf..bcc825104f1d 100644
--- a/colossalai/shardformer/policies/llama.py
+++ b/colossalai/shardformer/policies/llama.py
@@ -255,12 +255,18 @@ def module_policy(self):
new_item = {
LlamaForCausalLM: ModulePolicyDescription(
sub_module_replacement=[
- SubModuleReplacementDescription(suffix="lm_head", target_module=Linear1D_Col, kwargs={"gather_output": not self.shard_config.parallel_output})
+ SubModuleReplacementDescription(
+ suffix="lm_head",
+ target_module=Linear1D_Col,
+ kwargs={"gather_output": not self.shard_config.parallel_output},
+ )
],
)
}
if self.shard_config.parallel_output:
- new_item[LlamaForCausalLM].method_replacement={"forward": get_lm_forward_with_dist_cross_entropy(self.shard_config)}
+ new_item[LlamaForCausalLM].method_replacement = {
+ "forward": get_lm_forward_with_dist_cross_entropy(self.shard_config)
+ }
policy.update(new_item)
if self.pipeline_stage_manager:
diff --git a/examples/images/vit/vit_benchmark.py b/examples/images/vit/vit_benchmark.py
index 32b1ec803aec..fdae9ee01537 100644
--- a/examples/images/vit/vit_benchmark.py
+++ b/examples/images/vit/vit_benchmark.py
@@ -119,9 +119,7 @@ def criterion(outputs, inputs):
if hasattr(booster.plugin, "stage_manager") and booster.plugin.stage_manager is not None:
# run pipeline forward backward
batch = iter([batch])
- outputs = booster.execute_pipeline(
- batch, model, criterion, optimizer, return_loss=True
- )
+ outputs = booster.execute_pipeline(batch, model, criterion, optimizer, return_loss=True)
else:
outputs = model(**batch)
loss = criterion(outputs, None)
diff --git a/examples/language/llama2/finetune.py b/examples/language/llama2/finetune.py
index 122186c30a58..69b4ebe42bf7 100644
--- a/examples/language/llama2/finetune.py
+++ b/examples/language/llama2/finetune.py
@@ -270,9 +270,7 @@ def main():
) as pbar:
for step in pbar:
if use_pipeline:
- outputs = booster.execute_pipeline(
- dataloader_iter, model, _criterion, optimizer, return_loss=True
- )
+ outputs = booster.execute_pipeline(dataloader_iter, model, _criterion, optimizer, return_loss=True)
loss = outputs["loss"]
else:
batch = next(dataloader_iter)
diff --git a/examples/language/llama2/pretrain.py b/examples/language/llama2/pretrain.py
index 7b5805b801a8..970cd5290f9f 100644
--- a/examples/language/llama2/pretrain.py
+++ b/examples/language/llama2/pretrain.py
@@ -285,9 +285,7 @@ def main():
) as pbar:
for step in pbar:
if use_pipeline:
- outputs = booster.execute_pipeline(
- dataloader_iter, model, _criterion, optimizer, return_loss=True
- )
+ outputs = booster.execute_pipeline(dataloader_iter, model, _criterion, optimizer, return_loss=True)
loss = outputs["loss"]
else:
batch = next(dataloader_iter)
diff --git a/examples/language/opt/opt_train_demo.py b/examples/language/opt/opt_train_demo.py
index 82dff1920fde..05336bec42c5 100644
--- a/examples/language/opt/opt_train_demo.py
+++ b/examples/language/opt/opt_train_demo.py
@@ -41,9 +41,7 @@ def train_epoch(epoch, model, optimizer, _criterion, lr_scheduler, dataloader, b
# Forward pass
for _ in pbar:
if use_pipeline:
- outputs = booster.execute_pipeline(
- dataloader, model, _criterion, optimizer, return_loss=True
- )
+ outputs = booster.execute_pipeline(dataloader, model, _criterion, optimizer, return_loss=True)
# Backward and optimize
if is_pp_last_stage:
loss = outputs["loss"]
diff --git a/tests/test_checkpoint_io/test_hybrid_parallel_plugin_checkpoint_io.py b/tests/test_checkpoint_io/test_hybrid_parallel_plugin_checkpoint_io.py
index 557666a804e3..d8a625b98a66 100644
--- a/tests/test_checkpoint_io/test_hybrid_parallel_plugin_checkpoint_io.py
+++ b/tests/test_checkpoint_io/test_hybrid_parallel_plugin_checkpoint_io.py
@@ -74,9 +74,7 @@ def _preprocess_data(data):
data = data_gen_fn()
model.train()
if booster.plugin.stage_manager is not None:
- booster.execute_pipeline(
- _preprocess_data(data), model, _criterion, optimizer, return_loss=True
- )
+ booster.execute_pipeline(_preprocess_data(data), model, _criterion, optimizer, return_loss=True)
else:
output = model(**_preprocess_data(data))
loss = criterion(output)
@@ -108,9 +106,7 @@ def _preprocess_data(data):
data_for_shard = data_gen_fn()
data_for_origin = data_gen_fn()
if booster.plugin.stage_manager is not None:
- booster.execute_pipeline(
- _preprocess_data(data_for_shard), model, _criterion, optimizer, return_loss=True
- )
+ booster.execute_pipeline(_preprocess_data(data_for_shard), model, _criterion, optimizer, return_loss=True)
booster.execute_pipeline(
_preprocess_data(data_for_origin),
new_model,
diff --git a/tests/test_optimizer/test_dist_lamb.py b/tests/test_optimizer/test_dist_lamb.py
index 9c3ed3e92813..93e9f56cee57 100644
--- a/tests/test_optimizer/test_dist_lamb.py
+++ b/tests/test_optimizer/test_dist_lamb.py
@@ -162,11 +162,11 @@ def run_dist_lamb_basic(
)
optim.setup_distributed(tp_group)
- rtol, atol = 8e-5, 8e-5
+ rtol, atol = 8e-7, 8e-7
if p_dtype is torch.float16 or g_dtype is torch.float16:
- rtol, atol = 2e-4, 2e-4
+ rtol, atol = 1e-6, 1e-6
if p_dtype is torch.bfloat16 or g_dtype is torch.bfloat16:
- rtol, atol = 4e-4, 4e-4
+ rtol, atol = 2e-6, 2e-6
for i in range(_N_STEP):
seed_all(_SEED + i) # NOTE: having only one manual_seed above doesn't work?
@@ -241,23 +241,15 @@ def run_dist_lamb_fwd_bwd(
verbose=True,
)
shard_to_param = optim._param_store.master_to_working_param
- torch_optim = LowLevelZeroOptimizer(
- torch_optim,
- overlap_communication=True,
- initial_scale=128,
- partition_grad=True,
- dp_process_group=dp_group,
- verbose=True,
- )
- optim.optim.setup_distributed(tp_group, dp_group, shard_to_param)
+ optim.optim.setup_distributed(tp_group, dp_group, shard_to_param, is_zero=True)
else:
optim.setup_distributed(tp_group)
- rtol, atol = 3e-5, 3e-5
+ rtol, atol = 8e-7, 8e-7
if p_dtype is torch.float16 or g_dtype is torch.float16:
- rtol, atol = 1e-4, 1e-4
+ rtol, atol = 1e-6, 1e-6
if p_dtype is torch.bfloat16 or g_dtype is torch.bfloat16:
- rtol, atol = 2e-4, 2e-4
+ rtol, atol = 2e-6, 2e-6
seed_all(_SEED) # NOTE: having only one manual_seed above doesn't work?
x = data_gen()
@@ -275,13 +267,14 @@ def run_dist_lamb_fwd_bwd(
if zero_size > 1:
optim.backward(out_tp.sum())
- torch_optim.backward(out.sum())
+ out.sum().backward()
else:
out_tp.sum().backward()
out.sum().backward()
torch_optim.step()
optim.step()
+ dist.barrier()
torch_optim.zero_grad()
optim.zero_grad()
try:
diff --git a/tests/test_optimizer/test_nvme.py b/tests/test_optimizer/test_nvme.py
index 3315b3256d02..603b7b6fa325 100644
--- a/tests/test_optimizer/test_nvme.py
+++ b/tests/test_optimizer/test_nvme.py
@@ -1,5 +1,5 @@
-import torch
import pytest
+import torch
from colossalai.nn.optimizer import CPUAdam, HybridAdam
from colossalai.testing import clear_cache_before_run, parameterize
@@ -17,6 +17,7 @@ def check_params_equal(model, torch_model):
for p, torch_p in zip(model.parameters(), torch_model.parameters()):
assert torch.allclose(p, torch_p, atol=1e-3), f"diff: {torch.abs(p - torch_p)}"
+
# TODO Something wrong with ci when running this test.
@pytest.mark.skip(reason="skip because of something wrong with CI")
@clear_cache_before_run()
diff --git a/tests/test_pipeline/test_schedule/test_interleaved.py b/tests/test_pipeline/test_schedule/test_interleaved.py
index 7aa4640553ca..f8820688e610 100644
--- a/tests/test_pipeline/test_schedule/test_interleaved.py
+++ b/tests/test_pipeline/test_schedule/test_interleaved.py
@@ -103,9 +103,7 @@ def criterion(x, *args, **kwargs):
torch_loss = criterion(torch_output)
torch_loss.backward()
- pp_ret = schedule.forward_backward_step(
- sharded_model, iter(input_list), criterion, pp_optimizer, return_loss=True
- )
+ pp_ret = schedule.forward_backward_step(sharded_model, iter(input_list), criterion, pp_optimizer, return_loss=True)
# check loss
if stage_manager.is_last_stage(ignore_chunk=True):
diff --git a/tests/test_pipeline/test_schedule/test_oneF_oneB.py b/tests/test_pipeline/test_schedule/test_oneF_oneB.py
index e1a679890c8d..590800780ab4 100644
--- a/tests/test_pipeline/test_schedule/test_oneF_oneB.py
+++ b/tests/test_pipeline/test_schedule/test_oneF_oneB.py
@@ -99,9 +99,7 @@ def custom_fwd(self, x):
torch_output = torch_model(input_list[0])
torch_loss = criterion(torch_output)
torch_loss.backward()
- pp_ret = schedule.forward_backward_step(
- sharded_model, iter(input_list), criterion, pp_optimizer, return_loss=True
- )
+ pp_ret = schedule.forward_backward_step(sharded_model, iter(input_list), criterion, pp_optimizer, return_loss=True)
# check loss
if stage_manager.is_last_stage():
| ## 📌 Checklist before creating the PR
- [ ] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
- [ ] I have installed pre-commit: `pip install pre-commit && pre-commit install`
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [ ] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/5576 | 2024-04-09T10:34:45Z | 2024-04-10T03:37:24Z | 2024-04-10T03:37:24Z | 2024-04-10T03:37:30Z | 3,023 | hpcaitech/ColossalAI | 11,235 |
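The tolerance changes in `test_dist_lamb.py` above tighten `rtol`/`atol` per dtype. Those values feed the standard closeness criterion `|a - b| <= atol + rtol * |b|` used by `torch.allclose` and `numpy.allclose`; a minimal pure-Python sketch of that check (illustrative, torch-free):

```python
def allclose(a, b, rtol=8e-7, atol=8e-7):
    # Standard elementwise closeness test: |x - y| <= atol + rtol * |y|
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))

ref = [1.0, -0.5, 2.0]
got = [1.0 + 5e-7, -0.5, 2.0 - 1e-6]

print(allclose(got, ref))              # True: within the fp32 budget above
print(allclose(got, ref, 1e-8, 1e-8))  # False: fails a much tighter budget
```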
Update withdrawal fees and API link | diff --git a/js/bitfinex2.js b/js/bitfinex2.js
index 314ef6c15249..785d1a8958a0 100644
--- a/js/bitfinex2.js
+++ b/js/bitfinex2.js
@@ -54,7 +54,7 @@ module.exports = class bitfinex2 extends bitfinex {
'api': 'https://api.bitfinex.com',
'www': 'https://www.bitfinex.com',
'doc': [
- 'https://bitfinex.readme.io/v2/docs',
+ 'https://docs.bitfinex.com/v2/docs/',
'https://github.com/bitfinexcom/bitfinex-api-node',
],
'fees': 'https://www.bitfinex.com/fees',
@@ -134,12 +134,12 @@ module.exports = class bitfinex2 extends bitfinex {
},
'funding': {
'withdraw': {
- 'BTC': 0.0005,
- 'BCH': 0.0005,
- 'ETH': 0.01,
- 'EOS': 0.1,
+ 'BTC': 0.0004,
+ 'BCH': 0.0001,
+ 'ETH': 0.00135,
+ 'EOS': 0.0,
'LTC': 0.001,
- 'OMG': 0.1,
+ 'OMG': 0.15097,
'IOT': 0.0,
'NEO': 0.0,
'ETC': 0.01,
@@ -148,19 +148,19 @@ module.exports = class bitfinex2 extends bitfinex {
'ZEC': 0.001,
'BTG': 0.0,
'DASH': 0.01,
- 'XMR': 0.04,
+ 'XMR': 0.0001,
'QTM': 0.01,
- 'EDO': 0.5,
- 'DAT': 1.0,
- 'AVT': 0.5,
- 'SAN': 0.1,
+ 'EDO': 0.23687,
+ 'DAT': 9.8858,
+ 'AVT': 1.1251,
+ 'SAN': 0.35977,
'USDT': 5.0,
- 'SPK': 9.2784,
- 'BAT': 9.0883,
- 'GNT': 8.2881,
- 'SNT': 14.303,
- 'QASH': 3.2428,
- 'YYW': 18.055,
+ 'SPK': 16.971,
+ 'BAT': 1.1209,
+ 'GNT': 2.8789,
+ 'SNT': 9.0848,
+ 'QASH': 1.726,
+ 'YYW': 7.9464,
},
},
},
@@ -179,7 +179,7 @@ module.exports = class bitfinex2 extends bitfinex {
'STOP LIMIT': undefined,
'EXCHANGE STOP LIMIT': 'limit stop',
'IOC': undefined,
- 'EXCHANGE IOC': 'limit ico',
+ 'EXCHANGE IOC': 'limit ioc',
},
'fiat': {
'USD': 'USD',
 | Also, line 182 may contain a typo, but I'm not sure; maybe it's not a typo and changing it could break something:
from:
`'EXCHANGE IOC': 'limit ico',`
to:
` 'EXCHANGE IOC': 'limit ioc',` | https://api.github.com/repos/ccxt/ccxt/pulls/4783 | 2019-03-04T14:58:04Z | 2019-03-04T15:53:58Z | 2019-03-04T15:53:58Z | 2019-03-04T16:14:24Z | 776 | ccxt/ccxt | 13,145 |
Bump numpy from 1.23.5 to 1.24.0 | diff --git a/Hand-Motion-Detection/requirements.txt b/Hand-Motion-Detection/requirements.txt
index d57a0682f9..ba4ba8692a 100644
--- a/Hand-Motion-Detection/requirements.txt
+++ b/Hand-Motion-Detection/requirements.txt
@@ -1,3 +1,3 @@
-numpy==1.23.5
+numpy==1.24.0
opencv_python==4.6.0.66
mediapipe==0.9.0.1
| Bumps [numpy](https://github.com/numpy/numpy) from 1.23.5 to 1.24.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p>
<blockquote>
<h2>v1.24.0</h2>
<h1>NumPy 1.24 Release Notes</h1>
<p>The NumPy 1.24.0 release continues the ongoing work to improve the
handling and promotion of dtypes, increase the execution speed, and
clarify the documentation. There are also a large number of new and
expired deprecations due to changes in promotion and cleanups. This
might be called a deprecation release. Highlights are</p>
<ul>
<li>Many new deprecations, check them out.</li>
<li>Many expired deprecations,</li>
<li>New F2PY features and fixes.</li>
<li>New "dtype" and "casting" keywords for stacking functions.</li>
</ul>
<p>See below for the details,</p>
<p>This release supports Python versions 3.8-3.11.</p>
<h2>Deprecations</h2>
<h3>Deprecate fastCopyAndTranspose and PyArray_CopyAndTranspose</h3>
<p>The <code>numpy.fastCopyAndTranspose</code> function has been deprecated. Use the
corresponding copy and transpose methods directly:</p>
<pre><code>arr.T.copy()
</code></pre>
<p>The underlying C function <code>PyArray_CopyAndTranspose</code> has also been
deprecated from the NumPy C-API.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22313">gh-22313</a>)</p>
<h3>Conversion of out-of-bound Python integers</h3>
<p>Attempting a conversion from a Python integer to a NumPy value will now
always check whether the result can be represented by NumPy. This means
the following examples will fail in the future and give a
<code>DeprecationWarning</code> now:</p>
<pre><code>np.uint8(-1)
np.array([3000], dtype=np.int8)
</code></pre>
<p>Many of these did succeed before. Such code was mainly useful for
unsigned integers with negative values such as <code>np.uint8(-1)</code> giving
<code>np.iinfo(np.uint8).max</code>.</p>
<p>Note that conversion between NumPy integers is unaffected, so that
<code>np.array(-1).astype(np.uint8)</code> continues to work and use C integer
overflow logic. For negative values, it will also work to view the
array: <code>np.array(-1, dtype=np.int8).view(np.uint8)</code>. In some cases,</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
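The release note above says out-of-bound Python integer conversions such as `np.uint8(-1)` now raise a `DeprecationWarning`, while viewing or casting keeps C-style wrap-around (so `-1` becomes `255`). That wrap-around is plain modular arithmetic; a numpy-free illustration:

```python
def wrap_uint8(v):
    # Modular reduction into the unsigned 8-bit range, mirroring the
    # wrap-around that np.array(-1, dtype=np.int8).view(np.uint8) yields.
    return v % 256

print(wrap_uint8(-1))    # 255
print(wrap_uint8(3000))  # 184  (3000 mod 256)
```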
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/numpy/numpy/commit/8cec82012694571156e8d7696307c848a7603b4e"><code>8cec820</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22813">#22813</a> from charris/prepare-1.24.0-release</li>
<li><a href="https://github.com/numpy/numpy/commit/8d33e689cfa2a6153d716b76f6c3f8d6ae06b4b2"><code>8d33e68</code></a> REL: Prepare for the NumPy 1.24.0 release.</li>
<li><a href="https://github.com/numpy/numpy/commit/5ac09da28abacd6f9eed4971b8e7eb6d0b6334a4"><code>5ac09da</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22815">#22815</a> from charris/backport-22814</li>
<li><a href="https://github.com/numpy/numpy/commit/df2d26fda121e5204771dc143d1d96d0d0107f71"><code>df2d26f</code></a> BLD: use newer version of delocate</li>
<li><a href="https://github.com/numpy/numpy/commit/e18104ecc14f734076d0e24f25766e9f15792c7c"><code>e18104e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22805">#22805</a> from charris/backport-22804</li>
<li><a href="https://github.com/numpy/numpy/commit/6d444245087fa0166749d8dbc780b5e82b98e6f4"><code>6d44424</code></a> REV: revert change to <code>numpyconfig.h</code> for sizeof(type) hardcoding on macOS</li>
<li><a href="https://github.com/numpy/numpy/commit/c48459319ac837b6539d4f4f16de3de711d24672"><code>c484593</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22795">#22795</a> from charris/backport-22791</li>
<li><a href="https://github.com/numpy/numpy/commit/0904c01d3f608f22a27327adf29a06615184be45"><code>0904c01</code></a> Change argument to npy_floatstatus_..._barrier() functions to ensure it</li>
<li><a href="https://github.com/numpy/numpy/commit/34653f92760b8dad7c5db9a176b5325cc8074924"><code>34653f9</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22793">#22793</a> from charris/backport-22789</li>
<li><a href="https://github.com/numpy/numpy/commit/21f7096ed9d274f95a9611f374227587f2fce377"><code>21f7096</code></a> BUG: Fix infinite recursion in longdouble/large integer scalar ops</li>
<li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.23.5...v1.24.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/geekcomputers/Python/pulls/1808 | 2022-12-19T18:14:57Z | 2022-12-23T19:15:07Z | 2022-12-23T19:15:07Z | 2022-12-23T19:15:15Z | 120 | geekcomputers/Python | 31,187 |
Add chatgpt-discord-bot to related projects | diff --git a/README.md b/README.md
index 9f99a2e4dc..555c86ea74 100644
--- a/README.md
+++ b/README.md
@@ -363,6 +363,13 @@ set G4F_PROXY=http://host:port
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
+<tr>
+ <tr>
+ <td><a href="https://github.com/Zero6992/chatGPT-discord-bot"><b>chatGPT-discord-bot</b></a></td>
+ <td><a href="https://github.com/Zero6992/chatGPT-discord-bot/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/Zero6992/chatGPT-discord-bot?style=flat-square&labelColor=343b41"/></a></td>
+ <td><a href="https://github.com/Zero6992/chatGPT-discord-bot/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/Zero6992/chatGPT-discord-bot?style=flat-square&labelColor=343b41"/></a></td>
+ <td><a href="https://github.com/Zero6992/chatGPT-discord-bot/issues"><img alt="Issues" src="https://img.shields.io/github/issues/Zero6992/chatGPT-discord-bot?style=flat-square&labelColor=343b41"/></a></td>
+ <td><a href="https://github.com/Zero6992/chatGPT-discord-bot/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/Zero6992/chatGPT-discord-bot?style=flat-square&labelColor=343b41"/></a></td>
<tr>
<td><a href="https://github.com/SamirXR/Nyx-Bot"><b>Nyx-Bot (Discord)</b></a></td>
<td><a href="https://github.com/SamirXR/Nyx-Bot/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/SamirXR/Nyx-Bot?style=flat-square&labelColor=343b41"/></a></td>
 | add [chatgpt-discord-bot](https://github.com/Zero6992/chatGPT-discord-bot) to related projects | https://api.github.com/repos/xtekky/gpt4free/pulls/1680 | 2024-03-12T04:57:54Z | 2024-03-12T08:08:19Z | 2024-03-12T08:08:19Z | 2024-03-12T08:10:55Z | 726 | xtekky/gpt4free | 38,146 |
Fixed URL for wordlist.tgz in image_ocr.py | diff --git a/examples/image_ocr.py b/examples/image_ocr.py
index 15d20112894..04d13ee1437 100644
--- a/examples/image_ocr.py
+++ b/examples/image_ocr.py
@@ -416,7 +416,7 @@ def train(run_name, start_epoch, stop_epoch, img_w):
input_shape = (img_w, img_h, 1)
fdir = os.path.dirname(get_file('wordlists.tgz',
- origin='http://www.isosemi.com/datasets/wordlists.tgz', untar=True))
+ origin='http://www.mythic-ai.com/datasets/wordlists.tgz', untar=True))
img_gen = TextImageGenerator(monogram_file=os.path.join(fdir, 'wordlist_mono_clean.txt'),
bigram_file=os.path.join(fdir, 'wordlist_bi_clean.txt'),
| https://api.github.com/repos/keras-team/keras/pulls/6136 | 2017-04-04T05:08:25Z | 2017-04-04T06:55:19Z | 2017-04-04T06:55:19Z | 2017-04-04T06:55:19Z | 193 | keras-team/keras | 47,415 | |
Potential improvements to password generator and battleship | diff --git a/.gitignore b/.gitignore
index 9753953afa..412d32f9c1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -16,6 +16,7 @@ build/
.env
.env.sh
venv/
+.venv/
ENV/
# IDE-specific files
diff --git a/gpt_engineer/API/api.py b/gpt_engineer/API/api.py
index b2a1d41af0..be6b5ba0a0 100644
--- a/gpt_engineer/API/api.py
+++ b/gpt_engineer/API/api.py
@@ -10,8 +10,8 @@
import pathlib
from typing import Callable, Optional, Coroutine, Any
-from gpt_engineer.db import DB
-from gpt_engineer.main import main
+from gpt_engineer.core.db import DB
+from gpt_engineer.cli.main import main
from gpt_engineer.API.agent import base_router, Agent
from gpt_engineer.API.db import NotFoundException, not_found_exception_handler
diff --git a/gpt_engineer/cli/file_selector.py b/gpt_engineer/cli/file_selector.py
index b4f7997c09..02860b1820 100644
--- a/gpt_engineer/cli/file_selector.py
+++ b/gpt_engineer/cli/file_selector.py
@@ -50,6 +50,7 @@
from gpt_engineer.core.db import DB, DBs
IGNORE_FOLDERS = {"site-packages", "node_modules", "venv"}
+REFERENCE_FILE_LIST_NAME = "file_to_reference_list.txt"
FILE_LIST_NAME = "file_list.txt"
@@ -321,6 +322,57 @@ def is_in_ignoring_extensions(path: Path) -> bool:
return is_hidden and is_pycache
+def scan_for_reference_files(metadata_db: DB, workspace_db: DB) -> List[str]:
+ """
+ Scans the root directory for reference files and updates the file list in project metadata.
+
+ This function scans the root directory of the workspace for files referenced in the code
+ and updates the file list in the project metadata database. It ensures that the list of
+ reference files is up to date.
+
+ Parameters:
+ - metadata_db (DB): The project metadata database where the reference file list is stored.
+ - workspace_db (DB): The workspace database representing the root directory.
+
+ Returns:
+ - List[str]: A list of file paths found in the workspace.
+ """
+
+ root_directory = workspace_db.path
+ existing_files = []
+
+ # Files to ignore
+ ignore_files = ["run.sh", "README.md", "pre-execution-files.txt", "prompt"]
+
+ # Directories to ignore
+ ignore_directories = ["env", "venv", "preprompts"]
+
+ # Walk through the root directory and find files referenced in the code
+ for dirpath, dirnames, filenames in os.walk(root_directory):
+ # Ignore directories starting with "." and specified directories
+ dirnames[:] = [
+ d
+ for d in dirnames
+ if not d.startswith(".")
+ and not d.startswith("__")
+ and d not in ignore_directories
+ ]
+
+ for filename in filenames:
+ file_path = os.path.join(dirpath, filename)
+
+ # Check if the file should be ignored
+ if filename not in ignore_files:
+ if file_path not in existing_files:
+ existing_files.append(file_path)
+
+ # Update the referenced file list in the project metadata
+ metadata_db[REFERENCE_FILE_LIST_NAME] = "\n".join(
+ str(file_path) for file_path in existing_files
+ )
+ return existing_files
+
+
def ask_for_files(metadata_db: DB, workspace_db: DB) -> None:
"""
Ask user to select files to improve.
diff --git a/gpt_engineer/core/chat_to_files.py b/gpt_engineer/core/chat_to_files.py
index 2a166fabac..0ac32aad44 100644
--- a/gpt_engineer/core/chat_to_files.py
+++ b/gpt_engineer/core/chat_to_files.py
@@ -137,7 +137,9 @@ def overwrite_files(chat: str, dbs: DBs) -> None:
dbs.workspace[file_name] = file_content
-def get_code_strings(workspace: DB, metadata_db: DB) -> dict[str, str]:
+def get_code_strings(
+ workspace: DB, metadata_db: DB, file_list_name: str = FILE_LIST_NAME
+) -> dict[str, str]:
"""
Read file_list.txt and return file names and their content.
@@ -159,7 +161,7 @@ def get_all_files_in_dir(directory):
for dir in dirs:
yield from get_all_files_in_dir(os.path.join(root, dir))
- files_paths = metadata_db[FILE_LIST_NAME].strip().split("\n")
+ files_paths = metadata_db[file_list_name].strip().split("\n")
files = []
for full_file_path in files_paths:
diff --git a/gpt_engineer/core/steps.py b/gpt_engineer/core/steps.py
index ea39b8ce4d..d5c7dbdb88 100644
--- a/gpt_engineer/core/steps.py
+++ b/gpt_engineer/core/steps.py
@@ -64,7 +64,12 @@
to_files_and_memory,
)
from gpt_engineer.core.db import DBs
-from gpt_engineer.cli.file_selector import FILE_LIST_NAME, ask_for_files
+from gpt_engineer.cli.file_selector import (
+ REFERENCE_FILE_LIST_NAME,
+ FILE_LIST_NAME,
+ ask_for_files,
+ scan_for_reference_files,
+)
from gpt_engineer.cli.learning import human_review_input
# Type hint for chat messages
@@ -180,7 +185,14 @@ def simple_gen(ai: AI, dbs: DBs) -> List[Message]:
The function assumes the `ai.start` method and the `to_files` utility are correctly
set up and functional. Ensure these prerequisites are in place before invoking `simple_gen`.
"""
- messages = ai.start(setup_sys_prompt(dbs), dbs.input["prompt"], step_name=curr_fn())
+
+ # use an enhanced prompt
+ if "enhanced_prompt" in dbs.memory:
+ input_prompt = dbs.memory["enhanced_prompt"]
+ else:
+ input_prompt = dbs.input["prompt"]
+
+ messages = ai.start(setup_sys_prompt(dbs), input_prompt, step_name=curr_fn())
to_files_and_memory(messages[-1].content.strip(), dbs)
return messages
@@ -634,6 +646,80 @@ def human_review(ai: AI, dbs: DBs):
return []
+def enhance_prompt_add_reference_files(ai: AI, dbs: DBs):
+ """
+ Scans the root directory for existing files referenced in the generated code.
+
+ This function scans the root directory for any files that may already exist and
+ are referenced in the code generated for the input prompt. It then updates the
+ file list in the database to include these files.
+
+ Parameters:
+ - dbs (DBs): An instance containing the database configurations and project metadata.
+ The function will update the file list in the project metadata.
+
+ Returns:
+ - list: Returns an empty list, indicating that there's no subsequent interaction with the LLM.
+ """
+ reference_files = scan_for_reference_files(dbs.project_metadata, dbs.workspace)
+
+ files_info = get_code_strings(
+ dbs.workspace, dbs.project_metadata, REFERENCE_FILE_LIST_NAME
+ ) # this has file names relative to the workspace path
+
+ enhanced_prompt = (
+ dbs.input["prompt"]
+ + "\n Here is a list of all the existing files present in the root directory your code will be added to: \n"
+ )
+
+ # Add files as input
+ for file_name, file_str in files_info.items():
+ enhanced_prompt += format_file_to_input(file_name, file_str)
+
+ dbs.memory["enhanced_prompt"] = enhanced_prompt
+
+ return []
+
+
+def enhance_prompt_add_strict_requirements(ai: AI, dbs: DBs) -> List[Message]:
+ """
+ Enhances the promp by adding a set of strict functional requirements aimed
+ at helping it pass tests written against the outputted code.
+
+ This function takes a user-provided prompt and asks the AI model to generate
+ a set of strict functional requirements for the described scenario or system.
+ The AI's response is appended to the original prompt.
+
+ Parameters:
+ - ai (AI): An instance of the AI model.
+ - dbs (DBs): An instance containing the database configurations and user prompts.
+
+ Returns:
+ - List[Message]: A list of message objects encapsulating the AI's generated output.
+
+ Note:
+ - The function assumes the `ai.start` method is correctly set up and functional.
+ Ensure these prerequisites before invoking `convert_to_strict_requirements`.
+ """
+ system_prompt = "Your being shown a prompt which will be passed to an LLM to make it generate code. \
+ The LLMs response to the prompt is being tested to see how it performs. \
+ Every aspect of the prompt will have a corresponding test applied to the LLMs output. \
+ With this in mind, generate a set of strict functional requirements which can be appended to the prompt to improve the LLMs performance. \
+ If some aspect of the prompt seems vague and colloquial e.g. the program 'should' do this or that - Interpret these vague requirements as strict requirements e.g. the program 'must' do this or that. \
+ Output requirements which ensure no reasonable test written against this prompt would fail."
+
+ user_prompt = dbs.input["prompt"]
+ messages = ai.start(system_prompt, user_prompt, step_name=curr_fn())
+
+ dbs.memory["enhanced_prompt"] = (
+ dbs.input["prompt"]
+ + "\n Here are a set of strict functional requirements to consider when completing this task: \n"
+ + messages[-1].content.strip()
+ )
+
+ return messages
+
+
class Config(str, Enum):
"""
Enumeration representing different configuration modes for the code processing system.
@@ -669,6 +755,8 @@ class Config(str, Enum):
STEPS = {
Config.DEFAULT: [
+ # enhance_prompt_add_strict_requirements,
+ # enhance_prompt_add_reference_files,
simple_gen,
gen_entrypoint,
execute_entrypoint,
@@ -689,6 +777,8 @@ class Config(str, Enum):
gen_entrypoint,
],
Config.SIMPLE: [
+ # enhance_prompt_add_strict_requirements, This seems to add some minor improvements for the password generator but given the exta call the the LLM adds a lot of time its not worth it.
+ # enhance_prompt_add_reference_files, This seems to add a fairly major improvement to the battleships test - but it breaks every other test
simple_gen,
gen_entrypoint,
execute_entrypoint,
diff --git a/gpt_engineer/preprompts/file_format b/gpt_engineer/preprompts/file_format
index 45a611855a..e7c5711762 100644
--- a/gpt_engineer/preprompts/file_format
+++ b/gpt_engineer/preprompts/file_format
@@ -18,3 +18,5 @@ print("Hello World")
```
Do not comment on what every file does. Please note that the code should be fully functional. No placeholders.
+
+Files provided to you in the prompt may also be provided in this format.
diff --git a/gpt_engineer/preprompts/generate b/gpt_engineer/preprompts/generate
index 6e9725879a..763ee1380d 100644
--- a/gpt_engineer/preprompts/generate
+++ b/gpt_engineer/preprompts/generate
@@ -12,4 +12,4 @@ Ensure to implement all code, if you are unsure, write a plausible implementatio
Include module dependency or package manager dependency definition file.
Before you finish, double check that all parts of the architecture is present in the files.
-When you are done, write finish with "this concludes a fully working implementation".
+When you are done, finish by writing "this concludes a fully working implementation".
diff --git a/gpt_engineer/preprompts/philosophy b/gpt_engineer/preprompts/philosophy
index 4adf2779b0..a52b542dfd 100644
--- a/gpt_engineer/preprompts/philosophy
+++ b/gpt_engineer/preprompts/philosophy
@@ -1,9 +1,16 @@
+Important: The code in your response will be tested. It must be a very detailed, fully working implementation. Never just provide a basic implementation.
+In long answers, always prioritise sending code snippets over helpful text.
+Assume the root folder is the source folder - do not prefix file paths with src/
Keep the program as short and simple as possible, keeping all code in one file if appropriate.
Always use the programming language the user asks for.
For Python, you always create an appropriate requirements.txt file.
For NodeJS, you always create an appropriate package.json file.
Always add a comment briefly describing the purpose of the function definition.
Add comments explaining very complex bits of logic.
+Make sure to consider any references you have missed in your own source folder, which the code you have generated is dependent on and import those dependencies
+Make sure any interfaces or abstract classes you implement are implemented correctly.
+Make sure any tests provided in the root folder will pass.
+
Python toolbelt preferences:
| https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/799 | 2023-10-15T13:34:01Z | 2023-10-16T12:56:18Z | 2023-10-16T12:56:18Z | 2023-10-16T12:56:19Z | 3,018 | gpt-engineer-org/gpt-engineer | 33,170 | |
Update link to the Server forum category | diff --git a/CHANGELOG.md b/CHANGELOG.md
index fbe7e070685..de2fdfb033e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,7 +10,9 @@ Certbot adheres to [Semantic Versioning](https://semver.org/).
### Changed
-*
+* If Certbot fails to rollback your server configuration, the error message
+ links to the Let's Encrypt forum. Change the link to the Help category now
+ that the Server category has been closed.
### Fixed
diff --git a/certbot/client.py b/certbot/client.py
index 7372d6d9d7e..c1199daac4d 100644
--- a/certbot/client.py
+++ b/certbot/client.py
@@ -624,7 +624,7 @@ def _rollback_and_restart(self, success_msg):
reporter.add_message(
"An error occurred and we failed to restore your config and "
"restart your server. Please post to "
- "https://community.letsencrypt.org/c/server-config "
+ "https://community.letsencrypt.org/c/help "
"with details about your configuration and this error you received.",
reporter.HIGH_PRIORITY)
raise
| Let's Encrypt [closed it](https://community.letsencrypt.org/t/closing-the-server-category/93016) in favor of the Help category.
## Pull Request Checklist
- [x] Edit the `master` section of `CHANGELOG.md` to include a description of
the change being made.
- [x] Add [mypy type
annotations](https://certbot.eff.org/docs/contributing.html#mypy-type-annotations)
for any functions that were added or modified.
- [ ] Include your name in `AUTHORS.md` if you like. | https://api.github.com/repos/certbot/certbot/pulls/7309 | 2019-08-07T16:23:42Z | 2019-08-08T18:44:22Z | 2019-08-08T18:44:22Z | 2019-08-08T18:44:29Z | 282 | certbot/certbot | 1,912 |
Added credit card validator script | diff --git a/Credit_Card_Validator.py b/Credit_Card_Validator.py
new file mode 100644
index 0000000000..82906ac65a
--- /dev/null
+++ b/Credit_Card_Validator.py
@@ -0,0 +1,93 @@
+#luhn algorithm
+
+class CreditCard:
+ def __init__(self,card_no):
+ self.card_no = card_no
+
+
+ @property
+ def company(self):
+ comp =None
+ if str(self.card_no).startswith('4'):
+ comp = 'Visa Card'
+ elif str(self.card_no).startswith('5'):
+ comp = 'Master Card'
+ elif str(self.card_no).startswith('37'):
+ comp = 'American Express Card'
+ elif str(self.card_no).startswith('6'):
+ comp = 'Discover Card'
+ elif str(self.card_no).startswith('35'):
+ comp = 'JCB Card'
+ elif str(self.card_no).startswith('50' or '67'or '58'or'63'):
+ comp = 'Maestro Card'
+ elif str(self.card_no).startswith('7'):
+ comp = 'Gasoline Card'
+
+ return 'Company : '+comp
+
+
+
+ def first_check(self):
+ if 13<=len(self.card_no)<=19:
+ message = "First check : Valid in terms of length."
+
+ else:
+ message = "First chek : Check Card number once again it must be of 13 or 16 digit long."
+ return message
+
+
+
+
+ def validate(self):
+ #double every second digit from right to left
+ sum_=0
+ crd_no = self.card_no[::-1]
+ for i in range(len(crd_no)):
+ if i%2==1:
+ double_it = int(crd_no[i])*2
+
+ if len(str(double_it))==2:
+ sum_ += sum([eval(i) for i in str(double_it)])
+
+ else:
+ sum_+=double_it
+
+ else:
+ sum_+=int(crd_no[i])
+
+
+
+ if sum_%10==0:
+ response = "Valid Card"
+ else:
+ response = 'Invalid Card'
+
+ return response
+
+
+
+ @property
+ def checksum(self):
+ return '#CHECKSUM# : '+self.card_no[-1]
+
+
+ @classmethod
+ def set_card(cls,card_to_check):
+ return cls(card_to_check)
+
+
+
+card_number = input()
+card = CreditCard.set_card(card_number)
+print(card.company)
+print('Card : ',card.card_no)
+print(card.first_check())
+print(card.checksum)
+print(card.validate())
+
+
+
+# 79927398713
+#4388576018402626
+#379354508162306
+
| https://api.github.com/repos/geekcomputers/Python/pulls/340 | 2018-06-02T08:24:59Z | 2018-06-03T21:03:28Z | 2018-06-03T21:03:28Z | 2018-06-03T21:03:28Z | 689 | geekcomputers/Python | 31,840 | |
Add larger app ex | diff --git a/docs/patterns/packages.rst b/docs/patterns/packages.rst
index 1cd7797420..1bb84f8cca 100644
--- a/docs/patterns/packages.rst
+++ b/docs/patterns/packages.rst
@@ -17,6 +17,10 @@ this::
login.html
...
+If you find yourself stuck on something, feel free
+to take a look at the source code for this example.
+You'll find `the full src for this example here`_.
+
Simple Packages
---------------
@@ -130,6 +134,7 @@ You should then end up with something like that::
.. _working-with-modules:
+.. _the full src for this example here: https://github.com/pallets/flask/tree/master/examples/patterns/largerapp
Working with Blueprints
-----------------------
diff --git a/examples/patterns/largerapp/setup.py b/examples/patterns/largerapp/setup.py
new file mode 100644
index 0000000000..eaf00f0718
--- /dev/null
+++ b/examples/patterns/largerapp/setup.py
@@ -0,0 +1,10 @@
+from setuptools import setup
+
+setup(
+ name='yourapplication',
+ packages=['yourapplication'],
+ include_package_data=True,
+ install_requires=[
+ 'flask',
+ ],
+)
diff --git a/examples/patterns/largerapp/tests/test_largerapp.py b/examples/patterns/largerapp/tests/test_largerapp.py
new file mode 100644
index 0000000000..6bc0531e03
--- /dev/null
+++ b/examples/patterns/largerapp/tests/test_largerapp.py
@@ -0,0 +1,12 @@
+from yourapplication import app
+import pytest
+
+@pytest.fixture
+def client():
+ app.config['TESTING'] = True
+ client = app.test_client()
+ return client
+
+def test_index(client):
+ rv = client.get('/')
+ assert b"Hello World!" in rv.data
\ No newline at end of file
diff --git a/examples/patterns/largerapp/yourapplication/__init__.py b/examples/patterns/largerapp/yourapplication/__init__.py
new file mode 100644
index 0000000000..089d29371c
--- /dev/null
+++ b/examples/patterns/largerapp/yourapplication/__init__.py
@@ -0,0 +1,4 @@
+from flask import Flask
+app = Flask(__name__)
+
+import yourapplication.views
\ No newline at end of file
diff --git a/examples/patterns/largerapp/yourapplication/static/style.css b/examples/patterns/largerapp/yourapplication/static/style.css
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/examples/patterns/largerapp/yourapplication/templates/index.html b/examples/patterns/largerapp/yourapplication/templates/index.html
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/examples/patterns/largerapp/yourapplication/templates/layout.html b/examples/patterns/largerapp/yourapplication/templates/layout.html
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/examples/patterns/largerapp/yourapplication/templates/login.html b/examples/patterns/largerapp/yourapplication/templates/login.html
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/examples/patterns/largerapp/yourapplication/views.py b/examples/patterns/largerapp/yourapplication/views.py
new file mode 100644
index 0000000000..b112328e13
--- /dev/null
+++ b/examples/patterns/largerapp/yourapplication/views.py
@@ -0,0 +1,5 @@
+from yourapplication import app
+
+@app.route('/')
+def index():
+ return 'Hello World!'
\ No newline at end of file
diff --git a/tox.ini b/tox.ini
index 406de5dde5..c070a6291d 100644
--- a/tox.ini
+++ b/tox.ini
@@ -10,6 +10,7 @@ commands =
# We need to install those after Flask is installed.
pip install -e examples/flaskr
pip install -e examples/minitwit
+ pip install -e examples/patterns/largerapp
py.test --cov=flask --cov-report html []
deps=
pytest
| I chose the wrong base: https://github.com/pallets/flask/pull/2130
Original PR message:
> Addresses: #1902 (comment)
>
> Confirms that the larger application as outlined in the docs works as expected. For the sake of less confusion, for those working through this example, having src might be helpful. Not everyone will want to work along with the docs. If there are parts of the docs that are not clear, we should fix that as well. (They seemed clear to me).
ThiefMaster's response:
> The weird commits in this PR aside (rebase to upstream/master and force-push to fix), I'm not sure how useful this is to have in the repo... Especially since having a "large" app without an app factory is somewhat questionable...
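For context, the application-factory pattern mentioned above — building the app inside a function so each caller (e.g. a test) gets an independently configured instance — can be sketched framework-agnostically. This is hypothetical stand-in code; `App` and `create_app` are illustrative names, not part of this PR or of Flask's API:

```python
from dataclasses import dataclass, field


@dataclass
class App:
    """Minimal stand-in for a web application object."""
    config: dict = field(default_factory=dict)
    routes: dict = field(default_factory=dict)

    def route(self, path):
        # Register a view function under a URL path, decorator-style.
        def decorator(fn):
            self.routes[path] = fn
            return fn
        return decorator


def create_app(config=None):
    """Factory: every call returns a fresh, independently configured app."""
    app = App(config=dict(config or {}))

    @app.route("/")
    def index():
        return "Hello World!"

    return app


# Two instances with different configs, e.g. one for production, one for tests.
prod = create_app({"TESTING": False})
test = create_app({"TESTING": True})
print(prod.config["TESTING"], test.config["TESTING"])  # False True
```

The point of the factory is that state (config, routes) lives on the instance rather than at module import time, which is what makes side-by-side test instances possible.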
My response:
> @ThiefMaster Sorry about that. I didn't choose the correct base. I was thinking that in order to debug issues that people are having with code examples in the docs, it would be useful to have those examples on-hand.
>
> I agree. But maybe until we can put something better together for #2027, this addition could just be temporary and we could deprecate this "larger applications" example at later time. | https://api.github.com/repos/pallets/flask/pulls/2131 | 2016-12-30T18:48:08Z | 2017-01-15T04:56:13Z | 2017-01-15T04:56:13Z | 2020-11-14T04:23:14Z | 1,027 | pallets/flask | 20,717 |
Create logs dir if missing when saving history | diff --git a/modules/chat.py b/modules/chat.py
index 5e4eb2450e..8e562b9832 100644
--- a/modules/chat.py
+++ b/modules/chat.py
@@ -394,6 +394,8 @@ def redraw_html(history, name1, name2, mode, style, reset_cache=False):
def save_history(history, path=None):
p = path or Path('logs/exported_history.json')
+ if not p.parent.is_dir():
+ p.parent.mkdir(parents=True)
with open(p, 'w', encoding='utf-8') as f:
f.write(json.dumps(history, indent=4))
| ## Checklist:
- [x] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).
---
When using chat mode with a character, a `FileNotFoundError` exception is raised when the `logs` directory is missing.
This results in the command-line window being spammed with errors while chatting on a new installation.
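The fix is the standard create-parent-directories-before-writing pattern; a minimal self-contained sketch (illustrative only, not the actual webui code):

```python
import json
import tempfile
from pathlib import Path


def save_history(history, path=None):
    """Write chat history as JSON, creating missing parent directories first."""
    p = Path(path) if path else Path("logs/exported_history.json")
    # exist_ok=True also tolerates another thread creating the directory first
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(json.dumps(history, indent=4), encoding="utf-8")
    return p


with tempfile.TemporaryDirectory() as tmp:
    out = save_history({"visible": [], "internal": []},
                       Path(tmp) / "a" / "b" / "log.json")
    print(out.read_text(encoding="utf-8"))
```

Note the PR itself guards with `is_dir()` before calling `mkdir`; `mkdir(parents=True, exist_ok=True)` is an equivalent idiom that additionally avoids a race when the directory appears between the check and the call.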
I was unsure whether to have `parents=True` or not. I assumed an extension or something else might save logs to a different path. In that case, `parents=True` would create folders as needed to save to that path.
`is_dir()` is used here to check if the parent path both exists and points to a directory. | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/3462 | 2023-08-05T09:31:08Z | 2023-08-05T16:47:17Z | 2023-08-05T16:47:17Z | 2023-08-05T16:47:17Z | 143 | oobabooga/text-generation-webui | 26,006 |
fix typo for attention | diff --git a/configs/rec/rec_mv3_tps_bilstm_att.yml b/configs/rec/rec_mv3_tps_bilstm_att.yml
index c64b2ccc26..0ce067343c 100644
--- a/configs/rec/rec_mv3_tps_bilstm_att.yml
+++ b/configs/rec/rec_mv3_tps_bilstm_att.yml
@@ -1,5 +1,5 @@
Global:
- use_gpu: true
+ use_gpu: True
epoch_num: 72
log_smooth_window: 20
print_batch_step: 10
@@ -65,7 +65,7 @@ Metric:
Train:
dataset:
- name: LMDBDateSet
+ name: LMDBDataSet
data_dir: ../training/
transforms:
- DecodeImage: # load image
@@ -84,7 +84,7 @@ Train:
Eval:
dataset:
- name: LMDBDateSet
+ name: LMDBDataSet
data_dir: ../validation/
transforms:
- DecodeImage: # load image
diff --git a/configs/rec/rec_mv3_tps_bilstm_ctc.yml b/configs/rec/rec_mv3_tps_bilstm_ctc.yml
index 1b9fb0a08d..4e86709942 100644
--- a/configs/rec/rec_mv3_tps_bilstm_ctc.yml
+++ b/configs/rec/rec_mv3_tps_bilstm_ctc.yml
@@ -1,5 +1,5 @@
Global:
- use_gpu: true
+ use_gpu: True
epoch_num: 72
log_smooth_window: 20
print_batch_step: 10
diff --git a/configs/rec/rec_r34_vd_tps_bilstm_att.yml b/configs/rec/rec_r34_vd_tps_bilstm_att.yml
index 7be34b9c55..02aeb8c522 100644
--- a/configs/rec/rec_r34_vd_tps_bilstm_att.yml
+++ b/configs/rec/rec_r34_vd_tps_bilstm_att.yml
@@ -1,5 +1,5 @@
Global:
- use_gpu: true
+ use_gpu: True
epoch_num: 400
log_smooth_window: 20
print_batch_step: 10
@@ -64,7 +64,7 @@ Metric:
Train:
dataset:
- name: LMDBDateSet
+ name: LMDBDataSet
data_dir: ../training/
transforms:
- DecodeImage: # load image
@@ -83,7 +83,7 @@ Train:
Eval:
dataset:
- name: LMDBDateSet
+ name: LMDBDataSet
data_dir: ../validation/
transforms:
- DecodeImage: # load image
| https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/1919 | 2021-02-02T11:32:25Z | 2021-02-02T11:32:35Z | 2021-02-02T11:32:35Z | 2021-02-02T11:32:35Z | 625 | PaddlePaddle/PaddleOCR | 42,215 | |
Fixup changelog with missing breaking change. | diff --git a/HISTORY.rst b/HISTORY.rst
index 601ba580d2..f27ce332ee 100644
--- a/HISTORY.rst
+++ b/HISTORY.rst
@@ -7,6 +7,7 @@ Release History
++++++++++++++++++
- Updated CA Bundle, of course.
+- Cookies set on individual Requests through a ``Session`` (e.g. via ``Session.get()``) are no longer persisted to the ``Session``.
- Clean up connections when we hit problems during chunked upload, rather than leaking them.
- Return connections to the pool when a chunked upload is successful, rather than leaking it.
- Match the HTTPbis recommendation for HTTP 301 redirects.
| https://api.github.com/repos/psf/requests/pulls/1792 | 2013-12-12T16:21:35Z | 2013-12-12T16:21:38Z | 2013-12-12T16:21:38Z | 2021-09-08T23:05:26Z | 152 | psf/requests | 32,122 | |
Don't special case Path hashing | diff --git a/lib/streamlit/hashing.py b/lib/streamlit/hashing.py
index 8e53145000cc..b808197aea04 100644
--- a/lib/streamlit/hashing.py
+++ b/lib/streamlit/hashing.py
@@ -28,9 +28,11 @@
import os
import sys
import textwrap
+import tempfile
import threading
import streamlit as st
+from streamlit import compatibility
from streamlit import config
from streamlit import file_util
from streamlit import type_util
@@ -372,10 +374,9 @@ def _to_bytes(self, obj, context):
return self.to_bytes(obj.__name__)
elif hasattr(obj, "name") and (
isinstance(obj, io.IOBase)
- or (
- isinstance(obj.name, string_types) # noqa: F821
- and os.path.exists(obj.name)
- )
+ # Handle temporary files used during testing
+ or isinstance(obj, tempfile._TemporaryFileWrapper)
+ or (not compatibility.is_running_py3() and isinstance(obj, file))
):
# Hash files as name + last modification date + offset.
h = hashlib.new(self.name)
| Issue #857
We were incorrectly treating `Path`s like `File`s.
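To see why `__reduce__`-based hashing is the right default for a `Path`, here is a toy sketch of the two regimes (hypothetical code, not Streamlit's actual `_to_bytes`): a `Path` is a pure value and can be hashed without touching the filesystem, whereas an open file's identity genuinely depends on on-disk state:

```python
import hashlib
import io
import os
from pathlib import Path


def stable_hash(obj):
    """Toy illustration of the two hashing regimes (not Streamlit's real code)."""
    h = hashlib.sha256()
    if isinstance(obj, io.IOBase):
        # Open file handles: identity depends on the file's current on-disk
        # state, so hash name + modification time + read offset.
        info = os.stat(obj.name)
        h.update(f"{obj.name}:{info.st_mtime}:{obj.tell()}".encode())
    else:
        # A Path is a pure value: __reduce__ yields (class, path components),
        # which we can hash without touching the filesystem at all.
        cls, args = obj.__reduce__()[:2]
        h.update(repr((cls.__name__, args)).encode())
    return h.hexdigest()


# Works even though the file does not exist -- no special-casing needed.
print(stable_hash(Path("does/not/exist"))[:12])
```

Treating a `Path` like a file (the pre-patch behavior) would instead try to `stat` it, which both does needless I/O and fails for paths that do not exist yet.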
Remove special case hashing for `Path`s and let it be hashed by `__reduce__` | https://api.github.com/repos/streamlit/streamlit/pulls/1014 | 2020-01-24T20:25:27Z | 2020-01-28T21:15:03Z | 2020-01-28T21:15:03Z | 2020-01-28T21:15:04Z | 256 | streamlit/streamlit | 21,569 |
Fix various small spelling errors. | diff --git a/docs/appcontext.rst b/docs/appcontext.rst
index 63006ad490..5f41535a67 100644
--- a/docs/appcontext.rst
+++ b/docs/appcontext.rst
@@ -49,7 +49,7 @@ Typically, an application context will have the same lifetime as a
request.
See :doc:`/reqcontext` for more information about how the contexts work
-and the full lifecycle of a request.
+and the full life cycle of a request.
Manually Push a Context
diff --git a/docs/becomingbig.rst b/docs/becomingbig.rst
index 0facbfee90..16dea1da86 100644
--- a/docs/becomingbig.rst
+++ b/docs/becomingbig.rst
@@ -96,6 +96,6 @@ Discuss with the community.
The Flask developers keep the framework accessible to users with codebases big
and small. If you find an obstacle in your way, caused by Flask, don't hesitate
-to contact the developers on the mailinglist or IRC channel. The best way for
+to contact the developers on the mailing list or IRC channel. The best way for
the Flask and Flask extension developers to improve the tools for larger
applications is getting feedback from users.
diff --git a/docs/blueprints.rst b/docs/blueprints.rst
index d3ab234c5b..e6003214e5 100644
--- a/docs/blueprints.rst
+++ b/docs/blueprints.rst
@@ -244,7 +244,7 @@ was dispatched to any other admin blueprint endpoint.
Error Handlers
--------------
-Blueprints support the errorhandler decorator just like the :class:`Flask`
+Blueprints support the ``errorhandler`` decorator just like the :class:`Flask`
application object, so it is easy to make Blueprint-specific custom error
pages.
@@ -259,7 +259,7 @@ concerning handlers for 404 and 405 exceptions. These errorhandlers are only
invoked from an appropriate ``raise`` statement or a call to ``abort`` in another
of the blueprint's view functions; they are not invoked by, e.g., an invalid URL
access. This is because the blueprint does not "own" a certain URL space, so
-the application instance has no way of knowing which blueprint errorhandler it
+the application instance has no way of knowing which blueprint error handler it
should run if given an invalid URL. If you would like to execute different
handling strategies for these errors based on URL prefixes, they may be defined
at the application level using the ``request`` proxy object::
diff --git a/docs/config.rst b/docs/config.rst
index 81e580caf1..1a89ffe596 100644
--- a/docs/config.rst
+++ b/docs/config.rst
@@ -9,7 +9,7 @@ toggling the debug mode, setting the secret key, and other such
environment-specific things.
The way Flask is designed usually requires the configuration to be
-available when the application starts up. You can hardcode the
+available when the application starts up. You can hard code the
configuration in the code, which for many small applications is not
actually that bad, but there are better ways.
@@ -494,7 +494,7 @@ that experience:
1. Create your application in a function and register blueprints on it.
That way you can create multiple instances of your application with
- different configurations attached which makes unittesting a lot
+ different configurations attached which makes unit testing a lot
easier. You can use this to pass in configuration as needed.
2. Do not write code that needs the configuration at import time. If you
@@ -527,7 +527,7 @@ the config file by adding ``from yourapplication.default_settings
import *`` to the top of the file and then overriding the changes by hand.
You could also inspect an environment variable like
``YOURAPPLICATION_MODE`` and set that to `production`, `development` etc
-and import different hardcoded files based on that.
+and import different hard-coded files based on that.
An interesting pattern is also to use classes and inheritance for
configuration::
diff --git a/docs/design.rst b/docs/design.rst
index f0f7126d09..3dd1a284bd 100644
--- a/docs/design.rst
+++ b/docs/design.rst
@@ -41,7 +41,7 @@ the time. There are ways to fake multiple applications with a single
application object, like maintaining a stack of applications, but this
causes some problems I won't outline here in detail. Now the question is:
when does a microframework need more than one application at the same
-time? A good example for this is unittesting. When you want to test
+time? A good example for this is unit testing. When you want to test
something it can be very helpful to create a minimal application to test
specific behavior. When the application object is deleted everything it
allocated will be freed again.
@@ -76,7 +76,7 @@ there are better ways to do that so that you do not lose the reference
to the application object :meth:`~flask.Flask.wsgi_app`).
Furthermore this design makes it possible to use a factory function to
-create the application which is very helpful for unittesting and similar
+create the application which is very helpful for unit testing and similar
things (:ref:`app-factories`).
The Routing System
diff --git a/docs/extensiondev.rst b/docs/extensiondev.rst
index aa4eff76a2..57d7425bc1 100644
--- a/docs/extensiondev.rst
+++ b/docs/extensiondev.rst
@@ -287,7 +287,7 @@ also avoids having multiple developers working in isolation on pretty much the
same problem.
Remember: good API design is hard, so introduce your project on the
-mailinglist, and let other developers give you a helping hand with
+mailing list, and let other developers give you a helping hand with
designing the API.
The best Flask extensions are extensions that share common idioms for the
diff --git a/docs/extensions.rst b/docs/extensions.rst
index 92e8a5b255..ecb587f9e0 100644
--- a/docs/extensions.rst
+++ b/docs/extensions.rst
@@ -6,7 +6,7 @@ Extensions
Extensions are extra packages that add functionality to a Flask
application. For example, an extension might add support for sending
email or connecting to a database. Some extensions add entire new
-frameworks to help build certain types of applications, like a ReST API.
+frameworks to help build certain types of applications, like a REST API.
Finding Extensions
diff --git a/docs/shell.rst b/docs/shell.rst
index 9d9bb5f9f9..c863a77d9c 100644
--- a/docs/shell.rst
+++ b/docs/shell.rst
@@ -20,7 +20,7 @@ can you do?
This is where some helper functions come in handy. Keep in mind however
that these functions are not only there for interactive shell usage, but
-also for unittesting and other situations that require a faked request
+also for unit testing and other situations that require a faked request
context.
Generally it's recommended that you read the :ref:`request-context`
diff --git a/docs/unicode.rst b/docs/unicode.rst
index 5aa6e25def..3ea10a074b 100644
--- a/docs/unicode.rst
+++ b/docs/unicode.rst
@@ -43,8 +43,8 @@ The Golden Rule
So the rule of thumb: if you are not dealing with binary data, work with
Unicode. What does working with Unicode in Python 2.x mean?
-- as long as you are using ASCII charpoints only (basically numbers,
- some special characters of latin letters without umlauts or anything
+- as long as you are using ASCII code points only (basically numbers,
+ some special characters of Latin letters without umlauts or anything
fancy) you can use regular string literals (``'Hello World'``).
- if you need anything else than ASCII in a string you have to mark
this string as Unicode string by prefixing it with a lowercase `u`.
| This patch fixes a few small spelling errors throughout the documentation.
I didn't change common programming or Python jargon. For example, I didn't change virtualenv (to "virtual environment"), "runtime", or "auto escaping".
I turned ReST to REST because the latter spelling seems to have taken over. | https://api.github.com/repos/pallets/flask/pulls/3199 | 2019-05-14T20:16:56Z | 2019-05-16T18:54:34Z | 2019-05-16T18:54:34Z | 2020-11-14T02:21:35Z | 1,840 | pallets/flask | 20,241 |
Use `client.id` as connection id where possible | diff --git a/mitmproxy/addons/proxyserver.py b/mitmproxy/addons/proxyserver.py
index 58e44acde3..820a30cc71 100644
--- a/mitmproxy/addons/proxyserver.py
+++ b/mitmproxy/addons/proxyserver.py
@@ -108,7 +108,7 @@ class Proxyserver(ServerManager):
This addon runs the actual proxy server.
"""
- connections: dict[tuple, ProxyConnectionHandler]
+ connections: dict[tuple | str, ProxyConnectionHandler]
servers: Servers
is_running: bool
@@ -125,7 +125,7 @@ def __repr__(self):
@contextmanager
def register_connection(
- self, connection_id: tuple, handler: ProxyConnectionHandler
+ self, connection_id: tuple | str, handler: ProxyConnectionHandler
):
self.connections[connection_id] = handler
try:
@@ -278,6 +278,7 @@ def configure(self, updated) -> None:
self._update_task = asyncio.create_task(self.servers.update(modes))
async def setup_servers(self) -> bool:
+ """Setup proxy servers. This may take an indefinite amount of time to complete (e.g. on permission prompts)."""
return await self.servers.update(
[mode_specs.ProxyMode.parse(m) for m in ctx.options.mode]
)
@@ -286,11 +287,15 @@ def listen_addrs(self) -> list[Address]:
return [addr for server in self.servers for addr in server.listen_addrs]
def inject_event(self, event: events.MessageInjected):
- connection_id = (
- event.flow.client_conn.transport_protocol,
- event.flow.client_conn.peername,
- event.flow.client_conn.sockname,
- )
+ connection_id: str | tuple
+ if event.flow.client_conn.transport_protocol != "udp":
+ connection_id = event.flow.client_conn.id
+ else: # pragma: no cover
+ # temporary workaround: for UDP we don't have persistent client IDs yet.
+ connection_id = (
+ event.flow.client_conn.peername,
+ event.flow.client_conn.sockname,
+ )
if connection_id not in self.connections:
raise ValueError("Flow is not from a live connection.")
self.connections[connection_id].server_event(event)
diff --git a/mitmproxy/master.py b/mitmproxy/master.py
index 651acb98b3..4af6cd98da 100644
--- a/mitmproxy/master.py
+++ b/mitmproxy/master.py
@@ -52,7 +52,14 @@ async def run(self) -> None:
if ec := self.addons.get("errorcheck"):
await ec.shutdown_if_errored()
if ps := self.addons.get("proxyserver"):
- await ps.setup_servers()
+ # This may block for some proxy modes, so we also monitor should_exit.
+ await asyncio.wait(
+ [
+ asyncio.create_task(ps.setup_servers()),
+ asyncio.create_task(self.should_exit.wait()),
+ ],
+ return_when=asyncio.FIRST_COMPLETED,
+ )
if ec := self.addons.get("errorcheck"):
await ec.shutdown_if_errored()
ec.finish()
diff --git a/mitmproxy/proxy/mode_servers.py b/mitmproxy/proxy/mode_servers.py
index d991dee5a3..1d505ae777 100644
--- a/mitmproxy/proxy/mode_servers.py
+++ b/mitmproxy/proxy/mode_servers.py
@@ -76,11 +76,12 @@ async def handle_hook(self, hook: commands.StartHook) -> None:
class ServerManager(typing.Protocol):
- connections: dict[tuple, ProxyConnectionHandler]
+ # temporary workaround: for UDP, we use the 4-tuple because we don't have a uuid.
+ connections: dict[tuple | str, ProxyConnectionHandler]
@contextmanager
def register_connection(
- self, connection_id: tuple, handler: ProxyConnectionHandler
+ self, connection_id: tuple | str, handler: ProxyConnectionHandler
):
... # pragma: no cover
@@ -200,14 +201,11 @@ async def handle_tcp_connection(
handler.layer.context.client.sockname = original_dst
handler.layer.context.server.address = original_dst
elif isinstance(self.mode, (mode_specs.WireGuardMode, mode_specs.OsProxyMode)):
- handler.layer.context.server.address = handler.layer.context.client.sockname
+ handler.layer.context.server.address = writer.get_extra_info(
+ "destination_address", handler.layer.context.client.sockname
+ )
- connection_id = (
- handler.layer.context.client.transport_protocol,
- handler.layer.context.client.peername,
- handler.layer.context.client.sockname,
- )
- with self.manager.register_connection(connection_id, handler):
+ with self.manager.register_connection(handler.layer.context.client.id, handler):
await handler.handle_client()
def handle_udp_datagram(
@@ -217,7 +215,8 @@ def handle_udp_datagram(
remote_addr: Address,
local_addr: Address,
) -> None:
- connection_id = ("udp", remote_addr, local_addr)
+ # temporary workaround: we don't have a client uuid here.
+ connection_id = (remote_addr, local_addr)
if connection_id not in self.manager.connections:
reader = udp.DatagramReader()
writer = udp.DatagramWriter(transport, remote_addr, reader)
diff --git a/test/mitmproxy/addons/test_proxyserver.py b/test/mitmproxy/addons/test_proxyserver.py
index c197d7ce08..97a1412db7 100644
--- a/test/mitmproxy/addons/test_proxyserver.py
+++ b/test/mitmproxy/addons/test_proxyserver.py
@@ -301,9 +301,7 @@ async def test_dns(caplog_async) -> None:
resp = dns.Message.unpack(await r.read(udp.MAX_DATAGRAM_SIZE))
assert req.id == resp.id and "8.8.8.8" in str(resp)
assert len(ps.connections) == 1
- dns_layer = ps.connections[
- ("udp", w.get_extra_info("sockname"), dns_addr)
- ].layer
+ dns_layer = ps.connections[(w.get_extra_info("sockname"), dns_addr)].layer
assert isinstance(dns_layer, layers.DNSLayer)
assert len(dns_layer.flows) == 2
This fixes compatibility with the new macOS transparent mode, where the client's peername is unspecified (which otherwise leads to duplicates). | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/6372 | 2023-09-18T21:04:14Z | 2023-09-18T23:17:47Z | 2023-09-18T23:17:47Z | 2023-09-18T23:17:47Z | 1,410 | mitmproxy/mitmproxy | 27,589 |
Exchange: parse json once | diff --git a/js/base/Exchange.js b/js/base/Exchange.js
index 531b15f8089e..4c3c7d773927 100644
--- a/js/base/Exchange.js
+++ b/js/base/Exchange.js
@@ -262,13 +262,12 @@ module.exports = class Exchange {
this.executeRestRequest = function (url, method = 'GET', headers = undefined, body = undefined) {
let promise =
- fetchImplementation (url, { 'method': method, 'headers': headers, 'body': body, 'agent': this.tunnelAgent || null, timeout: this.timeout })
+ fetchImplementation (url, { method, headers, body, 'agent': this.tunnelAgent || null, timeout: this.timeout })
.catch (e => {
if (isNode)
throw new ExchangeNotAvailable ([ this.id, method, url, e.type, e.message ].join (' '))
throw e // rethrow all unknown errors
})
- .then (response => this.handleRestErrors (response, url, method, headers, body))
.then (response => this.handleRestResponse (response, url, method, headers, body))
return timeout (this.timeout, promise).catch (e => {
@@ -370,17 +369,44 @@ module.exports = class Exchange {
return this.fetch2 (path, type, method, params, headers, body)
}
- handleErrors (statusCode, statusText, url, method, headers, body) {
+ parseJson (responseBody, url, method = 'GET') {
+ try {
+
+ return (responseBody.length > 1) ? JSON.parse (responseBody) : {} // FIXME: empty object for (almost) empty body
+
+ } catch (e) {
+
+ let maintenance = responseBody.match (/offline|busy|retry|wait|unavailable|maintain|maintenance|maintenancing/i)
+ let ddosProtection = responseBody.match (/cloudflare|incapsula|overload/i)
+
+ if (e instanceof SyntaxError) {
+
+ let error = ExchangeNotAvailable
+ let details = 'not accessible from this location at the moment'
+ if (maintenance)
+ details = 'offline, on maintenance or unreachable from this location at the moment'
+ if (ddosProtection)
+ error = DDoSProtection
+ throw new error ([ this.id, method, url, details ].join (' '))
+ }
+
+ if (this.verbose)
+ console.log ('parseJson:\n', this.id, method, url, 'error', e, "response body:\n'" + responseBody + "'\n")
+
+ throw e
+ }
+ }
+
+ handleErrors (statusCode, statusText, url, method, requestHeaders, responseBody, json) {
// override me
}
- defaultErrorHandler (code, reason, url, method, headers, body) {
+ defaultErrorHandler (code, reason, url, method, responseBody) {
if ((code >= 200) && (code <= 300))
- return body
+ return
let error = undefined
- this.last_http_response = body
- let details = body
- let match = body.match (/<title>([^<]+)/i)
+ let details = responseBody
+ let match = responseBody.match (/<title>([^<]+)/i)
if (match)
details = match[1].trim ();
if ([ 418, 429 ].includes (code)) {
@@ -388,7 +414,7 @@ module.exports = class Exchange {
} else if ([ 404, 409, 500, 501, 502, 520, 521, 522, 525 ].includes (code)) {
error = ExchangeNotAvailable
} else if ([ 400, 403, 405, 503, 530 ].includes (code)) {
- let ddosProtection = body.match (/cloudflare|incapsula/i)
+ let ddosProtection = responseBody.match (/cloudflare|incapsula/i)
if (ddosProtection) {
error = DDoSProtection
} else {
@@ -412,58 +438,23 @@ module.exports = class Exchange {
throw new error ([ this.id, method, url, code, reason, details ].join (' '))
}
- handleRestErrors (response, url, method = 'GET', headers = undefined, body = undefined) {
+ handleRestResponse (response, url, method = 'GET', requestHeaders = undefined, requestBody = undefined) {
- if (typeof response === 'string')
- return response
+ return response.text ().then (responseBody => {
- return response.text ().then (text => {
-
- const args = [ response.status, response.statusText, url, method, headers, text ]
+ let json = this.parseJsonResponse ? this.parseJson (responseBody, url, method) : undefined
if (this.verbose)
- console.log ("handleRestErrors:\n", this.id, method, url, response.status, response.statusText, headers, text ? ("\nResponse:\n" + text) : '', "\n")
+ console.log ("handleRestResponse:\n", this.id, method, url, response.status, response.statusText, requestHeaders, responseBody ? ("\nResponse:\n" + responseBody) : '', "\n")
+ const args = [ response.status, response.statusText, url, method, requestHeaders, responseBody, json ]
this.handleErrors (...args)
- return this.defaultErrorHandler (...args)
- })
- }
+ this.defaultErrorHandler (response.status, response.statusText, url, method, responseBody)
- handleRestResponse (response, url, method = 'GET', headers = undefined, body = undefined) {
-
- try {
-
- this.last_http_response = response
- if (this.parseJsonResponse) {
- this.last_json_response =
- ((typeof response === 'string') && (response.length > 1)) ?
- JSON.parse (response) : response
- return this.last_json_response
- }
-
- return response
-
- } catch (e) {
-
- let maintenance = response.match (/offline|busy|retry|wait|unavailable|maintain|maintenance|maintenancing/i)
- let ddosProtection = response.match (/cloudflare|incapsula|overload/i)
-
- if (e instanceof SyntaxError) {
-
- let error = ExchangeNotAvailable
- let details = 'not accessible from this location at the moment'
- if (maintenance)
- details = 'offline, on maintenance or unreachable from this location at the moment'
- if (ddosProtection)
- error = DDoSProtection
- throw new error ([ this.id, method, url, details ].join (' '))
- }
-
- if (this.verbose)
- console.log ('handleRestResponse:\n', this.id, method, url, 'error', e, "response body:\n'" + response + "'\n")
-
- throw e
- }
+ this.last_http_response = responseBody // FIXME: for those classes that haven't switched to handleErrors yet
+ this.last_json_response = json // FIXME: for those classes that haven't switched to handleErrors yet
+ return this.parseJsonResponse ? json : responseBody
+ })
}
setMarkets (markets, currencies = undefined) {
| Actually, this consists of two commits.
The first one is just variable/args renaming without functionality change. It fixes ambiguities that inhibit code comprehension, such as:
https://github.com/ccxt/ccxt/blob/0cc2191190748c57d460c0c49f617d197ea29bcc/js/base/Exchange.js#L271-L272
(first `response` is response object, second `response` is response body)
or:
https://github.com/ccxt/ccxt/blob/0cc2191190748c57d460c0c49f617d197ea29bcc/js/base/Exchange.js#L373
(`headers` belong to request, while `body` belongs to response)
The first commit makes these distinctions apparent. It could still be safely merged even if you don't like the second one.
The second one is as per discussion in #1250. It makes one less `.then()` and one less `JSON.parse()`. Proposed order of calls:
1. json = parseJson () // or throw Ddos
2. handleErrors (...args, json)
3. defaultErrorHandler (...args, json)
As a next step, I'd remove `last_json_response`/`last_http_response` - I see these are used in only a few exchanges (they were probably invented before `handleErrors` was born). | https://api.github.com/repos/ccxt/ccxt/pulls/1551 | 2018-01-28T17:26:03Z | 2018-01-29T03:20:08Z | 2018-01-29T03:20:08Z | 2018-01-29T19:01:57Z | 1,633 | ccxt/ccxt | 13,025 |
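The single-parse idea behind this PR is easy to sketch; a minimal Python illustration of the proposed call order (function and error names here are illustrative stand-ins, not ccxt's actual JavaScript API):

```python
import json
import re


def parse_json(response_body: str):
    """Parse the response body exactly once; map parse failures to
    transport-level errors instead of leaking a SyntaxError."""
    try:
        # an (almost) empty body becomes an empty object, mirroring the patch
        return json.loads(response_body) if len(response_body) > 1 else {}
    except json.JSONDecodeError:
        if re.search(r"cloudflare|incapsula|overload", response_body, re.I):
            raise RuntimeError("DDoSProtection")
        raise RuntimeError("ExchangeNotAvailable")


def handle_response(response_body: str):
    body_json = parse_json(response_body)  # 1. parse once (or raise)
    # 2. handle_errors(..., body_json)     #    exchange-specific hook
    # 3. default_error_handler(...)        #    generic HTTP-status handling
    return body_json
```

The point of the change is that step 1 happens once and its result is threaded into steps 2 and 3, rather than each stage re-parsing the body.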
Added new Algorithm to find middle element of Linked List | diff --git a/data_structures/linked_list/middle_element_of_linked_list.py b/data_structures/linked_list/middle_element_of_linked_list.py
new file mode 100644
index 000000000000..2903fe604dfa
--- /dev/null
+++ b/data_structures/linked_list/middle_element_of_linked_list.py
@@ -0,0 +1,64 @@
+class Node:
+ def __init__(self, data: int) -> int:
+ self.data = data
+ self.next = None
+
+
+class LinkedList:
+ def __init__(self):
+ self.head = None
+
+ def push(self, new_data:int) -> int:
+ new_node = Node(new_data)
+ new_node.next = self.head
+ self.head = new_node
+ return self.head.data
+
+ def middle_element(self) -> int:
+ '''
+ >>> link = LinkedList()
+ >>> link.middle_element()
+ No element found.
+ >>> link.push(5)
+ 5
+ >>> link.push(6)
+ 6
+ >>> link.push(8)
+ 8
+ >>> link.push(8)
+ 8
+ >>> link.push(10)
+ 10
+ >>> link.push(12)
+ 12
+ >>> link.push(17)
+ 17
+ >>> link.push(7)
+ 7
+ >>> link.push(3)
+ 3
+ >>> link.push(20)
+ 20
+ >>> link.push(-20)
+ -20
+ >>> link.middle_element()
+ 12
+ >>>
+ '''
+ slow_pointer = self.head
+ fast_pointer = self.head
+ if self.head:
+ while fast_pointer and fast_pointer.next:
+ fast_pointer = fast_pointer.next.next
+ slow_pointer = slow_pointer.next
+ return slow_pointer.data
+ else:
+ print("No element found.")
+
+
+if __name__ == "__main__":
+ link = LinkedList()
+ for i in range(int(input().strip())):
+ data = int(input().strip())
+ link.push(data)
+ print(link.middle_element())
| Please read CONTRIBUTING.md
> 1 files contain uppercase characters:
> data_structures/linked_list/MiddleElementOfLinkedList.py | https://api.github.com/repos/TheAlgorithms/Python/pulls/1822 | 2020-03-31T13:44:47Z | 2020-04-12T14:45:07Z | 2020-04-12T14:45:07Z | 2020-04-12T14:45:07Z | 505 | TheAlgorithms/Python | 29,912 |
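The slow/fast two-pointer trick used in this PR stands alone well; a minimal sketch without the `LinkedList` wrapper (returns the second middle element for even-length lists, matching the PR's behaviour):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next


def middle(head):
    """Advance `fast` two steps per one step of `slow`; when `fast`
    runs off the end, `slow` sits at the middle."""
    slow = fast = head
    while fast and fast.next:
        fast = fast.next.next
        slow = slow.next
    return slow.data if slow else None


head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
print(middle(head))  # 3
```

This makes one pass over the list instead of counting the length first and traversing again.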
build(webpack): Fix webpack chunk names | diff --git a/src/sentry/static/sentry/app/components/avatar/gravatar.jsx b/src/sentry/static/sentry/app/components/avatar/gravatar.jsx
index c3d096606fa4f..a2d021b34334d 100644
--- a/src/sentry/static/sentry/app/components/avatar/gravatar.jsx
+++ b/src/sentry/static/sentry/app/components/avatar/gravatar.jsx
@@ -30,7 +30,7 @@ class Gravatar extends React.Component {
componentDidMount() {
this._isMounted = true;
- import(/*webpackChunkName: MD5*/ 'crypto-js/md5').then(MD5 => {
+ import(/*webpackChunkName: "MD5"*/ 'crypto-js/md5').then(MD5 => {
if (!this._isMounted) return;
this.setState({MD5});
});
diff --git a/src/sentry/static/sentry/app/routes.jsx b/src/sentry/static/sentry/app/routes.jsx
index 49b0ffe5f2577..cbb3bd4592e17 100644
--- a/src/sentry/static/sentry/app/routes.jsx
+++ b/src/sentry/static/sentry/app/routes.jsx
@@ -432,7 +432,7 @@ function routes() {
<IndexRoute
name="General"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationGeneralSettings*/ './views/settings/organizationGeneralSettings')}
+ import(/*webpackChunkName: "OrganizationGeneralSettings"*/ './views/settings/organizationGeneralSettings')}
component={errorHandler(LazyLoad)}
/>
@@ -440,14 +440,14 @@ function routes() {
path="projects/"
name="Projects"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationProjects*/ './views/settings/organizationProjects')}
+ import(/*webpackChunkName: "OrganizationProjects"*/ './views/settings/organizationProjects')}
component={errorHandler(LazyLoad)}
/>
<Route path="api-keys/" name="API Key">
<IndexRoute
componentPromise={() =>
- import(/*webpackChunkName: OrganizationApiKeys*/ './views/settings/organizationApiKeys')}
+ import(/*webpackChunkName: "OrganizationApiKeys"*/ './views/settings/organizationApiKeys')}
component={errorHandler(LazyLoad)}
/>
@@ -455,7 +455,7 @@ function routes() {
path=":apiKey/"
name="Details"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationApiKeyDetails*/ './views/settings/organizationApiKeys/organizationApiKeyDetails')}
+ import(/*webpackChunkName: "OrganizationApiKeyDetails"*/ './views/settings/organizationApiKeys/organizationApiKeyDetails')}
component={errorHandler(LazyLoad)}
/>
</Route>
@@ -464,7 +464,7 @@ function routes() {
path="audit-log/"
name="Audit Log"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationAuditLog*/ './views/settings/organizationAuditLog')}
+ import(/*webpackChunkName: "OrganizationAuditLog"*/ './views/settings/organizationAuditLog')}
component={errorHandler(LazyLoad)}
/>
@@ -490,7 +490,7 @@ function routes() {
path="new/"
name="Invite"
componentPromise={() =>
- import(/*webpackChunkName: InviteMember*/ './views/settings/organizationMembers/inviteMember')}
+ import(/*webpackChunkName: "InviteMember"*/ './views/settings/organizationMembers/inviteMember')}
component={errorHandler(LazyLoad)}
/>
@@ -498,7 +498,7 @@ function routes() {
path=":memberId/"
name="Details"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationMemberDetail*/ './views/settings/organizationMembers/organizationMemberDetail')}
+ import(/*webpackChunkName: "OrganizationMemberDetail"*/ './views/settings/organizationMembers/organizationMemberDetail')}
component={errorHandler(LazyLoad)}
/>
</Route>
@@ -507,7 +507,7 @@ function routes() {
path="rate-limits/"
name="Rate Limits"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationRateLimits*/ './views/settings/organizationRateLimits')}
+ import(/*webpackChunkName: "OrganizationRateLimits"*/ './views/settings/organizationRateLimits')}
component={errorHandler(LazyLoad)}
/>
@@ -515,21 +515,21 @@ function routes() {
path="repos/"
name="Repositories"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationRepositories*/ './views/settings/organizationRepositories')}
+ import(/*webpackChunkName: "OrganizationRepositories"*/ './views/settings/organizationRepositories')}
component={errorHandler(LazyLoad)}
/>
<Route
path="settings/"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationGeneralSettings*/ './views/settings/organizationGeneralSettings')}
+ import(/*webpackChunkName: "OrganizationGeneralSettings"*/ './views/settings/organizationGeneralSettings')}
component={errorHandler(LazyLoad)}
/>
<Route name="Teams" path="teams/">
<IndexRoute
componentPromise={() =>
- import(/*webpackChunkName: OrganizationTeams*/ './views/settings/organizationTeams')}
+ import(/*webpackChunkName: "OrganizationTeams"*/ './views/settings/organizationTeams')}
component={errorHandler(LazyLoad)}
/>
@@ -537,7 +537,7 @@ function routes() {
name="Team"
path=":teamId/"
componentPromise={() =>
- import(/*webpackChunkName: TeamDetails*/ './views/settings/organizationTeams/teamDetails')}
+ import(/*webpackChunkName: "TeamDetails"*/ './views/settings/organizationTeams/teamDetails')}
component={errorHandler(LazyLoad)}
>
<IndexRedirect to="members/" />
@@ -545,21 +545,21 @@ function routes() {
path="members/"
name="Members"
componentPromise={() =>
- import(/*webpackChunkName: TeamMembers*/ './views/settings/organizationTeams/teamMembers')}
+ import(/*webpackChunkName: "TeamMembers"*/ './views/settings/organizationTeams/teamMembers')}
component={errorHandler(LazyLoad)}
/>
<Route
path="projects/"
name="Projects"
componentPromise={() =>
- import(/*webpackChunkName: TeamProjects*/ './views/settings/organizationTeams/teamProjects')}
+ import(/*webpackChunkName: "TeamProjects"*/ './views/settings/organizationTeams/teamProjects')}
component={errorHandler(LazyLoad)}
/>
<Route
path="settings/"
name="settings"
componentPromise={() =>
- import(/*webpackChunkName: TeamSettings*/ './views/settings/organizationTeams/teamSettings')}
+ import(/*webpackChunkName: "TeamSettings"*/ './views/settings/organizationTeams/teamSettings')}
component={errorHandler(LazyLoad)}
/>
</Route>
@@ -568,14 +568,14 @@ function routes() {
<Route name="Integrations" path="integrations/">
<IndexRoute
componentPromise={() =>
- import(/*webpackChunkName: OrganizationIntegrations*/ './views/organizationIntegrations')}
+ import(/*webpackChunkName: "OrganizationIntegrations"*/ './views/organizationIntegrations')}
component={errorHandler(LazyLoad)}
/>
<Route
name="Configure Integration"
path=":providerKey/:integrationId/"
componentPromise={() =>
- import(/*webpackChunkName: ConfigureIntegration*/ './views/settings/organizationIntegrations/configureIntegration')}
+ import(/*webpackChunkName: "ConfigureIntegration"*/ './views/settings/organizationIntegrations/configureIntegration')}
component={errorHandler(LazyLoad)}
/>
</Route>
@@ -759,39 +759,39 @@ function routes() {
<Route
path="/organizations/:orgId/health/"
componentPromise={() =>
- import(/*webpackChunkName: OrganizationHealth*/ './views/organizationHealth')}
+ import(/*webpackChunkName: "OrganizationHealth"*/ './views/organizationHealth')}
component={errorHandler(LazyLoad)}
>
<IndexRoute
componentPromise={() =>
- import(/*webpackChunkName: HealthOverview*/ './views/organizationHealth/overview')}
+ import(/*webpackChunkName: "HealthOverview"*/ './views/organizationHealth/overview')}
component={errorHandler(LazyLoad)}
/>
<Route
path="errors"
componentPromise={() =>
- import(/*webpackChunkName: HealthErrors*/ './views/organizationHealth/errors')}
+ import(/*webpackChunkName: "HealthErrors"*/ './views/organizationHealth/errors')}
component={errorHandler(LazyLoad)}
/>
<Route
path="transactions"
componentPromise={() =>
- import(/*webpackChunkName: HealthTransactions*/ './views/organizationHealth/transactions')}
+ import(/*webpackChunkName: "HealthTransactions"*/ './views/organizationHealth/transactions')}
component={errorHandler(LazyLoad)}
/>
<Route
path="browsers"
componentPromise={() =>
- import(/*webpackChunkName: HealthBrowsers*/ './views/organizationHealth/browsers')}
+ import(/*webpackChunkName: "HealthBrowsers"*/ './views/organizationHealth/browsers')}
component={errorHandler(LazyLoad)}
/>
<Route
path="devices"
componentPromise={() =>
- import(/*webpackChunkName: HealthDevices*/ './views/organizationHealth/devices')}
+ import(/*webpackChunkName: "HealthDevices"*/ './views/organizationHealth/devices')}
component={errorHandler(LazyLoad)}
/>
</Route>
| I think these get evaluated as javascript so the chunk names need to be
a string. | https://api.github.com/repos/getsentry/sentry/pulls/10047 | 2018-10-08T23:58:30Z | 2018-10-09T18:24:00Z | 2018-10-09T18:24:00Z | 2020-12-21T08:54:56Z | 2,066 | getsentry/sentry | 44,620 |
Correct small typo in internal link | diff --git a/docs/advanced_foreword.rst b/docs/advanced_foreword.rst
index f7e70a7102..53df8175a2 100644
--- a/docs/advanced_foreword.rst
+++ b/docs/advanced_foreword.rst
@@ -64,6 +64,6 @@ compatible Python code
<http://lucumr.pocoo.org/2011/1/22/forwards-compatible-python/>`_.
If you do want to dive into Python 3 already have a look at the
-:ref:`python3_support` page.
+:ref:`python3-support` page.
Continue to :ref:`installation` or the :ref:`quickstart`.
| https://api.github.com/repos/pallets/flask/pulls/768 | 2013-06-14T13:35:56Z | 2013-06-16T10:40:57Z | 2013-06-16T10:40:57Z | 2020-11-14T07:18:51Z | 154 | pallets/flask | 20,910 | |
Make argparse dependency unconditional. | diff --git a/acme/setup.py b/acme/setup.py
index f169f59a70b..48210108a87 100644
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -8,6 +8,7 @@
# Please update tox.ini when modifying dependency version requirements
install_requires = [
+ 'argparse',
# load_pem_private/public_key (>=0.6)
# rsa_recover_prime_factors (>=0.8)
'cryptography>=0.8',
@@ -30,8 +31,6 @@
# Keep in sync with conditional_requirements.py.
if sys.version_info < (2, 7):
install_requires.extend([
- # only some distros recognize stdlib argparse as already satisfying
- 'argparse',
'mock<1.1.0',
])
else:
diff --git a/setup.py b/setup.py
index 0c47b973f60..8ce4c51c946 100644
--- a/setup.py
+++ b/setup.py
@@ -36,6 +36,7 @@ def read_file(filename, encoding='utf8'):
# https://github.com/pypa/pip/issues/988 for more info.
install_requires = [
'acme=={0}'.format(version),
+ 'argparse',
# We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but
# saying so here causes a runtime error against our temporary fork of 0.9.3
# in which we added 2.6 support (see #2243), so we relax the requirement.
@@ -58,8 +59,6 @@ def read_file(filename, encoding='utf8'):
# Keep in sync with conditional_requirements.py.
if sys.version_info < (2, 7):
install_requires.extend([
- # only some distros recognize stdlib argparse as already satisfying
- 'argparse',
'mock<1.1.0',
])
else:
| The primary motivation is to avoid a branch, giving bugs one fewer place to hide. But, as a bonus, more people get a more bugfixed version of argparse. (To use the example from the argparse docs, people stuck on Python 3.2.3 can get bugfixes that made it into the stdlib only in 3.2.4.)
| https://api.github.com/repos/certbot/certbot/pulls/2249 | 2016-01-20T22:03:59Z | 2017-03-09T01:10:13Z | 2017-03-09T01:10:13Z | 2017-04-19T23:29:20Z | 432 | certbot/certbot | 2,589 |
Score_columns method added | diff --git a/scalg.py b/scalg.py
index a5d073d5e8..d95ceb2611 100644
--- a/scalg.py
+++ b/scalg.py
@@ -1,13 +1,12 @@
-'''
+"""
developed by: markmelnic
original repo: https://github.com/markmelnic/Scoring-Algorithm
-
+ pypi: https://pypi.org/project/scalg/
Analyse data using a range based percentual proximity algorithm
and calculate the linear maximum likelihood estimation.
The basic principle is that all values supplied will be broken
down to a range from 0 to 1 and each column's score will be added
up to get the total score.
-
==========
Example for data of vehicles
price|mileage|registration_year
@@ -15,45 +14,53 @@
22k |50k |2011
23k |90k |2015
16k |210k |2010
-
We want the vehicle with the lowest price,
lowest mileage but newest registration year.
Thus the weights for each column are as follows:
[0, 0, 1]
-
->>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
+>>> score([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
[[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
-'''
-
-
-def procentual_proximity(source_data : list, weights : list) -> list:
-
- '''
- weights - int list
- possible values - 0 / 1
- 0 if lower values have higher weight in the data set
- 1 if higher values have higher weight in the data set
- '''
+>>> score([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1], 'scores')
+[2.0, 1.0, 1.3333333333333335]
+>>> score_columns([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 2], [0, 0, 1])
+[[20, 2012, 1.25], [23, 2015, 1.0], [22, 2011, 0.33333333333333337]]
+"""
+
+
+def score(source_data: list, weights: list, *args) -> list:
+ """Analyse and score a dataset using a range based percentual proximity
+ algorithm and calculate the linear maximum likelihood estimation.
+ Args:
+ source_data (list): Data set to process.
+ weights (list): Weights corresponding to each column from the data set.
+ 0 if lower values have higher weight in the data set,
+ 1 if higher values have higher weight in the data set
+ Optional args:
+ "score_lists" (str): Returns a list with lists of each column scores.
+ "scores" (str): Returns only the final scores.
+ Raises:
+ ValueError: Weights can only be either 0 or 1 (int)
+ Returns:
+ list: Source data with the score of the set appended at as the last element.
+ """
# getting data
data_lists = []
for item in source_data:
- for i in range(len(item)):
+ for i, val in enumerate(item):
try:
- data_lists[i].append(float(item[i]))
+ data_lists[i].append(float(val))
except IndexError:
- # generate corresponding number of lists
data_lists.append([])
- data_lists[i].append(float(item[i]))
+ data_lists[i].append(float(val))
+ # calculating price score
score_lists = []
- # calculating each score
for dlist, weight in zip(data_lists, weights):
mind = min(dlist)
maxd = max(dlist)
score = []
- # for weight 0 score is 1 - actual score
if weight == 0:
for item in dlist:
try:
@@ -68,12 +75,15 @@ def procentual_proximity(source_data : list, weights : list) -> list:
except ZeroDivisionError:
score.append(0)
- # weight not 0 or 1
else:
raise ValueError("Invalid weight of %f provided" % (weight))
score_lists.append(score)
+ # return score lists
+ if "score_lists" in args:
+ return score_lists
+
# initialize final scores
final_scores = [0 for i in range(len(score_lists[0]))]
@@ -82,8 +92,40 @@ def procentual_proximity(source_data : list, weights : list) -> list:
for j, ele in enumerate(slist):
final_scores[j] = final_scores[j] + ele
+ # return only scores
+ if "scores" in args:
+ return final_scores
+
# append scores to source data
for i, ele in enumerate(final_scores):
source_data[i].append(ele)
return source_data
+
+
+def score_columns(source_data: list, columns: list, weights: list) -> list:
+ """Analyse data file using a range based percentual proximity
+ algorithm and calculate the linear maximum likelihood estimation.
+ Args:
+ source_data (list): Data set to process.
+ columns (list): Indexes of the source_data columns to be scored.
+ weights (list): Weights corresponding to each column from the data set.
+ 0 if lower values have higher weight in the data set,
+ 1 if higher values have higher weight in the data set
+ Raises:
+ ValueError: Weights can only be either 0 or 1 (int)
+ Returns:
+ list: Source data with the score of the set appended at as the last element.
+ """
+
+ temp_data = []
+ for item in source_data:
+ temp_data.append([item[c] for c in columns])
+
+ if len(weights) > len(columns):
+ weights = [weights[item] for item in columns]
+
+ for i, sc in enumerate(score(temp_data, weights, "scores")):
+ source_data[i].append(sc)
+
+ return source_data
| https://api.github.com/repos/geekcomputers/Python/pulls/1123 | 2020-10-13T11:42:21Z | 2020-10-23T07:39:40Z | 2020-10-23T07:39:40Z | 2020-10-23T07:39:40Z | 1,493 | geekcomputers/Python | 31,243 | |
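The scoring algorithm in this diff reduces to per-column min-max normalisation plus a per-row sum; a condensed sketch reproducing the docstring's vehicle example (division-by-zero on constant columns is simply scored 0 here):

```python
def score(rows, weights):
    """Range-based percentual proximity: normalise each column to [0, 1],
    flip the score when weight is 0 (lower is better), sum per row."""
    cols = list(zip(*rows))
    totals = [0.0] * len(rows)
    for col, w in zip(cols, weights):
        lo, hi = min(col), max(col)
        span = hi - lo
        for i, v in enumerate(col):
            s = (v - lo) / span if span else 0.0
            totals[i] += s if w == 1 else 1 - s
    return totals


print(score([[20, 60, 2012], [23, 90, 2015], [22, 50, 2011]], [0, 0, 1]))
# [2.0, 1.0, 1.3333333333333335]
```

These totals match the trailing scores the module appends to each source row.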
[stable-2.14] Update update-sanity-requirements.py script (#81424) | diff --git a/hacking/update-sanity-requirements.py b/hacking/update-sanity-requirements.py
index 5861590beaf786..997d6dbf87adfd 100755
--- a/hacking/update-sanity-requirements.py
+++ b/hacking/update-sanity-requirements.py
@@ -15,6 +15,7 @@
import packaging.version
import packaging.specifiers
+import packaging.requirements
try:
import argcomplete
@@ -34,6 +35,11 @@ class SanityTest:
source_path: pathlib.Path
def freeze_requirements(self) -> None:
+ source_requirements = [packaging.requirements.Requirement(re.sub(' #.*$', '', line)) for line in self.source_path.read_text().splitlines()]
+
+ install_packages = {requirement.name for requirement in source_requirements}
+ exclude_packages = {'distribute', 'pip', 'setuptools', 'wheel'} - install_packages
+
with tempfile.TemporaryDirectory() as venv_dir:
venv.create(venv_dir, with_pip=True)
@@ -49,13 +55,6 @@ def freeze_requirements(self) -> None:
subprocess.run(pip + ['install', 'wheel'], env=env, check=True) # make bdist_wheel available during pip install
subprocess.run(pip + ['install', '-r', self.source_path], env=env, check=True)
- keep_setuptools = any(line.startswith('setuptools ') for line in self.source_path.read_text().splitlines())
-
- exclude_packages = ['pip', 'distribute', 'wheel']
-
- if not keep_setuptools:
- exclude_packages.append('setuptools')
-
freeze_options = ['--all']
for exclude_package in exclude_packages:
| ##### SUMMARY
Backport of https://github.com/ansible/ansible/pull/81424
Frozen requirements can now preserve any explicitly installed package that would normally be omitted, not just setuptools.
(cherry picked from commit dbb3feddaf663aa5901b0254a7b259e99c0eae72)
##### ISSUE TYPE
Test Pull Request
| https://api.github.com/repos/ansible/ansible/pulls/81432 | 2023-08-03T20:21:26Z | 2023-08-03T20:31:14Z | 2023-08-03T20:31:14Z | 2023-08-31T13:00:32Z | 378 | ansible/ansible | 49,308 |
Backport PR #45641 on branch 1.4.x (Switch deps to Conda) | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 5232b76a6388d..3ffcf29f47688 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -43,7 +43,7 @@ repos:
- flake8==4.0.1
- flake8-comprehensions==3.7.0
- flake8-bugbear==21.3.2
- - pandas-dev-flaker==0.2.0
+ - pandas-dev-flaker==0.4.0
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:
diff --git a/environment.yml b/environment.yml
index e5b1217be6d54..d49192a7fb165 100644
--- a/environment.yml
+++ b/environment.yml
@@ -32,10 +32,12 @@ dependencies:
# documentation
- gitpython # obtain contributors from git for whatsnew
- gitdb
- - sphinx
- - sphinx-panels
- numpydoc < 1.2 # 2021-02-09 1.2dev breaking CI
+ - pandas-dev-flaker=0.4.0
- pydata-sphinx-theme
+ - pytest-cython
+ - sphinx
+ - sphinx-panels
- types-python-dateutil
- types-PyMySQL
- types-pytz
@@ -78,7 +80,6 @@ dependencies:
- ipywidgets
- nbformat
- notebook>=6.0.3
- - pip
# optional
- blosc
@@ -120,6 +121,3 @@ dependencies:
- pyreadstat # pandas.read_spss
- tabulate>=0.8.3 # DataFrame.to_markdown
- natsort # DataFrame.sort_values
- - pip:
- - pandas-dev-flaker==0.2.0
- - pytest-cython
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 1f1ca434a22c0..507516e9c5868 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -85,7 +85,9 @@ def __init__(
**kwargs,
):
if css_styles and css_converter:
- css = ";".join(a + ":" + str(v) for (a, v) in css_styles[css_row, css_col])
+ css = ";".join(
+ [a + ":" + str(v) for (a, v) in css_styles[css_row, css_col]]
+ )
style = css_converter(css)
return super().__init__(row=row, col=col, val=val, style=style, **kwargs)
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 9d4998784222f..910ba63b5bb3a 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -724,7 +724,7 @@ def _combine_lines(self, lines) -> str:
Combines a list of JSON objects into one JSON object.
"""
return (
- f'[{",".join((line for line in (line.strip() for line in lines) if line))}]'
+ f'[{",".join([line for line in (line.strip() for line in lines) if line])}]'
)
def read(self):
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 3ce5cb31a127a..2b68eeba09621 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2030,10 +2030,10 @@ def __repr__(self) -> str:
map(pprint_thing, (self.name, self.cname, self.axis, self.pos, self.kind))
)
return ",".join(
- (
+ [
f"{key}->{value}"
for key, value in zip(["name", "cname", "axis", "pos", "kind"], temp)
- )
+ ]
)
def __eq__(self, other: Any) -> bool:
@@ -2331,10 +2331,10 @@ def __repr__(self) -> str:
)
)
return ",".join(
- (
+ [
f"{key}->{value}"
for key, value in zip(["name", "cname", "dtype", "kind", "shape"], temp)
- )
+ ]
)
def __eq__(self, other: Any) -> bool:
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 314a897189738..581c64c1b3f0c 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -18,10 +18,12 @@ pycodestyle
pyupgrade
gitpython
gitdb
-sphinx
-sphinx-panels
numpydoc < 1.2
+pandas-dev-flaker==0.4.0
pydata-sphinx-theme
+pytest-cython
+sphinx
+sphinx-panels
types-python-dateutil
types-PyMySQL
types-pytz
@@ -52,7 +54,6 @@ statsmodels
ipywidgets
nbformat
notebook>=6.0.3
-pip
blosc
bottleneck>=1.3.1
ipykernel
@@ -84,6 +85,4 @@ cftime
pyreadstat
tabulate>=0.8.3
natsort
-pandas-dev-flaker==0.2.0
-pytest-cython
setuptools>=51.0.0
diff --git a/scripts/tests/test_sync_flake8_versions.py b/scripts/tests/test_sync_flake8_versions.py
index 21c3b743830ee..cd8d18f7e41cc 100644
--- a/scripts/tests/test_sync_flake8_versions.py
+++ b/scripts/tests/test_sync_flake8_versions.py
@@ -87,7 +87,7 @@ def test_get_revisions_no_failure(capsys):
{
"id": "flake8",
"additional_dependencies": [
- "pandas-dev-flaker==0.2.0",
+ "pandas-dev-flaker==0.4.0",
"flake8-bugs==1.1.1",
],
}
@@ -101,7 +101,7 @@ def test_get_revisions_no_failure(capsys):
"id": "yesqa",
"additional_dependencies": [
"flake8==0.1.1",
- "pandas-dev-flaker==0.2.0",
+ "pandas-dev-flaker==0.4.0",
"flake8-bugs==1.1.1",
],
}
@@ -116,7 +116,7 @@ def test_get_revisions_no_failure(capsys):
{
"pip": [
"git+https://github.com/pydata/pydata-sphinx-theme.git@master",
- "pandas-dev-flaker==0.2.0",
+ "pandas-dev-flaker==0.4.0",
]
},
]
diff --git a/setup.cfg b/setup.cfg
index 9deebb835eff7..27a5c51d2f2aa 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -103,7 +103,11 @@ ignore =
# tests use comparisons but not their returned value
B015,
# false positives
- B301
+ B301,
+ # single-letter variables
+ PDF023
+ # "use 'pandas._testing' instead" in non-test code
+ PDF025
exclude =
doc/sphinxext/*.py,
doc/build/*.py,
| Backport PR #45641: Switch deps to Conda | https://api.github.com/repos/pandas-dev/pandas/pulls/45851 | 2022-02-06T22:38:40Z | 2022-02-07T13:07:17Z | 2022-02-07T13:07:17Z | 2022-02-07T13:07:17Z | 1,796 | pandas-dev/pandas | 45,136 |
refactor and restructuring of sampler code to give custom nodes more access | diff --git a/comfy/sample.py b/comfy/sample.py
new file mode 100644
index 0000000000..09ab20cd20
--- /dev/null
+++ b/comfy/sample.py
@@ -0,0 +1,57 @@
+import torch
+import comfy.model_management
+
+
+def prepare_noise(latent_image, seed, skip=0):
+ """
+ creates random noise given a latent image and a seed.
+ optional arg skip can be used to skip and discard x number of noise generations for a given seed
+ """
+ generator = torch.manual_seed(seed)
+ for _ in range(skip):
+ noise = torch.randn([1] + list(latent_image.size())[1:], dtype=latent_image.dtype, layout=latent_image.layout, generator=generator, device="cpu")
+ noise = torch.randn(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, generator=generator, device="cpu")
+ return noise
+
+def prepare_mask(noise_mask, noise):
+ """ensures noise mask is of proper dimensions"""
+ device = comfy.model_management.get_torch_device()
+ noise_mask = torch.nn.functional.interpolate(noise_mask[None,None,], size=(noise.shape[2], noise.shape[3]), mode="bilinear")
+ noise_mask = noise_mask.round()
+ noise_mask = torch.cat([noise_mask] * noise.shape[1], dim=1)
+ noise_mask = torch.cat([noise_mask] * noise.shape[0])
+ noise_mask = noise_mask.to(device)
+ return noise_mask
+
+def broadcast_cond(cond, noise):
+ """broadcasts conditioning to the noise batch size"""
+ device = comfy.model_management.get_torch_device()
+ copy = []
+ for p in cond:
+ t = p[0]
+ if t.shape[0] < noise.shape[0]:
+ t = torch.cat([t] * noise.shape[0])
+ t = t.to(device)
+ copy += [[t] + p[1:]]
+ return copy
+
+def get_models_from_cond(cond, model_type):
+ models = []
+ for c in cond:
+ if model_type in c[1]:
+ models += [c[1][model_type]]
+ return models
+
+def load_additional_models(positive, negative):
+ """loads additional models in positive and negative conditioning"""
+ control_nets = get_models_from_cond(positive, "control") + get_models_from_cond(negative, "control")
+ gligen = get_models_from_cond(positive, "gligen") + get_models_from_cond(negative, "gligen")
+ gligen = [x[1] for x in gligen]
+ models = control_nets + gligen
+ comfy.model_management.load_controlnet_gpu(models)
+ return models
+
+def cleanup_additional_models(models):
+ """cleanup additional models that were loaded"""
+ for m in models:
+ m.cleanup()
\ No newline at end of file
diff --git a/comfy/samplers.py b/comfy/samplers.py
index b860f25f16..46bdb82a05 100644
--- a/comfy/samplers.py
+++ b/comfy/samplers.py
@@ -400,6 +400,38 @@ def encode_adm(noise_augmentor, conds, batch_size, device):
return conds
+def calculate_sigmas(model, steps, scheduler, sampler):
+ """
+ Returns a tensor containing the sigmas corresponding to the given model, number of steps, scheduler type and sample technique
+ """
+ if not (isinstance(model, CompVisVDenoiser) or isinstance(model, k_diffusion_external.CompVisDenoiser)):
+ model = CFGNoisePredictor(model)
+ if model.inner_model.parameterization == "v":
+ model = CompVisVDenoiser(model, quantize=True)
+ else:
+ model = k_diffusion_external.CompVisDenoiser(model, quantize=True)
+
+ sigmas = None
+
+ discard_penultimate_sigma = False
+ if sampler in ['dpm_2', 'dpm_2_ancestral']:
+ steps += 1
+ discard_penultimate_sigma = True
+
+ if scheduler == "karras":
+ sigmas = k_diffusion_sampling.get_sigmas_karras(n=steps, sigma_min=float(model.sigma_min), sigma_max=float(model.sigma_max))
+ elif scheduler == "normal":
+ sigmas = model.get_sigmas(steps)
+ elif scheduler == "simple":
+ sigmas = simple_scheduler(model, steps)
+ elif scheduler == "ddim_uniform":
+ sigmas = ddim_scheduler(model, steps)
+ else:
+ print("error invalid scheduler", scheduler)
+
+ if discard_penultimate_sigma:
+ sigmas = torch.cat([sigmas[:-2], sigmas[-1:]])
+ return sigmas
class KSampler:
SCHEDULERS = ["karras", "normal", "simple", "ddim_uniform"]
@@ -429,41 +461,19 @@ def __init__(self, model, steps, device, sampler=None, scheduler=None, denoise=N
self.denoise = denoise
self.model_options = model_options
- def _calculate_sigmas(self, steps):
- sigmas = None
-
- discard_penultimate_sigma = False
- if self.sampler in ['dpm_2', 'dpm_2_ancestral']:
- steps += 1
- discard_penultimate_sigma = True
-
- if self.scheduler == "karras":
- sigmas = k_diffusion_sampling.get_sigmas_karras(n=steps, sigma_min=self.sigma_min, sigma_max=self.sigma_max, device=self.device)
- elif self.scheduler == "normal":
- sigmas = self.model_wrap.get_sigmas(steps).to(self.device)
- elif self.scheduler == "simple":
- sigmas = simple_scheduler(self.model_wrap, steps).to(self.device)
- elif self.scheduler == "ddim_uniform":
- sigmas = ddim_scheduler(self.model_wrap, steps).to(self.device)
- else:
- print("error invalid scheduler", self.scheduler)
-
- if discard_penultimate_sigma:
- sigmas = torch.cat([sigmas[:-2], sigmas[-1:]])
- return sigmas
-
def set_steps(self, steps, denoise=None):
self.steps = steps
if denoise is None or denoise > 0.9999:
- self.sigmas = self._calculate_sigmas(steps)
+ self.sigmas = calculate_sigmas(self.model_wrap, steps, self.scheduler, self.sampler).to(self.device)
else:
new_steps = int(steps/denoise)
- sigmas = self._calculate_sigmas(new_steps)
+ sigmas = calculate_sigmas(self.model_wrap, new_steps, self.scheduler, self.sampler).to(self.device)
self.sigmas = sigmas[-(steps + 1):]
- def sample(self, noise, positive, negative, cfg, latent_image=None, start_step=None, last_step=None, force_full_denoise=False, denoise_mask=None):
- sigmas = self.sigmas
+ def sample(self, noise, positive, negative, cfg, latent_image=None, start_step=None, last_step=None, force_full_denoise=False, denoise_mask=None, sigmas=None):
+ if sigmas is None:
+ sigmas = self.sigmas
sigma_min = self.sigma_min
if last_step is not None and last_step < (len(sigmas) - 1):
diff --git a/nodes.py b/nodes.py
index 6ca73fa0c8..f9bedc97eb 100644
--- a/nodes.py
+++ b/nodes.py
@@ -16,6 +16,7 @@
import comfy.diffusers_convert
import comfy.samplers
+import comfy.sample
import comfy.sd
import comfy.utils
@@ -739,31 +740,19 @@ def set_mask(self, samples, mask):
s["noise_mask"] = mask
return (s,)
-
def common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent, denoise=1.0, disable_noise=False, start_step=None, last_step=None, force_full_denoise=False):
- latent_image = latent["samples"]
- noise_mask = None
device = comfy.model_management.get_torch_device()
+ latent_image = latent["samples"]
if disable_noise:
noise = torch.zeros(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, device="cpu")
else:
- batch_index = 0
- if "batch_index" in latent:
- batch_index = latent["batch_index"]
-
- generator = torch.manual_seed(seed)
- for i in range(batch_index):
- noise = torch.randn([1] + list(latent_image.size())[1:], dtype=latent_image.dtype, layout=latent_image.layout, generator=generator, device="cpu")
- noise = torch.randn(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, generator=generator, device="cpu")
+ skip = latent["batch_index"] if "batch_index" in latent else 0
+ noise = comfy.sample.prepare_noise(latent_image, seed, skip)
+ noise_mask = None
if "noise_mask" in latent:
- noise_mask = latent['noise_mask']
- noise_mask = torch.nn.functional.interpolate(noise_mask[None,None,], size=(noise.shape[2], noise.shape[3]), mode="bilinear")
- noise_mask = noise_mask.round()
- noise_mask = torch.cat([noise_mask] * noise.shape[1], dim=1)
- noise_mask = torch.cat([noise_mask] * noise.shape[0])
- noise_mask = noise_mask.to(device)
+ noise_mask = comfy.sample.prepare_mask(latent["noise_mask"], noise)
real_model = None
comfy.model_management.load_model_gpu(model)
@@ -772,34 +761,10 @@ def common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive,
noise = noise.to(device)
latent_image = latent_image.to(device)
- positive_copy = []
- negative_copy = []
-
- control_nets = []
- def get_models(cond):
- models = []
- for c in cond:
- if 'control' in c[1]:
- models += [c[1]['control']]
- if 'gligen' in c[1]:
- models += [c[1]['gligen'][1]]
- return models
-
- for p in positive:
- t = p[0]
- if t.shape[0] < noise.shape[0]:
- t = torch.cat([t] * noise.shape[0])
- t = t.to(device)
- positive_copy += [[t] + p[1:]]
- for n in negative:
- t = n[0]
- if t.shape[0] < noise.shape[0]:
- t = torch.cat([t] * noise.shape[0])
- t = t.to(device)
- negative_copy += [[t] + n[1:]]
-
- models = get_models(positive) + get_models(negative)
- comfy.model_management.load_controlnet_gpu(models)
+ positive_copy = comfy.sample.broadcast_cond(positive, noise)
+ negative_copy = comfy.sample.broadcast_cond(negative, noise)
+
+ models = comfy.sample.load_additional_models(positive, negative)
if sampler_name in comfy.samplers.KSampler.SAMPLERS:
sampler = comfy.samplers.KSampler(real_model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=denoise, model_options=model.model_options)
@@ -809,8 +774,8 @@ def get_models(cond):
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask)
samples = samples.cpu()
- for m in models:
- m.cleanup()
+
+ comfy.sample.cleanup_additional_models(models)
out = latent.copy()
out["samples"] = samples
| This PR makes the following changes:
- refactors the code in [common_ksampler](https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py#L743) to a set of functions that are easier to reuse.
- makes it possible for other code to calculate sigmas without creating a Ksampler to call [_calculate_sigmas](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/samplers.py#L432)
- allows calls to [Ksampler.sample](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/samplers.py#L465) to supply a set of sigmas to be used instead of the pre-calculated ones.
I was not entirely sure where to put the functions that resulted from the refactoring of the common_ksampler function; these currently reside in comfy/sample.py
These changes make it possible to for instance create the custom nodes in [this](https://github.com/BlenderNeko/ComfyUI_Noise) repo | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/559 | 2023-04-23T20:27:09Z | 2023-04-25T03:54:08Z | 2023-04-25T03:54:07Z | 2023-04-25T03:55:34Z | 2,733 | comfyanonymous/ComfyUI | 18,025 |
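The `skip` behaviour of the new `prepare_noise` helper (discard a fixed number of noise generations for a seed so each batch index gets reproducible noise) can be illustrated with a loose, pure-Python sketch — this uses the stdlib `random` module rather than torch, so it only mirrors the idea, not the actual tensor shapes or RNG of the diff:

```python
import random

def prepare_noise_sketch(shape, seed, skip=0):
    """Deterministic noise for a given seed; `skip` discards earlier
    draws so batch item N always receives the same noise values."""
    rng = random.Random(seed)
    size = 1
    for dim in shape:
        size *= dim
    for _ in range(skip):
        # discard one noise tensor's worth of draws per skipped index
        [rng.gauss(0.0, 1.0) for _ in range(size)]
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

# The same (seed, skip) pair always yields the same noise...
a = prepare_noise_sketch((2, 3), seed=42, skip=1)
b = prepare_noise_sketch((2, 3), seed=42, skip=1)
# ...while a different skip yields a different draw from the same seed.
c = prepare_noise_sketch((2, 3), seed=42, skip=2)
```

The real implementation seeds torch's generator and discards single-item tensors, but the reproducibility property it gives custom nodes is the same.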
Enable ansible-galaxy to specify client id override with Keycloak Token | diff --git a/changelogs/fragments/75593-ansible-galaxy-keycloak-clientid.yml b/changelogs/fragments/75593-ansible-galaxy-keycloak-clientid.yml
new file mode 100644
index 00000000000000..bf8ae157283bd6
--- /dev/null
+++ b/changelogs/fragments/75593-ansible-galaxy-keycloak-clientid.yml
@@ -0,0 +1,2 @@
+minor_changes:
+ - ansible-galaxy - Allow specification of client_id override value for Keycloak Token (https://github.com/ansible/ansible/issues/75593).
diff --git a/docs/docsite/rst/shared_snippets/galaxy_server_list.txt b/docs/docsite/rst/shared_snippets/galaxy_server_list.txt
index 47151079e65a15..abc624514b4669 100644
--- a/docs/docsite/rst/shared_snippets/galaxy_server_list.txt
+++ b/docs/docsite/rst/shared_snippets/galaxy_server_list.txt
@@ -28,7 +28,7 @@ The following example shows how to configure multiple servers:
.. code-block:: ini
[galaxy]
- server_list = automation_hub, my_org_hub, release_galaxy, test_galaxy
+ server_list = automation_hub, my_org_hub, release_galaxy, test_galaxy, my_galaxy_ng
[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
@@ -48,6 +48,12 @@ The following example shows how to configure multiple servers:
url=https://galaxy-dev.ansible.com/
token=my_test_token
+ [galaxy_server.my_galaxy_ng]
+ url=http://my_galaxy_ng:8000/api/automation-hub/
+ auth_url=http://my_keycloak:8080/auth/realms/myco/protocol/openid-connect/token
+ client_id=galaxy-ng
+ token=my_keycloak_access_token
+
.. note::
You can use the ``--server`` command line argument to select an explicit Galaxy server in the ``server_list`` and
the value of this argument should match the name of the server. To use a server not in the server list, set the value to the URL to access that server (all servers in the server list will be ignored). Also you cannot use the ``--api-key`` argument for any of the predefined servers. You can only use the ``api_key`` argument if you did not define a server list or if you specify a URL in the
@@ -67,6 +73,7 @@ define the following keys:
* ``password``: The password to use, in conjunction with ``username``, for basic authentication.
* ``auth_url``: The URL of a Keycloak server 'token_endpoint' if using SSO authentication (for example, Automation Hub). Mutually exclusive with ``username``. Requires ``token``.
* ``validate_certs``: Whether or not to verify TLS certificates for the Galaxy server. This defaults to True unless the ``--ignore-certs`` option is provided or ``GALAXY_IGNORE_CERTS`` is configured to True.
+* ``client_id``: The Keycloak token's client_id to use for authentication. Requires ``auth_url`` and ``token``. The default ``client_id`` is cloud-services to work with Red Hat SSO.
As well as defining these server options in the ``ansible.cfg`` file, you can also define them as environment variables.
The environment variable is in the form ``ANSIBLE_GALAXY_SERVER_{{ id }}_{{ key }}`` where ``{{ id }}`` is the upper
diff --git a/lib/ansible/cli/galaxy.py b/lib/ansible/cli/galaxy.py
index e57db6e7966bf7..cc9a813ef23ba0 100644
--- a/lib/ansible/cli/galaxy.py
+++ b/lib/ansible/cli/galaxy.py
@@ -62,7 +62,8 @@
('token', False),
('auth_url', False),
('v3', False),
- ('validate_certs', False)
+ ('validate_certs', False),
+ ('client_id', False),
]
@@ -498,6 +499,7 @@ def server_config_def(section, key, required):
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
+ client_id = server_options.pop('client_id', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
available_api_versions = None
@@ -524,7 +526,8 @@ def server_config_def(section, key, required):
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
- validate_certs=validate_certs)
+ validate_certs=validate_certs,
+ client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
diff --git a/lib/ansible/galaxy/token.py b/lib/ansible/galaxy/token.py
index 11a12ce45ad9c7..4455fd01c1f24b 100644
--- a/lib/ansible/galaxy/token.py
+++ b/lib/ansible/galaxy/token.py
@@ -50,14 +50,18 @@ class KeycloakToken(object):
token_type = 'Bearer'
- def __init__(self, access_token=None, auth_url=None, validate_certs=True):
+ def __init__(self, access_token=None, auth_url=None, validate_certs=True, client_id=None):
self.access_token = access_token
self.auth_url = auth_url
self._token = None
self.validate_certs = validate_certs
+ self.client_id = client_id
+ if self.client_id is None:
+ self.client_id = 'cloud-services'
def _form_payload(self):
- return 'grant_type=refresh_token&client_id=cloud-services&refresh_token=%s' % self.access_token
+ return 'grant_type=refresh_token&client_id=%s&refresh_token=%s' % (self.client_id,
+ self.access_token)
def get(self):
if self._token:
diff --git a/test/units/galaxy/test_token.py b/test/units/galaxy/test_token.py
index 94449e28e9758d..13426688e35292 100644
--- a/test/units/galaxy/test_token.py
+++ b/test/units/galaxy/test_token.py
@@ -8,8 +8,10 @@
import os
import pytest
+from units.compat.mock import MagicMock
import ansible.constants as C
+from ansible.cli.galaxy import GalaxyCLI, SERVER_DEF
from ansible.galaxy.token import GalaxyToken, NoTokenSentinel
from ansible.module_utils._text import to_bytes, to_text
@@ -32,6 +34,47 @@ def b_token_file(request, tmp_path_factory):
C.GALAXY_TOKEN_PATH = orig_token_path
+def test_client_id(monkeypatch):
+ monkeypatch.setattr(C, 'GALAXY_SERVER_LIST', ['server1', 'server2'])
+
+ test_server_config = {option[0]: None for option in SERVER_DEF}
+ test_server_config.update(
+ {
+ 'url': 'http://my_galaxy_ng:8000/api/automation-hub/',
+ 'auth_url': 'http://my_keycloak:8080/auth/realms/myco/protocol/openid-connect/token',
+ 'client_id': 'galaxy-ng',
+ 'token': 'access_token',
+ }
+ )
+
+ test_server_default = {option[0]: None for option in SERVER_DEF}
+ test_server_default.update(
+ {
+ 'url': 'https://cloud.redhat.com/api/automation-hub/',
+ 'auth_url': 'https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token',
+ 'token': 'access_token',
+ }
+ )
+
+ get_plugin_options = MagicMock(side_effect=[test_server_config, test_server_default])
+ monkeypatch.setattr(C.config, 'get_plugin_options', get_plugin_options)
+
+ cli_args = [
+ 'ansible-galaxy',
+ 'collection',
+ 'install',
+ 'namespace.collection:1.0.0',
+ ]
+
+ galaxy_cli = GalaxyCLI(args=cli_args)
+ mock_execute_install = MagicMock()
+ monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
+ galaxy_cli.run()
+
+ assert galaxy_cli.api_servers[0].token.client_id == 'galaxy-ng'
+ assert galaxy_cli.api_servers[1].token.client_id == 'cloud-services'
+
+
def test_token_explicit(b_token_file):
assert GalaxyToken(token="explicit").get() == "explicit"
| * Specify ability to provide override of client_id
##### SUMMARY
GalaxyNG just added the capability to use Keycloak for user integration/authentication; instead of using DRF tokens, it would be great to be able to use Keycloak tokens with the ansible-galaxy CLI. ansible-galaxy CLI can already use Keycloak tokens today in order to work with Public Automation Hub on console.redhat.com. However, it won't work with GalaxyNG + Keycloak because the client ID in the token-refresh code is hardcoded to cloud-services to work with the Red Hat SSO.
It would be great to be able to supply an override configuration variable for the client ID.
Fixes #75593
##### ISSUE TYPE
- Feature Pull Request
##### COMPONENT NAME
lib/ansible/galaxy
##### ADDITIONAL INFORMATION
As described in the galaxy [download a collection section](https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#downloading-a-collection-from-automation-hub)
Instead of supplying:
```
[galaxy]
server_list = automation_hub
[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token
```
You could supply the following:
```
[galaxy]
server_list = my_galaxy_ng
[galaxy_server.my_galaxy_ng]
url=http://my_galaxy_ng:8000/api/automation-hub/
auth_url=http://my_keycloak:8080/auth/realms/myco/protocol/openid-connect/token
client_id=galaxy-ng
token=my_keycloak_access_token
```
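The payload logic the diff changes can be sketched in isolation — a minimal stand-in class (not the real `KeycloakToken`, which also performs the HTTP refresh) showing how `client_id` now defaults to `cloud-services` unless overridden:

```python
class KeycloakTokenSketch:
    """Mirrors the _form_payload logic from the diff: client_id
    defaults to 'cloud-services' (Red Hat SSO) unless overridden."""

    def __init__(self, access_token, client_id=None):
        self.access_token = access_token
        self.client_id = client_id
        if self.client_id is None:
            self.client_id = 'cloud-services'

    def _form_payload(self):
        return 'grant_type=refresh_token&client_id=%s&refresh_token=%s' % (
            self.client_id, self.access_token)

# Default keeps the old Red Hat SSO behaviour...
default_payload = KeycloakTokenSketch('tok')._form_payload()
# ...while a GalaxyNG + Keycloak server can override it.
galaxy_ng_payload = KeycloakTokenSketch('tok', client_id='galaxy-ng')._form_payload()
```

So existing `auth_url` configurations are unaffected, and only servers that set `client_id` change behaviour.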
| https://api.github.com/repos/ansible/ansible/pulls/75601 | 2021-08-30T14:30:48Z | 2021-09-17T19:10:11Z | 2021-09-17T19:10:11Z | 2021-10-15T13:00:07Z | 1,981 | ansible/ansible | 49,158 |
Add the code_search_net datasets tag to CodeBERTa model cards | diff --git a/model_cards/huggingface/CodeBERTa-language-id/README.md b/model_cards/huggingface/CodeBERTa-language-id/README.md
index 63c0eeb162dc3..c9234f2cf861c 100644
--- a/model_cards/huggingface/CodeBERTa-language-id/README.md
+++ b/model_cards/huggingface/CodeBERTa-language-id/README.md
@@ -1,6 +1,8 @@
---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
+datasets:
+- code_search_net
---
# CodeBERTa-language-id: The World’s fanciest programming language identification algo 🤯
diff --git a/model_cards/huggingface/CodeBERTa-small-v1/README.md b/model_cards/huggingface/CodeBERTa-small-v1/README.md
index b31bbe587983a..70943e7a5f462 100644
--- a/model_cards/huggingface/CodeBERTa-small-v1/README.md
+++ b/model_cards/huggingface/CodeBERTa-small-v1/README.md
@@ -1,6 +1,8 @@
---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
+datasets:
+- code_search_net
---
# CodeBERTa
| # What does this PR do?
TL;DR
Related to this PR on `huggingface/datasets`: https://github.com/huggingface/datasets/pull/1288
## Who can review?
@julien-c
| https://api.github.com/repos/huggingface/transformers/pulls/9005 | 2020-12-09T12:12:41Z | 2020-12-09T14:43:19Z | 2020-12-09T14:43:19Z | 2020-12-09T14:43:20Z | 299 | huggingface/transformers | 12,046 |
acme/setup.py: comment refers to "PyOpenSSL" not "mock" | diff --git a/acme/setup.py b/acme/setup.py
index 6d3dcdf84e3..fcca333f943 100644
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -14,8 +14,8 @@
# 1.1.0+ is required to avoid the warnings described at
# https://github.com/certbot/josepy/issues/13.
'josepy>=1.1.0',
- # Connection.set_tlsext_host_name (>=0.13)
'mock',
+ # Connection.set_tlsext_host_name (>=0.13)
'PyOpenSSL>=0.13.1',
'pyrfc3339',
'pytz',
Just what the title says and entirely trivial, but the ordering confused me for a bit.
Made Field.error_messages a cached property. | diff --git a/django/db/models/fields/__init__.py b/django/db/models/fields/__init__.py
index 0afc2a0c92a08..4d17c25f0fcca 100644
--- a/django/db/models/fields/__init__.py
+++ b/django/db/models/fields/__init__.py
@@ -217,12 +217,7 @@ def __init__(
self._validators = list(validators) # Store for deconstruction later
- messages = {}
- for c in reversed(self.__class__.__mro__):
- messages.update(getattr(c, "default_error_messages", {}))
- messages.update(error_messages or {})
self._error_messages = error_messages # Store for deconstruction later
- self.error_messages = messages
def __str__(self):
"""
@@ -669,6 +664,14 @@ def to_python(self, value):
"""
return value
+ @cached_property
+ def error_messages(self):
+ messages = {}
+ for c in reversed(self.__class__.__mro__):
+ messages.update(getattr(c, "default_error_messages", {}))
+ messages.update(self._error_messages or {})
+ return messages
+
@cached_property
def validators(self):
"""
| This speeds up field creation and reduces memory usage.
Refs https://github.com/django/django/pull/15410
Tested speed using this code:
```
import time
from django.db import models
times = []
for x in range(1000):
start = time.time()
for x in range(1000):
models.CharField(max_length=100)
elapsed = time.time() - start
times.append(elapsed)
times.sort()
print('%.04f min: %.04f, median: %.04f' % (elapsed, times[0], times[len(times) // 2]))
```
baseline:
min: 0.0077, median: 0.0082
with patch:
min: 0.0057, median: 0.0063
So about 23% faster creation. Some fields don't use model validation, like `annotate(output_field=DecimalField())`. | https://api.github.com/repos/django/django/pulls/15415 | 2022-02-09T15:13:09Z | 2022-02-16T19:30:04Z | 2022-02-16T19:30:04Z | 2022-02-16T20:36:27Z | 283 | django/django | 51,272 |
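The pattern behind the speedup — deferring the MRO walk that merges `default_error_messages` until first access via `cached_property` — can be shown with a small self-contained sketch (illustrative class names, not Django's actual `Field`):

```python
from functools import cached_property

class Base:
    default_error_messages = {'invalid': 'Enter a valid value.'}

class Child(Base):
    default_error_messages = {'blank': 'This field cannot be blank.'}

    def __init__(self, error_messages=None):
        self._error_messages = error_messages  # stored for later merge

    @cached_property
    def error_messages(self):
        # Walk the MRO from most-generic to most-specific, then layer
        # per-instance overrides on top -- done only on first access.
        messages = {}
        for klass in reversed(type(self).__mro__):
            messages.update(getattr(klass, 'default_error_messages', {}))
        messages.update(self._error_messages or {})
        return messages

field = Child(error_messages={'blank': 'Required!'})
```

Instances that never touch `error_messages` (e.g. throwaway `output_field` instances) skip the merge entirely, and the merged dict is computed at most once per instance.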
Fix --debug | diff --git a/certbot/main.py b/certbot/main.py
index 471dcd83880..931697cc36c 100644
--- a/certbot/main.py
+++ b/certbot/main.py
@@ -676,10 +676,8 @@ def _handle_exception(exc_type, exc_value, trace, config):
to the user. sys.exit is always called with a nonzero status.
"""
- logger.debug(
- "Exiting abnormally:%s%s",
- os.linesep,
- "".join(traceback.format_exception(exc_type, exc_value, trace)))
+ tb_str = "".join(traceback.format_exception(exc_type, exc_value, trace))
+ logger.debug("Exiting abnormally:%s%s", os.linesep, tb_str)
if issubclass(exc_type, Exception) and (config is None or not config.debug):
if config is None:
@@ -689,8 +687,9 @@ def _handle_exception(exc_type, exc_value, trace, config):
traceback.print_exception(
exc_type, exc_value, trace, file=logfd)
except: # pylint: disable=bare-except
- sys.exit("".join(
- traceback.format_exception(exc_type, exc_value, trace)))
+ sys.exit(tb_str)
+ if "--debug" in sys.argv:
+ sys.exit(tb_str)
if issubclass(exc_type, errors.Error):
sys.exit(exc_value)
@@ -715,8 +714,7 @@ def _handle_exception(exc_type, exc_value, trace, config):
msg += "logfiles in {0} for more details.".format(config.logs_dir)
sys.exit(msg)
else:
- sys.exit("".join(
- traceback.format_exception(exc_type, exc_value, trace)))
+ sys.exit(tb_str)
def make_or_verify_core_dir(directory, mode, uid, strict):
| Currently `--debug` doesn't work to print tracebacks for crashes before there is a `config` object. This PR fixes that. | https://api.github.com/repos/certbot/certbot/pulls/3877 | 2016-12-08T18:43:33Z | 2016-12-09T22:56:14Z | 2016-12-09T22:56:14Z | 2016-12-09T22:56:22Z | 404 | certbot/certbot | 3,289 |
[ghostnet] Fix in_features for linear layer in reset_classifier. | diff --git a/timm/models/ghostnet.py b/timm/models/ghostnet.py
index b7c0f5ddcc..d34b548521 100644
--- a/timm/models/ghostnet.py
+++ b/timm/models/ghostnet.py
@@ -276,7 +276,7 @@ def reset_classifier(self, num_classes, global_pool='avg'):
# cannot meaningfully change pooling of efficient head after creation
self.global_pool = SelectAdaptivePool2d(pool_type=global_pool)
self.flatten = nn.Flatten(1) if global_pool else nn.Identity() # don't flatten if pooling disabled
- self.classifier = Linear(self.pool_dim, num_classes) if num_classes > 0 else nn.Identity()
+ self.classifier = Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
def forward_features(self, x):
x = self.conv_stem(x)
Resetting the classifier for the ghostnet model results in a mismatch:

> RuntimeError: mat1 and mat2 shapes cannot be multiplied (80x1280 and 960x6)
| https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1951 | 2023-09-13T08:42:18Z | 2023-09-13T16:29:39Z | 2023-09-13T16:29:39Z | 2023-09-13T16:29:39Z | 205 | huggingface/pytorch-image-models | 16,356 |
Fix body parsing | diff --git a/fastapi/dependencies/utils.py b/fastapi/dependencies/utils.py
index a1cc0b9808d4c..33130a90ef339 100644
--- a/fastapi/dependencies/utils.py
+++ b/fastapi/dependencies/utils.py
@@ -634,7 +634,11 @@ async def request_body_to_args(
) and isinstance(received_body, FormData):
value = received_body.getlist(field.alias)
else:
- value = received_body.get(field.alias)
+ try:
+ value = received_body.get(field.alias)
+ except AttributeError:
+ errors.append(get_missing_field_error(field.alias))
+ continue
if (
value is None
or (isinstance(field_info, params.Form) and value == "")
@@ -645,18 +649,7 @@ async def request_body_to_args(
)
):
if field.required:
- if PYDANTIC_1:
- errors.append(
- ErrorWrapper(MissingError(), loc=("body", field.alias))
- )
- else: # pragma: nocover
- errors.append(
- ErrorWrapper( # type: ignore
- MissingError(),
- loc=("body", field.alias),
- config=BaseConfig,
- )
- )
+ errors.append(get_missing_field_error(field.alias))
else:
values[field.name] = deepcopy(field.default)
continue
@@ -685,6 +678,16 @@ async def request_body_to_args(
return values, errors
+def get_missing_field_error(field_alias: str) -> ErrorWrapper:
+ if PYDANTIC_1:
+ missing_field_error = ErrorWrapper(MissingError(), loc=("body", field_alias))
+ else: # pragma: no cover
+ missing_field_error = ErrorWrapper( # type: ignore
+ MissingError(), loc=("body", field_alias), config=BaseConfig,
+ )
+ return missing_field_error
+
+
def get_schema_compatible_field(*, field: ModelField) -> ModelField:
out_field = field
if lenient_issubclass(field.type_, UploadFile):
diff --git a/tests/test_tutorial/test_body_multiple_params/test_tutorial003.py b/tests/test_tutorial/test_body_multiple_params/test_tutorial003.py
index 54bf193e9a613..7dcf9edd846a5 100644
--- a/tests/test_tutorial/test_body_multiple_params/test_tutorial003.py
+++ b/tests/test_tutorial/test_body_multiple_params/test_tutorial003.py
@@ -166,6 +166,30 @@ def test_openapi_schema():
]
},
),
+ (
+ "/items/5",
+ [],
+ 422,
+ {
+ "detail": [
+ {
+ "loc": ["body", "item"],
+ "msg": "field required",
+ "type": "value_error.missing",
+ },
+ {
+ "loc": ["body", "user"],
+ "msg": "field required",
+ "type": "value_error.missing",
+ },
+ {
+ "loc": ["body", "importance"],
+ "msg": "field required",
+ "type": "value_error.missing",
+ },
+ ]
+ },
+ ),
],
)
def test_post_body(path, body, expected_status, expected_response):
| Closes #914
I believe this is the "correct" way to fix this issue, as the provided body could be invalid in a number of ways, and this should handle anything that json could get parsed into. (The error message also seems appropriate to me even for malformed bodies.)
There was actually already a test for this, but it only checked for a (malformed) body of `None`, not `[]`, so it was easy to add a test case as another parametric case.
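The shape of the fix — catching `AttributeError` when the parsed body has no `.get` (e.g. JSON parsed to a list) and recording a `field required` error instead — can be sketched without FastAPI itself (hypothetical helper name and simplified error tuples, not the real `ErrorWrapper` machinery):

```python
def extract_field(received_body, alias):
    """Mimics the fix: a body parsed to something without .get
    (e.g. a JSON list) yields a 'field required' validation error
    instead of an AttributeError / 500."""
    errors = []
    try:
        value = received_body.get(alias)
    except AttributeError:
        errors.append(('body', alias, 'value_error.missing'))
        return None, errors
    if value is None:
        errors.append(('body', alias, 'value_error.missing'))
    return value, errors

# A dict body works as before...
value, errors = extract_field({'item': {'name': 'Foo'}}, 'item')
# ...while a list body produces a validation error, not a crash.
_, list_errors = extract_field([], 'item')
```

This is why the existing parametric test could gain the `[]` case with no other changes.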
| https://api.github.com/repos/tiangolo/fastapi/pulls/918 | 2020-01-25T06:59:01Z | 2020-02-04T04:01:59Z | 2020-02-04T04:01:59Z | 2020-02-04T04:02:37Z | 739 | tiangolo/fastapi | 22,709 |
[ie/twitter:broadcast] Support `--wait-for-video` | diff --git a/yt_dlp/extractor/periscope.py b/yt_dlp/extractor/periscope.py
index dcd02192669..3d1375b6450 100644
--- a/yt_dlp/extractor/periscope.py
+++ b/yt_dlp/extractor/periscope.py
@@ -4,6 +4,7 @@
parse_iso8601,
unescapeHTML,
)
+from ..utils.traversal import traverse_obj
class PeriscopeBaseIE(InfoExtractor):
@@ -20,8 +21,6 @@ def _parse_broadcast_data(self, broadcast, video_id):
title = broadcast.get('status') or 'Periscope Broadcast'
uploader = broadcast.get('user_display_name') or broadcast.get('username')
title = '%s - %s' % (uploader, title) if uploader else title
- is_live = broadcast.get('state').lower() == 'running'
-
thumbnails = [{
'url': broadcast[image],
} for image in ('image_url', 'image_url_medium', 'image_url_small') if broadcast.get(image)]
@@ -31,12 +30,16 @@ def _parse_broadcast_data(self, broadcast, video_id):
'title': title,
'timestamp': parse_iso8601(broadcast.get('created_at')) or int_or_none(
broadcast.get('created_at_ms'), scale=1000),
+ 'release_timestamp': int_or_none(broadcast.get('scheduled_start_ms'), scale=1000),
'uploader': uploader,
'uploader_id': broadcast.get('user_id') or broadcast.get('username'),
'thumbnails': thumbnails,
'view_count': int_or_none(broadcast.get('total_watched')),
'tags': broadcast.get('tags'),
- 'is_live': is_live,
+ 'live_status': {
+ 'running': 'is_live',
+ 'not_started': 'is_upcoming',
+ }.get(traverse_obj(broadcast, ('state', {str.lower}))) or 'was_live'
}
@staticmethod
diff --git a/yt_dlp/extractor/twitter.py b/yt_dlp/extractor/twitter.py
index 7bd78eb487e..d7609bc8132 100644
--- a/yt_dlp/extractor/twitter.py
+++ b/yt_dlp/extractor/twitter.py
@@ -1619,6 +1619,9 @@ def _real_extract(self, url):
info['title'] = broadcast.get('status') or info.get('title')
info['uploader_id'] = broadcast.get('twitter_username') or info.get('uploader_id')
info['uploader_url'] = format_field(broadcast, 'twitter_username', 'https://twitter.com/%s', default=None)
+ if info['live_status'] == 'is_upcoming':
+ return info
+
media_key = broadcast['media_key']
source = self._call_api(
f'live_video_stream/status/{media_key}', media_key)['source']
| Closes #8473
<details open><summary>Template</summary> <!-- OPEN is intentional -->
<!--
# PLEASE FOLLOW THE GUIDE BELOW
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x])
- Use *Preview* tab to see how your *pull request* will actually look like
-->
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Fix or improvement to an extractor (Make sure to add/update tests)
- [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy))
- [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
<!-- Do NOT edit/remove anything below this! -->
</details><details><summary>Copilot Summary</summary>
<!--
copilot:all
-->
### <samp>🤖 Generated by Copilot at e636a95</samp>
### Summary
🕒🔄🔧
<!--
1. 🕒 - This emoji represents the early return for upcoming videos, as it suggests waiting for the video to start and checking the time.
2. 🔄 - This emoji represents the improved handling of different broadcast states, as it suggests updating or refreshing the video status and adapting to changes.
3. 🔧 - This emoji represents the refactoring and utility function usage, as it suggests fixing or improving the code quality and functionality.
-->
Improved periscope and twitter extractors to handle live broadcasts better. Added `live_status` field to indicate the current state of the video, and skipped downloading videos that are not yet available.
> _Sing, O Muse, of the cunning code review_
> _That made the TwitterBroadcastIE more wise_
> _And taught it to avoid the videos due_
> _That have not yet appeared before the eyes._
### Walkthrough
* Import `traverse_obj` function from `utils.py` to access nested values in a dictionary or a list ([link](https://github.com/yt-dlp/yt-dlp/pull/8475/files?diff=unified&w=0#diff-082a09513b2f03bce9cb19e1d94cd04ee48a8109dbb27ca1f5973502d5c3f4afR5))
* Replace `is_live` field with `live_status` field in `_parse_broadcast_data` function to handle different states of broadcasts ([link](https://github.com/yt-dlp/yt-dlp/pull/8475/files?diff=unified&w=0#diff-082a09513b2f03bce9cb19e1d94cd04ee48a8109dbb27ca1f5973502d5c3f4afL23-L24), [link](https://github.com/yt-dlp/yt-dlp/pull/8475/files?diff=unified&w=0#diff-082a09513b2f03bce9cb19e1d94cd04ee48a8109dbb27ca1f5973502d5c3f4afL38-R42))
* Update `timestamp` and `release_timestamp` fields in `_parse_broadcast_data` function to parse ISO 8601 date string and convert milliseconds to seconds ([link](https://github.com/yt-dlp/yt-dlp/pull/8475/files?diff=unified&w=0#diff-082a09513b2f03bce9cb19e1d94cd04ee48a8109dbb27ca1f5973502d5c3f4afL32-R33))
* Return early from `_real_extract` function in `TwitterBroadcastIE` class if `live_status` is `is_upcoming` to avoid unnecessary requests and errors ([link](https://github.com/yt-dlp/yt-dlp/pull/8475/files?diff=unified&w=0#diff-8b22c3c3c6185164ce8c3e82541f91c728397dec66456eef7e05f664d5729103R1588-R1589))
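The state-to-`live_status` mapping added in `_parse_broadcast_data` can be reproduced as a small standalone sketch (plain Python mirroring the diff; the function name is ours, not from the codebase):

```python
def broadcast_live_status(state):
    # Mirrors the mapping added in _parse_broadcast_data: any state other
    # than "running" or "not_started" (including a missing state) falls
    # through to "was_live".
    return {
        "running": "is_live",
        "not_started": "is_upcoming",
    }.get(state.lower() if state else None) or "was_live"

print(broadcast_live_status("RUNNING"))      # -> is_live
print(broadcast_live_status("NOT_STARTED"))  # -> is_upcoming
print(broadcast_live_status("ENDED"))        # -> was_live
```

This is why the Twitter extractor can return early for `is_upcoming` broadcasts without requesting a stream URL.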
</details>
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/8475 | 2023-10-30T04:11:21Z | 2023-11-11T20:05:07Z | 2023-11-11T20:05:07Z | 2023-11-11T20:05:08Z | 650 | yt-dlp/yt-dlp | 8,173 |
fix: Typo in the Indonesian readme doc | diff --git a/README.id.md b/README.id.md
index 4bb8faabd..786c163f3 100644
--- a/README.id.md
+++ b/README.id.md
@@ -27,7 +27,7 @@
Rich adalah library Python yang membantu _memperindah_ tampilan output suatu program di terminal.
-[Rich API](https://rich.readthedocs.io/en/latest/) dapat digunakan untuk mempermudah dalam penambahan gaya dan pewarnaan output di terminal. Rich juga mendukung fitur lain seperti pembuatan tabel, bar progress, penulisan markdown, penghighitan syntax source code, tracebacks, dan masih banyak lagi.
+[Rich API](https://rich.readthedocs.io/en/latest/) dapat digunakan untuk mempermudah dalam penambahan gaya dan pewarnaan output di terminal. Rich juga mendukung fitur lain seperti pembuatan tabel, bar progress, penulisan markdown, penghilightan syntax source code, tracebacks, dan masih banyak lagi.

| ## Type of changes
- [ ] Bug fix
- [ ] New feature
- [x] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
There's a typo in the spelling of "penghilightan".
| https://api.github.com/repos/Textualize/rich/pulls/2808 | 2023-02-14T05:53:46Z | 2023-03-04T09:05:07Z | 2023-03-04T09:05:07Z | 2023-03-04T09:05:07Z | 249 | Textualize/rich | 47,960 |
Support the elliptical arc command for SVGMobject | diff --git a/manimlib/mobject/svg/svg_mobject.py b/manimlib/mobject/svg/svg_mobject.py
index deb902a607..5af3367938 100644
--- a/manimlib/mobject/svg/svg_mobject.py
+++ b/manimlib/mobject/svg/svg_mobject.py
@@ -11,6 +11,7 @@
from manimlib.constants import ORIGIN, UP, DOWN, LEFT, RIGHT
from manimlib.constants import BLACK
from manimlib.constants import WHITE
+from manimlib.constants import DEGREES, PI
from manimlib.mobject.geometry import Circle
from manimlib.mobject.geometry import Rectangle
@@ -21,6 +22,7 @@
from manimlib.utils.config_ops import digest_config
from manimlib.utils.directories import get_mobject_data_dir
from manimlib.utils.images import get_full_vector_image_path
+from manimlib.utils.simple_functions import clip
def string_to_numbers(num_string):
@@ -367,10 +369,18 @@ def get_commands_and_coord_strings(self):
def handle_command(self, command, new_points):
if command.islower():
# Treat it as a relative command
- new_points += self.relative_point
+ if command == "a":
+ # Only the last `self.dim` columns refer to points
+ new_points[:, -self.dim:] += self.relative_point
+ else:
+ new_points += self.relative_point
func, n_points = self.command_to_function(command)
- func(*new_points[:n_points])
+ command_points = new_points[:n_points]
+ if command.upper() == "A":
+ func(*command_points[0][:-self.dim], np.array(command_points[0][-self.dim:]))
+ else:
+ func(*command_points)
leftover_points = new_points[n_points:]
# Recursively handle the rest of the points
@@ -379,7 +389,10 @@ def handle_command(self, command, new_points):
# Treat following points as relative line coordinates
command = "l"
if command.islower():
- leftover_points -= self.relative_point
+ if command == "a":
+ leftover_points[:, -self.dim:] -= self.relative_point
+ else:
+ leftover_points -= self.relative_point
self.relative_point = self.get_last_point()
self.handle_command(command, leftover_points)
else:
@@ -388,20 +401,131 @@ def handle_command(self, command, new_points):
def string_to_points(self, command, coord_string):
numbers = string_to_numbers(coord_string)
+ if command.upper() == "A":
+ # Only the last `self.dim` columns refer to points
+ # Each "point" returned here has a size of `(5 + self.dim)`
+ params = np.array(numbers).reshape((-1, 7))
+ result = np.zeros((params.shape[0], 5 + self.dim))
+ result[:, :7] = params
+ return result
if command.upper() in ["H", "V"]:
i = {"H": 0, "V": 1}[command.upper()]
xy = np.zeros((len(numbers), 2))
xy[:, i] = numbers
if command.isupper():
xy[:, 1 - i] = self.relative_point[1 - i]
- elif command.upper() == "A":
- raise Exception("Not implemented")
else:
- xy = np.array(numbers).reshape((len(numbers) // 2, 2))
+ xy = np.array(numbers).reshape((-1, 2))
result = np.zeros((xy.shape[0], self.dim))
result[:, :2] = xy
return result
+ def add_elliptical_arc_to(self, rx, ry, x_axis_rotation, large_arc_flag, sweep_flag, point):
+ """
+ In fact, this method only suits 2d VMobjects.
+ """
+ def close_to_zero(a, threshold=1e-5):
+ return abs(a) < threshold
+
+ def solve_2d_linear_equation(a, b, c):
+ """
+ Using Cramer's rule to solve the linear equation `[a b]x = c`
+ where `a`, `b` and `c` are all 2d vectors.
+ """
+ def det(a, b):
+ return a[0] * b[1] - a[1] * b[0]
+ d = det(a, b)
+ if close_to_zero(d):
+ raise Exception("Cannot handle 0 determinant.")
+ return [det(c, b) / d, det(a, c) / d]
+
+ def get_arc_center_and_angles(x0, y0, rx, ry, phi, large_arc_flag, sweep_flag, x1, y1):
+ """
+ The parameter functions of an ellipse rotated `phi` radians counterclockwise is (on `alpha`):
+ x = cx + rx * cos(alpha) * cos(phi) + ry * sin(alpha) * sin(phi),
+ y = cy + rx * cos(alpha) * sin(phi) - ry * sin(alpha) * cos(phi).
+ Now we have two points sitting on the ellipse: `(x0, y0)`, `(x1, y1)`, corresponding to 4 equations,
+ and we want to hunt for 4 variables: `cx`, `cy`, `alpha0` and `alpha_1`.
+ Let `d_alpha = alpha1 - alpha0`, then:
+ if `sweep_flag = 0` and `large_arc_flag = 1`, then `PI <= d_alpha < 2 * PI`;
+ if `sweep_flag = 0` and `large_arc_flag = 0`, then `0 < d_alpha <= PI`;
+ if `sweep_flag = 1` and `large_arc_flag = 0`, then `-PI <= d_alpha < 0`;
+ if `sweep_flag = 1` and `large_arc_flag = 1`, then `-2 * PI < d_alpha <= -PI`.
+ """
+ xd = x1 - x0
+ yd = y1 - y0
+ if close_to_zero(xd) and close_to_zero(yd):
+ raise Exception("Cannot find arc center since the start point and the end point meet.")
+ # Find `p = cos(alpha1) - cos(alpha0)`, `q = sin(alpha1) - sin(alpha0)`
+ eq0 = [rx * np.cos(phi), ry * np.sin(phi), xd]
+ eq1 = [rx * np.sin(phi), -ry * np.cos(phi), yd]
+ p, q = solve_2d_linear_equation(*zip(eq0, eq1))
+ # Find `s = (alpha1 - alpha0) / 2`, `t = (alpha1 + alpha0) / 2`
+ # If `sin(s) = 0`, this requires `p = q = 0`,
+ # implying `xd = yd = 0`, which is impossible.
+ sin_s = (p ** 2 + q ** 2) ** 0.5 / 2
+ if sweep_flag:
+ sin_s = -sin_s
+ sin_s = clip(sin_s, -1, 1)
+ s = np.arcsin(sin_s)
+ if large_arc_flag:
+ if not sweep_flag:
+ s = PI - s
+ else:
+ s = -PI - s
+ sin_t = -p / (2 * sin_s)
+ cos_t = q / (2 * sin_s)
+ cos_t = clip(cos_t, -1, 1)
+ t = np.arccos(cos_t)
+ if sin_t <= 0:
+ t = -t
+ # We can make sure `0 < abs(s) < PI`, `-PI <= t < PI`.
+ alpha0 = t - s
+ alpha_1 = t + s
+ cx = x0 - rx * np.cos(alpha0) * np.cos(phi) - ry * np.sin(alpha0) * np.sin(phi)
+ cy = y0 - rx * np.cos(alpha0) * np.sin(phi) + ry * np.sin(alpha0) * np.cos(phi)
+ return cx, cy, alpha0, alpha_1
+
+ def get_point_on_ellipse(cx, cy, rx, ry, phi, angle):
+ return np.array([
+ cx + rx * np.cos(angle) * np.cos(phi) + ry * np.sin(angle) * np.sin(phi),
+ cy + rx * np.cos(angle) * np.sin(phi) - ry * np.sin(angle) * np.cos(phi),
+ 0
+ ])
+
+ def convert_elliptical_arc_to_quadratic_bezier_curve(
+ cx, cy, rx, ry, phi, start_angle, end_angle, n_components=8
+ ):
+ theta = (end_angle - start_angle) / n_components / 2
+ handles = np.array([
+ get_point_on_ellipse(cx, cy, rx / np.cos(theta), ry / np.cos(theta), phi, a)
+ for a in np.linspace(
+ start_angle + theta,
+ end_angle - theta,
+ n_components,
+ )
+ ])
+ anchors = np.array([
+ get_point_on_ellipse(cx, cy, rx, ry, phi, a)
+ for a in np.linspace(
+ start_angle + theta * 2,
+ end_angle,
+ n_components,
+ )
+ ])
+ return handles, anchors
+
+ phi = x_axis_rotation * DEGREES
+ x0, y0 = self.get_last_point()[:2]
+ cx, cy, start_angle, end_angle = get_arc_center_and_angles(
+ x0, y0, rx, ry, phi, large_arc_flag, sweep_flag, point[0], point[1]
+ )
+ handles, anchors = convert_elliptical_arc_to_quadratic_bezier_curve(
+ cx, cy, rx, ry, phi, start_angle, end_angle
+ )
+ for handle, anchor in zip(handles, anchors):
+ self.add_quadratic_bezier_curve_to(handle, anchor)
+
def command_to_function(self, command):
return self.get_command_to_function_map()[command.upper()]
@@ -419,7 +543,7 @@ def get_command_to_function_map(self):
"S": (self.add_smooth_cubic_curve_to, 2),
"Q": (self.add_quadratic_bezier_curve_to, 2),
"T": (self.add_smooth_curve_to, 1),
- "A": (self.add_quadratic_bezier_curve_to, 2), # TODO
+ "A": (self.add_elliptical_arc_to, 1),
"Z": (self.close_path, 0),
}
| <!-- Thanks for contributing to manim!
Please ensure that your pull request works with the latest version of manim.
-->
## Motivation
<!-- Outline your motivation: In what way do your changes improve the library? -->
Support the elliptical arc command for SVGMobject. With this, all SVG path commands are now implemented.
## Proposed changes
<!-- What you changed in those files -->
This is achieved by approximating elliptical arcs with quadratic Bézier curves, in almost the same way a circular arc is formed in the `Arc` class. The key is to figure out the center of the ellipse, which requires solving simultaneous equations. I've built a function `add_elliptical_arc_to(self, rx, ry, x_axis_rotation, large_arc_flag, sweep_flag, point)`, inside which several helper functions are defined to do those calculations. I guess it might be better to move them into the file `utils\space_ops.py`, but I dared not make too many changes across files in my first pull request.
Detailed annotations are left in the code. I deduced all that stuff by hand today. (:з」∠)
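The approximation described above can be sketched standalone (plain Python with hypothetical names, not the actual manim API): split the arc into `n` sub-arcs of angle `2*theta` each, put anchors on the ellipse, and put each quadratic handle on the ellipse with its radii scaled by `1/cos(theta)` — the same trick the diff uses in `convert_elliptical_arc_to_quadratic_bezier_curve`:

```python
import math

def point_on_ellipse(cx, cy, rx, ry, phi, angle):
    # Parametric point on an ellipse rotated phi radians counterclockwise,
    # matching the parameter functions in the PR's docstring.
    return (
        cx + rx * math.cos(angle) * math.cos(phi) + ry * math.sin(angle) * math.sin(phi),
        cy + rx * math.cos(angle) * math.sin(phi) - ry * math.sin(angle) * math.cos(phi),
    )

def arc_as_quadratic_beziers(cx, cy, rx, ry, phi, a0, a1, n=8):
    # Anchors lie on the ellipse itself; each handle lies on the ellipse with
    # radii scaled by 1/cos(theta), so the segment midpoint stays on the arc
    # to fourth order in theta.
    theta = (a1 - a0) / n / 2
    anchors = [point_on_ellipse(cx, cy, rx, ry, phi, a0 + 2 * theta * i)
               for i in range(n + 1)]
    handles = [point_on_ellipse(cx, cy, rx / math.cos(theta), ry / math.cos(theta),
                                phi, a0 + theta * (2 * i + 1))
               for i in range(n)]
    return anchors, handles
```

For a quarter arc of the unit circle with `n=8`, the midpoint of every quadratic segment stays within roughly `1e-5` of the true radius, which is more than enough for rendering.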
## Test
<!-- How do you test your changes -->
**Code**:
```python
class Test(Scene):
def construct(self):
self.add(SVGMobject("test"))
```
```html
<svg width="320px" height="320px" version="1.1" xmlns="http://www.w3.org/2000/svg">
<path d="M10 315
L 110 215
A 30 50 0 0 1 162.55 162.45
L 172.55 152.45
A 30 50 -45 0 1 215.1 109.9
L 315 10"/>
</svg>
```
**Result**:

| https://api.github.com/repos/3b1b/manim/pulls/1598 | 2021-08-08T13:34:28Z | 2021-08-09T23:19:15Z | 2021-08-09T23:19:15Z | 2021-08-09T23:19:15Z | 2,422 | 3b1b/manim | 18,220 |
Command Injection space alternatives | diff --git a/Command Injection/README.md b/Command Injection/README.md
index a4e0d0b36d..9df048adfc 100644
--- a/Command Injection/README.md
+++ b/Command Injection/README.md
@@ -96,6 +96,16 @@ Commands execution without spaces, $ or { } - Linux (Bash only)
IFS=,;`cat<<<uname,-a`
```
+Tabs work as separators in web apps where spaces are removed.
+
+```powershell
+;ls%09-al%09/home
+drwxr-xr-x 4 root root 4096 Jan 10 13:34 .
+drwxr-xr-x 18 root root 4096 Jan 10 13:33 ..
+drwx------ 2 root root 16384 Jan 10 13:31 lost+found
+drwxr-xr-x 4 test test 4096 Jan 13 08:30 test
+```
+
Works on Windows only.
```powershell
@@ -109,6 +119,14 @@ ping%PROGRAMFILES:~10,-5%IP
something%0Acat%20/etc/passwd
```
+You can also write files.
+
+```powershell
+;cat>/tmp/hi<<EOF%0ahello%0aEOF
+;cat</tmp/hi
+hello
+```
+
### Bypass characters filter via hex encoding
Linux
| This PR adds two use cases where spaces (`0x20`) and braces (`{ }`) are removed from a payload. | https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/475 | 2022-01-15T00:42:52Z | 2022-01-15T11:15:26Z | 2022-01-15T11:15:26Z | 2022-01-15T21:51:05Z | 327 | swisskyrepo/PayloadsAllTheThings | 8,689 |
Simplify empty line tracker | diff --git a/src/black/lines.py b/src/black/lines.py
index ea8fe52075..016a489310 100644
--- a/src/black/lines.py
+++ b/src/black/lines.py
@@ -49,7 +49,7 @@
class Line:
"""Holds leaves and comments. Can be printed with `str(line)`."""
- mode: Mode
+ mode: Mode = field(repr=False)
depth: int = 0
leaves: List[Leaf] = field(default_factory=list)
# keys ordered like `leaves`
@@ -579,16 +579,21 @@ def _maybe_empty_lines(self, current_line: Line) -> Tuple[int, int]:
else:
before = 0
depth = current_line.depth
+
+ previous_def = None
while self.previous_defs and self.previous_defs[-1].depth >= depth:
+ previous_def = self.previous_defs.pop()
+
+ if previous_def is not None:
+ assert self.previous_line is not None
if self.mode.is_pyi:
- assert self.previous_line is not None
if depth and not current_line.is_def and self.previous_line.is_def:
# Empty lines between attributes and methods should be preserved.
before = min(1, before)
elif (
Preview.blank_line_after_nested_stub_class in self.mode
- and self.previous_defs[-1].is_class
- and not self.previous_defs[-1].is_stub_class
+ and previous_def.is_class
+ and not previous_def.is_stub_class
):
before = 1
elif depth:
@@ -600,7 +605,7 @@ def _maybe_empty_lines(self, current_line: Line) -> Tuple[int, int]:
before = 1
elif (
not depth
- and self.previous_defs[-1].depth
+ and previous_def.depth
and current_line.leaves[-1].type == token.COLON
and (
current_line.leaves[0].value
@@ -617,7 +622,7 @@ def _maybe_empty_lines(self, current_line: Line) -> Tuple[int, int]:
before = 1
else:
before = 2
- self.previous_defs.pop()
+
if current_line.is_decorator or current_line.is_def or current_line.is_class:
return self._maybe_empty_lines_for_class_or_def(current_line, before)
| https://api.github.com/repos/psf/black/pulls/3797 | 2023-07-17T07:41:27Z | 2023-07-22T15:49:51Z | 2023-07-22T15:49:51Z | 2023-07-22T17:28:47Z | 522 | psf/black | 23,881 | |
feat: add troubleshooting guide to bug report template again | diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 483e0de14..5b9cded68 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -21,6 +21,7 @@ body:
5. Try a fresh installation of Fooocus in a different directory - see if a clean installation solves the issue
Before making a issue report please, check that the issue hasn't been reported recently.
options:
+ - label: The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- label: The issue exists on a clean installation of Fooocus
- label: The issue exists in the current version of Fooocus
- label: The issue has not been reported before recently
| A few issues have been popping up lately that could have been solved by following the steps in the troubleshooting guide.
This PR adds the link as a checkbox to the bug report template again, with the goal of better guiding users to fix their issues themselves. | https://api.github.com/repos/lllyasviel/Fooocus/pulls/2489 | 2024-03-09T13:12:02Z | 2024-03-09T13:13:16Z | 2024-03-09T13:13:16Z | 2024-03-09T13:13:16Z | 212 | lllyasviel/Fooocus | 7,000
Fix typos in README.md | diff --git a/README.md b/README.md
index 45c51dbf7..1c8fe3890 100644
--- a/README.md
+++ b/README.md
@@ -228,7 +228,7 @@ following rules are enabled by default:
* `git_rm_recursive` – adds `-r` when you try to `rm` a directory;
* `git_rm_staged` – adds `-f` or `--cached` when you try to `rm` a file with staged changes
* `git_rebase_merge_dir` – offers `git rebase (--continue | --abort | --skip)` or removing the `.git/rebase-merge` dir when a rebase is in progress;
-* `git_remote_seturl_add` – runs `git remote add` when `git remote set_url` on nonexistant remote;
+* `git_remote_seturl_add` – runs `git remote add` when `git remote set_url` on nonexistent remote;
* `git_stash` – stashes your local modifications before rebasing or switching branch;
* `git_stash_pop` – adds your local modifications before popping stash, then resets;
* `git_tag_force` – adds `--force` to `git tag <tagname>` when the tag already exists;
@@ -261,7 +261,7 @@ following rules are enabled by default:
* `missing_space_before_subcommand` – fixes command with missing space like `npminstall`;
* `mkdir_p` – adds `-p` when you try to create a directory without parent;
* `mvn_no_command` – adds `clean package` to `mvn`;
-* `mvn_unknown_lifecycle_phase` – fixes misspelled lifecycle phases with `mvn`;
+* `mvn_unknown_lifecycle_phase` – fixes misspelled life cycle phases with `mvn`;
* `npm_missing_script` – fixes `npm` custom script name in `npm run-script <script>`;
* `npm_run_script` – adds missing `run-script` for custom `npm` scripts;
* `npm_wrong_command` – fixes wrong npm commands like `npm urgrade`;
@@ -280,7 +280,7 @@ following rules are enabled by default:
* `path_from_history` – replaces not found path with similar absolute path from history;
* `react_native_command_unrecognized` – fixes unrecognized `react-native` commands;
* `remove_shell_prompt_literal` – remove leading shell prompt symbol `$`, common when copying commands from documentations;
-* `remove_trailing_cedilla` – remove trailling cedillas `ç`, a common typo for european keyboard layouts;
+* `remove_trailing_cedilla` – remove trailing cedillas `ç`, a common typo for european keyboard layouts;
* `rm_dir` – adds `-rf` when you try to remove a directory;
* `scm_correction` – corrects wrong scm like `hg log` to `git log`;
* `sed_unterminated_s` – adds missing '/' to `sed`'s `s` commands;
| This PR fixes small typos found in the README. | https://api.github.com/repos/nvbn/thefuck/pulls/997 | 2019-10-28T17:50:32Z | 2019-11-02T18:07:33Z | 2019-11-02T18:07:33Z | 2019-11-02T18:07:42Z | 715 | nvbn/thefuck | 30,630 |
bugfix: Crash fix for intermittent crashes that occur when opening MaskEditor. | diff --git a/web/extensions/core/maskeditor.js b/web/extensions/core/maskeditor.js
index 4b0c12747f..6cb3a53850 100644
--- a/web/extensions/core/maskeditor.js
+++ b/web/extensions/core/maskeditor.js
@@ -314,11 +314,11 @@ class MaskEditorDialog extends ComfyDialog {
imgCtx.drawImage(orig_image, 0, 0, drawWidth, drawHeight);
// update mask
- backupCtx.drawImage(maskCanvas, 0, 0, maskCanvas.width, maskCanvas.height, 0, 0, backupCanvas.width, backupCanvas.height);
maskCanvas.width = drawWidth;
maskCanvas.height = drawHeight;
maskCanvas.style.top = imgCanvas.offsetTop + "px";
maskCanvas.style.left = imgCanvas.offsetLeft + "px";
+ backupCtx.drawImage(maskCanvas, 0, 0, maskCanvas.width, maskCanvas.height, 0, 0, backupCanvas.width, backupCanvas.height);
maskCtx.drawImage(backupCanvas, 0, 0, backupCanvas.width, backupCanvas.height, 0, 0, maskCanvas.width, maskCanvas.height);
});
| This commit addresses a bug where a crash occurs when copying the backup canvas to the mask canvas if the size is 0 before setting it. | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/732 | 2023-06-03T09:40:41Z | 2023-06-03T16:25:49Z | 2023-06-03T16:25:49Z | 2023-06-05T12:10:15Z | 263 | comfyanonymous/ComfyUI | 17,978 |
Move progress info to beginning of site title | diff --git a/javascript/progressbar.js b/javascript/progressbar.js
index 671fde34047..43d1d1ce091 100644
--- a/javascript/progressbar.js
+++ b/javascript/progressbar.js
@@ -23,7 +23,7 @@ function check_progressbar(id_part, id_progressbar, id_progressbar_span, id_skip
if(opts.show_progress_in_title && progressbar && progressbar.offsetParent){
if(progressbar.innerText){
- let newtitle = 'Stable Diffusion - ' + progressbar.innerText
+ let newtitle = '[' + progressbar.innerText.trim() + '] Stable Diffusion';
if(document.title != newtitle){
document.title = newtitle;
}
| Very small UI change, because who has so few tabs open that they can see to the end of a tab name? | https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/4894 | 2022-11-20T14:59:28Z | 2022-11-27T15:48:29Z | 2022-11-27T15:48:29Z | 2022-11-27T18:47:22Z | 155 | AUTOMATIC1111/stable-diffusion-webui | 39,881 |
pyauto | diff --git a/pyauto.py b/pyauto.py
new file mode 100644
index 0000000000..866b46a743
--- /dev/null
+++ b/pyauto.py
@@ -0,0 +1,25 @@
+#Author-Slayking1965
+#email-kingslayer8509@gmail.com
+import random
+import pyautogui
+import string
+
+
+chars = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+
+chars = string.printable
+chars_list = list(chars)
+
+
+password = pyautogui.password("Enter a password : ")
+
+guess_password = ""
+
+while(guess_password != password):
+ guess_password = random.choices(chars_list, k=len(password))
+
+ print("<=================="+ str(guess_password)+ "==================>")
+
+ if(guess_password == list(password)):
+ print("Your password is : "+ "".join(guess_password))
+ break
| Hi,it is a sample program made out of a module named 'pyautogui'. Hope you liked it 👍 . Please accept my pull request sir/ma'am. | https://api.github.com/repos/geekcomputers/Python/pulls/995 | 2020-10-04T10:57:26Z | 2020-10-10T16:58:02Z | 2020-10-10T16:58:02Z | 2020-10-10T16:58:02Z | 201 | geekcomputers/Python | 31,487 |
🚨🚨 Fix group beam search | diff --git a/src/transformers/generation/beam_search.py b/src/transformers/generation/beam_search.py
index 792b2a17f5d6b..71a459c06852b 100644
--- a/src/transformers/generation/beam_search.py
+++ b/src/transformers/generation/beam_search.py
@@ -43,6 +43,10 @@
The id of the *padding* token.
eos_token_id (`Union[int, List[int]]`, *optional*):
The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.
+ beam_indices (`torch.LongTensor]`, *optional*):
+ Beam indices indicating to which beam hypothesis each token correspond.
+ group_index (`int`, *optional*):
+ The index of the group of beams. Used with [`~PreTrainedModel.group_beam_search`].
Return:
`UserDict`: A dictionary composed of the fields as defined above:
@@ -175,16 +179,22 @@ def __init__(
self.group_size = self.num_beams // self.num_beam_groups
self._is_init = False
+ # self._beam_hyps[i*self.num_beam_groups+j] is the beam_hyps of the j-th group in the i-th mini-batch.
+ # If group_beam_search is not used, the list consists of `batch_size` beam_hyps.
self._beam_hyps = [
BeamHypotheses(
- num_beams=self.num_beams,
+ num_beams=self.group_size,
length_penalty=self.length_penalty,
early_stopping=self.do_early_stopping,
max_length=max_length,
)
- for _ in range(batch_size)
+ for _ in range(batch_size * self.num_beam_groups)
]
- self._done = torch.tensor([False for _ in range(batch_size)], dtype=torch.bool, device=self.device)
+ # self._done[i*self.num_beam_groups+j] indicates whether the generation of the beam_hyps of the j-th group
+ # in the i-th mini-batch is complete.
+ self._done = torch.tensor(
+ [False for _ in range(batch_size * self.num_beam_groups)], dtype=torch.bool, device=self.device
+ )
if not isinstance(num_beams, int) or num_beams <= 1:
raise ValueError(
@@ -211,9 +221,11 @@ def process(
pad_token_id: Optional[int] = None,
eos_token_id: Optional[Union[int, List[int]]] = None,
beam_indices: Optional[torch.LongTensor] = None,
+ group_index: Optional[int] = 0,
) -> Dict[str, torch.Tensor]:
cur_len = input_ids.shape[-1] + 1 # add up to the length which the next_scores is calculated on
- batch_size = len(self._beam_hyps)
+ batch_size = len(self._beam_hyps) // self.num_beam_groups
+
if not (batch_size == (input_ids.shape[0] // self.group_size)):
if self.num_beam_groups > 1:
raise ValueError(
@@ -234,9 +246,10 @@ def process(
if isinstance(eos_token_id, int):
eos_token_id = [eos_token_id]
- for batch_idx, beam_hyp in enumerate(self._beam_hyps):
- if self._done[batch_idx]:
- if self.num_beams < len(beam_hyp):
+ for batch_idx in range(batch_size):
+ batch_group_idx = batch_idx * self.num_beam_groups + group_index
+ if self._done[batch_group_idx]:
+ if self.num_beams < len(self._beam_hyps[batch_group_idx]):
raise ValueError(f"Batch can only be done if at least {self.num_beams} beams have been generated")
if eos_token_id is None or pad_token_id is None:
raise ValueError("Generated beams >= num_beams -> eos_token_id and pad_token have to be defined")
@@ -264,7 +277,7 @@ def process(
else:
beam_index = None
- beam_hyp.add(
+ self._beam_hyps[batch_group_idx].add(
input_ids[batch_beam_idx].clone(),
next_score.item(),
beam_indices=beam_index,
@@ -287,7 +300,7 @@ def process(
)
# Check if we are done so that we can save a pad step if all(done)
- self._done[batch_idx] = self._done[batch_idx] or beam_hyp.is_done(
+ self._done[batch_group_idx] = self._done[batch_group_idx] or self._beam_hyps[batch_group_idx].is_done(
next_scores[batch_idx].max().item(), cur_len
)
@@ -310,20 +323,20 @@ def finalize(
eos_token_id: Optional[Union[int, List[int]]] = None,
beam_indices: Optional[torch.LongTensor] = None,
) -> Tuple[torch.LongTensor]:
- batch_size = len(self._beam_hyps)
+ batch_size = len(self._beam_hyps) // self.num_beam_groups
if isinstance(eos_token_id, int):
eos_token_id = [eos_token_id]
# finalize all open beam hypotheses and add to generated hypotheses
- for batch_idx, beam_hyp in enumerate(self._beam_hyps):
- if self._done[batch_idx]:
+ for batch_group_idx, beam_hyp in enumerate(self._beam_hyps):
+ if self._done[batch_group_idx]:
continue
# all open beam hypotheses are added to the beam hypothesis
# beam hypothesis class automatically keeps the best beams
- for beam_id in range(self.num_beams):
- batch_beam_idx = batch_idx * self.num_beams + beam_id
+ for index_per_group in range(self.group_size):
+ batch_beam_idx = batch_group_idx * self.group_size + index_per_group
final_score = final_beam_scores[batch_beam_idx].item()
final_tokens = input_ids[batch_beam_idx]
beam_index = beam_indices[batch_beam_idx] if beam_indices is not None else None
@@ -336,8 +349,10 @@ def finalize(
best_scores = torch.zeros(batch_size * self.num_beam_hyps_to_keep, device=self.device, dtype=torch.float32)
# retrieve best hypotheses
- for i, beam_hyp in enumerate(self._beam_hyps):
- sorted_hyps = sorted(beam_hyp.beams, key=lambda x: x[0])
+ for i in range(batch_size):
+ beam_hyps_in_batch = self._beam_hyps[i * self.num_beam_groups : (i + 1) * self.num_beam_groups]
+ candidate_beams = [beam for beam_hyp in beam_hyps_in_batch for beam in beam_hyp.beams]
+ sorted_hyps = sorted(candidate_beams, key=lambda x: x[0])
for j in range(self.num_beam_hyps_to_keep):
best_hyp_tuple = sorted_hyps.pop()
best_score = best_hyp_tuple[0]
diff --git a/src/transformers/generation/utils.py b/src/transformers/generation/utils.py
index e5da7a143b4cb..b50689ba3f96f 100644
--- a/src/transformers/generation/utils.py
+++ b/src/transformers/generation/utils.py
@@ -3522,10 +3522,10 @@ def group_beam_search(
else self.generation_config.return_dict_in_generate
)
- batch_size = len(beam_scorer._beam_hyps)
num_beams = beam_scorer.num_beams
num_beam_groups = beam_scorer.num_beam_groups
num_sub_beams = num_beams // num_beam_groups
+ batch_size = len(beam_scorer._beam_hyps) // num_beam_groups
device = input_ids.device
batch_beam_size, cur_len = input_ids.shape
@@ -3648,6 +3648,7 @@ def group_beam_search(
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
beam_indices=process_beam_indices,
+ group_index=beam_group_idx,
)
beam_scores[batch_group_indices] = beam_outputs["next_beam_scores"]
beam_next_tokens = beam_outputs["next_beam_tokens"]
| # What does this PR do?
Diverse beam search is a method that generates `num_beams//num_beam_groups` sentences for each group independently. However, the current code uses one `BeamHypotheses` shared by all groups, so group A can fill the returned hypotheses with two sentences before group B outputs any. So, I created a `BeamHypotheses` for each group so that inference can be done independently.
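The new flat layout can be illustrated with a minimal sketch (plain Python, hypothetical names): the list holds `num_beam_groups` hypothesis pools per batch item, and the pool for group `j` of batch item `i` sits at index `i * num_beam_groups + j`, matching the `batch_group_idx` computation in the diff:

```python
def make_hyp_pools(batch_size, num_beam_groups):
    # One independent hypothesis pool per (batch item, beam group),
    # flattened row-major, like the new self._beam_hyps.
    return [[] for _ in range(batch_size * num_beam_groups)]

def pool_index(batch_idx, group_index, num_beam_groups):
    # Matches batch_group_idx = batch_idx * self.num_beam_groups + group_index
    return batch_idx * num_beam_groups + group_index

pools = make_hyp_pools(batch_size=2, num_beam_groups=3)
pools[pool_index(1, 2, num_beam_groups=3)].append("finished hypothesis")
print(len(pools))  # -> 6
```

Because each pool now keeps only `group_size = num_beams // num_beam_groups` beams, one group can no longer crowd out another group's hypotheses.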
Changes are as follows.
inference code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration."
outputs = model.generate(
tokenizer.encode(text, return_tensors="pt", max_length=512),
num_beam_groups=2,
num_beams=2,
diversity_penalty=1000000.0,
num_return_sequences=2,
)
print("\n".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)))
```
before:
```
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences..
```
after:
```
The study of the activity of the brain's encoders and decoders has revealed a range of different models of how the brain processes information.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
```
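The per-group bookkeeping described above can be sketched in plain Python. This is an illustrative toy, not the transformers implementation; `partition_beams` and `make_group_hypotheses` are invented names for this sketch:

```python
# Minimal sketch of the bookkeeping the PR changes: each beam group gets its
# own hypothesis container instead of all groups sharing one, so finishing
# beams in one group cannot crowd out another group's slots.

def partition_beams(num_beams, num_beam_groups):
    """Split beam indices into contiguous groups, as diverse beam search does."""
    assert num_beams % num_beam_groups == 0, "num_beams must divide evenly"
    num_sub_beams = num_beams // num_beam_groups
    return [
        list(range(g * num_sub_beams, (g + 1) * num_sub_beams))
        for g in range(num_beam_groups)
    ]

def make_group_hypotheses(num_beam_groups):
    """One independent hypothesis list per group (the gist of this PR)."""
    return [[] for _ in range(num_beam_groups)]

groups = partition_beams(num_beams=4, num_beam_groups=2)
hyps = make_group_hypotheses(2)
print(groups)  # [[0, 1], [2, 3]]
```

With a shared container, the first group to finish could fill every slot; with one list per group, each group retains its `num_sub_beams` slots.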
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24369
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| https://api.github.com/repos/huggingface/transformers/pulls/24407 | 2023-06-21T16:41:22Z | 2023-06-27T09:43:11Z | 2023-06-27T09:43:11Z | 2023-07-06T03:32:58Z | 1,842 | huggingface/transformers | 12,513 |
Update set_module_args in unit test docs (#55244) | diff --git a/docs/docsite/rst/dev_guide/testing_units_modules.rst b/docs/docsite/rst/dev_guide/testing_units_modules.rst
index 07d5d51ed16be5..dd1e77cc6f20ce 100644
--- a/docs/docsite/rst/dev_guide/testing_units_modules.rst
+++ b/docs/docsite/rst/dev_guide/testing_units_modules.rst
@@ -288,20 +288,15 @@ Passing Arguments
.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
closed since the function below will be provided in a library file.
-To pass arguments to a module correctly, use a function that stores the
-parameters in a special string variable. Module creation and argument processing is
+To pass arguments to a module correctly, use the ``set_module_args`` method which accepts a dictionary
+as its parameter. Module creation and argument processing is
handled through the :class:`AnsibleModule` object in the basic section of the utilities. Normally
this accepts input on ``STDIN``, which is not convenient for unit testing. When the special
-variable is set it will be treated as if the input came on ``STDIN`` to the module.::
+variable is set it will be treated as if the input came on ``STDIN`` to the module. Simply call that function before setting up your module::
- import json
- from ansible.module_utils._text import to_bytes
-
- def set_module_args(args):
- args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
- basic._ANSIBLE_ARGS = to_bytes(args)
-
-simply call that function before setting up your module::
+ import json
+ from units.modules.utils import set_module_args
+ from ansible.module_utils._text import to_bytes
def test_already_registered(self):
set_module_args({
| ##### SUMMARY
Backports #55244
* Update docs/docsite/rst/dev_guide/testing_units_modules.rst
`set_module_args()` should be imported and used in the unit test documentation.
Co-Authored-By: kbreit <kevin.breit@kevinbreit.net>
(cherry picked from commit 521e62aa3873e562c31df0c5aabc9e01bb6643f0)
##### ISSUE TYPE
- Docs Pull Request
##### COMPONENT NAME
docs.ansible.com
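For illustration, here is a self-contained sketch of the mechanism `set_module_args` relies on, mirroring the helper function shown in the diff. `FakeBasic` is a stand-in for `ansible.module_utils.basic`, not the real module:

```python
# Ansible modules normally read their parameters as JSON on STDIN; for unit
# tests, the serialized arguments are stashed in a module-level variable
# (basic._ANSIBLE_ARGS) that AnsibleModule checks first. This mock recreates
# that pattern without importing Ansible.
import json

class FakeBasic:
    _ANSIBLE_ARGS = None  # stands in for ansible.module_utils.basic

basic = FakeBasic()

def set_module_args(args):
    """Serialize args the way the documented helper does."""
    basic._ANSIBLE_ARGS = json.dumps({"ANSIBLE_MODULE_ARGS": args}).encode("utf-8")

set_module_args({"state": "present", "name": "httpd"})
decoded = json.loads(basic._ANSIBLE_ARGS)
print(decoded["ANSIBLE_MODULE_ARGS"]["name"])  # httpd
```

Calling `set_module_args(...)` before instantiating the module is what lets the test supply arguments without touching STDIN.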
| https://api.github.com/repos/ansible/ansible/pulls/55604 | 2019-04-22T14:21:34Z | 2019-04-22T14:59:40Z | 2019-04-22T14:59:39Z | 2019-07-25T17:32:10Z | 402 | ansible/ansible | 48,841 |
Bump asyncsleepiq to v1.4.1 | diff --git a/homeassistant/components/sleepiq/manifest.json b/homeassistant/components/sleepiq/manifest.json
index d58c20b14b8f..62bd3930c774 100644
--- a/homeassistant/components/sleepiq/manifest.json
+++ b/homeassistant/components/sleepiq/manifest.json
@@ -11,5 +11,5 @@
"documentation": "https://www.home-assistant.io/integrations/sleepiq",
"iot_class": "cloud_polling",
"loggers": ["asyncsleepiq"],
- "requirements": ["asyncsleepiq==1.4.0"]
+ "requirements": ["asyncsleepiq==1.4.1"]
}
diff --git a/requirements_all.txt b/requirements_all.txt
index 44e75b0cf742..445c7fe87363 100644
--- a/requirements_all.txt
+++ b/requirements_all.txt
@@ -478,7 +478,7 @@ asyncinotify==4.0.2
asyncpysupla==0.0.5
# homeassistant.components.sleepiq
-asyncsleepiq==1.4.0
+asyncsleepiq==1.4.1
# homeassistant.components.aten_pe
# atenpdu==0.3.2
diff --git a/requirements_test_all.txt b/requirements_test_all.txt
index 5d6bc299ceb9..06e9f55180fa 100644
--- a/requirements_test_all.txt
+++ b/requirements_test_all.txt
@@ -424,7 +424,7 @@ arcam-fmj==1.4.0
async-upnp-client==0.38.0
# homeassistant.components.sleepiq
-asyncsleepiq==1.4.0
+asyncsleepiq==1.4.1
# homeassistant.components.aurora
auroranoaa==0.0.3
| <!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Update asyncsleepiq to v1.4.1: https://github.com/kbickar/asyncsleepiq/releases/tag/v1.4.1
Fix issue for some users where the integration wouldn't load
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [X] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #106675
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] I have followed the [perfect PR recommendations][perfect-pr]
- [ ] The code has been formatted using Ruff (`ruff format homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/development_checklist/
[manifest-docs]: https://developers.home-assistant.io/docs/creating_integration_manifest/
[quality-scale]: https://developers.home-assistant.io/docs/integration_quality_scale_index/
[docs-repository]: https://github.com/home-assistant/home-assistant.io
[perfect-pr]: https://developers.home-assistant.io/docs/review-process/#creating-the-perfect-pr
| https://api.github.com/repos/home-assistant/core/pulls/106682 | 2023-12-29T21:50:35Z | 2023-12-30T00:45:04Z | 2023-12-30T00:45:04Z | 2024-01-02T17:09:24Z | 414 | home-assistant/core | 38,872 |
update test cookie handling for Werkzeug 2.3 | diff --git a/src/flask/testing.py b/src/flask/testing.py
index 8cb2d1bd94..a972a3f5e9 100644
--- a/src/flask/testing.py
+++ b/src/flask/testing.py
@@ -11,7 +11,6 @@
from werkzeug.wrappers import Request as BaseRequest
from .cli import ScriptInfo
-from .globals import _cv_request
from .sessions import SessionMixin
if t.TYPE_CHECKING: # pragma: no cover
@@ -137,40 +136,45 @@ def session_transaction(
:meth:`~flask.Flask.test_request_context` which are directly
passed through.
"""
- if self.cookie_jar is None:
- raise RuntimeError(
- "Session transactions only make sense with cookies enabled."
+ # new cookie interface for Werkzeug >= 2.3
+ cookie_storage = self._cookies if hasattr(self, "_cookies") else self.cookie_jar
+
+ if cookie_storage is None:
+ raise TypeError(
+ "Cookies are disabled. Create a client with 'use_cookies=True'."
)
+
app = self.application
- environ_overrides = kwargs.setdefault("environ_overrides", {})
- self.cookie_jar.inject_wsgi(environ_overrides)
- outer_reqctx = _cv_request.get(None)
- with app.test_request_context(*args, **kwargs) as c:
- session_interface = app.session_interface
- sess = session_interface.open_session(app, c.request)
- if sess is None:
- raise RuntimeError(
- "Session backend did not open a session. Check the configuration"
- )
-
- # Since we have to open a new request context for the session
- # handling we want to make sure that we hide out own context
- # from the caller. By pushing the original request context
- # (or None) on top of this and popping it we get exactly that
- # behavior. It's important to not use the push and pop
- # methods of the actual request context object since that would
- # mean that cleanup handlers are called
- token = _cv_request.set(outer_reqctx) # type: ignore[arg-type]
- try:
- yield sess
- finally:
- _cv_request.reset(token)
-
- resp = app.response_class()
- if not session_interface.is_null_session(sess):
- session_interface.save_session(app, sess, resp)
- headers = resp.get_wsgi_headers(c.request.environ)
- self.cookie_jar.extract_wsgi(c.request.environ, headers)
+ ctx = app.test_request_context(*args, **kwargs)
+
+ if hasattr(self, "_add_cookies_to_wsgi"):
+ self._add_cookies_to_wsgi(ctx.request.environ)
+ else:
+ self.cookie_jar.inject_wsgi(ctx.request.environ) # type: ignore[union-attr]
+
+ with ctx:
+ sess = app.session_interface.open_session(app, ctx.request)
+
+ if sess is None:
+ raise RuntimeError("Session backend did not open a session.")
+
+ yield sess
+ resp = app.response_class()
+
+ if app.session_interface.is_null_session(sess):
+ return
+
+ with ctx:
+ app.session_interface.save_session(app, sess, resp)
+
+ if hasattr(self, "_update_cookies_from_response"):
+ self._update_cookies_from_response(
+ ctx.request.host.partition(":")[0], resp.headers.getlist("Set-Cookie")
+ )
+ else:
+ self.cookie_jar.extract_wsgi( # type: ignore[union-attr]
+ ctx.request.environ, resp.headers
+ )
def _copy_environ(self, other):
out = {**self.environ_base, **other}
diff --git a/tests/test_basic.py b/tests/test_basic.py
index 9aca667938..a622fa93ab 100644
--- a/tests/test_basic.py
+++ b/tests/test_basic.py
@@ -261,8 +261,9 @@ def index():
return "Hello World"
rv = client.get("/", "http://example.com/")
- assert "domain=.example.com" in rv.headers["set-cookie"].lower()
- assert "httponly" in rv.headers["set-cookie"].lower()
+ cookie = rv.headers["set-cookie"].lower()
+ # or condition for Werkzeug < 2.3
+ assert "domain=example.com" in cookie or "domain=.example.com" in cookie
def test_session_using_server_name_and_port(app, client):
@@ -274,8 +275,9 @@ def index():
return "Hello World"
rv = client.get("/", "http://example.com:8080/")
- assert "domain=.example.com" in rv.headers["set-cookie"].lower()
- assert "httponly" in rv.headers["set-cookie"].lower()
+ cookie = rv.headers["set-cookie"].lower()
+ # or condition for Werkzeug < 2.3
+ assert "domain=example.com" in cookie or "domain=.example.com" in cookie
def test_session_using_server_name_port_and_path(app, client):
@@ -337,7 +339,8 @@ def clear():
rv = client.get("/", "http://www.example.com:8080/test/")
cookie = rv.headers["set-cookie"].lower()
- assert "domain=.example.com" in cookie
+ # or condition for Werkzeug < 2.3
+ assert "domain=example.com" in cookie or "domain=.example.com" in cookie
assert "path=/" in cookie
assert "secure" in cookie
assert "httponly" not in cookie
@@ -346,7 +349,8 @@ def clear():
rv = client.get("/clear", "http://www.example.com:8080/test/")
cookie = rv.headers["set-cookie"].lower()
assert "session=;" in cookie
- assert "domain=.example.com" in cookie
+ # or condition for Werkzeug < 2.3
+ assert "domain=example.com" in cookie or "domain=.example.com" in cookie
assert "path=/" in cookie
assert "secure" in cookie
assert "samesite" in cookie
diff --git a/tests/test_json.py b/tests/test_json.py
index ba9f38dc14..500eb64d51 100644
--- a/tests/test_json.py
+++ b/tests/test_json.py
@@ -267,25 +267,6 @@ def _has_encoding(name):
return False
-@pytest.mark.skipif(
- not _has_encoding("euc-kr"), reason="The euc-kr encoding is required."
-)
-def test_modified_url_encoding(app, client):
- class ModifiedRequest(flask.Request):
- url_charset = "euc-kr"
-
- app.request_class = ModifiedRequest
- app.url_map.charset = "euc-kr"
-
- @app.route("/")
- def index():
- return flask.request.args["foo"]
-
- rv = client.get("/", query_string={"foo": "정상처리"}, charset="euc-kr")
- assert rv.status_code == 200
- assert rv.get_data(as_text=True) == "정상처리"
-
-
def test_json_key_sorting(app, client):
app.debug = True
assert app.json.sort_keys
diff --git a/tests/test_testing.py b/tests/test_testing.py
index 40f6fe1b8b..14c5ade0bf 100644
--- a/tests/test_testing.py
+++ b/tests/test_testing.py
@@ -206,10 +206,10 @@ def test_session_transactions_keep_context(app, client, req_ctx):
def test_session_transaction_needs_cookies(app):
c = app.test_client(use_cookies=False)
- with pytest.raises(RuntimeError) as e:
+
+ with pytest.raises(TypeError, match="Cookies are disabled."):
with c.session_transaction():
pass
- assert "cookies" in str(e.value)
def test_test_client_context_binding(app, client):
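The diff's compatibility shim avoids version comparisons and instead probes for the new attribute. Below is a minimal sketch of that feature-detection pattern; `OldClient` and `NewClient` are stand-ins for this sketch, not Werkzeug classes:

```python
# Feature detection with hasattr(): probe for the attribute the new Werkzeug
# test client provides (_cookies) and fall back to the legacy one
# (cookie_jar), instead of parsing version numbers.
class OldClient:
    cookie_jar = ["legacy-cookie-jar"]  # pre-2.3 style storage

class NewClient:
    _cookies = {"session": "abc"}  # Werkzeug >= 2.3 style storage

def cookie_storage(client):
    """Return whichever cookie store this client version provides."""
    return client._cookies if hasattr(client, "_cookies") else client.cookie_jar

print(cookie_storage(NewClient()))  # {'session': 'abc'}
print(cookie_storage(OldClient()))  # ['legacy-cookie-jar']
```

Feature detection keeps one code path working across both library versions and degrades gracefully if the internal attribute is renamed again.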
| https://api.github.com/repos/pallets/flask/pulls/5053 | 2023-04-12T17:35:33Z | 2023-04-12T17:57:14Z | 2023-04-12T17:57:14Z | 2023-04-27T00:05:33Z | 1,773 | pallets/flask | 20,066 | |
I: Grammar, spacing and typo fixes | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 8df991e15..d0c4ca172 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -1037,7 +1037,7 @@ Interface rule summary:
* [I.7: State postconditions](#Ri-post)
* [I.8: Prefer `Ensures()` for expressing postconditions](#Ri-ensures)
* [I.9: If an interface is a template, document its parameters using concepts](#Ri-concepts)
-* [I.10: Use exceptions to signal a failure to perform a required tasks](#Ri-except)
+* [I.10: Use exceptions to signal a failure to perform a required task](#Ri-except)
* [I.11: Never transfer ownership by a raw pointer (`T*`)](#Ri-raw)
* [I.12: Declare a pointer that must not be null as `not_null`](#Ri-nullptr)
* [I.13: Do not pass an array as a single pointer](#Ri-array)
@@ -1133,7 +1133,7 @@ Global constants are useful.
The rule against global variables applies to namespace scope variables as well.
-**Alternative**: If you use global (more generally namespace scope data) to avoid copying, consider passing the data as an object by reference to `const`.
+**Alternative**: If you use global (more generally namespace scope) data to avoid copying, consider passing the data as an object by reference to `const`.
Another solution is to define the data as the state of some object and the operations as member functions.
**Warning**: Beware of data races: If one thread can access nonlocal data (or data passed by reference) while another thread executes the callee, we can have a data race.
@@ -1455,7 +1455,7 @@ Stating the postcondition would have made it clear:
// ... no m.unlock() ...
}
-The bug is now obvious (but only to a human reading comments)
+The bug is now obvious (but only to a human reading comments).
Better still, use [RAII](#Rr-raii) to ensure that the postcondition ("the lock must be released") is enforced in code:
@@ -1532,7 +1532,7 @@ Use the ISO Concepts TS style of requirements specification. For example:
Soon (maybe in 2017), most compilers will be able to check `requires` clauses once the `//` is removed.
For now, the concept TS is supported only in GCC 6.1.
-**See also**: See [generic programming](#SS-GP) and [concepts](#SS-t-concepts).
+**See also**: [Generic programming](#SS-GP) and [concepts](#SS-t-concepts).
##### Enforcement
@@ -1558,7 +1558,7 @@ This is a major source of errors.
What is an error?
An error means that the function cannot achieve its advertised purpose (including establishing postconditions).
-Calling code that ignores the error could lead to wrong results or undefined systems state.
+Calling code that ignores an error could lead to wrong results or undefined systems state.
For example, not being able to connect to a remote server is not by itself an error:
the server can refuse a connection for all kinds of reasons, so the natural thing is to return a result that the caller always has to check.
However, if failing to make a connection is considered an error, then a failure should throw an exception.
@@ -1579,7 +1579,7 @@ If you can't use exceptions (e.g. because your code is full of old-style raw-poi
}
// ... use val ...
-This style unfortunately leads to uninitialized variable.
+This style unfortunately leads to uninitialized variables.
A facility [structured bindings](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0144r1.pdf) to deal with that will become available in C++17.
[val, error_code] = do_something();
@@ -1588,7 +1588,6 @@ A facility [structured bindings](http://www.open-std.org/jtc1/sc22/wg21/docs/pap
}
// ... use val ...
-
##### Note
We don't consider "performance" a valid reason not to use exceptions.
@@ -1637,7 +1636,7 @@ Consider returning the result by value (use move semantics if the result is larg
However that is less elegant and less efficient unless reference semantics are needed.
**Alternative**: Sometimes older code can't be modified because of ABI compatibility requirements or lack of resources.
-In that case, mark owning pointers using `owner` from [guideline support library](#S-gsl):
+In that case, mark owning pointers using `owner` from the [guideline support library](#S-gsl):
owner<X*> compute(args) // It is now clear that ownership is transferred
{
@@ -1687,11 +1686,11 @@ By stating the intent in source, implementers and tools can provide better diagn
##### Note
-`not_null` is defined in the [guideline support library](#S-gsl)
+`not_null` is defined in the [guideline support library](#S-gsl).
##### Note
-The assumption that the pointer to `char` pointed to a C-style string (a zero-terminated string of characters) was still implicit, and a potential source of confusion and errors. Use `zstring` in preference to `const char*`.
+The assumption that the pointer to `char` pointed to a C-style string (a zero-terminated string of characters) was still implicit, and a potential source of confusion and errors. Use `czstring` in preference to `const char*`.
// we can assume that p cannot be nullptr
// we can assume that p points to a zero-terminated array of characters
@@ -1841,7 +1840,7 @@ There are functions that are best expressed with four individual arguments, but
##### Enforcement
-* Warn when a functions declares two iterators (including pointers) of the same type instead of a range or a view.
+* Warn when a function declares two iterators (including pointers) of the same type instead of a range or a view.
* (Not enforceable) This is a philosophical guideline that is infeasible to check directly.
### <a name="Ri-unrelated"></a>I.24: Avoid adjacent unrelated parameters of the same type
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/824 | 2017-01-01T01:01:25Z | 2017-01-02T20:51:19Z | 2017-01-02T20:51:19Z | 2017-01-02T22:44:08Z | 1,420 | isocpp/CppCoreGuidelines | 16,026 | |
Single letter permission changes (chmod +x) | diff --git a/share/adapters/chmod.sh b/share/adapters/chmod.sh
index 0ddfb897..9c20eddf 100755
--- a/share/adapters/chmod.sh
+++ b/share/adapters/chmod.sh
@@ -54,39 +54,93 @@ chmod_calc(){
fi
done
# If permission string is given -> calc number
- elif [[ ${#1} -eq 9 && $1 =~ ^[r,s,S,t,T,w,x,-]+$ ]]
+ elif [[ $1 =~ ^[r,s,S,t,T,w,x,-]+$ ]]
then
- p_s=$1
- num=0
- # Process specials
- [[ 'sS' =~ ${p_s:2:1} ]] && setuid='X' && num=$((num+4))
- [[ 'sS' =~ ${p_s:5:1} ]] && setgid='X' && num=$((num+2))
- [[ 'tT' =~ ${p_s:8:1} ]] && sticky='X' && num=$((num+1))
- [ ${num} -gt 0 ] && p_n+="$num"
- # Calculate rest of p_n number while populating arrays for table
- for (( i=0; i<${#p_s}; i+=0 ))
- do
+ # FULL STRING
+ if [[ ${#1} -eq 9 ]]
+ then
+ p_s=$1
num=0
- [[ "r-" =~ ${p_s:$i:1} ]] || return 1
- [[ ${p_s:$i:1} == 'r' ]] && R+=('X') || R+=(' ')
- [[ ${p_s:$((i++)):1} == 'r' ]] && let num++
- num=$(( num << 1 ))
- [[ "w-" =~ ${p_s:$i:1} ]] || return 1
- [[ ${p_s:$i:1} == 'w' ]] && W+=('X') || W+=(' ')
- [[ ${p_s:$((i++)):1} == 'w' ]] && let num++
- num=$(( num << 1 ))
- if [ $i -lt 6 ]
+ # Process specials
+ [[ 'sS' =~ ${p_s:2:1} ]] && setuid='X' && num=$((num+4))
+ [[ 'sS' =~ ${p_s:5:1} ]] && setgid='X' && num=$((num+2))
+ [[ 'tT' =~ ${p_s:8:1} ]] && sticky='X' && num=$((num+1))
+ [ ${num} -gt 0 ] && p_n+="$num"
+ # Calculate rest of p_n number while populating arrays for table
+ for (( i=0; i<${#p_s}; i+=0 ))
+ do
+ num=0
+ [[ "r-" =~ ${p_s:$i:1} ]] || return 1
+ [[ ${p_s:$i:1} == 'r' ]] && R+=('X') || R+=(' ')
+ [[ ${p_s:$((i++)):1} == 'r' ]] && let num++
+ num=$(( num << 1 ))
+ [[ "w-" =~ ${p_s:$i:1} ]] || return 1
+ [[ ${p_s:$i:1} == 'w' ]] && W+=('X') || W+=(' ')
+ [[ ${p_s:$((i++)):1} == 'w' ]] && let num++
+ num=$(( num << 1 ))
+ if [ $i -lt 6 ]
+ then
+ [[ "sSx-" =~ ${p_s:$i:1} ]] || return 1
+ [[ "sx" =~ ${p_s:$i:1} ]] && X+=('X') || X+=(' ')
+ [[ "sx" =~ ${p_s:$((i++)):1} ]] && let num++
+ else
+ [[ "tTx-" =~ ${p_s:$i:1} ]] || return 1
+ [[ "tx" =~ ${p_s:$i:1} ]] && X+=('X') || X+=(' ')
+ [[ "tx" =~ ${p_s:$((i++)):1} ]] && let num++
+ fi
+ p_n+="$num"
+ done
+ # PARTIAL STRING
+ elif [[ $1 =~ ^[r,s,t,w,x]+$ ]]
+ then
+ p_s='---------'
+ p_n0=0
+ p_n1=0
+ p_n2=0
+ p_n3=0
+ R=(' ' ' ' ' ')
+ W=(' ' ' ' ' ')
+ X=(' ' ' ' ' ')
+ if [[ $1 =~ 'r' ]]
then
- [[ "sSx-" =~ ${p_s:$i:1} ]] || return 1
- [[ "sx" =~ ${p_s:$i:1} ]] && X+=('X') || X+=(' ')
- [[ "sx" =~ ${p_s:$((i++)):1} ]] && let num++
- else
- [[ "tTx-" =~ ${p_s:$i:1} ]] || return 1
- [[ "tx" =~ ${p_s:$i:1} ]] && X+=('X') || X+=(' ')
- [[ "tx" =~ ${p_s:$((i++)):1} ]] && let num++
+ p_s=$(echo $p_s | sed 's/./r/1; s/./r/4; s/./r/7;')
+ let p_n1+=4
+ let p_n2+=4
+ let p_n3+=4
+ R=('X' 'X' 'X')
fi
- p_n+="$num"
- done
+ if [[ $1 =~ 'w' ]]
+ then
+ p_s=$(echo $p_s | sed 's/./w/2')
+ let p_n1+=2
+ W=('X' ' ' ' ')
+ fi
+ if [[ $1 =~ 'x' ]]
+ then
+ p_s=$(echo $p_s | sed 's/./x/3; s/./x/6; s/./x/9;')
+ let p_n1+=1
+ let p_n2+=1
+ let p_n3+=1
+ X=('X' 'X' 'X')
+ fi
+ if [[ $1 =~ 's' ]]
+ then
+ [[ ${p_s:2:1} == 'x' ]] && p_s=$(echo $p_s | sed 's/./s/3') || p_s=$(echo $p_s | sed 's/./S/3')
+ [[ ${p_s:5:1} == 'x' ]] && p_s=$(echo $p_s | sed 's/./s/6') || p_s=$(echo $p_s | sed 's/./S/6')
+ let p_n0+=6
+ setuid='X'
+ setgid='X'
+ fi
+ if [[ $1 =~ 't' ]]
+ then
+ let p_n0+=1
+ [[ ${p_s:8:1} == 'x' ]] && p_s=$(echo $p_s | sed 's/./t/9') || p_s=$(echo $p_s | sed 's/./T/9')
+ sticky='X'
+ fi
+ p_n="${p_n0}${p_n1}${p_n2}${p_n3}"
+ else
+ return 1
+ fi
else
return 1
fi
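For reference, the partial-string arithmetic in the branch above can be expressed as a small Python sketch. `partial_to_octal` is an illustrative name; it mirrors the adapter's shell logic (including granting `w` to the owner only in partial-string mode), not `chmod(1)` itself:

```python
# Each letter toggles bits in the octal digits it affects: r and x apply to
# user, group, and other; w only to user in this adapter's partial-string
# mode; s sets setuid (4) + setgid (2); t sets the sticky bit (1).
def partial_to_octal(letters):
    special, user, group, other = 0, 0, 0, 0
    if "r" in letters:
        user |= 4; group |= 4; other |= 4
    if "w" in letters:
        user |= 2            # adapter grants write to the owner only
    if "x" in letters:
        user |= 1; group |= 1; other |= 1
    if "s" in letters:
        special |= 6         # setuid (4) + setgid (2)
    if "t" in letters:
        special |= 1         # sticky bit
    return f"{special}{user}{group}{other}"

print(partial_to_octal("x"))    # 0111
print(partial_to_octal("rwx"))  # 0755
```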
| Previously chmod.sh only accepted strings of length 9 to convert to an equivalent permission number.
Now shorter strings are accepted for use cases such as:
- ```chmod +x```
- ```chmod +sx```
- etc ... | https://api.github.com/repos/chubin/cheat.sh/pulls/217 | 2020-07-03T13:01:40Z | 2020-07-06T19:40:55Z | 2020-07-06T19:40:55Z | 2020-07-06T20:16:30Z | 1,765 | chubin/cheat.sh | 15,250 |
Update quickstart.rst doc for awareness of Flask extensions | diff --git a/docs/quickstart.rst b/docs/quickstart.rst
index 39957d7749..4906d02536 100644
--- a/docs/quickstart.rst
+++ b/docs/quickstart.rst
@@ -873,6 +873,15 @@ can do it like this::
from werkzeug.contrib.fixers import LighttpdCGIRootFix
app.wsgi_app = LighttpdCGIRootFix(app.wsgi_app)
+Using Flask Extensions
+----------------------
+
+Extensions are packages that help you accomplish common tasks. For
+example, Flask-SQLAlchemy provides SQLAlchemy support that makes it simple
+and easy to use with Flask.
+
+For more on Flask extensions, have a look at :ref:`extensions`.
+
Deploying to a Web Server
-------------------------
Addresses concerns in issue #532 regarding not knowing about Flask extensions, as they are not mentioned in the quickstart guide. Maybe this can help increase awareness of Flask extensions among new users?
| https://api.github.com/repos/pallets/flask/pulls/1376 | 2015-03-12T04:14:56Z | 2015-03-12T08:36:07Z | 2015-03-12T08:36:07Z | 2020-11-14T04:53:04Z | 178 | pallets/flask | 20,781 |
add Awesome Core ML Models | diff --git a/README.md b/README.md
index 0893f1f1..4c619f87 100644
--- a/README.md
+++ b/README.md
@@ -1298,6 +1298,7 @@ be
* [Swift Brain](https://github.com/vlall/Swift-Brain) - The first neural network / machine learning library written in Swift. This is a project for AI algorithms in Swift for iOS and OS X development. This project includes algorithms focused on Bayes theorem, neural networks, SVMs, Matrices, etc..
* [Perfect TensorFlow](https://github.com/PerfectlySoft/Perfect-TensorFlow) - Swift Language Bindings of TensorFlow. Using native TensorFlow models on both macOS / Linux.
* [Awesome CoreML](https://github.com/NilStack/awesome-CoreML-models) - A curated list of pretrained CoreML models
+* [Awesome Core ML Models](https://github.com/likedan/Awesome-CoreML-Models) - A curated list of machine learning models in CoreML format.
<a name="tensor"></a>
## TensorFlow
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/422 | 2017-09-11T07:57:22Z | 2017-09-11T15:42:10Z | 2017-09-11T15:42:10Z | 2017-09-11T15:42:14Z | 240 | josephmisiti/awesome-machine-learning | 52,209 | |
fix typo | diff --git a/tools/program.py b/tools/program.py
index cbca715a8e..fb9e3802a0 100755
--- a/tools/program.py
+++ b/tools/program.py
@@ -212,7 +212,7 @@ def train(config,
stats['lr'] = lr
train_stats.update(stats)
- if cal_metric_during_train: # onlt rec and cls need
+ if cal_metric_during_train: # only rec and cls need
batch = [item.numpy() for item in batch]
post_result = post_process_class(preds, batch[1])
eval_class(post_result, batch)
@@ -238,21 +238,21 @@ def train(config,
# eval
if global_step > start_eval_step and \
(global_step - start_eval_step) % eval_batch_step == 0 and dist.get_rank() == 0:
- cur_metirc = eval(model, valid_dataloader, post_process_class,
+ cur_metric = eval(model, valid_dataloader, post_process_class,
eval_class)
- cur_metirc_str = 'cur metirc, {}'.format(', '.join(
- ['{}: {}'.format(k, v) for k, v in cur_metirc.items()]))
- logger.info(cur_metirc_str)
+ cur_metric_str = 'cur metric, {}'.format(', '.join(
+ ['{}: {}'.format(k, v) for k, v in cur_metric.items()]))
+ logger.info(cur_metric_str)
# logger metric
if vdl_writer is not None:
- for k, v in cur_metirc.items():
+ for k, v in cur_metric.items():
if isinstance(v, (float, int)):
vdl_writer.add_scalar('EVAL/{}'.format(k),
- cur_metirc[k], global_step)
- if cur_metirc[main_indicator] >= best_model_dict[
+ cur_metric[k], global_step)
+ if cur_metric[main_indicator] >= best_model_dict[
main_indicator]:
- best_model_dict.update(cur_metirc)
+ best_model_dict.update(cur_metric)
best_model_dict['best_epoch'] = epoch
save_model(
model,
@@ -263,7 +263,7 @@ def train(config,
prefix='best_accuracy',
best_model_dict=best_model_dict,
epoch=epoch)
- best_str = 'best metirc, {}'.format(', '.join([
+ best_str = 'best metric, {}'.format(', '.join([
'{}: {}'.format(k, v) for k, v in best_model_dict.items()
]))
logger.info(best_str)
@@ -294,7 +294,7 @@ def train(config,
prefix='iter_epoch_{}'.format(epoch),
best_model_dict=best_model_dict,
epoch=epoch)
- best_str = 'best metirc, {}'.format(', '.join(
+ best_str = 'best metric, {}'.format(', '.join(
['{}: {}'.format(k, v) for k, v in best_model_dict.items()]))
logger.info(best_str)
if dist.get_rank() == 0 and vdl_writer is not None:
@@ -323,13 +323,13 @@ def eval(model, valid_dataloader, post_process_class, eval_class):
eval_class(post_result, batch)
pbar.update(1)
total_frame += len(images)
- # Get final metirc,eg. acc or hmean
- metirc = eval_class.get_metric()
+ # Get final metric,eg. acc or hmean
+ metric = eval_class.get_metric()
pbar.close()
model.train()
- metirc['fps'] = total_frame / total_time
- return metirc
+ metric['fps'] = total_frame / total_time
+ return metric
def preprocess(is_train=False):
| https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/1831 | 2021-01-26T07:16:51Z | 2021-01-26T07:44:12Z | 2021-01-26T07:44:12Z | 2021-01-26T07:44:12Z | 837 | PaddlePaddle/PaddleOCR | 42,197 | |
Add xeno-canto API to Animals | diff --git a/README.md b/README.md
index 584836d8f2..cdddef3970 100644
--- a/README.md
+++ b/README.md
@@ -182,6 +182,7 @@ API | Description | Auth | HTTPS | CORS |
| [RescueGroups](https://userguide.rescuegroups.org/display/APIDG/API+Developers+Guide+Home) | Adoption | No | Yes | Unknown |
| [Shibe.Online](http://shibe.online/) | Random pictures of Shiba Inu, cats or birds | No | Yes | Yes |
| [The Dog](https://thedogapi.com/) | A public service all about Dogs, free to use when making your fancy new App, Website or Service | `apiKey` | Yes | No |
+| [xeno-canto](https://xeno-canto.org/explore/api) | Bird recordings | No | Yes | Unknown |
| [Zoo Animals](https://zoo-animal-api.herokuapp.com/) | Facts and pictures of zoo animals | No | Yes | Yes |
**[⬆ Back to Index](#index)**
| <!-- Thank you for taking the time to work on a Pull Request for this project! -->
<!-- To ensure your PR is dealt with swiftly please check the following: -->
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not have more than 100 characters
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/3059 | 2022-02-16T00:24:57Z | 2022-03-02T04:37:20Z | 2022-03-02T04:37:20Z | 2022-03-02T04:37:50Z | 246 | public-apis/public-apis | 35,310 |