| title | diff | body | url | created_at | closed_at | merged_at | updated_at | diff_len | repo_name | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|
Update Dockerfile | diff --git a/Dockerfile b/Dockerfile
index 95c098f9f51..e0653e0f9b3 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -12,7 +12,7 @@ RUN python -m pip install --upgrade pip
RUN pip uninstall -y nvidia-tensorboard nvidia-tensorboard-plugin-dlprof
RUN pip install --no-cache -r requirements.txt coremltools onnx gsutil notebook wandb>=0.12.2
RUN pip install --no-cache -U torch torchvision numpy
-# RUN pip install --no-cache torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
+# RUN pip install --no-cache torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# Create working directory
RUN mkdir -p /usr/src/app
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Update to PyTorch 1.9.1 in Dockerfile 🐋🔥
### 📊 Key Changes
- Commented out the installation line for PyTorch 1.9.0 and torchvision 0.10.0.
- Added a new line to install PyTorch 1.9.1 and torchvision 0.10.1.
### 🎯 Purpose & Impact
- **Keeping Updated:** Ensuring the Docker container uses the latest stable PyTorch version for better performance and feature support. 🚀
- **Consistency:** Users will now work with more recent versions of PyTorch and torchvision, keeping the development environment consistent with the latest best practices. 💻
- **Potential Impact:** Users could expect more stability and possibly new features or enhanced performance from the newer PyTorch release. Additionally, they should verify their code for compatibility with these updates. ⚙️ | https://api.github.com/repos/ultralytics/yolov5/pulls/4935 | 2021-09-27T02:16:04Z | 2021-09-27T02:16:14Z | 2021-09-27T02:16:14Z | 2024-01-19T15:27:21Z | 230 | ultralytics/yolov5 | 25,205 |
Update README to latest Composer version | diff --git a/README.md b/README.md
index 5bfea3cb2394..bce74a025ffa 100755
--- a/README.md
+++ b/README.md
@@ -92,7 +92,7 @@ DeepSpeed has been integrated with several different popular open-source DL fram
<img src="docs/assets/images/transformers-light.png#gh-light-mode-only" width="250px"><img src="docs/assets/images/transformers-dark.png#gh-dark-mode-only" width="250px"> | [Transformers with DeepSpeed](https://huggingface.co/docs/transformers/main/main_classes/deepspeed) |
| <img src="docs/assets/images/accelerate-light.png#gh-light-mode-only" width="250px"><img src="docs/assets/images/accelerate-dark.png#gh-dark-mode-only" width="250px"> | [Accelerate with DeepSpeed](https://huggingface.co/docs/accelerate/main/en/deepspeed) |
| <img src="docs/assets/images/lightning-light.svg#gh-light-mode-only" width="200px"><img src="docs/assets/images/lightning-dark.svg#gh-dark-mode-only" width="200px"> | [Lightning with DeepSpeed](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.strategies.DeepSpeedStrategy.html) |
-| <img src="docs/assets/images/mosaicml.svg" width="200px"> | [MosaicML with DeepSpeed](https://docs.mosaicml.com/en/v0.8.0/trainer/using_the_trainer.html?highlight=deepspeed#deepspeed-integration) |
+| <img src="docs/assets/images/mosaicml.svg" width="200px"> | [MosaicML with DeepSpeed](https://docs.mosaicml.com/en/latest/trainer/using_the_trainer.html?highlight=deepspeed#deepspeed-integration) |
---
diff --git a/docs/index.md b/docs/index.md
index 7303e7c41611..773b27800d99 100755
--- a/docs/index.md
+++ b/docs/index.md
@@ -75,7 +75,7 @@ DeepSpeed has been integrated with several different popular open-source DL fram
| <img src="assets/images/transformers-light.png" width="300px"> | [Transformers with DeepSpeed](https://huggingface.co/docs/transformers/main/main_classes/deepspeed) |
| <img src="assets/images/accelerate-light.png" width="300px">| [Accelerate with DeepSpeed](https://huggingface.co/docs/accelerate/main/en/deepspeed) |
| <img src="assets/images/lightning-light.svg" width="250px"> | [Lightning with DeepSpeed](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.strategies.DeepSpeedStrategy.html) |
-| <img src="assets/images/mosaicml.svg" width="250px"> | [MosaicML with DeepSpeed](https://docs.mosaicml.com/en/v0.8.0/trainer/using_the_trainer.html?highlight=deepspeed#deepspeed-integration) |
+| <img src="assets/images/mosaicml.svg" width="250px"> | [MosaicML with DeepSpeed](https://docs.mosaicml.com/en/latest/trainer/using_the_trainer.html?highlight=deepspeed#deepspeed-integration) |
DeepSpeed is an integral part of [Microsoft’s AI at Scale initiative](https://www.microsoft.com/en-us/research/project/ai-at-scale/) to enable next-generation AI capabilities at scale.
| Thanks for including our [Composer](https://github.com/mosaicml/composer) library! DeepSpeed has helped us easily train large scale language models. This is a small PR to update the README to always link to the `latest` version of our docs. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/2177 | 2022-08-03T16:28:58Z | 2022-08-04T16:29:44Z | 2022-08-04T16:29:44Z | 2022-08-04T16:29:44Z | 781 | microsoft/DeepSpeed | 10,772 |
Remove deprecated substitute interfaces | diff --git a/homeassistant/components/binary_sensor/__init__.py b/homeassistant/components/binary_sensor/__init__.py
index 8f2b6bc59b3bf7..4ba29e9b2ba96d 100644
--- a/homeassistant/components/binary_sensor/__init__.py
+++ b/homeassistant/components/binary_sensor/__init__.py
@@ -14,7 +14,6 @@
from homeassistant.helpers.entity import Entity
from homeassistant.const import (STATE_ON, STATE_OFF)
from homeassistant.helpers.config_validation import PLATFORM_SCHEMA # noqa
-from homeassistant.helpers.deprecation import deprecated_substitute
DOMAIN = 'binary_sensor'
SCAN_INTERVAL = timedelta(seconds=30)
@@ -66,7 +65,6 @@ def state(self):
return STATE_ON if self.is_on else STATE_OFF
@property
- @deprecated_substitute('sensor_class')
def device_class(self):
"""Return the class of this device, from component DEVICE_CLASSES."""
return None
diff --git a/homeassistant/components/media_player/__init__.py b/homeassistant/components/media_player/__init__.py
index a53f7f1367a1d2..870252cc55e53d 100644
--- a/homeassistant/components/media_player/__init__.py
+++ b/homeassistant/components/media_player/__init__.py
@@ -21,7 +21,6 @@
from homeassistant.helpers.entity import Entity
from homeassistant.helpers.entity_component import EntityComponent
from homeassistant.helpers.config_validation import PLATFORM_SCHEMA # noqa
-from homeassistant.helpers.deprecation import deprecated_substitute
from homeassistant.components.http import HomeAssistantView, KEY_AUTHENTICATED
from homeassistant.helpers.aiohttp_client import async_get_clientsession
import homeassistant.helpers.config_validation as cv
@@ -589,7 +588,6 @@ def shuffle(self):
return None
@property
- @deprecated_substitute('supported_media_commands')
def supported_features(self):
"""Flag media player features that are supported."""
return 0
| ## Description:
This PR removes the deprecated substitute interfaces from media_player and binary_sensor. These would only have been used by custom components and have been issuing warnings since 0.39. | https://api.github.com/repos/home-assistant/core/pulls/8701 | 2017-07-29T22:48:57Z | 2017-07-29T23:18:07Z | 2017-07-29T23:18:07Z | 2017-12-11T08:59:48Z | 429 | home-assistant/core | 39,156 |
Fixed wrong linking to CONTRIBUTING | diff --git a/README.rst b/README.rst
index 86d85ed1d16..b65230dc493 100644
--- a/README.rst
+++ b/README.rst
@@ -80,7 +80,7 @@ Documentation: https://letsencrypt.readthedocs.org/
Software project: https://github.com/letsencrypt/lets-encrypt-preview
-Notes for developers: CONTRIBUTING.rst_
+Notes for developers: CONTRIBUTING.md_
Main Website: https://letsencrypt.org/
| File has been renamed at some point,
Cheers,
Christian
| https://api.github.com/repos/certbot/certbot/pulls/323 | 2015-03-27T19:31:56Z | 2015-03-27T19:33:25Z | 2015-03-27T19:33:25Z | 2016-05-06T19:22:19Z | 116 | certbot/certbot | 2,014 |
Bump frontend to 20220901.0 | diff --git a/homeassistant/components/frontend/manifest.json b/homeassistant/components/frontend/manifest.json
index 1bf8962d615635..ebaa83f8d46eb6 100644
--- a/homeassistant/components/frontend/manifest.json
+++ b/homeassistant/components/frontend/manifest.json
@@ -2,7 +2,7 @@
"domain": "frontend",
"name": "Home Assistant Frontend",
"documentation": "https://www.home-assistant.io/integrations/frontend",
- "requirements": ["home-assistant-frontend==20220831.0"],
+ "requirements": ["home-assistant-frontend==20220901.0"],
"dependencies": [
"api",
"auth",
diff --git a/homeassistant/package_constraints.txt b/homeassistant/package_constraints.txt
index 56a6e5efd0542d..1fe755a9321675 100644
--- a/homeassistant/package_constraints.txt
+++ b/homeassistant/package_constraints.txt
@@ -19,7 +19,7 @@ cryptography==37.0.4
fnvhash==0.1.0
hass-nabucasa==0.55.0
home-assistant-bluetooth==1.3.0
-home-assistant-frontend==20220831.0
+home-assistant-frontend==20220901.0
httpx==0.23.0
ifaddr==0.1.7
jinja2==3.1.2
diff --git a/requirements_all.txt b/requirements_all.txt
index f36995134b4280..ff853b243b0a3e 100644
--- a/requirements_all.txt
+++ b/requirements_all.txt
@@ -848,7 +848,7 @@ hole==0.7.0
holidays==0.14.2
# homeassistant.components.frontend
-home-assistant-frontend==20220831.0
+home-assistant-frontend==20220901.0
# homeassistant.components.home_connect
homeconnect==0.7.2
diff --git a/requirements_test_all.txt b/requirements_test_all.txt
index 5fef48b1fe43d2..3e10d0676dab9e 100644
--- a/requirements_test_all.txt
+++ b/requirements_test_all.txt
@@ -625,7 +625,7 @@ hole==0.7.0
holidays==0.14.2
# homeassistant.components.frontend
-home-assistant-frontend==20220831.0
+home-assistant-frontend==20220901.0
# homeassistant.components.home_connect
homeconnect==0.7.2
| <!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Breaking change
<!--
If your PR contains a breaking change for existing users, it is important
to tell them what breaks, how to make it work again and why we did this.
This piece of text is published with the release notes, so it helps if you
write it towards our users, not us.
Note: Remove this section if this PR is NOT a breaking change.
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
https://github.com/home-assistant/frontend/releases/tag/20220901.0
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] The code has been formatted using Black (`black --fast homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
The integration reached or maintains the following [Integration Quality Scale][quality-scale]:
<!--
The Integration Quality Scale scores an integration on the code quality
and user experience. Each level of the quality scale consists of a list
of requirements. We highly recommend getting your integration scored!
-->
- [ ] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
| https://api.github.com/repos/home-assistant/core/pulls/77689 | 2022-09-02T01:24:24Z | 2022-09-02T01:24:30Z | 2022-09-02T01:24:30Z | 2022-09-03T01:30:49Z | 573 | home-assistant/core | 39,481 |
TST Improves testing for missing value support in random forest | diff --git a/sklearn/ensemble/tests/test_forest.py b/sklearn/ensemble/tests/test_forest.py
index 72111c9bb481c..31e9859076c92 100644
--- a/sklearn/ensemble/tests/test_forest.py
+++ b/sklearn/ensemble/tests/test_forest.py
@@ -1819,7 +1819,7 @@ def test_round_samples_to_one_when_samples_too_low(class_weight):
],
)
def test_missing_values_is_resilient(make_data, Forest):
- """Check that forest can deal with missing values and have decent performance."""
+ """Check that forest can deal with missing values and has decent performance."""
rng = np.random.RandomState(0)
n_samples, n_features = 1000, 10
@@ -1828,6 +1828,8 @@ def test_missing_values_is_resilient(make_data, Forest):
# Create dataset with missing values
X_missing = X.copy()
X_missing[rng.choice([False, True], size=X.shape, p=[0.95, 0.05])] = np.nan
+ assert np.isnan(X_missing).any()
+
X_missing_train, X_missing_test, y_train, y_test = train_test_split(
X_missing, y, random_state=0
)
@@ -1864,6 +1866,7 @@ def test_missing_value_is_predictive(Forest):
predictive_feature = rng.standard_normal(size=n_samples)
predictive_feature[y_mask] = np.nan
+ assert np.isnan(predictive_feature).any()
X_predictive = X_non_predictive.copy()
X_predictive[:, 5] = predictive_feature
| <!--
Thanks for contributing a pull request! Please ensure you have taken a look at
the contribution guidelines: https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md
-->
#### Reference Issues/PRs
<!--
Example: Fixes #1234. See also #3456.
Please use keywords (e.g., Fixes) to create link to the issues or pull requests
you resolved, so that they will automatically be closed when your pull request
is merged. See https://github.com/blog/1506-closing-issues-via-pull-requests
-->
Follow up to #26391
#### What does this implement/fix? Explain your changes.
This PR addresses the concerns from https://github.com/scikit-learn/scikit-learn/pull/26391#pullrequestreview-1553273729
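To make the precondition concrete, here is a stdlib-only sketch (a numpy-free analogue of the test's masking pattern; the helper name is hypothetical) of the guard the new assertions add — checking that NaNs were actually injected before testing resilience to them:

```python
import math
import random

def inject_missing(values, p=0.05, seed=0):
    # Replace each entry with NaN with probability p, mirroring the
    # rng.choice([False, True], p=[0.95, 0.05]) mask used in the test.
    rng = random.Random(seed)
    return [float('nan') if rng.random() < p else v for v in values]

X_missing = inject_missing([1.0] * 10_000)
# The PR's new assertion: state the "data contains missing values"
# precondition explicitly instead of relying on the random mask
# having hit at least one entry.
assert any(math.isnan(v) for v in X_missing)
```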
<!--
Please be aware that we are a loose team of volunteers so patience is
necessary; assistance handling other issues is very welcome. We value
all user contributions, no matter how minor they are. If we are slow to
review, either the pull request needs some benchmarking, tinkering,
convincing, etc. or more likely the reviewers are simply busy. In either
case, we ask for your understanding during the review process.
For more information, see our FAQ on this topic:
http://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention.
Thanks for contributing!
-->
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/26939 | 2023-07-29T21:40:20Z | 2023-08-02T13:24:31Z | 2023-08-02T13:24:31Z | 2023-08-02T13:24:31Z | 365 | scikit-learn/scikit-learn | 46,821 |
Add new tamper script which can replaces instances like 'IFNULL(A, B)... | diff --git a/tamper/ifnull2casewhenisnull.py b/tamper/ifnull2casewhenisnull.py
new file mode 100644
index 00000000000..cabea4cc8f5
--- /dev/null
+++ b/tamper/ifnull2casewhenisnull.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+
+"""
+Copyright (c) 2006-2017 sqlmap developers (http://sqlmap.org/)
+See the file 'doc/COPYING' for copying permission
+"""
+
+from lib.core.enums import PRIORITY
+
+__priority__ = PRIORITY.HIGHEST
+
+def dependencies():
+ pass
+
+def tamper(payload, **kwargs):
+ """
+ Replaces instances like 'IFNULL(A, B)' with 'CASE WHEN ISNULL(A) THEN (B) ELSE (A) END'
+
+ Requirement:
+ * MySQL
+ * SQLite (possibly)
+ * SAP MaxDB (possibly)
+
+ Tested against:
+ * MySQL 5.0 and 5.5
+
+ Notes:
+ * Useful to bypass very weak and bespoke web application firewalls
+ that filter the IFNULL() and IF() functions
+
+ >>> tamper('IFNULL(1, 2)')
+ 'CASE WHEN ISNULL(1) THEN (2) ELSE (1) END'
+ """
+
+ if payload and payload.find("IFNULL") > -1:
+ while payload.find("IFNULL(") > -1:
+ index = payload.find("IFNULL(")
+ depth = 1
+ comma, end = None, None
+
+ for i in xrange(index + len("IFNULL("), len(payload)):
+ if depth == 1 and payload[i] == ',':
+ comma = i
+
+ elif depth == 1 and payload[i] == ')':
+ end = i
+ break
+
+ elif payload[i] == '(':
+ depth += 1
+
+ elif payload[i] == ')':
+ depth -= 1
+
+ if comma and end:
+ _ = payload[index + len("IFNULL("):comma]
+ __ = payload[comma + 1:end].lstrip()
+ newVal = "CASE WHEN ISNULL(%s) THEN (%s) ELSE (%s) END" % (_, __, _)
+ payload = payload[:index] + newVal + payload[end + 1:]
+ else:
+ break
+
+ return payload
+
+
| Hello, I created a tamper script very similar to the one that already exists here: https://github.com/sqlmapproject/sqlmap/blob/master/tamper/ifnull2ifisnull.py
In my case the WAF blocked my payloads whenever the IFNULL() and IF() functions were present, so I just slightly modified the **ifnull2ifisnull** tamper script.
> tamper('IFNULL(1, 2)')
'CASE WHEN ISNULL(1) THEN (2) ELSE (1) END'
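For reference, a self-contained Python 3 re-implementation of the same transformation (the script above targets Python 2 and uses `xrange`), handy for trying the replacement outside of sqlmap:

```python
def tamper(payload):
    # Replace IFNULL(A, B) with CASE WHEN ISNULL(A) THEN (B) ELSE (A) END,
    # tracking parenthesis depth so commas and parens nested inside A or B
    # are skipped. Repeats until no IFNULL( call remains.
    while payload and 'IFNULL(' in payload:
        index = payload.find('IFNULL(')
        depth, comma, end = 1, None, None
        for i in range(index + len('IFNULL('), len(payload)):
            if depth == 1 and payload[i] == ',':
                comma = i
            elif depth == 1 and payload[i] == ')':
                end = i
                break
            elif payload[i] == '(':
                depth += 1
            elif payload[i] == ')':
                depth -= 1
        if comma is None or end is None:
            break  # malformed expression; leave the remainder untouched
        a = payload[index + len('IFNULL('):comma]
        b = payload[comma + 1:end].lstrip()
        payload = '%sCASE WHEN ISNULL(%s) THEN (%s) ELSE (%s) END%s' % (
            payload[:index], a, b, a, payload[end + 1:])
    return payload

print(tamper('IFNULL(1, 2)'))
# → CASE WHEN ISNULL(1) THEN (2) ELSE (1) END
```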
I guess it could be useful for others too. | https://api.github.com/repos/sqlmapproject/sqlmap/pulls/2791 | 2017-11-21T15:00:15Z | 2017-11-22T12:27:50Z | 2017-11-22T12:27:50Z | 2017-11-22T12:27:50Z | 577 | sqlmapproject/sqlmap | 15,022 |
Ability to download older assets | diff --git a/utils/downloads.py b/utils/downloads.py
index 776a8bba175..ad54cc6cb38 100644
--- a/utils/downloads.py
+++ b/utils/downloads.py
@@ -43,8 +43,8 @@ def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
LOGGER.info('')
-def attempt_download(file, repo='ultralytics/yolov5'): # from utils.downloads import *; attempt_download()
- # Attempt file download if does not exist
+def attempt_download(file, repo='ultralytics/yolov5', release='latest'):
+ # Attempt file download from GitHub release assets if not found locally
from utils.general import LOGGER
file = Path(str(file).strip().replace("'", ''))
@@ -62,8 +62,10 @@ def attempt_download(file, repo='ultralytics/yolov5'): # from utils.downloads i
# GitHub assets
file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
+ if release != 'latest' and not release.startswith('tags/'):
+ release = f'tags/{release}' # prepend i.e. tags/v6.1
try:
- response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
+ response = requests.get(f'https://api.github.com/repos/{repo}/releases/{release}').json() # github api
assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...]
tag = response['tag_name'] # i.e. 'v1.0'
except Exception: # fallback plan
| Sometimes one needs to download the assets for a previously released version. Instead of doing it manually (either from the browser or by changing the *URL*), it's possible to make use of *attempt_download*'s newly added parameter (and a small change in *data/scripts/download\_weights.sh*):
The release can be specified in two ways:
- `attempt_download(..., release='v5.0')`
- `attempt_download(..., release='tags/v5.0')`
Of course, the existing behavior remains the default.
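The normalization the PR performs before calling the GitHub API can be sketched as a standalone helper (hypothetical name; the real logic lives inline in `attempt_download`):

```python
def normalize_release(release, repo='ultralytics/yolov5'):
    # Mirror the PR's logic: 'latest' passes through unchanged, while a
    # plain tag such as 'v5.0' is prefixed with 'tags/' so the GitHub
    # releases API resolves it (.../releases/tags/v5.0).
    if release != 'latest' and not release.startswith('tags/'):
        release = f'tags/{release}'
    return f'https://api.github.com/repos/{repo}/releases/{release}'

print(normalize_release('v5.0'))
# → https://api.github.com/repos/ultralytics/yolov5/releases/tags/v5.0
```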
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Enhanced `attempt_download` function to support specific GitHub release assets.
### 📊 Key Changes
- `attempt_download` now accepts an additional `release` parameter to specify the GitHub release version.
- Default behavior targets the `latest` release if no other is specified.
- Constructed the URL to fetch from GitHub API now includes the specified release.
- Added a check to prepend "tags/" to the release string if it is not the `latest` and does not already start with "tags/".
### 🎯 Purpose & Impact
- Users can now download model weights or other assets from a specific release, providing greater control over the version of files used.
- Enhances reproducibility and flexibility for developers and researchers who need specific versions of code or data.
- Reduces potential issues from always pulling the latest version which might not be compatible with certain codebases or project requirements. | https://api.github.com/repos/ultralytics/yolov5/pulls/7767 | 2022-05-11T10:30:24Z | 2022-05-11T10:52:11Z | 2022-05-11T10:52:11Z | 2024-01-19T10:45:49Z | 398 | ultralytics/yolov5 | 25,401 |
Hitbtc v2 : update-ratelimits | diff --git a/js/hitbtc.js b/js/hitbtc.js
index f49d6016042a..759aa72a2f45 100644
--- a/js/hitbtc.js
+++ b/js/hitbtc.js
@@ -15,7 +15,10 @@ module.exports = class hitbtc extends Exchange {
'id': 'hitbtc',
'name': 'HitBTC',
'countries': [ 'HK' ],
- 'rateLimit': 1500,
+ // 300 requests per second => 1000ms / 300 = 3.333ms between requests on average (Trading)
+ // 100 requests per second => ( 1000ms / rateLimit ) / 100 => cost = 3.0003 (Market Data)
+ // 10 requests per second => ( 1000ms / rateLimit ) / 10 => cost = 30.003 (Other Requests)
+ 'rateLimit': 3.333,
'version': '2',
'pro': true,
'has': {
@@ -84,84 +87,84 @@ module.exports = class hitbtc extends Exchange {
},
'api': {
'public': {
- 'get': [
- 'currency', // Available Currencies
- 'currency/{currency}', // Get currency info
- 'symbol', // Available Currency Symbols
- 'symbol/{symbol}', // Get symbol info
- 'ticker', // Ticker list for all symbols
- 'ticker/{symbol}', // Ticker for symbol
- 'trades',
- 'trades/{symbol}', // Trades
- 'orderbook',
- 'orderbook/{symbol}', // Orderbook
- 'candles',
- 'candles/{symbol}', // Candles
- ],
+ 'get': {
+ 'currency': 3, // Available Currencies
+ 'currency/{currency}': 3, // Get currency info
+ 'symbol': 3, // Available Currency Symbols
+ 'symbol/{symbol}': 3, // Get symbol info
+ 'ticker': 3, // Ticker list for all symbols
+ 'ticker/{symbol}': 3, // Ticker for symbol
+ 'trades': 3,
+ 'trades/{symbol}': 3, // Trades
+ 'orderbook': 3,
+ 'orderbook/{symbol}': 3, // Orderbook
+ 'candles': 3,
+ 'candles/{symbol}': 3, // Candles
+ },
},
'private': {
- 'get': [
- 'trading/balance', // Get trading balance
- 'order', // List your current open orders
- 'order/{clientOrderId}', // Get a single order by clientOrderId
- 'trading/fee/all', // Get trading fee rate
- 'trading/fee/{symbol}', // Get trading fee rate
- 'margin/account',
- 'margin/account/{symbol}',
- 'margin/position',
- 'margin/position/{symbol}',
- 'margin/order',
- 'margin/order/{clientOrderId}',
- 'history/order', // Get historical orders
- 'history/trades', // Get historical trades
- 'history/order/{orderId}/trades', // Get historical trades by specified order
- 'account/balance', // Get main acccount balance
- 'account/crypto/address/{currency}', // Get current address
- 'account/crypto/addresses/{currency}', // Get last 10 deposit addresses for currency
- 'account/crypto/used-addresses/{currency}', // Get last 10 unique addresses used for withdraw by currency
- 'account/crypto/estimate-withdraw',
- 'account/crypto/is-mine/{address}',
- 'account/transactions', // Get account transactions
- 'account/transactions/{id}', // Get account transaction by id
- 'sub-acc',
- 'sub-acc/acl',
- 'sub-acc/balance/{subAccountUserID}',
- 'sub-acc/deposit-address/{subAccountUserId}/{currency}',
- ],
- 'post': [
- 'order', // Create new order
- 'margin/order',
- 'account/crypto/address/{currency}', // Create new crypto deposit address
- 'account/crypto/withdraw', // Withdraw crypto
- 'account/crypto/transfer-convert',
- 'account/transfer', // Transfer amount to trading account or to main account
- 'account/transfer/internal',
- 'sub-acc/freeze',
- 'sub-acc/activate',
- 'sub-acc/transfer',
- ],
- 'put': [
- 'order/{clientOrderId}', // Create new order
- 'margin/account/{symbol}',
- 'margin/order/{clientOrderId}',
- 'account/crypto/withdraw/{id}', // Commit crypto withdrawal
- 'sub-acc/acl/{subAccountUserId}',
- ],
- 'delete': [
- 'order', // Cancel all open orders
- 'order/{clientOrderId}', // Cancel order
- 'margin/account',
- 'margin/account/{symbol}',
- 'margin/position',
- 'margin/position/{symbol}',
- 'margin/order',
- 'margin/order/{clientOrderId}',
- 'account/crypto/withdraw/{id}', // Rollback crypto withdrawal
- ],
+ 'get': {
+ 'trading/balance': 30, // Get trading balance
+ 'order': 30, // List your current open orders
+ 'order/{clientOrderId}': 30, // Get a single order by clientOrderId
+ 'trading/fee/all': 30, // Get trading fee rate
+ 'trading/fee/{symbol}': 30, // Get trading fee rate
+ 'margin/account': 30,
+ 'margin/account/{symbol}': 30,
+ 'margin/position': 30,
+ 'margin/position/{symbol}': 30,
+ 'margin/order': 30,
+ 'margin/order/{clientOrderId}': 30,
+ 'history/order': 30, // Get historical orders
+ 'history/trades': 30, // Get historical trades
+ 'history/order/{orderId}/trades': 30, // Get historical trades by specified order
+ 'account/balance': 30, // Get main acccount balance
+ 'account/crypto/address/{currency}': 30, // Get current address
+ 'account/crypto/addresses/{currency}': 30, // Get last 10 deposit addresses for currency
+ 'account/crypto/used-addresses/{currency}': 30, // Get last 10 unique addresses used for withdraw by currency
+ 'account/crypto/estimate-withdraw': 30,
+ 'account/crypto/is-mine/{address}': 30,
+ 'account/transactions': 30, // Get account transactions
+ 'account/transactions/{id}': 30, // Get account transaction by id
+ 'sub-acc': 30,
+ 'sub-acc/acl': 30,
+ 'sub-acc/balance/{subAccountUserID}': 30,
+ 'sub-acc/deposit-address/{subAccountUserId}/{currency}': 30,
+ },
+ 'post': {
+ 'order': 1, // Create new order
+ 'margin/order': 1,
+ 'account/crypto/address/{currency}': 1, // Create new crypto deposit address
+ 'account/crypto/withdraw': 1, // Withdraw crypto
+ 'account/crypto/transfer-convert': 1,
+ 'account/transfer': 1, // Transfer amount to trading account or to main account
+ 'account/transfer/internal': 1,
+ 'sub-acc/freeze': 1,
+ 'sub-acc/activate': 1,
+ 'sub-acc/transfer': 1,
+ },
+ 'put': {
+ 'order/{clientOrderId}': 1, // Create new order
+ 'margin/account/{symbol}': 1,
+ 'margin/order/{clientOrderId}': 1,
+ 'account/crypto/withdraw/{id}': 1, // Commit crypto withdrawal
+ 'sub-acc/acl/{subAccountUserId}': 1,
+ },
+ 'delete': {
+ 'order': 1, // Cancel all open orders
+ 'order/{clientOrderId}': 1, // Cancel order
+ 'margin/account': 1,
+ 'margin/account/{symbol}': 1,
+ 'margin/position': 1,
+ 'margin/position/{symbol}': 1,
+ 'margin/order': 1,
+ 'margin/order/{clientOrderId}': 1,
+ 'account/crypto/withdraw/{id}': 1, // Rollback crypto withdrawal
+ },
// outdated?
- 'patch': [
- 'order/{clientOrderId}', // Cancel Replace order
- ],
+ 'patch': {
+ 'order/{clientOrderId}': 1, // Cancel Replace order
+ },
},
},
'precisionMode': TICK_SIZE,
| https://api.github.com/repos/ccxt/ccxt/pulls/11648 | 2022-01-25T12:03:02Z | 2022-01-25T13:16:16Z | 2022-01-25T13:16:16Z | 2022-04-22T13:54:17Z | 2,035 | ccxt/ccxt | 13,210 | |
Translating all German keys not yet translated to German | diff --git a/website/public/locales/de/dashboard.json b/website/public/locales/de/dashboard.json
index b418eaaa29..66e865320d 100644
--- a/website/public/locales/de/dashboard.json
+++ b/website/public/locales/de/dashboard.json
@@ -3,6 +3,6 @@
"dashboard": "Dashboard",
"evaluate": "Auswerten",
"go": "Los",
- "grab_a_task": "Schnapp dir eine Aufgabe!",
+ "grab_a_task": "Schnappen Sie sich eine Aufgabe!",
"label": "Label"
}
diff --git a/website/public/locales/de/labelling.json b/website/public/locales/de/labelling.json
index 6569976ce3..19854d06a1 100644
--- a/website/public/locales/de/labelling.json
+++ b/website/public/locales/de/labelling.json
@@ -19,6 +19,6 @@
"political_content": "Politisch",
"political_content.explanation": "Enthält politische Meinungen.",
"sexual_content": "Sexueller Inhalt",
- "sexual_content.explanation": "Contains sexual content.",
+ "sexual_content.explanation": "Enthält sexuelle Inhalte.",
"spam.question": "Ist die Nachricht Spam?"
}
diff --git a/website/public/locales/de/message.json b/website/public/locales/de/message.json
index 6a74a1c30d..4eb709d3e8 100644
--- a/website/public/locales/de/message.json
+++ b/website/public/locales/de/message.json
@@ -1,13 +1,13 @@
{
- "copy_message_id": "Copy message ID",
+ "copy_message_id": "Message ID kopieren",
"label_action": "Label",
"label_title": "Label",
"message": "Nachricht",
- "message_deleted": "Message deleted",
+ "message_deleted": "Nachricht gelöscht",
"open_new_tab_action": "In neuem Tab öffnen",
"parent": "Vorgänger",
"reactions": "Reaktionen",
- "recent_messages": "Recent Messages",
+ "recent_messages": "Kürzliche Nachrichten",
"report_action": "Melden",
"report_placeholder": "Warum sollte diese Nachricht überprüft werden?",
"report_title": "Meldung",
@@ -15,6 +15,6 @@
"stop_tree": "Stop tree",
"submit_labels": "Absenden",
"tree_stopped": "Tree stopped {{id}}",
- "view_user": "View user",
- "your_recent_messages": "Your Recent Messages"
+ "view_user": "User anzeigen",
+ "your_recent_messages": "Ihre kürzliche Nachrichten"
}
diff --git a/website/public/locales/de/tasks.json b/website/public/locales/de/tasks.json
index a3e25217f6..c56466664d 100644
--- a/website/public/locales/de/tasks.json
+++ b/website/public/locales/de/tasks.json
@@ -1,5 +1,5 @@
{
- "available_task_count": "{{count}} tasks available",
+ "available_task_count": "{{count}} Aufgaben verfügbar",
"classify_assistant_reply": {
"label": "Antwort des Assistenten klassifizieren",
"desc": "Labeln Sie die Antwort.",
diff --git a/website/public/locales/de/tos.json b/website/public/locales/de/tos.json
index 4d3d62b42f..d424f5b5a1 100644
--- a/website/public/locales/de/tos.json
+++ b/website/public/locales/de/tos.json
@@ -1,6 +1,6 @@
{
- "accept": "Accept",
- "content": "To continue using Open Assistant, you have to accept our Terms of Service first.",
- "decline": "Decline",
- "title": "Terms of Service for Open Assistant"
+ "accept": "Akzeptieren",
+ "content": "Um Open Assistant weiterhin nutzen zu können, müssen Sie zunächst unsere Nutzungsbedingungen akzeptieren.",
+ "decline": "Ablehnen",
+ "title": "Nutzungsbedingungen für Open Assistant"
}
| I translated keys that had not yet been translated and fixed one case of informal speech, since formal speech is used everywhere else.
closes #1230 | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1360 | 2023-02-08T14:02:41Z | 2023-02-09T02:49:09Z | 2023-02-09T02:49:09Z | 2023-02-09T02:50:04Z | 961 | LAION-AI/Open-Assistant | 37,462 |
[ie/niconico] correctly check if the user has access to the video | diff --git a/yt_dlp/extractor/niconico.py b/yt_dlp/extractor/niconico.py
index 05a1a3ddb8c..5383d71ec48 100644
--- a/yt_dlp/extractor/niconico.py
+++ b/yt_dlp/extractor/niconico.py
@@ -36,6 +36,8 @@
class NiconicoIE(InfoExtractor):
IE_NAME = 'niconico'
IE_DESC = 'ニコニコ動画'
+ _GEO_COUNTRIES = ['JP']
+ _GEO_BYPASS = False
_TESTS = [{
'url': 'http://www.nicovideo.jp/watch/sm22312215',
@@ -478,15 +480,27 @@ def _real_extract(self, url):
raise
raise ExtractorError(clean_html(error_msg), expected=True)
- club_joined = traverse_obj(api_data, ('channel', 'viewer', 'follow', 'isFollowed', {bool}))
- if club_joined is None:
- fail_msg = self._html_search_regex(
+ availability = self._availability(**(traverse_obj(api_data, ('payment', 'video', {
+ 'needs_premium': ('isPremium', {bool}),
+ 'needs_subscription': ('isAdmission', {bool}),
+ })) or {'needs_auth': True}))
+ formats = [*self._yield_dmc_formats(api_data, video_id),
+ *self._yield_dms_formats(api_data, video_id)]
+ if not formats:
+ fail_msg = clean_html(self._html_search_regex(
r'<p[^>]+\bclass="fail-message"[^>]*>(?P<msg>.+?)</p>',
- webpage, 'fail message', default=None, group='msg')
+ webpage, 'fail message', default=None, group='msg'))
if fail_msg:
- self.raise_login_required(clean_html(fail_msg), metadata_available=True)
- elif not club_joined:
- self.raise_login_required('This video is for members only', metadata_available=True)
+ self.to_screen(f'Niconico said: {fail_msg}')
+ if fail_msg and 'された地域と同じ地域からのみ視聴できます。' in fail_msg:
+ availability = None
+ self.raise_geo_restricted(countries=self._GEO_COUNTRIES, metadata_available=True)
+ elif availability == 'premium_only':
+ self.raise_login_required('This video requires premium', metadata_available=True)
+ elif availability == 'subscriber_only':
+ self.raise_login_required('This video is for members only', metadata_available=True)
+ elif availability == 'needs_auth':
+ self.raise_login_required(metadata_available=False)
# Start extracting information
tags = None
@@ -512,8 +526,8 @@ def get_video_info(*items, get_first=True, **kwargs):
'id': video_id,
'_api_data': api_data,
'title': get_video_info(('originalTitle', 'title')) or self._og_search_title(webpage, default=None),
- 'formats': [*self._yield_dmc_formats(api_data, video_id),
- *self._yield_dms_formats(api_data, video_id)],
+ 'formats': formats,
+ 'availability': availability,
'thumbnails': [{
'id': key,
'url': url,
| **IMPORTANT**: PRs without the template will be CLOSED
### Description of your *pull request* and other information
<!--
Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible
-->
This PR is a quick fix for a faulty change made in #9282.
> Can't we just check if api_data["media"] exists or not?
Of course, I agree.
Videos for Premium users do not work with that check. If the user gets no formats, they do not have access to the video.
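In sketch form, the resulting decision looks like this (the payment flag names come from the diff above; the function itself is illustrative, not yt-dlp's actual API):

```python
def classify_availability(payment, has_formats):
    """Hypothetical sketch of the access check: only when format extraction
    came back empty do the payment flags turn into an availability label."""
    if has_formats:
        return None  # the user can watch the video
    if not payment:
        return "needs_auth"
    if payment.get("isPremium"):
        return "premium_only"
    if payment.get("isAdmission"):
        return "subscriber_only"
    return "needs_auth"
```

A Premium-only video therefore yields `premium_only` only when no formats were extracted, matching the "no formats means no access" rule.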
### Credits
Thanks to @betsu0 for the detailed feedback. Thanks to @fireattack for the professional analysis.
<br>
<details open><summary>Template</summary> <!-- OPEN is intentional -->
<!--
# PLEASE FOLLOW THE GUIDE BELOW
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x])
- Use *Preview* tab to see how your *pull request* will actually look like
-->
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [x] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Fix or improvement to an extractor (Make sure to add/update tests)
- [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy))
- [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
</details>
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/9338 | 2024-03-02T16:32:34Z | 2024-03-03T23:14:54Z | 2024-03-03T23:14:54Z | 2024-03-04T01:12:56Z | 743 | yt-dlp/yt-dlp | 7,492 |
Fix DeepInfra: Model is not supported | diff --git a/g4f/Provider/DeepInfra.py b/g4f/Provider/DeepInfra.py
index 09b9464e23..8e36128d5b 100644
--- a/g4f/Provider/DeepInfra.py
+++ b/g4f/Provider/DeepInfra.py
@@ -17,7 +17,8 @@ class DeepInfra(AsyncGeneratorProvider, ProviderModelMixin):
def get_models(cls):
if not cls.models:
url = 'https://api.deepinfra.com/models/featured'
- cls.models = requests.get(url).json()
+ models = requests.get(url).json()
+ cls.models = [model['model_name'] for model in models]
return cls.models
@classmethod
| Last commit broke DeepInfra | https://api.github.com/repos/xtekky/gpt4free/pulls/1529 | 2024-01-30T03:16:38Z | 2024-01-30T07:14:05Z | 2024-01-30T07:14:05Z | 2024-02-04T18:46:42Z | 168 | xtekky/gpt4free | 37,896 |
Clarify batch file ERROR: message | diff --git a/youtube_dl/__init__.py b/youtube_dl/__init__.py
index 165c975dd75..9a659fc654d 100644
--- a/youtube_dl/__init__.py
+++ b/youtube_dl/__init__.py
@@ -94,7 +94,7 @@ def _real_main(argv=None):
if opts.verbose:
write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n')
except IOError:
- sys.exit('ERROR: batch file could not be read')
+ sys.exit('ERROR: batch file %s could not be read' % opts.batchfile)
all_urls = batch_urls + [url.strip() for url in args] # batch_urls are already striped in read_batch_urls
_enc = preferredencoding()
all_urls = [url.decode(_enc, 'ignore') if isinstance(url, bytes) else url for url in all_urls]
| Adds the name of the file that couldn't be read, plus a brief description of the option.
A bare "ERROR: batch file could not be read" can be confusing for inexperienced users (who are likely using Windows).
e.g. https://www.reddit.com/r/Piracy/comments/cfl0ap/youtubedl_error_batch_file_could_not_be_read_how/
(The user there specified the option `-audio-file` instead of `--audio-file`.)
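In sketch form, the improved message simply interpolates the offending path (an illustrative helper; the real code lives inline in `youtube_dl/__init__.py`):

```python
import sys

def read_batch_file_or_exit(path):
    # Mirrors the change above: include the path in the error so users can
    # spot a mis-typed option like -audio-file being treated as a batch file.
    try:
        with open(path) as f:
            return [line.strip() for line in f]
    except IOError:
        sys.exit('ERROR: batch file %s could not be read' % path)
```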
## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
Explanation of your *pull request* in arbitrary form goes here. Please make sure the description explains the purpose and effect of your *pull request* and is worded well enough to be understood. Provide as much context and examples as possible.
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/21915 | 2019-07-28T05:29:40Z | 2019-07-31T20:54:39Z | 2019-07-31T20:54:39Z | 2019-07-31T20:54:40Z | 202 | ytdl-org/youtube-dl | 49,784 |
Added a new bypass variant + fixed a payload | diff --git a/XSS injection/README.md b/XSS injection/README.md
index 6169b46d4d..db5f401365 100644
--- a/XSS injection/README.md
+++ b/XSS injection/README.md
@@ -465,7 +465,7 @@ You can bypass a single quote with ' in an on mousedown event handler
Bypass dot filter
```javascript
-<script>window['alert'](document['domain'])<script>
+<script>window['alert'](document['domain'])</script>
```
Bypass parenthesis for string - Firefox/Opera
@@ -654,6 +654,12 @@ Bypass using [Katakana](https://github.com/aemkei/katakana.js)
javascript:([,ウ,,,,ア]=[]+{},[ネ,ホ,ヌ,セ,,ミ,ハ,ヘ,,,ナ]=[!!ウ]+!ウ+ウ.ウ)[ツ=ア+ウ+ナ+ヘ+ネ+ホ+ヌ+ア+ネ+ウ+ホ][ツ](ミ+ハ+セ+ホ+ネ+'(-~ウ)')()
```
+Bypass using ECMAScript6 variation:
+
+```
+<script>alert`1`</script>
+```
+
Bypass using Octal encoding
```javascript
| Added new ECMAScript6 XSS bypass variant and fixed a payload missing `/` in `</script>`. | https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/46 | 2019-02-20T05:49:41Z | 2019-02-20T07:14:28Z | 2019-02-20T07:14:28Z | 2019-02-20T07:14:28Z | 291 | swisskyrepo/PayloadsAllTheThings | 8,800 |
Add a global lock file to Certbot (#4369) | diff --git a/certbot/cli.py b/certbot/cli.py
index c0af490d241..1ddbc45c996 100644
--- a/certbot/cli.py
+++ b/certbot/cli.py
@@ -1162,6 +1162,8 @@ def _paths_parser(helpful):
help="Logs directory.")
add("paths", "--server", default=flag_default("server"),
help=config_help("server"))
+ add("paths", "--lock-path", default=flag_default("lock_path"),
+ help=config_help('lock_path'))
def _plugins_parsing(helpful, plugins):
diff --git a/certbot/constants.py b/certbot/constants.py
index b286ca26aa1..fb08236c503 100644
--- a/certbot/constants.py
+++ b/certbot/constants.py
@@ -32,6 +32,7 @@
auth_cert_path="./cert.pem",
auth_chain_path="./chain.pem",
strict_permissions=False,
+ lock_path="/tmp/.certbot.lock",
)
STAGING_URI = "https://acme-staging.api.letsencrypt.org/directory"
diff --git a/certbot/interfaces.py b/certbot/interfaces.py
index 2df2abfe8aa..611d596c77f 100644
--- a/certbot/interfaces.py
+++ b/certbot/interfaces.py
@@ -222,6 +222,9 @@ class IConfig(zope.interface.Interface):
key_dir = zope.interface.Attribute("Keys storage.")
temp_checkpoint_dir = zope.interface.Attribute(
"Temporary checkpoint directory.")
+ lock_path = zope.interface.Attribute(
+ "Path to the lock file used to prevent multiple instances of "
+ "Certbot from modifying your server's configuration at once.")
no_verify_ssl = zope.interface.Attribute(
"Disable verification of the ACME server's certificate.")
diff --git a/certbot/main.py b/certbot/main.py
index 118c0f9586b..e8e17b39b23 100644
--- a/certbot/main.py
+++ b/certbot/main.py
@@ -8,6 +8,7 @@
import time
import traceback
+import fasteners
import zope.component
from acme import jose
@@ -866,6 +867,56 @@ def _post_logging_setup(config, plugins, cli_args):
logger.debug("Discovered plugins: %r", plugins)
+def acquire_file_lock(lock_path):
+ """Obtain a lock on the file at the specified path.
+
+ :param str lock_path: path to the file to be locked
+
+ :returns: lock file object representing the acquired lock
+ :rtype: fasteners.InterProcessLock
+
+ :raises .Error: if the lock is held by another process
+
+ """
+ lock = fasteners.InterProcessLock(lock_path)
+ logger.debug("Attempting to acquire lock file %s", lock_path)
+
+ try:
+ lock.acquire(blocking=False)
+ except IOError as err:
+ logger.debug(err)
+ logger.warning(
+ "Unable to access lock file %s. You should set --lock-file "
+ "to a writeable path to ensure multiple instances of "
+ "Certbot don't attempt modify your configuration "
+ "simultaneously.", lock_path)
+ else:
+ if not lock.acquired:
+ raise errors.Error(
+ "Another instance of Certbot is already running.")
+
+ return lock
+
+
+def _run_subcommand(config, plugins):
+ """Executes the Certbot subcommand specified in the configuration.
+
+ :param .IConfig config: parsed configuration object
+ :param .PluginsRegistry plugins: available plugins
+
+ :returns: return value from the specified subcommand
+ :rtype: str or int
+
+ """
+ lock = acquire_file_lock(config.lock_path)
+
+ try:
+ return config.func(config, plugins)
+ finally:
+ if lock.acquired:
+ lock.release()
+
+
def main(cli_args=sys.argv[1:]):
"""Command line argument parsing and main script execution."""
sys.excepthook = functools.partial(_handle_exception, config=None)
@@ -893,7 +944,7 @@ def main(cli_args=sys.argv[1:]):
zope.component.provideUtility(report)
atexit.register(report.atexit_print_messages)
- return config.func(config, plugins)
+ return _run_subcommand(config, plugins)
if __name__ == "__main__":
diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py
index 3520eb063da..6f94d1099ca 100644
--- a/certbot/tests/main_test.py
+++ b/certbot/tests/main_test.py
@@ -4,6 +4,7 @@
import itertools
import mock
+import multiprocessing
import os
import shutil
import tempfile
@@ -448,7 +449,8 @@ def setUp(self):
os.mkdir(self.logs_dir)
self.standard_args = ['--config-dir', self.config_dir,
'--work-dir', self.work_dir,
- '--logs-dir', self.logs_dir, '--text']
+ '--logs-dir', self.logs_dir, '--text',
+ '--lock-path', os.path.join(self.tmp_dir, 'certbot.lock')]
def tearDown(self):
# Reset globals in cli
@@ -1305,5 +1307,54 @@ def test_handle_exception(self, mock_sys):
traceback.format_exception_only(KeyboardInterrupt, interrupt)))
+class TestAcquireFileLock(unittest.TestCase):
+ """Test main.acquire_file_lock."""
+
+ def setUp(self):
+ self.tempdir = tempfile.mkdtemp()
+ self.lock_path = os.path.join(self.tempdir, 'certbot.lock')
+
+ def tearDown(self):
+ shutil.rmtree(self.tempdir)
+
+ @mock.patch('certbot.main.logger')
+ def test_bad_path(self, mock_logger):
+ lock = main.acquire_file_lock(os.getcwd())
+ self.assertTrue(mock_logger.warning.called)
+ self.assertFalse(lock.acquired)
+
+ def test_held_lock(self):
+ # start child and wait for it to grab the lock
+ cv = multiprocessing.Condition()
+ cv.acquire()
+ child_args = (cv, self.lock_path,)
+ child = multiprocessing.Process(target=_hold_lock, args=child_args)
+ child.start()
+ cv.wait()
+
+ # assert we can't grab lock and terminate the child
+ self.assertRaises(errors.Error, main.acquire_file_lock, self.lock_path)
+ cv.notify()
+ cv.release()
+ child.join()
+ self.assertEqual(child.exitcode, 0)
+
+
+def _hold_lock(cv, lock_path):
+ """Acquire a file lock at lock_path and wait to release it.
+
+ :param multiprocessing.Condition cv: condition for syncronization
+ :param str lock_path: path to the file lock
+
+ """
+ import fasteners
+ lock = fasteners.InterProcessLock(lock_path)
+ lock.acquire()
+ cv.acquire()
+ cv.notify()
+ cv.wait()
+ lock.release()
+
+
if __name__ == '__main__':
unittest.main() # pragma: no cover
diff --git a/letsencrypt-auto-source/letsencrypt-auto b/letsencrypt-auto-source/letsencrypt-auto
index 54cc429cf2d..97606678ba4 100755
--- a/letsencrypt-auto-source/letsencrypt-auto
+++ b/letsencrypt-auto-source/letsencrypt-auto
@@ -727,6 +727,9 @@ cryptography==1.5.3 \
enum34==1.1.2 \
--hash=sha256:2475d7fcddf5951e92ff546972758802de5260bf409319a9f1934e6bbc8b1dc7 \
--hash=sha256:35907defb0f992b75ab7788f65fedc1cf20ffa22688e0e6f6f12afc06b3ea501
+fasteners==0.14.1 \
+ --hash=sha256:564a115ff9698767df401efca29620cbb1a1c2146b7095ebd304b79cc5807a7c \
+ --hash=sha256:427c76773fe036ddfa41e57d89086ea03111bbac57c55fc55f3006d027107e18
funcsigs==0.4 \
--hash=sha256:ff5ad9e2f8d9e5d1e8bbfbcf47722ab527cf0d51caeeed9da6d0f40799383fde \
--hash=sha256:d83ce6df0b0ea6618700fe1db353526391a8a3ada1b7aba52fed7a61da772033
@@ -739,6 +742,9 @@ ipaddress==1.0.16 \
linecache2==1.0.0 \
--hash=sha256:e78be9c0a0dfcbac712fe04fbf92b96cddae80b1b842f24248214c8496f006ef \
--hash=sha256:4b26ff4e7110db76eeb6f5a7b64a82623839d595c2038eeda662f2a2db78e97c
+monotonic==1.3 \
+ --hash=sha256:a8c7690953546c6bc8a4f05d347718db50de1225b29f4b9f346c0c6f19bdc286 \
+ --hash=sha256:2b469e2d7dd403f7f7f79227fe5ad551ee1e76f8bb300ae935209884b93c7c1b
ordereddict==1.1 \
--hash=sha256:1c35b4ac206cef2d24816c89f89cf289dd3d38cf7c449bb3fab7bf6d43f01b1f
parsedatetime==2.1 \
diff --git a/letsencrypt-auto-source/pieces/letsencrypt-auto-requirements.txt b/letsencrypt-auto-source/pieces/letsencrypt-auto-requirements.txt
index d70d24e2a88..fbf416d6651 100644
--- a/letsencrypt-auto-source/pieces/letsencrypt-auto-requirements.txt
+++ b/letsencrypt-auto-source/pieces/letsencrypt-auto-requirements.txt
@@ -65,6 +65,9 @@ cryptography==1.5.3 \
enum34==1.1.2 \
--hash=sha256:2475d7fcddf5951e92ff546972758802de5260bf409319a9f1934e6bbc8b1dc7 \
--hash=sha256:35907defb0f992b75ab7788f65fedc1cf20ffa22688e0e6f6f12afc06b3ea501
+fasteners==0.14.1 \
+ --hash=sha256:564a115ff9698767df401efca29620cbb1a1c2146b7095ebd304b79cc5807a7c \
+ --hash=sha256:427c76773fe036ddfa41e57d89086ea03111bbac57c55fc55f3006d027107e18
funcsigs==0.4 \
--hash=sha256:ff5ad9e2f8d9e5d1e8bbfbcf47722ab527cf0d51caeeed9da6d0f40799383fde \
--hash=sha256:d83ce6df0b0ea6618700fe1db353526391a8a3ada1b7aba52fed7a61da772033
@@ -77,6 +80,9 @@ ipaddress==1.0.16 \
linecache2==1.0.0 \
--hash=sha256:e78be9c0a0dfcbac712fe04fbf92b96cddae80b1b842f24248214c8496f006ef \
--hash=sha256:4b26ff4e7110db76eeb6f5a7b64a82623839d595c2038eeda662f2a2db78e97c
+monotonic==1.3 \
+ --hash=sha256:a8c7690953546c6bc8a4f05d347718db50de1225b29f4b9f346c0c6f19bdc286 \
+ --hash=sha256:2b469e2d7dd403f7f7f79227fe5ad551ee1e76f8bb300ae935209884b93c7c1b
ordereddict==1.1 \
--hash=sha256:1c35b4ac206cef2d24816c89f89cf289dd3d38cf7c449bb3fab7bf6d43f01b1f
parsedatetime==2.1 \
diff --git a/setup.py b/setup.py
index 0c47b973f60..6cc39e2118c 100644
--- a/setup.py
+++ b/setup.py
@@ -42,6 +42,7 @@ def read_file(filename, encoding='utf8'):
'ConfigArgParse>=0.9.3',
'configobj',
'cryptography>=0.7', # load_pem_x509_certificate
+ 'fasteners',
'parsedatetime>=1.3', # Calendar.parseDT
'PyOpenSSL',
'pyrfc3339',
| * add fasteners as a dependency
* add LOCK_FILE constant
* Add lock file to Certbot
* Move code to _run_subcommand
* move lock file path into CLI_CONSTANTS
* add --lock-path flag
* move locking code to separate function
* Add TestAcquireFileLock
* assert we log
* test lock contention
* add fasteners to certbot-auto
* Use a different lock file for each test in MainTest
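For illustration only, the non-blocking acquire-or-fail flow of `acquire_file_lock` can be sketched with the standard library in place of `fasteners` (assumptions: a POSIX system, with `fcntl.flock` standing in for `InterProcessLock`):

```python
import fcntl
import os

def try_acquire(lock_path):
    # Open (creating if needed) and attempt a non-blocking exclusive lock;
    # return the fd on success, or None when another holder has the lock.
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        os.close(fd)
        return None
    return fd
```

Closing the returned fd releases the lock, mirroring `lock.release()` in the PR.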
(cherry picked from commit 32122cfa21fe7ba43189658158949cfe16dc6717) | https://api.github.com/repos/certbot/certbot/pulls/4394 | 2017-03-22T16:36:39Z | 2017-03-22T21:16:59Z | 2017-03-22T21:16:59Z | 2017-03-22T21:17:02Z | 3,068 | certbot/certbot | 3,684 |
[Serve] [Doc] Mock ray.serve.generated package for doc building | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 756170923f8ff..05cc18898b7dc 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -61,6 +61,8 @@ def __getattr__(cls, name):
"ray.core.generated.common_pb2",
"ray.core.generated.gcs_pb2",
"ray.core.generated.ray.protocol.Task",
+ "ray.serve.generated",
+ "ray.serve.generated.serve_pb2",
"scipy.signal",
"scipy.stats",
"setproctitle",
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
Serve's API doc failed to build because it can't import from protobuf in the readthedocs environment. We used a common pattern to mock it out.
```
WARNING: autodoc: failed to import function 'serve.get_replica_context' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import function 'serve.start' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import function 'serve.deployment' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import function 'serve.list_deployments' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import function 'serve.get_deployment' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import function 'serve.shutdown' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import class 'serve.api.Deployment' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import class 'serve.handle.RayServeHandle' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
WARNING: autodoc: failed to import function 'serve.batch' from module 'ray'; the following exception was raised:
No module named 'ray.serve.generated'
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 0%] _help
writing output... [ 0%] actors
```
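The pattern in sketch form (this mirrors the long-standing Sphinx/Read the Docs recipe of registering `MagicMock` stand-ins in `sys.modules`; the `Mock` class below is illustrative, not a quote of Ray's `conf.py`):

```python
import sys
from unittest.mock import MagicMock

class Mock(MagicMock):
    # Any attribute access on a mocked module returns another MagicMock,
    # so autodoc can import and introspect without the real protobuf code.
    @classmethod
    def __getattr__(cls, name):
        return MagicMock()

MOCK_MODULES = ["ray.serve.generated", "ray.serve.generated.serve_pb2"]
sys.modules.update((mod, Mock()) for mod in MOCK_MODULES)
```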
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
Fixes #18684
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/18767 | 2021-09-20T20:55:40Z | 2021-09-20T21:33:33Z | 2021-09-20T21:33:33Z | 2021-09-20T21:33:33Z | 135 | ray-project/ray | 19,402 |
Add classifier for 3.12 | diff --git a/pyproject.toml b/pyproject.toml
index ea5c9f84684..390c11e0342 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -57,6 +57,7 @@ classifiers = [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
+ "Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Quality Assurance",
]
| https://api.github.com/repos/psf/black/pulls/3866 | 2023-09-09T04:04:13Z | 2023-09-09T05:16:25Z | 2023-09-09T05:16:25Z | 2023-09-10T04:02:21Z | 133 | psf/black | 23,848 | |
[NFC] polish code style | diff --git a/colossalai/nn/_ops/_utils.py b/colossalai/nn/_ops/_utils.py
index 56bb5f465184..24877bbb552f 100644
--- a/colossalai/nn/_ops/_utils.py
+++ b/colossalai/nn/_ops/_utils.py
@@ -1,12 +1,11 @@
-import torch
-from typing import Union, Optional, List
-from colossalai.tensor import ColoTensor
+from typing import List, Optional, Union
+
import torch
import torch.distributed as dist
-from colossalai.global_variables import tensor_parallel_env as env
+from colossalai.global_variables import tensor_parallel_env as env
from colossalai.nn.layer.utils import divide
-from colossalai.tensor import ProcessGroup, ColoTensorSpec
+from colossalai.tensor import ColoTensor, ColoTensorSpec, ProcessGroup
GeneralTensor = Union[ColoTensor, torch.Tensor]
Number = Union[int, float]
@@ -135,7 +134,7 @@ def backward(ctx, grad_output):
class _SplitForwardGatherBackward(torch.autograd.Function):
"""
Split the input and keep only the corresponding chuck to the rank.
-
+
Args:
input_: input matrix.
process_group: parallel mode.
| ## 📌 Checklist before creating the PR
- [ ] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
Polish code style.
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [ ] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3268 | 2023-03-28T01:43:39Z | 2023-03-28T02:30:30Z | 2023-03-28T02:30:30Z | 2023-03-28T02:36:27Z | 272 | hpcaitech/ColossalAI | 11,407 |
Refactor fibaro scene test | diff --git a/tests/components/fibaro/conftest.py b/tests/components/fibaro/conftest.py
index 2b6580c3191b..e15d6509a00c 100644
--- a/tests/components/fibaro/conftest.py
+++ b/tests/components/fibaro/conftest.py
@@ -2,16 +2,21 @@
from collections.abc import Generator
from unittest.mock import AsyncMock, Mock, patch
-from pyfibaro.fibaro_scene import SceneModel
import pytest
-from homeassistant.components.fibaro import DOMAIN
-from homeassistant.config_entries import ConfigEntry
-from homeassistant.const import Platform
+from homeassistant.components.fibaro import CONF_IMPORT_PLUGINS, DOMAIN
+from homeassistant.const import CONF_PASSWORD, CONF_URL, CONF_USERNAME
from homeassistant.core import HomeAssistant
from tests.common import MockConfigEntry
+TEST_SERIALNUMBER = "HC2-111111"
+TEST_NAME = "my_fibaro_home_center"
+TEST_URL = "http://192.168.1.1/api/"
+TEST_USERNAME = "user"
+TEST_PASSWORD = "password"
+TEST_VERSION = "4.360"
+
@pytest.fixture
def mock_setup_entry() -> Generator[AsyncMock, None, None]:
@@ -22,10 +27,10 @@ def mock_setup_entry() -> Generator[AsyncMock, None, None]:
yield mock_setup_entry
-@pytest.fixture(name="fibaro_scene")
-def mock_scene() -> SceneModel:
+@pytest.fixture
+def mock_scene() -> Mock:
"""Fixture for an individual scene."""
- scene = Mock(SceneModel)
+ scene = Mock()
scene.fibaro_id = 1
scene.name = "Test scene"
scene.room_id = 1
@@ -33,23 +38,57 @@ def mock_scene() -> SceneModel:
return scene
-async def setup_platform(
- hass: HomeAssistant,
- platform: Platform,
- room_name: str | None,
- scenes: list[SceneModel],
-) -> ConfigEntry:
- """Set up the fibaro platform and prerequisites."""
- hass.config.components.add(DOMAIN)
- config_entry = MockConfigEntry(domain=DOMAIN, title="Test")
- config_entry.add_to_hass(hass)
-
- controller_mock = Mock()
- controller_mock.hub_serial = "HC2-111111"
- controller_mock.get_room_name.return_value = room_name
- controller_mock.read_scenes.return_value = scenes
-
- hass.data[DOMAIN] = {config_entry.entry_id: controller_mock}
- await hass.config_entries.async_forward_entry_setup(config_entry, platform)
+@pytest.fixture
+def mock_room() -> Mock:
+ """Fixture for an individual room."""
+ room = Mock()
+ room.fibaro_id = 1
+ room.name = "Room 1"
+ return room
+
+
+@pytest.fixture
+def mock_config_entry(hass: HomeAssistant) -> MockConfigEntry:
+ """Return the default mocked config entry."""
+ mock_config_entry = MockConfigEntry(
+ domain=DOMAIN,
+ data={
+ CONF_URL: TEST_URL,
+ CONF_USERNAME: TEST_USERNAME,
+ CONF_PASSWORD: TEST_PASSWORD,
+ CONF_IMPORT_PLUGINS: True,
+ },
+ )
+ mock_config_entry.add_to_hass(hass)
+ return mock_config_entry
+
+
+@pytest.fixture
+def mock_fibaro_client() -> Generator[Mock, None, None]:
+ """Return a mocked FibaroClient."""
+ info_mock = Mock()
+ info_mock.serial_number = TEST_SERIALNUMBER
+ info_mock.hc_name = TEST_NAME
+ info_mock.current_version = TEST_VERSION
+
+ with patch(
+ "homeassistant.components.fibaro.FibaroClient", autospec=True
+ ) as fibaro_client_mock:
+ client = fibaro_client_mock.return_value
+ client.set_authentication.return_value = None
+ client.connect.return_value = True
+ client.read_info.return_value = info_mock
+ client.read_rooms.return_value = []
+ client.read_scenes.return_value = []
+ client.read_devices.return_value = []
+ client.register_update_handler.return_value = None
+ client.unregister_update_handler.return_value = None
+ yield client
+
+
+async def init_integration(
+ hass: HomeAssistant, mock_config_entry: MockConfigEntry
+) -> None:
+ """Set up the fibaro integration for testing."""
+ assert await hass.config_entries.async_setup(mock_config_entry.entry_id)
await hass.async_block_till_done()
- return config_entry
diff --git a/tests/components/fibaro/test_scene.py b/tests/components/fibaro/test_scene.py
index 09e0543976fd..0ce618e903c2 100644
--- a/tests/components/fibaro/test_scene.py
+++ b/tests/components/fibaro/test_scene.py
@@ -1,21 +1,30 @@
"""Test the Fibaro scene platform."""
-
-from pyfibaro.fibaro_scene import SceneModel
+from unittest.mock import Mock
from homeassistant.components.scene import DOMAIN as SCENE_DOMAIN
-from homeassistant.const import ATTR_ENTITY_ID, SERVICE_TURN_ON, Platform
+from homeassistant.const import ATTR_ENTITY_ID, SERVICE_TURN_ON
from homeassistant.core import HomeAssistant
from homeassistant.helpers import entity_registry as er
-from .conftest import setup_platform
+from .conftest import init_integration
+
+from tests.common import MockConfigEntry
-async def test_entity_attributes(hass: HomeAssistant, fibaro_scene: SceneModel) -> None:
+async def test_entity_attributes(
+ hass: HomeAssistant,
+ mock_fibaro_client: Mock,
+ mock_config_entry: MockConfigEntry,
+ mock_scene: Mock,
+ mock_room: Mock,
+) -> None:
"""Test that the attributes of the entity are correct."""
# Arrange
+ mock_fibaro_client.read_rooms.return_value = [mock_room]
+ mock_fibaro_client.read_scenes.return_value = [mock_scene]
entity_registry = er.async_get(hass)
# Act
- await setup_platform(hass, Platform.SCENE, "Room 1", [fibaro_scene])
+ await init_integration(hass, mock_config_entry)
# Assert
entry = entity_registry.async_get("scene.room_1_test_scene")
@@ -25,13 +34,20 @@ async def test_entity_attributes(hass: HomeAssistant, fibaro_scene: SceneModel)
async def test_entity_attributes_without_room(
- hass: HomeAssistant, fibaro_scene: SceneModel
+ hass: HomeAssistant,
+ mock_fibaro_client: Mock,
+ mock_config_entry: MockConfigEntry,
+ mock_scene: Mock,
+ mock_room: Mock,
) -> None:
"""Test that the attributes of the entity are correct."""
# Arrange
+ mock_room.name = None
+ mock_fibaro_client.read_rooms.return_value = [mock_room]
+ mock_fibaro_client.read_scenes.return_value = [mock_scene]
entity_registry = er.async_get(hass)
# Act
- await setup_platform(hass, Platform.SCENE, None, [fibaro_scene])
+ await init_integration(hass, mock_config_entry)
# Assert
entry = entity_registry.async_get("scene.unknown_test_scene")
@@ -39,10 +55,19 @@ async def test_entity_attributes_without_room(
assert entry.unique_id == "hc2_111111.scene.1"
-async def test_activate_scene(hass: HomeAssistant, fibaro_scene: SceneModel) -> None:
+async def test_activate_scene(
+ hass: HomeAssistant,
+ mock_fibaro_client: Mock,
+ mock_config_entry: MockConfigEntry,
+ mock_scene: Mock,
+ mock_room: Mock,
+) -> None:
"""Test activate scene is called."""
# Arrange
- await setup_platform(hass, Platform.SCENE, "Room 1", [fibaro_scene])
+ mock_fibaro_client.read_rooms.return_value = [mock_room]
+ mock_fibaro_client.read_scenes.return_value = [mock_scene]
+ # Act
+ await init_integration(hass, mock_config_entry)
# Act
await hass.services.async_call(
SCENE_DOMAIN,
@@ -51,4 +76,4 @@ async def test_activate_scene(hass: HomeAssistant, fibaro_scene: SceneModel) ->
blocking=True,
)
# Assert
- assert fibaro_scene.start.call_count == 1
+ assert mock_scene.start.call_count == 1
| <!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Refactor the scene test to properly init the integration, as suggested in a late review.
https://github.com/home-assistant/core/pull/100547#discussion_r1329770102
With this change, other fibaro tests can also reuse the mocks, and it should be easily possible to add more tests.
If you are already interested in how this would affect the fibaro config flow test, see this commit: https://github.com/home-assistant/core/commit/db154a764cc8ba0d1598654993fab59ea6a4c4ae
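The fixture pattern in this diff — patching the client class with `autospec=True` and pre-configuring return values on `return_value` — can be sketched with plain `unittest.mock`; the `FibaroClient` below is an illustrative stand-in, not the real pyfibaro API:

```python
from unittest.mock import patch


class FibaroClient:
    """Illustrative stand-in for the real client that the tests patch out."""

    def connect(self) -> bool:
        raise RuntimeError("would hit the network")

    def read_rooms(self) -> list:
        raise RuntimeError("would hit the network")


with patch(f"{__name__}.FibaroClient", autospec=True) as client_cls:
    # The code under test calls FibaroClient() and receives this mock.
    client = client_cls.return_value
    client.connect.return_value = True
    client.read_rooms.return_value = []

    instance = FibaroClient()  # returns the pre-configured mock
    connected = instance.connect()
    rooms = instance.read_rooms()

print(connected, rooms)  # no network access ever happened
```

This is the same shape as the `mock_fibaro_client` fixture above: configure the mock once, and every test that consumes the fixture sees a connected client with empty rooms, scenes and devices.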
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [X] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [X] The code change is tested and works locally.
- [X] Local tests pass. **Your PR cannot be merged unless tests pass**
- [X] There is no commented out code in this PR.
- [X] I have followed the [development checklist][dev-checklist]
- [X] I have followed the [perfect PR recommendations][perfect-pr]
- [X] The code has been formatted using Black (`black --fast homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/development_checklist/
[manifest-docs]: https://developers.home-assistant.io/docs/creating_integration_manifest/
[quality-scale]: https://developers.home-assistant.io/docs/integration_quality_scale_index/
[docs-repository]: https://github.com/home-assistant/home-assistant.io
[perfect-pr]: https://developers.home-assistant.io/docs/review-process/#creating-the-perfect-pr
| https://api.github.com/repos/home-assistant/core/pulls/102452 | 2023-10-21T11:00:28Z | 2023-10-22T21:36:41Z | 2023-10-22T21:36:41Z | 2023-10-24T18:21:33Z | 1,885 | home-assistant/core | 38,762 |
[3.10] bpo-45583: Correct datamodel documentation of int() (GH-29182) | diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst
index 310167e86d0cb7..195e8c2d16f103 100644
--- a/Doc/reference/datamodel.rst
+++ b/Doc/reference/datamodel.rst
@@ -2541,8 +2541,8 @@ left undefined.
return the value of the object truncated to an :class:`~numbers.Integral`
(typically an :class:`int`).
- If :meth:`__int__` is not defined then the built-in function :func:`int`
- falls back to :meth:`__trunc__`.
+ The built-in function :func:`int` falls back to :meth:`__trunc__` if neither
+ :meth:`__int__` nor :meth:`__index__` is defined.
.. _context-managers:
| It should be noted that this part of the documentation is redundant with
function.rst's documentation of int. This one was correctly updated with Python 3.8.
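The lookup order being documented can be exercised directly; a minimal sketch (`int()` consults `__index__` when `__int__` is absent, while `math.trunc()` always uses `__trunc__` — note that `int()`'s own fallback to `__trunc__` was deprecated later, in Python 3.11):

```python
import math


class Indexable:
    def __index__(self) -> int:  # used by int() when __int__ is missing
        return 42


class Truncatable:
    def __trunc__(self) -> int:  # always used by math.trunc()
        return 7


print(int(Indexable()))           # 42
print(math.trunc(Truncatable()))  # 7
```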
(cherry picked from commit d9c1868c25ec6466e8d8ae21fe9315a8a03836ab)
Co-authored-by: Arthur Milchior <arthur@milchior.fr>
<!-- issue-number: [bpo-45583](https://bugs.python.org/issue45583) -->
https://bugs.python.org/issue45583
<!-- /issue-number -->
| https://api.github.com/repos/python/cpython/pulls/29285 | 2021-10-28T19:48:45Z | 2021-10-28T20:17:06Z | 2021-10-28T20:17:06Z | 2021-10-28T20:17:13Z | 196 | python/cpython | 4,182 |
Fix grammatical error | diff --git a/docs/tutorial/views.rst b/docs/tutorial/views.rst
index c9c6a7cacb..86689111b7 100644
--- a/docs/tutorial/views.rst
+++ b/docs/tutorial/views.rst
@@ -157,7 +157,7 @@ Here's what the ``register`` view function is doing:
stores messages that can be retrieved when rendering the template.
#. When the user initially navigates to ``auth/register``, or
- there was an validation error, an HTML page with the registration
+ there was a validation error, an HTML page with the registration
form should be shown. :func:`render_template` will render a template
containing the HTML, which you'll write in the next step of the
tutorial.
| An incorrect article was used | https://api.github.com/repos/pallets/flask/pulls/2872 | 2018-07-24T18:00:40Z | 2018-07-24T18:04:58Z | 2018-07-24T18:04:58Z | 2020-11-14T03:20:17Z | 169 | pallets/flask | 20,901 |
feat: add servers option for OpenAPI | diff --git a/fastapi/applications.py b/fastapi/applications.py
index 3306aab3d95eb..c21087911ebf4 100644
--- a/fastapi/applications.py
+++ b/fastapi/applications.py
@@ -38,6 +38,7 @@ def __init__(
version: str = "0.1.0",
openapi_url: Optional[str] = "/openapi.json",
openapi_tags: Optional[List[Dict[str, Any]]] = None,
+ servers: Optional[List[Dict[str, Union[str, Any]]]] = None,
default_response_class: Type[Response] = JSONResponse,
docs_url: Optional[str] = "/docs",
redoc_url: Optional[str] = "/redoc",
@@ -70,6 +71,7 @@ def __init__(
self.title = title
self.description = description
self.version = version
+ self.servers = servers
self.openapi_url = openapi_url
self.openapi_tags = openapi_tags
# TODO: remove when discarding the openapi_prefix parameter
@@ -106,6 +108,7 @@ def openapi(self, openapi_prefix: str = "") -> Dict:
routes=self.routes,
openapi_prefix=openapi_prefix,
tags=self.openapi_tags,
+ servers=self.servers,
)
return self.openapi_schema
diff --git a/fastapi/openapi/models.py b/fastapi/openapi/models.py
index a7c4460fab43a..13dc59f189527 100644
--- a/fastapi/openapi/models.py
+++ b/fastapi/openapi/models.py
@@ -63,7 +63,7 @@ class ServerVariable(BaseModel):
class Server(BaseModel):
- url: AnyUrl
+ url: Union[AnyUrl, str]
description: Optional[str] = None
variables: Optional[Dict[str, ServerVariable]] = None
diff --git a/fastapi/openapi/utils.py b/fastapi/openapi/utils.py
index b6221ca202826..5a0c89a894cb3 100644
--- a/fastapi/openapi/utils.py
+++ b/fastapi/openapi/utils.py
@@ -86,7 +86,7 @@ def get_openapi_security_definitions(flat_dependant: Dependant) -> Tuple[Dict, L
def get_openapi_operation_parameters(
*,
all_route_params: Sequence[ModelField],
- model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str]
+ model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str],
) -> List[Dict[str, Any]]:
parameters = []
for param in all_route_params:
@@ -112,7 +112,7 @@ def get_openapi_operation_parameters(
def get_openapi_operation_request_body(
*,
body_field: Optional[ModelField],
- model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str]
+ model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str],
) -> Optional[Dict]:
if not body_field:
return None
@@ -318,12 +318,15 @@ def get_openapi(
description: str = None,
routes: Sequence[BaseRoute],
openapi_prefix: str = "",
- tags: Optional[List[Dict[str, Any]]] = None
+ tags: Optional[List[Dict[str, Any]]] = None,
+ servers: Optional[List[Dict[str, Union[str, Any]]]] = None,
) -> Dict:
info = {"title": title, "version": version}
if description:
info["description"] = description
output: Dict[str, Any] = {"openapi": openapi_version, "info": info}
+ if servers:
+ output["servers"] = servers
components: Dict[str, Dict] = {}
paths: Dict[str, Dict] = {}
flat_models = get_flat_models_from_routes(routes)
diff --git a/tests/test_openapi_servers.py b/tests/test_openapi_servers.py
new file mode 100644
index 0000000000000..a210154f60e88
--- /dev/null
+++ b/tests/test_openapi_servers.py
@@ -0,0 +1,60 @@
+from fastapi import FastAPI
+from fastapi.testclient import TestClient
+
+app = FastAPI(
+ servers=[
+ {"url": "/", "description": "Default, relative server"},
+ {
+ "url": "http://staging.localhost.tiangolo.com:8000",
+ "description": "Staging but actually localhost still",
+ },
+ {"url": "https://prod.example.com"},
+ ]
+)
+
+
+@app.get("/foo")
+def foo():
+ return {"message": "Hello World"}
+
+
+client = TestClient(app)
+
+
+openapi_schema = {
+ "openapi": "3.0.2",
+ "info": {"title": "FastAPI", "version": "0.1.0"},
+ "servers": [
+ {"url": "/", "description": "Default, relative server"},
+ {
+ "url": "http://staging.localhost.tiangolo.com:8000",
+ "description": "Staging but actually localhost still",
+ },
+ {"url": "https://prod.example.com"},
+ ],
+ "paths": {
+ "/foo": {
+ "get": {
+ "summary": "Foo",
+ "operationId": "foo_foo_get",
+ "responses": {
+ "200": {
+ "description": "Successful Response",
+ "content": {"application/json": {"schema": {}}},
+ }
+ },
+ }
+ }
+ },
+}
+
+
+def test_openapi_servers():
+ response = client.get("/openapi.json")
+ assert response.status_code == 200, response.text
+ assert response.json() == openapi_schema
+
+
+def test_app():
+ response = client.get("/foo")
+ assert response.status_code == 200, response.text
| Closes/related to #872
The missing server option has been a drawback when it comes to the Swagger UI (e.g. providing prod and test servers) and the portability of the openapi.json to services that rely on knowing the servers.
```python
from fastapi import FastAPI
from fastapi.openapi.models import Server
server1 = Server(url="http://example.com", description="optional description")
server2 = Server(url="http://test.com")
app = FastAPI(servers=[server1, server2])
@app.get("/")
async def root():
return {"message": "Hello World"}
```
It would also work to just pass a dictionary, but then the typechecker won't be happy:
```python
from fastapi import FastAPI
app = FastAPI(servers=[{"url": "http://example.com", "description": "test"}])
``` | https://api.github.com/repos/tiangolo/fastapi/pulls/1547 | 2020-06-10T19:32:26Z | 2020-06-14T13:38:30Z | 2020-06-14T13:38:30Z | 2020-06-15T07:30:42Z | 1,341 | tiangolo/fastapi | 22,750 |
Added Decimal to Binary Converter | diff --git a/Decimal_To_Binary.py b/Decimal_To_Binary.py
new file mode 100644
index 0000000000..48dca10b38
--- /dev/null
+++ b/Decimal_To_Binary.py
@@ -0,0 +1,42 @@
+'''
+PYTHON 3
+Author: Sandeep Pillai (www.github.com/Corruption13)
+
+Program: Decimal to Binary converter.
+
+THis program accepts fractional values, the accuracy can be set below:
+'''
+decimal_accuracy = 7
+
+
+def dtbconverter(num): # Function inputs a float value and returns a list as output
+ # Reasoning for list instead of integer: to avoid integer overflow error.
+
+ whole= [] # The part before decimal point
+ fractional = ['.'] # The part after decimal point
+
+ decimal = round(num%1, decimal_accuracy) # Extract fractional number part of decimal
+ w_num = int(num) # Extract whole number part of decimal.
+
+ i=0 # Some fractional decimal numbers have infinite binary values, so we limit this loop below.
+
+ #Loop to find binary of decimal part
+ while(decimal!=1 and i<decimal_accuracy):
+ decimal = decimal*2
+ fractional.append(int(decimal//1))
+ decimal = round(decimal%1, decimal_accuracy)
+ if(decimal == 0): break # Removes trailing zeros.
+ i = i + 1
+
+ #Loop to find binary of whole number part.
+ while(w_num!=0):
+ whole.append(w_num%2)
+ w_num = w_num//2
+ whole.reverse()
+
+ return whole + fractional ### End of dtbconverter() - 16 lines
+
+
+#Test lines.
+number = float(input("Enter ANY base-10 Number: "))
+print("The Binary Equivalant: " , *dtbconverter(number))
| I've added a complete Decimal (Base 10) Number to Binary Number converter to the root folder. | https://api.github.com/repos/geekcomputers/Python/pulls/417 | 2018-10-19T15:31:16Z | 2018-11-04T21:49:36Z | 2018-11-04T21:49:36Z | 2018-11-04T21:49:42Z | 443 | geekcomputers/Python | 31,731 |
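For comparison with the converter added above, the integer part can lean on the stdlib's `bin()` so only the fractional digits need a hand-rolled loop (a sketch, not the code from this PR):

```python
def dec_to_bin(x: float, places: int = 7) -> str:
    """Convert a non-negative base-10 number to a binary string."""
    whole = bin(int(x))[2:]        # integer part straight from the stdlib
    frac = x - int(x)
    if frac == 0:
        return whole
    bits = []
    for _ in range(places):        # cap the loop: many fractions never terminate
        frac *= 2
        bits.append(str(int(frac)))
        frac -= int(frac)
        if frac == 0:              # stop early to avoid trailing zeros
            break
    return whole + "." + "".join(bits)


print(dec_to_bin(5.5))   # 101.1
print(dec_to_bin(0.25))  # 0.01
print(dec_to_bin(6))     # 110
```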
[cifar tutorial] improve readability | diff --git a/docs/_tutorials/cifar-10.md b/docs/_tutorials/cifar-10.md
index 91f1b57034db..c7b53e58357a 100644
--- a/docs/_tutorials/cifar-10.md
+++ b/docs/_tutorials/cifar-10.md
@@ -108,7 +108,7 @@ The first step to apply DeepSpeed is adding DeepSpeed arguments to CIFAR-10 mode
### Initialization
-We use `deepspeed.initialize` to create `model_engine`, `optimizer` and `trainloader`. Below is its definition.
+We create `model_engine`, `optimizer` and `trainloader` with the help of `deepspeed.initialize`, which is defined as following:
```python
def initialize(args,
@@ -122,7 +122,7 @@ def initialize(args,
collate_fn=None):
```
-For CIFAR-10 model, we initialize DeepSpeed its model (net) is created as below, to pass the raw `model`, `optimizer`, `args`, `parametersnd` and `trainset`.
+Here we initialize DeepSpeed with CIFAR-10 model (`net`), `args`, `parameters` and `trainset`:
```python
parameters = filter(lambda p: p.requires_grad, net.parameters())
@@ -132,11 +132,11 @@ For CIFAR-10 model, we initialize DeepSpeed its model (net) is created as below,
# 1) Distributed model
# 2) Distributed data loader
# 3) DeepSpeed optimizer
- model_engine, optimizer, trainloader, __ = deepspeed.initialize(args=args, model=net, model_parameters=parameters, training_data=trainset)
+ model_engine, optimizer, trainloader, _ = deepspeed.initialize(args=args, model=net, model_parameters=parameters, training_data=trainset)
```
-The original device and optimizer can be removed after initializing DeepSpeed.
+After initializing DeepSpeed, the original `device` and `optimizer` are removed:
```python
#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
| This PR:
* fixes a typo `parametersnd`
* fixes `__` to `_`
* removes incorrect param in a description (`optimizer` isn't passed to `initialize` in the code snippet)
* rewrites text to be more readable (some of it wasn't quite parseable)
Thanks. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/567 | 2020-12-02T04:15:53Z | 2020-12-02T19:10:48Z | 2020-12-02T19:10:48Z | 2020-12-02T19:11:35Z | 460 | microsoft/DeepSpeed | 10,788 |
BUG/WARN: Passing EA object to dtype instead of an instance | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index b79797fa86431..5d46f819b07f9 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -837,6 +837,7 @@ ExtensionArray
- Bug in :meth:`Series.rank` returning wrong order for small values with ``Float64`` dtype (:issue:`52471`)
- Bug in :meth:`Series.unique` for boolean ``ArrowDtype`` with ``NA`` values (:issue:`54667`)
- Bug in :meth:`~arrays.ArrowExtensionArray.__iter__` and :meth:`~arrays.ArrowExtensionArray.__getitem__` returning python datetime and timedelta objects for non-nano dtypes (:issue:`53326`)
+- Bug when passing an :class:`ExtensionArray` subclass to ``dtype`` keywords. This will now raise a ``UserWarning`` to encourage passing an instance instead (:issue:`31356`, :issue:`54592`)
- Bug where the :class:`DataFrame` repr would not work when a column had an :class:`ArrowDtype` with a ``pyarrow.ExtensionDtype`` (:issue:`54063`)
- Bug where the ``__from_arrow__`` method of masked ExtensionDtypes (e.g. :class:`Float64Dtype`, :class:`BooleanDtype`) would not accept PyArrow arrays of type ``pyarrow.null()`` (:issue:`52223`)
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index c2e498e75b7d3..3db36fc50e343 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1614,6 +1614,15 @@ def pandas_dtype(dtype) -> DtypeObj:
# registered extension types
result = registry.find(dtype)
if result is not None:
+ if isinstance(result, type):
+ # GH 31356, GH 54592
+ warnings.warn(
+ f"Instantiating {result.__name__} without any arguments."
+ f"Pass a {result.__name__} instance to silence this warning.",
+ UserWarning,
+ stacklevel=find_stack_level(),
+ )
+ result = result()
return result
# try a numpy dtype
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 471e456146178..165bf61302145 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -782,3 +782,9 @@ def test_pandas_dtype_numpy_warning():
match="Converting `np.integer` or `np.signedinteger` to a dtype is deprecated",
):
pandas_dtype(np.integer)
+
+
+def test_pandas_dtype_ea_not_instance():
+ # GH 31356 GH 54592
+ with tm.assert_produces_warning(UserWarning):
+ assert pandas_dtype(CategoricalDtype) == CategoricalDtype()
| - [x] closes #31356 (Replace xxxx with the GitHub issue number)
- [x] closes #54592 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Instantiating an empty instance is an imperfect solution, since the EA dtype could require positional arguments, so this raises a UserWarning to encourage users to pass instances instead
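The recommended call style after this change is to hand `dtype` keywords an instance (or the registered string alias); a quick sketch (the warning itself only fires on pandas >= 2.1):

```python
import pandas as pd
from pandas.api.types import pandas_dtype

# Pass a dtype *instance* or its string alias -- never the bare class.
dtype = pandas_dtype(pd.CategoricalDtype())
print(dtype.name, pandas_dtype("category") == dtype)
```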
| https://api.github.com/repos/pandas-dev/pandas/pulls/54721 | 2023-08-23T22:37:14Z | 2023-08-24T16:13:19Z | 2023-08-24T16:13:19Z | 2023-08-24T16:13:35Z | 703 | pandas-dev/pandas | 44,831 |
Add formatters-python for atom to editor_integration | diff --git a/docs/editor_integration.md b/docs/editor_integration.md
index 73107d6a4a1..0457fbd53d9 100644
--- a/docs/editor_integration.md
+++ b/docs/editor_integration.md
@@ -253,7 +253,8 @@ Sublime Text, Visual Studio Code and many more), you can use the
## Atom/Nuclide
-Use [python-black](https://atom.io/packages/python-black).
+Use [python-black](https://atom.io/packages/python-black) or
+[formatters-python](https://atom.io/packages/formatters-python).
## Gradle (the build tool)
| https://api.github.com/repos/psf/black/pulls/1834 | 2020-11-21T12:18:18Z | 2021-03-04T00:46:27Z | 2021-03-04T00:46:27Z | 2021-03-04T00:46:27Z | 137 | psf/black | 23,926 | |
Update README.md | diff --git a/README.md b/README.md
index bf20d9455d..648c2a33ea 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
FastChat is an open platform for training, serving, and evaluating large language model based chatbots.
- FastChat powers Chatbot Arena (https://chat.lmsys.org/), serving over 10 million chat requests for 70+ LLMs.
-- Chatbot Arena has collected over 500K human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
+- Chatbot Arena has collected over 500K human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://leaderboard.lmsys.org).
FastChat's core features include:
- The training and evaluation code for state-of-the-art models (e.g., Vicuna, MT-Bench).
| <!-- Thank you for your contribution! -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number (if applicable)
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've run `format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
| https://api.github.com/repos/lm-sys/FastChat/pulls/3239 | 2024-04-12T05:19:10Z | 2024-04-12T05:19:18Z | 2024-04-12T05:19:18Z | 2024-04-12T05:19:21Z | 223 | lm-sys/FastChat | 41,442 |
Fixed an erroneous import in example code of docs (class-based-views/intro). | diff --git a/docs/topics/class-based-views/intro.txt b/docs/topics/class-based-views/intro.txt
index 11d1f84ffeb98..7764e417fc51f 100644
--- a/docs/topics/class-based-views/intro.txt
+++ b/docs/topics/class-based-views/intro.txt
@@ -71,7 +71,7 @@ something like::
In a class-based view, this would become::
from django.http import HttpResponse
- from django.views.base import View
+ from django.views.generic.base import View
class MyView(View):
def get(self, request):
@@ -113,7 +113,7 @@ and methods in the subclass. So that if your parent class had an attribute
``greeting`` like this::
from django.http import HttpResponse
- from django.views.base import View
+ from django.views.generic.base import View
class GreetingView(View):
greeting = "Good Day"
| Simply, I replaced `from django.views.base import View` with `from django.views.generic.base import View`.
| https://api.github.com/repos/django/django/pulls/901 | 2013-03-13T17:26:30Z | 2013-03-15T13:23:22Z | 2013-03-15T13:23:22Z | 2014-07-07T22:28:18Z | 208 | django/django | 50,882 |
Fixed error handling in train.py | diff --git a/scripts/train.py b/scripts/train.py
index a966368069..b08d1cdf1a 100644
--- a/scripts/train.py
+++ b/scripts/train.py
@@ -178,7 +178,7 @@ def processThread(self):
print('Saving model weights has been cancelled!')
exit(0)
except Exception as e:
- print(e)
+ raise e
exit(1)
def set_tf_allow_growth(self):
@@ -200,4 +200,4 @@ def show(self, image, name=''):
cv2.imwrite('_sample_{}.jpg'.format(name), image)
except Exception as e:
print("could not preview sample")
- print(e)
+ raise e
| Fixed the error handling in train.py so it doesn't swallow tracebacks.
This is an improvement that follows the Python best practice of re-raising any exceptions that aren't handled, so that tracebacks don't get lost. | https://api.github.com/repos/deepfakes/faceswap/pulls/293 | 2018-03-14T01:41:40Z | 2018-03-16T16:16:18Z | 2018-03-16T16:16:18Z | 2018-03-16T16:16:18Z | 159 | deepfakes/faceswap | 18,647 |
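The difference between `print(e)` and a bare `raise` in the faceswap change above can be seen with a plain-Python sketch: printing keeps only the message, while re-raising preserves the full traceback:

```python
import traceback


def risky():
    return 1 / 0


def swallowing():
    try:
        risky()
    except Exception as e:
        print(e)   # only "division by zero"; the traceback is lost


def re_raising():
    try:
        risky()
    except Exception:
        raise      # re-raises with the original traceback intact


swallowing()       # prints the message; the program just continues

try:
    re_raising()
except ZeroDivisionError:
    tb = traceback.format_exc()

print("risky" in tb)  # the captured traceback still names the failing frame
```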
[doc] Improve the object reference documentation | diff --git a/doc/source/ray-core/doc_code/obj_capture.py b/doc/source/ray-core/doc_code/obj_capture.py
new file mode 100644
index 0000000000000..3f37813b22d64
--- /dev/null
+++ b/doc/source/ray-core/doc_code/obj_capture.py
@@ -0,0 +1,16 @@
+import ray
+
+# Put the values (1, 2, 3) into Ray's object store.
+a, b, c = ray.put(1), ray.put(2), ray.put(3)
+
+
+@ray.remote
+def print_via_capture():
+ """This function prints the values of (a, b, c) to stdout."""
+ print(ray.get([a, b, c]))
+
+
+# Passing object references via closure-capture. Inside the `print_via_capture`
+# function, the global object refs (a, b, c) can be retrieved and printed.
+print_via_capture.remote()
+# -> prints [1, 2, 3]
diff --git a/doc/source/ray-core/doc_code/obj_ref.py b/doc/source/ray-core/doc_code/obj_ref.py
new file mode 100644
index 0000000000000..176fd377a8566
--- /dev/null
+++ b/doc/source/ray-core/doc_code/obj_ref.py
@@ -0,0 +1,18 @@
+import ray
+
+
+@ray.remote
+def echo_and_get(x_list): # List[ObjectRef]
+ """This function prints its input values to stdout."""
+ print("args:", x_list)
+ print("values:", ray.get(x_list))
+
+
+# Put the values (1, 2, 3) into Ray's object store.
+a, b, c = ray.put(1), ray.put(2), ray.put(3)
+
+# Passing an object as a nested argument to `echo_and_get`. Ray does not
+# de-reference nested args, so `echo_and_get` sees the references.
+echo_and_get.remote([a, b, c])
+# -> prints args: [ObjectRef(...), ObjectRef(...), ObjectRef(...)]
+# values: [1, 2, 3]
diff --git a/doc/source/ray-core/doc_code/obj_val.py b/doc/source/ray-core/doc_code/obj_val.py
new file mode 100644
index 0000000000000..8406fac5cadcb
--- /dev/null
+++ b/doc/source/ray-core/doc_code/obj_val.py
@@ -0,0 +1,20 @@
+import ray
+
+
+@ray.remote
+def echo(a: int, b: int, c: int):
+ """This function prints its input values to stdout."""
+ print(a, b, c)
+
+
+# Passing the literal values (1, 2, 3) to `echo`.
+echo.remote(1, 2, 3)
+# -> prints "1 2 3"
+
+# Put the values (1, 2, 3) into Ray's object store.
+a, b, c = ray.put(1), ray.put(2), ray.put(3)
+
+# Passing an object as a top-level argument to `echo`. Ray will de-reference top-level
+# arguments, so `echo` will see the literal values (1, 2, 3) in this case as well.
+echo.remote(a, b, c)
+# -> prints "1 2 3"
diff --git a/doc/source/ray-core/objects.rst b/doc/source/ray-core/objects.rst
index c2c86d8cfb768..c0c39b8fd1c09 100644
--- a/doc/source/ray-core/objects.rst
+++ b/doc/source/ray-core/objects.rst
@@ -128,41 +128,51 @@ If the current node's object store does not contain the object, the object is do
assert(*results[1] == 1);
assert(*results[2] == 2);
-Passing Objects by Reference
-----------------------------
+Passing Object Arguments
+------------------------
Ray object references can be freely passed around a Ray application. This means that they can be passed as arguments to tasks, actor methods, and even stored in other objects. Objects are tracked via *distributed reference counting*, and their data is automatically freed once all references to the object are deleted.
-.. code-block:: python
+There are two different ways one can pass an object to a Ray task or method. Depending on the way an object is passed, Ray will decide whether to *de-reference* the object prior to task execution.
- @ray.remote
- def echo(x):
- print(x)
+**Passing an object as a top-level argmuent**: When an object is passed directly as a top-level argument to a task, Ray will de-reference the object. This means that Ray will fetch the underlying data for all top-level object reference arguments, not executing the task until the object data becomes fully available.
- # Put an object in Ray's object store.
- object_ref = ray.put(1)
+.. literalinclude:: doc_code/obj_val.py
- # Pass-by-value: send the object to a task as a top-level argument.
- # The object will be de-referenced, so the task only sees its value.
- echo.remote(object_ref)
- # -> prints "1"
+**Passing an object as a nested argument**: When an object is passed within a nested object, for example, within a Python list, Ray will *not* de-reference it. This means that the task will need to call ``ray.get()`` on the reference to fetch the concrete value. However, if the task never calls ``ray.get()``, then the object value never needs to be transferred to the machine the task is running on. We recommend passing objects as top-level arguments where possible, but nested arguments can be useful for passing objects on to other tasks without needing to see the data.
- # Pass-by-reference: when passed inside a Python list or other data structure,
- # the object ref is preserved. The object data is not transferred to the worker
- # when it is passed by reference, until ray.get() is called on the reference.
- echo.remote({"obj": object_ref})
- # -> prints "{"obj": ObjectRef(...)}"
+.. literalinclude:: doc_code/obj_ref.py
- # Objects can be nested within each other. Ray will keep the inner object
- # alive via reference counting until all outer object references are deleted.
- object_ref_2 = ray.put([object_ref])
+The top-level vs not top-level passing convention also applies to actor constructors and actor method calls:
- # Examples of passing objects to actors.
+.. code-block:: python
+
+ # Examples of passing objects to actor constructors.
actor_handle = Actor.remote(obj) # by-value
actor_handle = Actor.remote([obj]) # by-reference
+
+ # Examples of passing objects to actor method calls.
actor_handle.method.remote(obj) # by-value
actor_handle.method.remote([obj]) # by-reference
+Closure Capture of Objects
+--------------------------
+
+You can also pass objects to tasks via *closure-capture*. This can be convenient when you have a large object that you want to share verbatim between many tasks or actors, and don't want to pass it repeatedly as an argument. Be aware however that defining a task that closes over an object ref will pin the object via reference-counting, so the object will not be evicted until the job completes.
+
+.. literalinclude:: doc_code/obj_capture.py
+
+Nested Objects
+--------------
+
+Ray also supports nested object references. This allows you to build composite objects that themselves hold references to further sub-objects.
+
+.. code-block:: python
+
+ # Objects can be nested within each other. Ray will keep the inner object
+ # alive via reference counting until all outer object references are deleted.
+ object_ref_2 = ray.put([object_ref])
+
More about Ray Objects
----------------------
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
Improve the explanatory text and examples for object reference passing. Per https://discuss.ray.io/t/question-about-multiprocessing-large-array-using-ray-remote/6083 | https://api.github.com/repos/ray-project/ray/pulls/24636 | 2022-05-10T04:24:33Z | 2022-05-11T01:39:16Z | 2022-05-11T01:39:16Z | 2022-05-11T01:39:16Z | 1,762 | ray-project/ray | 19,617 |
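A plain-Python sketch of the argument-resolution convention the documentation above describes: Ray de-references `ObjectRef`s passed as top-level task arguments, but leaves refs nested inside containers untouched. The `ObjectRef`, `get`, and `resolve_args` names here are illustrative stand-ins, not Ray's actual internals:

```python
class ObjectRef:
    """Stand-in for ray.ObjectRef: a handle to a value in the object store."""
    def __init__(self, value):
        self._value = value

def get(ref):
    """Stand-in for ray.get(): fetch the concrete value behind a ref."""
    return ref._value

def resolve_args(args):
    """De-reference only top-level ObjectRefs, as Ray does for task arguments."""
    return [get(a) if isinstance(a, ObjectRef) else a for a in args]

ref = ObjectRef(42)
top_level = resolve_args([ref])        # -> [42]: the task body sees the value
nested = resolve_args([{"obj": ref}])  # the task body sees the ref itself
```

In real Ray code the task would call `ray.get()` on the nested ref itself to fetch the value, which is what lets nested refs be forwarded to other tasks without transferring the data.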
Support of the `--gradio-auth` flag (like `--gradio-auth-path` but without the need of a file) | diff --git a/README.md b/README.md
index 8403d82488..3ad4dfcd64 100644
--- a/README.md
+++ b/README.md
@@ -280,6 +280,7 @@ Optionally, you can use the following command-line flags:
| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
| `--auto-launch` | Open the web UI in the default browser upon launch. |
+| `--gradio-auth USER:PWD` | set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3" |
| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
#### API
diff --git a/modules/shared.py b/modules/shared.py
index 0b34d3394e..f1c9940d16 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -163,6 +163,7 @@ def str2bool(v):
parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.')
parser.add_argument('--share', action='store_true', help='Create a public URL. This is useful for running the web UI on Google Colab or similar.')
parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.')
+parser.add_argument("--gradio-auth", type=str, help='set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"', default=None)
parser.add_argument("--gradio-auth-path", type=str, help='Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3"', default=None)
# API
diff --git a/server.py b/server.py
index 9709746035..d8654ecdb2 100644
--- a/server.py
+++ b/server.py
@@ -529,11 +529,14 @@ def create_interface():
# Authentication variables
auth = None
+ gradio_auth_creds = []
+ if shared.args.gradio_auth:
+ gradio_auth_creds += [x.strip() for x in shared.args.gradio_auth.strip('"').replace('\n', '').split(',') if x.strip()]
if shared.args.gradio_auth_path is not None:
- gradio_auth_creds = []
with open(shared.args.gradio_auth_path, 'r', encoding="utf8") as file:
for line in file.readlines():
gradio_auth_creds += [x.strip() for x in line.split(',') if x.strip()]
+ if gradio_auth_creds:
auth = [tuple(cred.split(':')) for cred in gradio_auth_creds]
# Importing the extension files and executing their setup() functions
| ## Context
To set up _gradio authentication_, we currently need to set a flag AND to create a file (`--gradio-auth-path GRADIO_AUTH_PATH`).
We could easily have a _flag-only_ solution like with [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
## Proposed solution
Be able to set up _gradio authentication_ by simply passing a `--gradio-auth username:password` flag to the _server.py_ script (e.g. via the `CMD_FLAGS` constant). | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/2283 | 2023-05-22T20:03:43Z | 2023-05-23T23:39:26Z | 2023-05-23T23:39:26Z | 2023-05-24T10:25:02Z | 707 | oobabooga/text-generation-webui | 26,114 |
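The credential handling this PR adds to `server.py` can be exercised in isolation. A minimal sketch mirroring the diff's list comprehensions (`parse_gradio_auth` is an illustrative name, not one from the project; note that, as in the real code, a password containing `:` would split incorrectly):

```python
def parse_gradio_auth(raw):
    """Turn '"u1:p1,u2:p2"' into [('u1', 'p1'), ('u2', 'p2')], stripping
    surrounding quotes, embedded newlines, and blank entries as the diff does."""
    creds = [x.strip() for x in raw.strip('"').replace('\n', '').split(',') if x.strip()]
    return [tuple(cred.split(':')) for cred in creds]

parse_gradio_auth('"u1:p1, u2:p2"')  # [('u1', 'p1'), ('u2', 'p2')]
```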
Remove py35 from CI | diff --git a/.travis.yml b/.travis.yml
index 33a920bb6d8..b883c5b78d3 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -19,16 +19,10 @@ matrix:
python: 3.8
- env: TOXENV=pinned
- python: 3.5.2
+ python: 3.6.1
- env: TOXENV=asyncio-pinned
- python: 3.5.2 # We use additional code to support 3.5.3 and earlier
- - env: TOXENV=pypy3-pinned PYPY_VERSION=3-v5.9.0
-
- - env: TOXENV=py
- python: 3.5
- - env: TOXENV=asyncio
- python: 3.5 # We use specific code to support >= 3.5.4, < 3.6
- - env: TOXENV=pypy3 PYPY_VERSION=3.5-v7.0.0
+ python: 3.6.1
+ - env: TOXENV=pypy3-pinned PYPY_VERSION=3.6-v7.2.0
- env: TOXENV=py
python: 3.6
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 710e4209092..c03e258c7a3 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -4,11 +4,9 @@ pool:
vmImage: 'windows-latest'
strategy:
matrix:
- Python35:
- python.version: '3.5'
- TOXENV: windows-pinned
Python36:
python.version: '3.6'
+ TOXENV: windows-pinned
Python37:
python.version: '3.7'
Python38:
diff --git a/tests/test_utils_python.py b/tests/test_utils_python.py
index c298d0bd217..3115cc92f1f 100644
--- a/tests/test_utils_python.py
+++ b/tests/test_utils_python.py
@@ -3,8 +3,8 @@
import operator
import platform
import unittest
+from datetime import datetime
from itertools import count
-from sys import version_info
from warnings import catch_warnings
from scrapy.utils.python import (
@@ -216,15 +216,15 @@ def __call__(self, a, b, c):
self.assertEqual(get_func_args(str.split), [])
self.assertEqual(get_func_args(" ".join), [])
self.assertEqual(get_func_args(operator.itemgetter(2)), [])
- else:
- self.assertEqual(
- get_func_args(str.split, stripself=True), ['sep', 'maxsplit'])
- self.assertEqual(
- get_func_args(operator.itemgetter(2), stripself=True), ['obj'])
- if version_info < (3, 6):
- self.assertEqual(get_func_args(" ".join, stripself=True), ['list'])
- else:
+ elif platform.python_implementation() == 'PyPy':
+ self.assertEqual(get_func_args(str.split, stripself=True), ['sep', 'maxsplit'])
+ self.assertEqual(get_func_args(operator.itemgetter(2), stripself=True), ['obj'])
+
+ build_date = datetime.strptime(platform.python_build()[1], '%b %d %Y')
+ if build_date >= datetime(2020, 4, 7): # PyPy 3.6-v7.3.1
self.assertEqual(get_func_args(" ".join, stripself=True), ['iterable'])
+ else:
+ self.assertEqual(get_func_args(" ".join, stripself=True), ['list'])
def test_without_none_values(self):
self.assertEqual(without_none_values([1, None, 3, 4]), [1, 3, 4])
diff --git a/tox.ini b/tox.ini
index 4f5531aeada..dec0d75e8e0 100644
--- a/tox.ini
+++ b/tox.ini
@@ -14,7 +14,7 @@ deps =
# Extras
boto3>=1.13.0
botocore>=1.4.87
- Pillow>=3.4.2
+ Pillow>=4.0.0
passenv =
S3_TEST_FILE_URI
AWS_ACCESS_KEY_ID
@@ -78,7 +78,7 @@ deps =
# Extras
botocore==1.4.87
google-cloud-storage==1.29.0
- Pillow==3.4.2
+ Pillow==4.0.0
[testenv:pinned]
deps =
| Closes #4732 | https://api.github.com/repos/scrapy/scrapy/pulls/4743 | 2020-08-20T15:10:42Z | 2020-08-22T07:33:36Z | 2020-08-22T07:33:35Z | 2020-08-23T16:42:04Z | 1,065 | scrapy/scrapy | 35,128 |
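The PyPy branch added to `test_utils_python.py` above keys the expected `str.join` argument name off the interpreter's build date. That date comparison can be sketched standalone (`join_arg_name` is an illustrative helper, not project code):

```python
from datetime import datetime

def join_arg_name(build_date_str):
    """Mirror of the test's branch: PyPy builds from 3.6-v7.3.1 (2020-04-07)
    onward report 'iterable' for str.join's argument, older builds 'list'.
    build_date_str follows platform.python_build()[1], e.g. 'Apr 07 2020'."""
    build_date = datetime.strptime(build_date_str, '%b %d %Y')
    return 'iterable' if build_date >= datetime(2020, 4, 7) else 'list'

join_arg_name('Apr 07 2020')  # 'iterable'
join_arg_name('Dec 23 2019')  # 'list'
```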
Series.pow when right operand is missing value | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index b29d35c8ce332..4f5f31da75e03 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -317,6 +317,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
+- Bug in :meth:`Series.pow` not filling missing values correctly (:issue:`55512`)
-
Conversion
diff --git a/pandas/core/series.py b/pandas/core/series.py
index c6e1e460b336a..724a54c85da9b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -6043,6 +6043,8 @@ def _flex_method(self, other, op, *, level=None, fill_value=None, axis: Axis = 0
return result
else:
if fill_value is not None:
+ if isna(other):
+ return op(self, fill_value)
self = self.fillna(fill_value)
return op(self, other)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 86aef2642750e..f9fcfd19e9956 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3017,6 +3017,14 @@ def test_arrowextensiondtype_dataframe_repr():
assert result == expected
+def test_pow_missing_operand():
+ # GH 55512
+ k = pd.Series([2, None], dtype="int64[pyarrow]")
+ result = k.pow(None, fill_value=3)
+ expected = pd.Series([8, None], dtype="int64[pyarrow]")
+ tm.assert_series_equal(result, expected)
+
+
@pytest.mark.parametrize("pa_type", tm.TIMEDELTA_PYARROW_DTYPES)
def test_duration_fillna_numpy(pa_type):
# GH 54707
| - [x] closes #55512 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/55568 | 2023-10-18T01:08:00Z | 2023-10-19T16:40:55Z | 2023-10-19T16:40:55Z | 2023-10-19T16:54:15Z | 513 | pandas-dev/pandas | 45,586 |
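The semantics of the fix, namely that a missing right operand of a flex op is replaced by `fill_value` rather than filling the left side, can be mimicked without pandas. A plain-list sketch with `None` as the missing marker (illustrative only, not pandas internals):

```python
import math

def flex_pow(values, other, fill_value=None):
    """values ** other elementwise; a missing right operand is replaced by
    fill_value, while missing entries on the left stay missing."""
    if other is None or (isinstance(other, float) and math.isnan(other)):
        other = fill_value
    return [None if v is None else v ** other for v in values]

flex_pow([2, None], None, fill_value=3)  # [8, None], matching the new test
```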
fix missing TI hash | diff --git a/modules/sd_hijack_clip.py b/modules/sd_hijack_clip.py
index 2f9d569b1d8..8f29057a9cf 100644
--- a/modules/sd_hijack_clip.py
+++ b/modules/sd_hijack_clip.py
@@ -245,6 +245,8 @@ def forward(self, texts):
hashes.append(f"{name}: {shorthash}")
if hashes:
+ if self.hijack.extra_generation_params.get("TI hashes"):
+ hashes.append(self.hijack.extra_generation_params.get("TI hashes"))
self.hijack.extra_generation_params["TI hashes"] = ", ".join(hashes)
if getattr(self.wrapped, 'return_pooled', False):
| ## Description
issue https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12259
prevent positive prompt TI hash from overriding negative prompt TI hash
as this line of code will be called 2 times (once for each prompt)
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/12269 | 2023-08-03T01:27:20Z | 2023-08-03T09:56:20Z | 2023-08-03T09:56:20Z | 2023-08-03T09:57:16Z | 166 | AUTOMATIC1111/stable-diffusion-webui | 40,116 |
Fix volume dir path replacement for windows paths, use docker cmd client if not in docker | diff --git a/localstack/services/awslambda/lambda_executors.py b/localstack/services/awslambda/lambda_executors.py
index 69a7ca87c8470..6a09fe7e9fe45 100644
--- a/localstack/services/awslambda/lambda_executors.py
+++ b/localstack/services/awslambda/lambda_executors.py
@@ -1036,7 +1036,6 @@ def create_container(
image_name=container_config.image_name,
remove=container_config.remove,
interactive=container_config.interactive,
- detach=container_config.detach,
name=container_config.name,
entrypoint=container_config.entrypoint,
command=container_config.command,
@@ -1596,17 +1595,19 @@ def get_host_path_for_path_in_docker(cls, path):
f"Mount to {DEFAULT_VOLUME_DIR} needs to be a bind mount for lambda code mounting to work"
)
- result, subs = re.subn(
- r"^%s/(.*)$" % DEFAULT_VOLUME_DIR, r"%s/\1" % volume.source, path
- )
- if subs == 0:
+ if not path.startswith(f"{DEFAULT_VOLUME_DIR}/") and path != DEFAULT_VOLUME_DIR:
# We should be able to replace something here.
# if this warning is printed, the usage of this function is probably wrong.
- # Please check for missing slashes after DEFAULT_VOLUME_DIR etc.
+ # Please check if the target path is indeed prefixed by /var/lib/localstack
+ # if this happens, mounts may fail
LOG.warning(
- "Error while performing automatic host path replacement for path %s to source %s"
+ "Error while performing automatic host path replacement for path '%s' to source '%s'",
+ path,
+ volume.source,
)
else:
+ relative_path = path.removeprefix(DEFAULT_VOLUME_DIR)
+ result = volume.source + relative_path
return result
else:
raise ValueError(f"No volume mounted to {DEFAULT_VOLUME_DIR}")
diff --git a/localstack/utils/docker_utils.py b/localstack/utils/docker_utils.py
index 1a7446ab6bf88..96c481fb3c61b 100644
--- a/localstack/utils/docker_utils.py
+++ b/localstack/utils/docker_utils.py
@@ -22,7 +22,8 @@ def is_docker_sdk_installed() -> bool:
def create_docker_client() -> ContainerClient:
- if config.LEGACY_DOCKER_CLIENT or not is_docker_sdk_installed():
+ # never use the sdk client if it is not installed or not in docker - too risky for wrong version
+ if config.LEGACY_DOCKER_CLIENT or not is_docker_sdk_installed() or not config.is_in_docker:
from localstack.utils.container_utils.docker_cmd_client import CmdDockerClient
LOG.debug(
diff --git a/tests/unit/test_lambda.py b/tests/unit/test_lambda.py
index 0b14ef9e942c3..60d7c95c5cf3c 100644
--- a/tests/unit/test_lambda.py
+++ b/tests/unit/test_lambda.py
@@ -10,11 +10,12 @@
from localstack import config
from localstack.aws.accounts import get_aws_account_id
from localstack.services.awslambda import lambda_api, lambda_executors, lambda_utils
-from localstack.services.awslambda.lambda_executors import OutputLog
+from localstack.services.awslambda.lambda_executors import OutputLog, Util
from localstack.services.awslambda.lambda_utils import API_PATH_ROOT
from localstack.utils.aws import aws_stack
from localstack.utils.aws.aws_models import LambdaFunction
from localstack.utils.common import isoformat_milliseconds, mkdir, new_tmp_dir, save_file
+from localstack.utils.container_utils.container_client import VolumeInfo
TEST_EVENT_SOURCE_ARN = "arn:aws:sqs:eu-west-1:000000000000:testq"
TEST_SECRETSMANANAGER_EVENT_SOURCE_ARN = (
@@ -1130,3 +1131,72 @@ def test_put_function_event_invoke_config(self):
self.assertEqual(self.RETRY_ATTEMPTS, response["MaximumRetryAttempts"])
self.assertEqual(self.EVENT_AGE, response["MaximumEventAgeInSeconds"])
self.assertEqual(self.DL_QUEUE, response["DestinationConfig"]["OnFailure"]["Destination"])
+
+
+class TestLambdaUtils:
+ def test_host_path_for_path_in_docker_windows(self):
+ with mock.patch(
+ "localstack.services.awslambda.lambda_executors.get_default_volume_dir_mount"
+ ) as get_volume, mock.patch("localstack.config.is_in_docker", True):
+ get_volume.return_value = VolumeInfo(
+ type="bind",
+ source=r"C:\Users\localstack\volume\mount",
+ destination="/var/lib/localstack",
+ mode="rw",
+ rw=True,
+ propagation="rprivate",
+ )
+ result = Util.get_host_path_for_path_in_docker("/var/lib/localstack/some/test/file")
+ get_volume.assert_called_once()
+ # this path style is kinda weird, but windows will accept it - no need for manual conversion of / to \
+ assert result == r"C:\Users\localstack\volume\mount/some/test/file"
+
+ def test_host_path_for_path_in_docker_linux(self):
+ with mock.patch(
+ "localstack.services.awslambda.lambda_executors.get_default_volume_dir_mount"
+ ) as get_volume, mock.patch("localstack.config.is_in_docker", True):
+ get_volume.return_value = VolumeInfo(
+ type="bind",
+ source="/home/some-user/.cache/localstack/volume",
+ destination="/var/lib/localstack",
+ mode="rw",
+ rw=True,
+ propagation="rprivate",
+ )
+ result = Util.get_host_path_for_path_in_docker("/var/lib/localstack/some/test/file")
+ get_volume.assert_called_once()
+ assert result == "/home/some-user/.cache/localstack/volume/some/test/file"
+
+ def test_host_path_for_path_in_docker_linux_volume_dir(self):
+ with mock.patch(
+ "localstack.services.awslambda.lambda_executors.get_default_volume_dir_mount"
+ ) as get_volume, mock.patch("localstack.config.is_in_docker", True):
+ get_volume.return_value = VolumeInfo(
+ type="bind",
+ source="/home/some-user/.cache/localstack/volume",
+ destination="/var/lib/localstack",
+ mode="rw",
+ rw=True,
+ propagation="rprivate",
+ )
+ result = Util.get_host_path_for_path_in_docker("/var/lib/localstack")
+ get_volume.assert_called_once()
+ assert result == "/home/some-user/.cache/localstack/volume"
+
+ def test_host_path_for_path_in_docker_linux_wrong_path(self):
+ with mock.patch(
+ "localstack.services.awslambda.lambda_executors.get_default_volume_dir_mount"
+ ) as get_volume, mock.patch("localstack.config.is_in_docker", True):
+ get_volume.return_value = VolumeInfo(
+ type="bind",
+ source="/home/some-user/.cache/localstack/volume",
+ destination="/var/lib/localstack",
+ mode="rw",
+ rw=True,
+ propagation="rprivate",
+ )
+ result = Util.get_host_path_for_path_in_docker("/var/lib/localstacktest")
+ get_volume.assert_called_once()
+ assert result == "/var/lib/localstacktest"
+ result = Util.get_host_path_for_path_in_docker("/etc/some/path")
+ assert result == "/etc/some/path"
| ## PR reasons
1. Currently, since we use a regex replacement, the backslashes in Windows paths (as returned by docker for our mount detection, which is used for mounting the same paths into the lambda docker containers) make the method fail, and with it lambda execution.
2. A quick fix for #6458 - we currently use docker sdk if available, no matter the version installed
## Changes
1. use regular string logic instead of regex to avoid escaping backslashes, added tests for path replacements for multiple formats / cases
2. do not use the docker sdk even if it is installed on the host - version mismatches have bad consequences (due to AWS SAM, an older version of the docker sdk is often installed, which will start pulling all tags instead of latest) | https://api.github.com/repos/localstack/localstack/pulls/6474 | 2022-07-18T11:47:11Z | 2022-07-18T16:36:13Z | 2022-07-18T16:36:13Z | 2022-07-18T16:36:31Z | 1,668 | localstack/localstack | 28,569 |
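The rewritten replacement logic, reduced to a standalone function: plain `startswith`/`removeprefix` string handling sidesteps the regex-escaping problem with backslashes in Windows volume sources (requires Python 3.9+ for `removeprefix`; the function name is illustrative):

```python
DEFAULT_VOLUME_DIR = "/var/lib/localstack"

def host_path_for_container_path(path, volume_source):
    """Map a container path under DEFAULT_VOLUME_DIR to the host-side
    volume source; unrelated paths are returned unchanged (the real code
    also logs a warning in that case)."""
    if not path.startswith(DEFAULT_VOLUME_DIR + "/") and path != DEFAULT_VOLUME_DIR:
        return path
    return volume_source + path.removeprefix(DEFAULT_VOLUME_DIR)

host_path_for_container_path(
    "/var/lib/localstack/some/test/file", r"C:\Users\localstack\volume\mount"
)  # -> C:\Users\localstack\volume\mount/some/test/file, as in the new unit test
```

The mixed-separator result looks odd but, as the new unit test's comment notes, Windows accepts it without manual `/` to `\` conversion.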
[Core] Fix mac build | diff --git a/src/ray/core_worker/core_worker.cc b/src/ray/core_worker/core_worker.cc
index e5658cbfc48a6..d669e9259bbc6 100644
--- a/src/ray/core_worker/core_worker.cc
+++ b/src/ray/core_worker/core_worker.cc
@@ -1801,7 +1801,7 @@ Status CoreWorker::GetLocationFromOwner(
// Calculate the number of batches
// Use the same config from worker_fetch_request_size
- int64_t batch_size = RayConfig::instance().worker_fetch_request_size();
+ size_t batch_size = static_cast<size_t>(RayConfig::instance().worker_fetch_request_size());
for (size_t batch_start = 0; batch_start < owner_object_ids.size();
batch_start += batch_size) {
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a
method in Tune, I've added it in `doc/source/tune/api/` under the
corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/44649 | 2024-04-10T20:09:04Z | 2024-04-10T20:11:32Z | 2024-04-10T20:11:32Z | 2024-04-10T20:12:20Z | 176 | ray-project/ray | 19,713 |
Clickhouse package meta fix | diff --git a/llama-index-integrations/vector_stores/llama-index-vector-stores-clickhouse/pyproject.toml b/llama-index-integrations/vector_stores/llama-index-vector-stores-clickhouse/pyproject.toml
index ad6f47e726cde..c63b92fee89e0 100644
--- a/llama-index-integrations/vector_stores/llama-index-vector-stores-clickhouse/pyproject.toml
+++ b/llama-index-integrations/vector_stores/llama-index-vector-stores-clickhouse/pyproject.toml
@@ -14,7 +14,7 @@ ignore_missing_imports = true
python_version = "3.8"
[tool.poetry]
-authors = ["Your Name dale@clickhouse.com"]
+authors = ["ClickHouse <info@clickhouse.com>"]
description = "llama-index vector_stores clickhouse integration"
license = "MIT"
name = "llama-index-vector-stores-clickhouse"
| # Description
Fix author metadata preventing package installation.
## Type of Change
Please delete options that are not relevant.
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [x] Added new unit/integration tests
- [x] Added new notebook (that tests end-to-end)
- [x] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] I have added Google Colab support for the newly added notebooks.
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
- [x] I ran `make format; make lint` to appease the lint gods
| https://api.github.com/repos/run-llama/llama_index/pulls/10799 | 2024-02-16T15:18:02Z | 2024-02-16T15:44:05Z | 2024-02-16T15:44:05Z | 2024-02-16T15:44:05Z | 213 | run-llama/llama_index | 6,420 |
Skip cookie extraction if necessary | diff --git a/requests/cookies.py b/requests/cookies.py
index 3bfedcc49f..f3ac64f0a3 100644
--- a/requests/cookies.py
+++ b/requests/cookies.py
@@ -107,6 +107,9 @@ def extract_cookies_to_jar(jar, request, response):
:param request: our own requests.Request object
:param response: urllib3.HTTPResponse object
"""
+ if not (hasattr(response, '_original_response') and
+ response._original_response):
+ return
# the _original_response field is the wrapped httplib.HTTPResponse object,
req = MockRequest(request)
# pull out the HTTPMessage with the headers and put it in the mock:
| If `_original_response` is never set/is `None`, then don't try to extract cookies
from the response.
Fixes #1534
| https://api.github.com/repos/psf/requests/pulls/1535 | 2013-08-17T02:44:53Z | 2013-08-17T03:15:18Z | 2013-08-17T03:15:18Z | 2021-09-08T21:01:06Z | 167 | psf/requests | 32,293 |
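The guard clause in context: before the fix, `extract_cookies_to_jar` assumed `response._original_response` existed and was truthy. A minimal sketch of the early-return pattern (the mock class and jar contents are illustrative, not requests' actual objects):

```python
class MockResponse:
    """Stand-in for urllib3's HTTPResponse; _original_response may be absent."""
    def __init__(self, original_response=None):
        if original_response is not None:
            self._original_response = original_response

def extract_cookies_to_jar(jar, request, response):
    """Bail out early when there is no wrapped httplib response to read from."""
    if not (hasattr(response, '_original_response') and response._original_response):
        return
    jar.append(('cookies-from', response._original_response))

jar = []
extract_cookies_to_jar(jar, None, MockResponse())  # silently skipped
extract_cookies_to_jar(jar, None, MockResponse('httplib-resp'))
jar  # [('cookies-from', 'httplib-resp')]
```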
[chatgpt]add flag of action mask in critic | diff --git a/applications/ChatGPT/chatgpt/models/base/actor.py b/applications/ChatGPT/chatgpt/models/base/actor.py
index e2841dc68feb..57db2bb11a6a 100644
--- a/applications/ChatGPT/chatgpt/models/base/actor.py
+++ b/applications/ChatGPT/chatgpt/models/base/actor.py
@@ -37,7 +37,7 @@ def generate(
if pad_token_id is not None:
attention_mask = sequences.not_equal(pad_token_id).to(dtype=torch.long, device=sequences.device)
if not return_action_mask:
- return sequences, attention_mask
+ return sequences, attention_mask, None
input_len = input_ids.size(1)
eos_token_id = kwargs.get('eos_token_id', None)
if eos_token_id is None:
diff --git a/applications/ChatGPT/chatgpt/models/base/critic.py b/applications/ChatGPT/chatgpt/models/base/critic.py
index b12bddfcb2e5..e68a743a7762 100644
--- a/applications/ChatGPT/chatgpt/models/base/critic.py
+++ b/applications/ChatGPT/chatgpt/models/base/critic.py
@@ -18,15 +18,19 @@ class Critic(LoRAModule):
lora_train_bias (str): LoRA bias training mode.
"""
- def __init__(self,
- model: nn.Module,
- value_head: nn.Module,
- lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ def __init__(
+ self,
+ model: nn.Module,
+ value_head: nn.Module,
+ lora_rank: int = 0,
+ lora_train_bias: str = 'none',
+ use_action_mask: bool = False,
+ ) -> None:
super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)
self.model = model
self.value_head = value_head
+ self.use_action_mask = use_action_mask
self.convert_to_lora()
def forward(self,
@@ -38,7 +42,7 @@ def forward(self,
values = self.value_head(last_hidden_states).squeeze(-1)
- if action_mask is not None:
+ if action_mask is not None and self.use_action_mask:
num_actions = action_mask.size(1)
prompt_mask = attention_mask[:, :-num_actions]
values = values[:, :-num_actions]
@@ -46,5 +50,5 @@ def forward(self,
return value
values = values[:, :-1]
- value = values.mean(dim=1).squeeze(1)
+ value = values.mean(dim=1)
return value
diff --git a/applications/ChatGPT/chatgpt/models/bloom/bloom_critic.py b/applications/ChatGPT/chatgpt/models/bloom/bloom_critic.py
index 5a907309a674..a32fb2e102f9 100644
--- a/applications/ChatGPT/chatgpt/models/bloom/bloom_critic.py
+++ b/applications/ChatGPT/chatgpt/models/bloom/bloom_critic.py
@@ -24,7 +24,8 @@ def __init__(self,
config: Optional[BloomConfig] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = BloomModel.from_pretrained(pretrained)
elif config is not None:
@@ -34,4 +35,4 @@ def __init__(self,
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.hidden_size, 1)
- super().__init__(model, value_head, lora_rank, lora_train_bias)
+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
diff --git a/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py b/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py
index 897ddb4aeb03..01e824386d4a 100644
--- a/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py
+++ b/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py
@@ -20,7 +20,8 @@ class GPTCritic(Critic):
def __init__(self,
pretrained: Optional[str] = None,
config: Optional[GPT2Config] = None,
- checkpoint: bool = False) -> None:
+ checkpoint: bool = False,
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2Model.from_pretrained(pretrained)
elif config is not None:
@@ -30,4 +31,4 @@ def __init__(self,
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.n_embd, 1)
- super().__init__(model, value_head)
+ super().__init__(model, value_head, **kwargs)
diff --git a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
index 767cecb79353..1f5ead7582f7 100644
--- a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
+++ b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
@@ -24,7 +24,8 @@ def __init__(self,
config: Optional[OPTConfig] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kargs) -> None:
if pretrained is not None:
model = OPTModel.from_pretrained(pretrained)
elif config is not None:
@@ -34,4 +35,4 @@ def __init__(self,
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.hidden_size, 1)
- super().__init__(model, value_head, lora_rank, lora_train_bias)
+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
| ## 📌 Checklist before creating the PR
- [ ] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
- add a choice for whether to use the action mask
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [ ] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3086 | 2023-03-10T02:54:24Z | 2023-03-10T06:40:14Z | 2023-03-10T06:40:14Z | 2023-03-10T06:40:14Z | 1,460 | hpcaitech/ColossalAI | 10,987 |
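A tensor-free sketch of what the new `use_action_mask` branch computes: with the flag on, the value is a masked mean over the selected (prompt) positions only; with it off, a plain mean over all positions. Single sequence, plain lists; this illustrates the math, not the module's actual tensor code:

```python
def masked_mean(values, mask):
    """Average only the positions where mask == 1."""
    return sum(v for v, m in zip(values, mask) if m) / sum(mask)

def critic_value(values, action_mask=None, use_action_mask=False):
    """Mirror of the branch added in Critic.forward above (illustrative)."""
    if action_mask is not None and use_action_mask:
        return masked_mean(values, action_mask)
    return sum(values) / len(values)

critic_value([1.0, 3.0, 100.0], action_mask=[1, 1, 0], use_action_mask=True)  # -> 2.0
critic_value([1.0, 3.0, 100.0], action_mask=[1, 1, 0], use_action_mask=False)  # plain mean
```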
Fixed #27135 -- Made index introspection return Index.suffix. | diff --git a/django/db/backends/mysql/introspection.py b/django/db/backends/mysql/introspection.py
index 20e017120c802..455e88962f0a5 100644
--- a/django/db/backends/mysql/introspection.py
+++ b/django/db/backends/mysql/introspection.py
@@ -6,6 +6,7 @@
from django.db.backends.base.introspection import (
BaseDatabaseIntrospection, FieldInfo, TableInfo,
)
+from django.db.models.indexes import Index
from django.utils.datastructures import OrderedSet
from django.utils.deprecation import RemovedInDjango21Warning
from django.utils.encoding import force_text
@@ -217,7 +218,7 @@ def get_constraints(self, cursor, table_name):
'foreign_key': None,
}
constraints[index]['index'] = True
- constraints[index]['type'] = type_.lower()
+ constraints[index]['type'] = Index.suffix if type_ == 'BTREE' else type_.lower()
constraints[index]['columns'].add(column)
# Convert the sorted sets to lists
for constraint in constraints.values():
diff --git a/django/db/backends/oracle/introspection.py b/django/db/backends/oracle/introspection.py
index 056edb6ce6f70..be6032180a3ab 100644
--- a/django/db/backends/oracle/introspection.py
+++ b/django/db/backends/oracle/introspection.py
@@ -274,7 +274,7 @@ def get_constraints(self, cursor, table_name):
"foreign_key": None,
"check": False,
"index": True,
- "type": 'btree' if type_ == 'normal' else type_,
+ "type": 'idx' if type_ == 'normal' else type_,
}
# Record the details
constraints[constraint]['columns'].append(column)
diff --git a/django/db/backends/postgresql/introspection.py b/django/db/backends/postgresql/introspection.py
index d663de61e5d12..d50f6eb9c2f71 100644
--- a/django/db/backends/postgresql/introspection.py
+++ b/django/db/backends/postgresql/introspection.py
@@ -3,6 +3,7 @@
from django.db.backends.base.introspection import (
BaseDatabaseIntrospection, FieldInfo, TableInfo,
)
+from django.db.models.indexes import Index
from django.utils.deprecation import RemovedInDjango21Warning
from django.utils.encoding import force_text
@@ -234,7 +235,7 @@ def get_constraints(self, cursor, table_name):
"foreign_key": None,
"check": False,
"index": True,
- "type": type_,
+ "type": Index.suffix if type_ == 'btree' else type_,
"definition": definition,
"options": options,
}
diff --git a/django/db/backends/sqlite3/introspection.py b/django/db/backends/sqlite3/introspection.py
index 730793879d85b..eb15fdb612c14 100644
--- a/django/db/backends/sqlite3/introspection.py
+++ b/django/db/backends/sqlite3/introspection.py
@@ -4,6 +4,7 @@
from django.db.backends.base.introspection import (
BaseDatabaseIntrospection, FieldInfo, TableInfo,
)
+from django.db.models.indexes import Index
from django.utils.deprecation import RemovedInDjango21Warning
field_size_re = re.compile(r'^\s*(?:var)?char\s*\(\s*(\d+)\s*\)\s*$')
@@ -262,7 +263,7 @@ def get_constraints(self, cursor, table_name):
# Add type and column orders for indexes
if constraints[index]['index'] and not constraints[index]['unique']:
# SQLite doesn't support any index type other than b-tree
- constraints[index]['type'] = 'btree'
+ constraints[index]['type'] = Index.suffix
cursor.execute(
"SELECT sql FROM sqlite_master "
"WHERE type='index' AND name=%s" % self.connection.ops.quote_name(index)
diff --git a/tests/introspection/tests.py b/tests/introspection/tests.py
index 2993592e4f63c..4d96bf8514639 100644
--- a/tests/introspection/tests.py
+++ b/tests/introspection/tests.py
@@ -1,6 +1,7 @@
from unittest import mock, skipUnless
from django.db import connection
+from django.db.models import Index
from django.db.utils import DatabaseError
from django.test import TransactionTestCase, skipUnlessDBFeature
from django.test.utils import ignore_warnings
@@ -191,7 +192,7 @@ def test_get_constraints_index_types(self):
for key, val in constraints.items():
if val['columns'] == ['headline', 'pub_date']:
index = val
- self.assertEqual(index['type'], 'btree')
+ self.assertEqual(index['type'], Index.suffix)
@skipUnlessDBFeature('supports_index_column_ordering')
def test_get_constraints_indexes_orders(self):
diff --git a/tests/model_indexes/tests.py b/tests/model_indexes/tests.py
index 0e276dbd15b59..33e4bfaa7c0ad 100644
--- a/tests/model_indexes/tests.py
+++ b/tests/model_indexes/tests.py
@@ -1,10 +1,13 @@
from django.db import models
-from django.test import TestCase
+from django.test import SimpleTestCase
from .models import Book
-class IndexesTests(TestCase):
+class IndexesTests(SimpleTestCase):
+
+ def test_suffix(self):
+ self.assertEqual(models.Index.suffix, 'idx')
def test_repr(self):
index = models.Index(fields=['title'])
diff --git a/tests/postgres_tests/test_indexes.py b/tests/postgres_tests/test_indexes.py
index 9298b86e7387c..db61561200328 100644
--- a/tests/postgres_tests/test_indexes.py
+++ b/tests/postgres_tests/test_indexes.py
@@ -9,6 +9,9 @@
@skipUnlessDBFeature('has_brin_index_support')
class BrinIndexTests(PostgreSQLTestCase):
+ def test_suffix(self):
+ self.assertEqual(BrinIndex.suffix, 'brin')
+
def test_repr(self):
index = BrinIndex(fields=['title'], pages_per_range=4)
another_index = BrinIndex(fields=['title'])
@@ -41,6 +44,9 @@ def test_invalid_pages_per_range(self):
class GinIndexTests(PostgreSQLTestCase):
+ def test_suffix(self):
+ self.assertEqual(GinIndex.suffix, 'gin')
+
def test_repr(self):
index = GinIndex(fields=['title'])
self.assertEqual(repr(index), "<GinIndex: fields='title'>")
@@ -84,7 +90,7 @@ def test_gin_index(self):
editor.add_index(IntegerArrayModel, index)
constraints = self.get_constraints(IntegerArrayModel._meta.db_table)
# Check gin index was added
- self.assertEqual(constraints[index_name]['type'], 'gin')
+ self.assertEqual(constraints[index_name]['type'], GinIndex.suffix)
# Drop the index
with connection.schema_editor() as editor:
editor.remove_index(IntegerArrayModel, index)
@@ -97,7 +103,7 @@ def test_brin_index(self):
with connection.schema_editor() as editor:
editor.add_index(CharFieldModel, index)
constraints = self.get_constraints(CharFieldModel._meta.db_table)
- self.assertEqual(constraints[index_name]['type'], 'brin')
+ self.assertEqual(constraints[index_name]['type'], BrinIndex.suffix)
self.assertEqual(constraints[index_name]['options'], ['pages_per_range=4'])
with connection.schema_editor() as editor:
editor.remove_index(CharFieldModel, index)
| https://code.djangoproject.com/ticket/27135 | https://api.github.com/repos/django/django/pulls/8069 | 2017-02-15T18:54:08Z | 2017-02-16T02:08:06Z | 2017-02-16T02:08:06Z | 2017-02-16T02:14:07Z | 1,727 | django/django | 51,088 |
Create worker metrics manually for more control | diff --git a/inference/full-dev-setup.sh b/inference/full-dev-setup.sh
index 86741891da..4a6a10cada 100755
--- a/inference/full-dev-setup.sh
+++ b/inference/full-dev-setup.sh
@@ -16,9 +16,9 @@ fi
# Creates a tmux window with splits for the individual services
tmux new-session -d -s "inference-dev-setup"
-tmux send-keys "docker run --rm -it -p 5432:5432 -e POSTGRES_PASSWORD=postgres --name postgres postgres" C-m
+tmux send-keys "docker run --rm -it -p 5732:5432 -e POSTGRES_PASSWORD=postgres --name postgres postgres" C-m
tmux split-window -h
-tmux send-keys "docker run --rm -it -p 6379:6379 --name redis redis" C-m
+tmux send-keys "docker run --rm -it -p 6779:6379 --name redis redis" C-m
# only if model is not _lorem
if [ "$MODEL_CONFIG_NAME" != "_lorem" ]; then
@@ -30,7 +30,7 @@ fi
tmux split-window -h
tmux send-keys "cd server" C-m
-tmux send-keys "LOGURU_LEVEL=$LOGLEVEL DEBUG_API_KEYS='0000,0001' ALLOW_DEBUG_AUTH=True uvicorn main:app" C-m
+tmux send-keys "LOGURU_LEVEL=$LOGLEVEL POSTGRES_PORT=5732 REDIS_PORT=6779 DEBUG_API_KEYS='0000,0001' ALLOW_DEBUG_AUTH=True uvicorn main:app" C-m
tmux split-window -h
tmux send-keys "cd text-client" C-m
tmux send-keys "sleep 5" C-m
diff --git a/inference/server/oasst_inference_server/routes/workers.py b/inference/server/oasst_inference_server/routes/workers.py
index 28a4421d30..550e197a79 100644
--- a/inference/server/oasst_inference_server/routes/workers.py
+++ b/inference/server/oasst_inference_server/routes/workers.py
@@ -130,7 +130,8 @@ async def handle_worker(
async def _update_session(metrics: inference.WorkerMetricsInfo):
worker_session.requests_in_flight = len(work_request_map)
- worker_session.metrics = metrics
+ if metrics:
+ worker_session.metrics = metrics
await worker_utils.store_worker_session(worker_session)
def _add_dequeue(ftrs: set):
diff --git a/inference/server/oasst_inference_server/settings.py b/inference/server/oasst_inference_server/settings.py
index 884fc08b68..366e0b4a3b 100644
--- a/inference/server/oasst_inference_server/settings.py
+++ b/inference/server/oasst_inference_server/settings.py
@@ -68,7 +68,7 @@ def debug_api_keys_list(self) -> list[str]:
# we decided on letting the nextjs / website backend handle the token at first
# and then proxy this information back to the inference server
# in short: this should refer to the website, not to this server
- auth_callback_root: str = "https://open-assistant.io/api/inference_auth"
+ auth_callback_root: str = "http://localhost:3000/api/inference_auth"
allow_debug_auth: bool = False
diff --git a/inference/server/oasst_inference_server/worker_utils.py b/inference/server/oasst_inference_server/worker_utils.py
index 5c41da5244..2eeaba2d9a 100644
--- a/inference/server/oasst_inference_server/worker_utils.py
+++ b/inference/server/oasst_inference_server/worker_utils.py
@@ -95,7 +95,6 @@ async def receive_worker_info(
async def store_worker_session(worker_session: WorkerSession):
- logger.debug(f"Saving worker session {worker_session.id}")
await deps.redis_client.set(f"worker_session:{worker_session.id}", worker_session.json())
diff --git a/inference/worker/__main__.py b/inference/worker/__main__.py
index c9d0c9a935..21bcd741bc 100644
--- a/inference/worker/__main__.py
+++ b/inference/worker/__main__.py
@@ -85,7 +85,12 @@ def main():
)
ftrs.append(ftr)
case "ping":
- utils.send_response(ws, inference.PongResponse(request_id=worker_request.id))
+ utils.send_response(
+ ws,
+ inference.PongResponse(
+ request_id=worker_request.id, metrics=inference.WorkerMetricsInfo()
+ ),
+ )
case "wrong_api_key":
logger.error("Your API Key seems to be wrong, please check it!")
raise RuntimeError("Your API Key seems to be wrong, please check it!")
diff --git a/inference/worker/work.py b/inference/worker/work.py
index 2bd26cfc42..b4454c0044 100644
--- a/inference/worker/work.py
+++ b/inference/worker/work.py
@@ -115,6 +115,7 @@ def handle_work_request(
inference.ErrorResponse(
request_id=work_request.id,
error=stream_response.error,
+ metrics=inference.WorkerMetricsInfo(),
),
)
raise RuntimeError(f"Error from inference server: {stream_response.error}")
@@ -151,6 +152,7 @@ def handle_work_request(
request_id=work_request.id,
text=stream_response.generated_text,
finish_reason=stream_response.details.finish_reason,
+ metrics=inference.WorkerMetricsInfo(),
),
)
logger.debug("Work complete. Waiting for more work...")
diff --git a/oasst-shared/oasst_shared/schemas/inference.py b/oasst-shared/oasst_shared/schemas/inference.py
index 3419535086..636afff596 100644
--- a/oasst-shared/oasst_shared/schemas/inference.py
+++ b/oasst-shared/oasst_shared/schemas/inference.py
@@ -79,12 +79,14 @@ class GpuMetricsInfo(pydantic.BaseModel):
class WorkerMetricsInfo(pydantic.BaseModel):
+ created_at: datetime
cpu_usage: float
mem_usage: float
swap_usage: float
gpus: list[GpuMetricsInfo] | None = None
def __init__(self, **data):
+ data["created_at"] = datetime.utcnow()
data["cpu_usage"] = psutil.cpu_percent()
data["mem_usage"] = psutil.virtual_memory().percent
data["swap_usage"] = psutil.swap_memory().percent
@@ -212,7 +214,7 @@ class WorkerResponseBase(pydantic.BaseModel):
class PongResponse(WorkerResponseBase):
response_type: Literal["pong"] = "pong"
- metrics: WorkerMetricsInfo = pydantic.Field(default_factory=WorkerMetricsInfo)
+ metrics: WorkerMetricsInfo | None = None
class TokenResponse(WorkerResponseBase):
@@ -226,7 +228,7 @@ class GeneratedTextResponse(WorkerResponseBase):
response_type: Literal["generated_text"] = "generated_text"
text: str
finish_reason: Literal["length", "eos_token", "stop_sequence"]
- metrics: WorkerMetricsInfo = pydantic.Field(default_factory=WorkerMetricsInfo)
+ metrics: WorkerMetricsInfo | None = None
class InternalFinishedMessageResponse(WorkerResponseBase):
@@ -237,17 +239,18 @@ class InternalFinishedMessageResponse(WorkerResponseBase):
class InternalErrorResponse(WorkerResponseBase):
response_type: Literal["internal_error"] = "internal_error"
error: str
+ message: MessageRead
class ErrorResponse(WorkerResponseBase):
response_type: Literal["error"] = "error"
- metrics: WorkerMetricsInfo = pydantic.Field(default_factory=WorkerMetricsInfo)
+ metrics: WorkerMetricsInfo | None = None
error: str
class GeneralErrorResponse(WorkerResponseBase):
response_type: Literal["general_error"] = "general_error"
- metrics: WorkerMetricsInfo = pydantic.Field(default_factory=WorkerMetricsInfo)
+ metrics: WorkerMetricsInfo | None = None
error: str
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2229 | 2023-03-26T09:11:53Z | 2023-03-26T17:58:15Z | 2023-03-26T17:58:15Z | 2023-03-26T17:58:16Z | 1,834 | LAION-AI/Open-Assistant | 37,016 | |
Add new snippet: Yielding None | diff --git a/README.md b/README.md
index 670a1fb..221e03b 100755
--- a/README.md
+++ b/README.md
@@ -83,14 +83,16 @@ So, here ya go...
- [💡 Explanation:](#-explanation-18)
- [Needle in a Haystack](#needle-in-a-haystack)
- [💡 Explanation:](#-explanation-19)
- - [The surprising comma](#the-surprising-comma)
+ - [yielding None](#yielding-none)
- [💡 Explanation:](#-explanation-20)
- - [For what?](#for-what)
+ - [The surprising comma](#the-surprising-comma)
- [💡 Explanation:](#-explanation-21)
- - [not knot!](#not-knot)
+ - [For what?](#for-what)
- [💡 Explanation:](#-explanation-22)
- - [Let's see if you can guess this?](#lets-see-if-you-can-guess-this)
+ - [not knot!](#not-knot)
- [💡 Explanation:](#-explanation-23)
+ - [Let's see if you can guess this?](#lets-see-if-you-can-guess-this)
+ - [💡 Explanation:](#-explanation-24)
- [Minor Ones](#minor-ones)
- [TODO: Hell of an example!](#todo-hell-of-an-example)
- [Contributing](#contributing)
@@ -1491,6 +1493,38 @@ tuple()
---
+### yielding None
+
+Suggested by @chris-rands in [this](https://github.com/satwikkansal/wtfpython/issues/32) issue.
+
+```py
+some_iterable = ('a', 'b')
+
+def some_func(val):
+ return "something"
+```
+
+
+**Output:**
+```py
+>>> [x for x in some_iterable]
+['a', 'b']
+>>> [(yield x) for x in some_iterable]
+<generator object <listcomp> at 0x7f70b0a4ad58>
+>>> list([(yield x) for x in some_iterable])
+['a', 'b']
+>>> list((yield x) for x in some_iterable)
+['a', None, 'b', None]
+>>> list(some_func((yield x)) for x in some_iterable)
+['a', 'something', 'b', 'something']
+```
+
+#### 💡 Explanation:
+- Source and explanation can be found here: https://stackoverflow.com/questions/32139885/yield-in-list-comprehensions-and-generator-expressions
+- Related bug report: http://bugs.python.org/issue10544
+
+---
+
### The surprising comma
Suggested by @MostAwesomeDude in [this](https://github.com/satwikkansal/wtfPython/issues/1) issue.
| Closes https://github.com/satwikkansal/wtfpython/issues/32 | https://api.github.com/repos/satwikkansal/wtfpython/pulls/45 | 2017-10-11T12:15:23Z | 2017-10-11T12:19:09Z | 2017-10-11T12:19:08Z | 2017-10-11T12:19:09Z | 684 | satwikkansal/wtfpython | 25,801 |
Add recent articles and opinions | diff --git a/README.md b/README.md
index 70d537bb039ab..9e333232d2dfb 100644
--- a/README.md
+++ b/README.md
@@ -63,8 +63,19 @@ The key features are:
---
+"*If you're looking to learn one **modern framework** for building REST APIs, check out **FastAPI** [...] It's fast, easy to use and easy to learn [...]*"
+"*We've switched over to **FastAPI** for our **APIs** [...] I think you'll like it [...]*"
+<div style="text-align: right; margin-right: 10%;">Ines Montani - Matthew Honnibal - <strong><a href="https://explosion.ai" target="_blank">Explosion AI</a> founders - <a href="https://spacy.io" target="_blank">spaCy</a> creators</strong> <a href="https://twitter.com/_inesmontani/status/1144173225322143744" target="_blank"><small>(ref)</small></a> - <a href="https://twitter.com/honnibal/status/1144031421859655680" target="_blank"><small>(ref)</small></a></div>
+
+---
+
+"*We adopted the **FastAPI** library to spawn a **REST** server that can be queried to obtain **predictions**. [for Ludwig]*"
+
+<div style="text-align: right; margin-right: 10%;">Piero Molino, Yaroslav Dudin, and Sai Sumanth Miryala - <strong>Uber</strong> <a href="https://eng.uber.com/ludwig-v0-2/" target="_blank"><small>(ref)</small></a></div>
+
+---
## Requirements
diff --git a/docs/external-links.md b/docs/external-links.md
index f81a3100839cb..4bed321d4600a 100644
--- a/docs/external-links.md
+++ b/docs/external-links.md
@@ -27,6 +27,14 @@ Here's an incomplete list of some of them.
* <a href="https://medium.com/@nico.axtmann95/deploying-a-scikit-learn-model-with-onnx-und-fastapi-1af398268915" target="_blank">Deploying a scikit-learn model with ONNX and FastAPI</a> by <a href="https://www.linkedin.com/in/nico-axtmann" target="_blank">Nico Axtmann</a>.
+* <a href="https://geekflare.com/python-asynchronous-web-frameworks/" target="_blank">Top 5 Asynchronous Web Frameworks for Python</a> by <a href="https://geekflare.com/author/ankush/" target="_blank">Ankush Thakur</a> on <a href="https://geekflare.com" target="_blank">GeekFlare</a>.
+
+* <a href="https://medium.com/@gntrm/jwt-authentication-with-fastapi-and-aws-cognito-1333f7f2729e" target="_blank">JWT Authentication with FastAPI and AWS Cognito</a> by <a href="https://twitter.com/gntrm" target="_blank">Johannes Gontrum</a>.
+
+* <a href="https://towardsdatascience.com/how-to-deploy-a-machine-learning-model-dc51200fe8cf" target="_blank">How to Deploy a Machine Learning Model</a> by <a href="https://www.linkedin.com/in/mgrootendorst/" target="_blank">Maarten Grootendorst</a> on <a href="https://towardsdatascience.com/" target="_blank">Towards Data Science</a>.
+
+* <a href="https://eng.uber.com/ludwig-v0-2/" target="_blank">Uber: Ludwig v0.2 Adds New Features and Other Improvements to its Deep Learning Toolbox [including a FastAPI server]</a> on <a href="https://eng.uber.com" target="_blank">Uber Engineering</a>.
+
### Japanese
* <a href="https://qiita.com/mtitg/items/47770e9a562dd150631d" target="_blank">FastAPI|DB接続してCRUDするPython製APIサーバーを構築</a> by <a href="https://qiita.com/mtitg" target="_blank">@mtitg</a>.
diff --git a/docs/index.md b/docs/index.md
index 70d537bb039ab..9e333232d2dfb 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -63,8 +63,19 @@ The key features are:
---
+"*If you're looking to learn one **modern framework** for building REST APIs, check out **FastAPI** [...] It's fast, easy to use and easy to learn [...]*"
+"*We've switched over to **FastAPI** for our **APIs** [...] I think you'll like it [...]*"
+<div style="text-align: right; margin-right: 10%;">Ines Montani - Matthew Honnibal - <strong><a href="https://explosion.ai" target="_blank">Explosion AI</a> founders - <a href="https://spacy.io" target="_blank">spaCy</a> creators</strong> <a href="https://twitter.com/_inesmontani/status/1144173225322143744" target="_blank"><small>(ref)</small></a> - <a href="https://twitter.com/honnibal/status/1144031421859655680" target="_blank"><small>(ref)</small></a></div>
+
+---
+
+"*We adopted the **FastAPI** library to spawn a **REST** server that can be queried to obtain **predictions**. [for Ludwig]*"
+
+<div style="text-align: right; margin-right: 10%;">Piero Molino, Yaroslav Dudin, and Sai Sumanth Miryala - <strong>Uber</strong> <a href="https://eng.uber.com/ludwig-v0-2/" target="_blank"><small>(ref)</small></a></div>
+
+---
## Requirements
| :pencil: Add recent articles and opinions. | https://api.github.com/repos/tiangolo/fastapi/pulls/490 | 2019-08-31T01:30:00Z | 2019-08-31T01:35:35Z | 2019-08-31T01:35:35Z | 2019-08-31T01:35:38Z | 1,373 | tiangolo/fastapi | 22,720 |
Create binary_search_matrix.py | diff --git a/matrix/binary_search_matrix.py b/matrix/binary_search_matrix.py
new file mode 100644
index 000000000000..6f203b7a3484
--- /dev/null
+++ b/matrix/binary_search_matrix.py
@@ -0,0 +1,57 @@
+def binary_search(array: list, lower_bound: int, upper_bound: int, value: int) -> int:
+ """
+ This function carries out Binary search on a 1d array and
+ return -1 if it do not exist
+ array: A 1d sorted array
+ value : the value meant to be searched
+ >>> matrix = [1, 4, 7, 11, 15]
+ >>> binary_search(matrix, 0, len(matrix) - 1, 1)
+ 0
+ >>> binary_search(matrix, 0, len(matrix) - 1, 23)
+ -1
+ """
+
+ r = int((lower_bound + upper_bound) // 2)
+ if array[r] == value:
+ return r
+ if lower_bound >= upper_bound:
+ return -1
+ if array[r] < value:
+ return binary_search(array, r + 1, upper_bound, value)
+ else:
+ return binary_search(array, lower_bound, r - 1, value)
+
+
+def mat_bin_search(value: int, matrix: list) -> list:
+ """
+ This function loops over a 2d matrix and calls binarySearch on
+ the selected 1d array and returns [-1, -1] is it do not exist
+ value : value meant to be searched
+ matrix = a sorted 2d matrix
+ >>> matrix = [[1, 4, 7, 11, 15],
+ ... [2, 5, 8, 12, 19],
+ ... [3, 6, 9, 16, 22],
+ ... [10, 13, 14, 17, 24],
+ ... [18, 21, 23, 26, 30]]
+ >>> target = 1
+ >>> mat_bin_search(target, matrix)
+ [0, 0]
+ >>> target = 34
+ >>> mat_bin_search(target, matrix)
+ [-1, -1]
+ """
+ index = 0
+ if matrix[index][0] == value:
+ return [index, 0]
+ while index < len(matrix) and matrix[index][0] < value:
+ r = binary_search(matrix[index], 0, len(matrix[index]) - 1, value)
+ if r != -1:
+ return [index, r]
+ index += 1
+ return [-1, -1]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
| Added an algorithm to search in a matrix
### Describe your change:
Added an algorithm to search in a row-sorted matrix using Binary Search.
* [x] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| https://api.github.com/repos/TheAlgorithms/Python/pulls/6995 | 2022-10-11T12:28:22Z | 2022-10-13T20:03:15Z | 2022-10-13T20:03:15Z | 2022-10-13T20:03:22Z | 667 | TheAlgorithms/Python | 29,632 |
Adds Lisbon public transportation info | diff --git a/README.md b/README.md
index f158dcb841..f7cadc6cfa 100644
--- a/README.md
+++ b/README.md
@@ -862,6 +862,7 @@ API | Description | Auth | HTTPS | CORS |
| [Icelandic APIs](http://docs.apis.is/) | Open APIs that deliver services in or regarding Iceland | No | Yes | Unknown |
| [Indian Railways](http://api.erail.in/) | Indian Railways Information | `apiKey` | No | Unknown |
| [Izi](http://api-docs.izi.travel/) | Audio guide for travellers | `apiKey` | Yes | Unknown |
+| [Metro Lisboa](http://app.metrolisboa.pt/status/getLinhas.php) | Delays in subway lines | No | No | No |
| [Navitia](https://api.navitia.io/) | The open API for building cool stuff with transport data | `apiKey` | Yes | Unknown |
| [REFUGE Restrooms](https://www.refugerestrooms.org/api/docs/#!/restrooms) | Provides safe restroom access for transgender, intersex and gender nonconforming individuals | No | Yes | Unknown |
| [Schiphol Airport](https://developer.schiphol.nl/) | Schiphol | `apiKey` | Yes | Unknown |
@@ -881,6 +882,7 @@ API | Description | Auth | HTTPS | CORS |
| [Transport for Grenoble, France](https://www.metromobilite.fr/pages/opendata/OpenDataApi.html) | Grenoble public transport | No | No | No |
| [Transport for Honolulu, US](http://hea.thebus.org/api_info.asp) | Honolulu Transportation Information | `apiKey` | No | Unknown |
| [Transport for India](https://data.gov.in/sector/transport) | India Public Transport API | `apiKey` | Yes | Unknown |
+| [Transport for Lisbon, Portugal](https://emel.city-platform.com/opendata/) | Data about buses routes, parking and traffic | `apiKey` | Yes | Unknown |
| [Transport for London, England](https://api.tfl.gov.uk) | TfL API | No | Yes | Unknown |
| [Transport for Madrid, Spain](http://opendata.emtmadrid.es/Servicios-web/BUS) | Madrid BUS transport API | `apiKey` | No | Unknown |
| [Transport for Manchester, England](https://developer.tfgm.com/) | TfGM transport network data | `apiKey` | Yes | No |
| Thank you for taking the time to work on a Pull Request for this project!
To ensure your PR is dealt with swiftly please check the following:
- [ ] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md)
- [ ] Your additions are ordered alphabetically
- [ ] Your submission has a useful description
- [ ] The description does not end with punctuation
- [ ] Each table column should be padded with one space on either side
- [ ] You have searched the repository for any relevant issues or pull requests
- [ ] Any category you are creating has the minimum requirement of 3 items
- [ ] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/1098 | 2019-10-23T18:04:32Z | 2019-11-13T06:05:38Z | 2019-11-13T06:05:38Z | 2019-11-13T06:05:54Z | 543 | public-apis/public-apis | 35,436 |
V0.5 release | diff --git a/docs/ROADMAP.md b/docs/ROADMAP.md
index afc9ff445..3cb03f374 100644
--- a/docs/ROADMAP.md
+++ b/docs/ROADMAP.md
@@ -30,10 +30,10 @@ To reach version v0.5, approximately 70% of the following tasks need to be compl
4. Complete the design and implementation of module breakdown
5. Support various modes of memory: clearly distinguish between long-term and short-term memory
6. Perfect the test role, and carry out necessary interactions with humans
- 7. Allowing natural communication between roles (expected v0.5.0)
+ 7. ~~Allowing natural communication between roles~~ (v0.5.0)
8. Implement SkillManager and the process of incremental Skill learning (experimentation done with game agents)
9. Automatically get RPM and configure it by calling the corresponding openai page, so that each key does not need to be manually configured
- 10. IMPORTANT: Support incremental development (expected v0.5.0)
+ 10. ~~IMPORTANT: Support incremental development~~ (v0.5.0)
3. Strategies
1. Support ReAct strategy (experimentation done with game agents)
2. Support CoT strategy (experimentation done with game agents)
@@ -45,8 +45,8 @@ To reach version v0.5, approximately 70% of the following tasks need to be compl
2. Implementation: Knowledge search, supporting 10+ data formats
3. Implementation: Data EDA (expected v0.6.0)
4. Implementation: Review
- 5. Implementation: Add Document (expected v0.5.0)
- 6. Implementation: Delete Document (expected v0.5.0)
+ 5. ~~Implementation~~: Add Document (v0.5.0)
+ 6. ~~Implementation~~: Delete Document (v0.5.0)
7. Implementation: Self-training
8. ~~Implementation: DebugError~~ (v0.2.1)
9. Implementation: Generate reliable unit tests based on YAPI
diff --git a/metagpt/roles/role.py b/metagpt/roles/role.py
index 1e7ebf711..48688ad5f 100644
--- a/metagpt/roles/role.py
+++ b/metagpt/roles/role.py
@@ -25,9 +25,8 @@
from pydantic import BaseModel, Field
-from metagpt.actions import Action, ActionOutput
+from metagpt.actions import Action, ActionOutput, UserRequirement
from metagpt.actions.action_node import ActionNode
-from metagpt.actions.add_requirement import UserRequirement
from metagpt.llm import LLM, HumanProvider
from metagpt.logs import logger
from metagpt.memory import Memory
@@ -127,17 +126,7 @@ def history(self) -> list[Message]:
return self.memory.get()
-class _RoleInjector(type):
- def __call__(cls, *args, **kwargs):
- instance = super().__call__(*args, **kwargs)
-
- if not instance._rc.watch:
- instance._watch([UserRequirement])
-
- return instance
-
-
-class Role(metaclass=_RoleInjector):
+class Role:
"""Role/Agent"""
def __init__(self, name="", profile="", goal="", constraints="", desc="", is_human=False):
@@ -149,10 +138,9 @@ def __init__(self, name="", profile="", goal="", constraints="", desc="", is_hum
self._states = []
self._actions = []
self._role_id = str(self._setting)
- self._rc = RoleContext()
+ self._rc = RoleContext(watch={any_to_str(UserRequirement)})
self._subscription = {any_to_str(self), name} if name else {any_to_str(self)}
-
def _reset(self):
self._states = []
self._actions = []
@@ -203,8 +191,7 @@ def _watch(self, actions: Iterable[Type[Action]]):
"""Watch Actions of interest. Role will select Messages caused by these Actions from its personal message
buffer during _observe.
"""
- tags = {any_to_str(t) for t in actions}
- self._rc.watch.update(tags)
+ self._rc.watch = {any_to_str(t) for t in actions}
# check RoleContext after adding watch actions
self._rc.check(self._role_id)
@@ -401,6 +388,8 @@ async def run(self, with_message=None):
msg = with_message
elif isinstance(with_message, list):
msg = Message("\n".join(with_message))
+ if not msg.cause_by:
+ msg.cause_by = UserRequirement
self.put_message(msg)
if not await self._observe():
diff --git a/metagpt/schema.py b/metagpt/schema.py
index 5aec378e4..758149efa 100644
--- a/metagpt/schema.py
+++ b/metagpt/schema.py
@@ -121,10 +121,6 @@ def __init__(
:param send_to: Specifies the target recipient or consumer for message delivery in the environment.
:param role: Message meta info tells who sent this message.
"""
- if not cause_by:
- from metagpt.actions import UserRequirement
- cause_by = UserRequirement
-
super().__init__(
id=uuid.uuid4().hex,
content=content,
diff --git a/metagpt/startup.py b/metagpt/startup.py
index f930c386b..e886ad2a4 100644
--- a/metagpt/startup.py
+++ b/metagpt/startup.py
@@ -26,7 +26,7 @@ def startup(
),
reqa_file: str = typer.Option(default="", help="Specify the source file name for rewriting the quality test code."),
max_auto_summarize_code: int = typer.Option(
- default=-1,
+ default=0,
help="The maximum number of times the 'SummarizeCode' action is automatically invoked, with -1 indicating unlimited. This parameter is used for debugging the workflow.",
),
):
diff --git a/metagpt/team.py b/metagpt/team.py
index a5c405f80..5ce07ef13 100644
--- a/metagpt/team.py
+++ b/metagpt/team.py
@@ -3,10 +3,11 @@
"""
@Time : 2023/5/12 00:30
@Author : alexanderwu
-@File : software_company.py
+@File : team.py
@Modified By: mashenquan, 2023/11/27. Add an archiving operation after completing the project, as specified in
Section 2.2.3.3 of RFC 135.
"""
+import warnings
from pydantic import BaseModel, Field
from metagpt.actions import UserRequirement
@@ -47,7 +48,7 @@ def _check_balance(self):
raise NoMoneyException(CONFIG.total_cost, f"Insufficient funds: {CONFIG.max_budget}")
def run_project(self, idea, send_to: str = ""):
- """Start a project from publishing user requirement."""
+ """Run a project from publishing user requirement."""
self.idea = idea
# Human requirement.
@@ -55,6 +56,16 @@ def run_project(self, idea, send_to: str = ""):
Message(role="Human", content=idea, cause_by=UserRequirement, send_to=send_to or MESSAGE_ROUTE_TO_ALL)
)
+ def start_project(self, idea, send_to: str = ""):
+ """
+ Deprecated: This method will be removed in the future.
+ Please use the `run_project` method instead.
+ """
+ warnings.warn("The 'start_project' method is deprecated and will be removed in the future. "
+ "Please use the 'run_project' method instead.",
+ DeprecationWarning, stacklevel=2)
+ return self.run_project(idea=idea, send_to=send_to)
+
def _save(self):
logger.info(self.json(ensure_ascii=False))
diff --git a/setup.py b/setup.py
index 730fffd35..57290f4cd 100644
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,7 @@ def run(self):
setup(
name="metagpt",
- version="0.5.0",
+ version="0.5.2",
description="The Multi-Role Meta Programming Framework",
long_description=long_description,
long_description_content_type="text/markdown",
diff --git a/tests/metagpt/test_role.py b/tests/metagpt/test_role.py
index 8fac2503c..611d321fc 100644
--- a/tests/metagpt/test_role.py
+++ b/tests/metagpt/test_role.py
@@ -14,11 +14,11 @@
import pytest
from pydantic import BaseModel
-from metagpt.actions import Action, ActionOutput
+from metagpt.actions import Action, ActionOutput, UserRequirement
from metagpt.environment import Environment
from metagpt.roles import Role
from metagpt.schema import Message
-from metagpt.utils.common import get_class_name
+from metagpt.utils.common import any_to_str, get_class_name
class MockAction(Action):
@@ -60,7 +60,7 @@ class Input(BaseModel):
name=seed.name, profile=seed.profile, goal=seed.goal, constraints=seed.constraints, desc=seed.desc
)
role.subscribe({seed.subscription})
- assert role._rc.watch == set({})
+ assert role._rc.watch == {any_to_str(UserRequirement)}
assert role.name == seed.name
assert role.profile == seed.profile
assert role._setting.goal == seed.goal
| Update version and fix bugs encountered in v0.5 | https://api.github.com/repos/geekan/MetaGPT/pulls/583 | 2023-12-19T06:14:22Z | 2023-12-19T16:03:26Z | 2023-12-19T16:03:26Z | 2023-12-19T16:03:26Z | 2,193 | geekan/MetaGPT | 16,909 |
No more False Positive on Tinder | diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json
index 0f5c60f71..60bca44bf 100644
--- a/sherlock/resources/data.json
+++ b/sherlock/resources/data.json
@@ -1697,7 +1697,7 @@
"username_unclaimed": "noonewouldeverusethis7"
},
"Tinder": {
- "errorMsg": "Looking for Someone?",
+ "errorMsg": "<title data-react-helmet=\"true\">Tinder | Match. Chat. Date.</title>",
"errorType": "message",
"rank": 1149,
"url": "https://www.gotinder.com/@{}",
| Tinder Title Hack. No JS Processing needed. | https://api.github.com/repos/sherlock-project/sherlock/pulls/697 | 2020-08-06T09:49:14Z | 2020-08-06T10:04:41Z | 2020-08-06T10:04:41Z | 2020-08-06T10:04:41Z | 159 | sherlock-project/sherlock | 36,398 |
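A rough sketch of how an `"errorType": "message"` entry like this one is evaluated (the matching logic is simplified here; Sherlock's real detector lives elsewhere in the code base): the username is treated as unclaimed when the fixed marker string appears in the fetched HTML.

```python
# The new marker from this PR: the generic <title> Tinder serves for unknown users.
ERROR_MSG = '<title data-react-helmet="true">Tinder | Match. Chat. Date.</title>'

def username_claimed(html: str) -> bool:
    # "message" error type: the profile exists iff the error marker is absent.
    return ERROR_MSG not in html

assert not username_claimed(f"<html><head>{ERROR_MSG}</head></html>")
assert username_claimed("<html><head><title>@someuser</title></head></html>")
```

Matching on the server-rendered title avoids executing any JavaScript at all.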
Update zep_memory.ipynb | diff --git a/docs/docs/integrations/memory/zep_memory.ipynb b/docs/docs/integrations/memory/zep_memory.ipynb
index 9ec0757bbf84ef..286e848644be5e 100644
--- a/docs/docs/integrations/memory/zep_memory.ipynb
+++ b/docs/docs/integrations/memory/zep_memory.ipynb
@@ -12,10 +12,10 @@
"\n",
"Key Features:\n",
"\n",
- "- **Fast!** Zep’s async extractors operate independently of the your chat loop, ensuring a snappy user experience.\n",
+ "- **Fast!** Zep’s async extractors operate independently of your chat loop, ensuring a snappy user experience.\n",
"- **Long-term memory persistence**, with access to historical messages irrespective of your summarization strategy.\n",
"- **Auto-summarization** of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.\n",
- "- **Hybrid search** over memories and metadata, with messages automatically embedded on creation.\n",
+ "- **Hybrid search** over memories and metadata, with messages automatically embedded upon creation.\n",
"- **Entity Extractor** that automatically extracts named entities from messages and stores them in the message metadata.\n",
"- **Auto-token counting** of memories and summaries, allowing finer-grained control over prompt assembly.\n",
"- Python and JavaScript SDKs.\n",
| fixed minor typos;
the your > your
on > upon | https://api.github.com/repos/langchain-ai/langchain/pulls/11713 | 2023-10-12T12:59:46Z | 2023-10-12T14:41:19Z | 2023-10-12T14:41:19Z | 2023-10-12T14:41:20Z | 325 | langchain-ai/langchain | 42,921 |
Performance improvement for most scenes | diff --git a/manimlib/camera/camera.py b/manimlib/camera/camera.py
index b6bb4be4b9..94c111eb23 100644
--- a/manimlib/camera/camera.py
+++ b/manimlib/camera/camera.py
@@ -338,15 +338,15 @@ def set_cairo_context_path(self, ctx, vmobject):
return
ctx.new_path()
- subpaths = vmobject.get_subpaths_from_points(points)
+ subpaths = vmobject.gen_subpaths_from_points_2d(points)
for subpath in subpaths:
- quads = vmobject.get_cubic_bezier_tuples_from_points(subpath)
+ quads = vmobject.gen_cubic_bezier_tuples_from_points(subpath)
ctx.new_sub_path()
start = subpath[0]
ctx.move_to(*start[:2])
for p0, p1, p2, p3 in quads:
ctx.curve_to(*p1[:2], *p2[:2], *p3[:2])
- if vmobject.consider_points_equals(subpath[0], subpath[-1]):
+ if vmobject.consider_points_equals_2d(subpath[0], subpath[-1]):
ctx.close_path()
return self
@@ -549,7 +549,7 @@ def adjust_out_of_range_points(self, points):
def transform_points_pre_display(self, mobject, points):
# Subclasses (like ThreeDCamera) may want to
# adjust points futher before they're shown
- if np.any(np.isnan(points)) or np.any(points == np.inf):
+ if not np.all(np.isfinite(points)):
# TODO, print some kind of warning about
# mobject having invalid points?
points = np.zeros((1, 3))
diff --git a/manimlib/mobject/types/vectorized_mobject.py b/manimlib/mobject/types/vectorized_mobject.py
index 29b91e544d..25ba527dad 100644
--- a/manimlib/mobject/types/vectorized_mobject.py
+++ b/manimlib/mobject/types/vectorized_mobject.py
@@ -595,35 +595,69 @@ def consider_points_equals(self, p0, p1):
atol=self.tolerance_for_point_equality
)
+ def consider_points_equals_2d(self, p0, p1):
+ """
+ Determine if two points are close enough to be considered equal.
+
+ This uses the algorithm from np.isclose(), but expanded here for the
+ 2D point case. NumPy is overkill for such a small question.
+ """
+ rtol = 1.e-5 # default from np.isclose()
+ atol = self.tolerance_for_point_equality
+ if abs(p0[0] - p1[0]) > atol + rtol * abs(p1[0]):
+ return False
+ if abs(p0[1] - p1[1]) > atol + rtol * abs(p1[1]):
+ return False
+ return True
+
# Information about line
def get_cubic_bezier_tuples_from_points(self, points):
+ return np.array(list(self.gen_cubic_bezier_tuples_from_points(points)))
+
+ def gen_cubic_bezier_tuples_from_points(self, points):
+ """
+ Get a generator for the cubic bezier tuples of this object.
+
+ Generator to not materialize a list or np.array needlessly.
+ """
nppcc = VMobject.CONFIG["n_points_per_cubic_curve"]
remainder = len(points) % nppcc
points = points[:len(points) - remainder]
- return np.array([
+ return (
points[i:i + nppcc]
for i in range(0, len(points), nppcc)
- ])
+ )
def get_cubic_bezier_tuples(self):
return self.get_cubic_bezier_tuples_from_points(
self.get_points()
)
- def get_subpaths_from_points(self, points):
+ def _gen_subpaths_from_points(self, points, filter_func):
nppcc = self.n_points_per_cubic_curve
- split_indices = filter(
- lambda n: not self.consider_points_equals(
- points[n - 1], points[n]
- ),
- range(nppcc, len(points), nppcc)
- )
+ split_indices = filter(filter_func, range(nppcc, len(points), nppcc))
split_indices = [0] + list(split_indices) + [len(points)]
- return [
+ return (
points[i1:i2]
for i1, i2 in zip(split_indices, split_indices[1:])
if (i2 - i1) >= nppcc
- ]
+ )
+
+ def get_subpaths_from_points(self, points):
+ return list(
+ self._gen_subpaths_from_points(
+ points,
+ lambda n: not self.consider_points_equals(
+ points[n - 1], points[n]
+ ))
+ )
+
+ def gen_subpaths_from_points_2d(self, points):
+ return self._gen_subpaths_from_points(
+ points,
+ lambda n: not self.consider_points_equals_2d(
+ points[n - 1], points[n]
+ ))
def get_subpaths(self):
return self.get_subpaths_from_points(self.get_points())
diff --git a/perf_scenes.py b/perf_scenes.py
new file mode 100644
index 0000000000..80d468e7b0
--- /dev/null
+++ b/perf_scenes.py
@@ -0,0 +1,88 @@
+from manimlib.imports import *
+
+"""
+A set of scenes to be used for performance testing of Manim.
+"""
+
+
+class Perf1(GraphScene):
+ """
+ A simple scene of two animations from the end of a video on recursion.
+
+ - Uses a graph in 1/4 of the scene.
+ - First fades in multiple lines of text and equations, and the graph axes.
+ - Next animates creation of two graphs and the creation of their text
+ labels.
+ """
+ CONFIG = {
+ "x_axis_label":
+ "$n$",
+ "y_axis_label":
+ "$time$",
+ "x_axis_width":
+ FRAME_HEIGHT,
+ "y_axis_height":
+ FRAME_HEIGHT / 2,
+ "y_max":
+ 50,
+ "y_min":
+ 0,
+ "x_max":
+ 100,
+ "x_min":
+ 0,
+ "x_labeled_nums": [50, 100],
+ "y_labeled_nums":
+ range(0, 51, 10),
+ "y_tick_frequency":
+ 10,
+ "x_tick_frequency":
+ 10,
+ "axes_color":
+ BLUE,
+ "graph_origin":
+ np.array(
+ (-FRAME_X_RADIUS + LARGE_BUFF, -FRAME_Y_RADIUS + LARGE_BUFF, 0))
+ }
+
+ def construct(self):
+ t1 = TextMobject(
+ "Dividing a problem in half over and over means\\\\"
+ "the work done is proportional to $\\log_2{n}$").to_edge(UP)
+
+ t2 = TextMobject(
+ '\\textit{This is one of our\\\\favorite things to do in CS!}')
+ t2.to_edge(RIGHT)
+
+ t3 = TextMobject(
+ 'The new \\texttt{power(x,n)} is \\underline{much}\\\\better than the old!'
+ )
+ t3.scale(0.8)
+ p1f = TexMobject('x^n=x \\times x^{n-1}').set_color(ORANGE)
+ t4 = TextMobject('\\textit{vs.}').scale(0.8)
+ p2f = TexMobject(
+ 'x^n=x^{\\frac{n}{2}} \\times x^{\\frac{n}{2}}').set_color(GREEN)
+ p1v2g = VGroup(t3, p1f, t4, p2f).arrange(DOWN).center().to_edge(RIGHT)
+
+ self.setup_axes()
+ o_n = self.get_graph(lambda x: x, color=ORANGE, x_min=1, x_max=50)
+ o_log2n = self.get_graph(lambda x: math.log2(x),
+ color=GREEN,
+ x_min=2,
+ x_max=90)
+ onl = TexMobject('O(n)')
+ olog2nl = TexMobject('O(\\log_2{n})')
+ onl.next_to(o_n.get_point_from_function(0.6), UL)
+ olog2nl.next_to(o_log2n.get_point_from_function(0.8), UP)
+ self.play(
+ FadeIn(t1),
+ FadeIn(self.axes),
+ # FadeInFromDown(t2),
+ FadeIn(p1v2g),
+ )
+ self.play(ShowCreation(o_n),
+ ShowCreation(o_log2n),
+ ShowCreation(onl),
+ ShowCreation(olog2nl),
+ run_time=3)
+ self.wait(duration=5)
| tl;dr: this is a significant performance improvement for many scenes. 1.7x - 2.6x improvement in animation it/s.
This is a small change to some of the hottest paths in rendering objects. The biggest win comes from not using np.allclose() to check if two points are close enough. In general, NumPy is awesome at operating on large arrays, but overkill for very tiny questions like this. I created a small function to determine if two points are close using the same algorithm, and limited it to 2D points since that's all we need in set_cairo_context_path().
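A sketch of that function, assuming np.isclose()'s default tolerances (this mirrors the `consider_points_equals_2d` added in the diff, minus the class plumbing):

```python
def consider_points_equals_2d(p0, p1, atol=1e-8, rtol=1e-5):
    # Same algorithm as np.isclose(), unrolled for a single 2D point pair;
    # no array allocation, no ufunc dispatch.
    if abs(p0[0] - p1[0]) > atol + rtol * abs(p1[0]):
        return False
    if abs(p0[1] - p1[1]) > atol + rtol * abs(p1[1]):
        return False
    return True
```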
A couple of other minor tweaks to reduce or eliminate other uses of NumPy in this path.
In general, it is better to avoid wrapping lists in np.array when a real NumPy array isn't actually needed.
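For example, the bezier-tuple grouping only ever gets iterated once by the cairo path writer, so a generator of slices is enough (illustrative sketch, not the exact source):

```python
def gen_tuples(points, n=4):
    # Drop the trailing remainder, then yield consecutive groups of n points
    # lazily instead of materializing them all in an np.array.
    usable = len(points) - len(points) % n
    return (points[i:i + n] for i in range(0, usable, n))

quads = list(gen_tuples(list(range(12))))
```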
Added a new file for performance test scenes, with a single scene from the end of a video I've been working on.
Data:
MacBook Pro (16-inch, 2019)
macOS Catalina 10.15.4
2.4 GHz 8-Core Intel Core i9
64 GB 2667 MHz DDR4
Python 3.7.3 (default, Mar 6 2020, 22:34:30)
Profiler: yappi under Pycharm.
Using the scene Perf1 from the included perf_scenes.py, averaged over 5 runs and rendered with:
manim.py perf_scenes.py Perf1 -pl --leave_progress_bars
Before:
Animation 0: FadeInTextMobject, etc.: 8.93it/s
Animation 1: ShowCreationParametricFunction, etc.: 84.66it/s
Profiler shows 48.5% of the run spent under Camera.set_cairo_context_path()
After
Animation 0: FadeInTextMobject, etc.: 23.45it/s -- 2.63x improvement
Animation 1: ShowCreationParametricFunction, etc.: 149.62it/s -- 1.77x improvement
Profiler shows 19.9% of the run spent under Camera.set_cairo_context_path()
Less improvement with production-quality renders, and percent improvement varies with scene of course. This appears to be a good win for every scene I'm working on though. I hope it will be for others, too.
NB: there are more perf improvements to be had, of course, but this is the best one I currently have.
| https://api.github.com/repos/3b1b/manim/pulls/974 | 2020-04-12T08:28:02Z | 2020-04-25T04:04:09Z | 2020-04-25T04:04:09Z | 2020-04-25T04:04:26Z | 2,085 | 3b1b/manim | 18,178 |
[deepspeed] check whether model is NLP one instead of counting on input type | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 079696c244fef..d34449fa57db9 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -2562,8 +2562,8 @@ def _prepare_input(self, data: Union[torch.Tensor, Any]) -> Union[torch.Tensor,
return type(data)(self._prepare_input(v) for v in data)
elif isinstance(data, torch.Tensor):
kwargs = {"device": self.args.device}
- if self.deepspeed and data.dtype != torch.int64:
- # NLP models inputs are int64 and those get adjusted to the right dtype of the
+ if self.deepspeed and (torch.is_floating_point(data) or torch.is_complex(data)):
+ # NLP models inputs are int/uint and those get adjusted to the right dtype of the
# embedding. Other models such as wav2vec2's inputs are already float and thus
# may need special handling to match the dtypes of the model
kwargs.update({"dtype": self.args.hf_deepspeed_config.dtype()})
| # What does this PR do?
This PR intends to fix an issue where training of an NLP model fails if the input dtype isn't int64.
My dataset had dtype = int32. Everything was fine until I decided to add DeepSpeed.
It turned out that the trainer relies on the dtype and converts the input data to hf_deepspeed_config.dtype if it isn't int64.
I guess it should check whether the first layer is an Embedding instead.
I think this PR also needs tests, but I need advice on how we can cover this case.
@stas00 could you be so kind and review this PR and give an advice on whether tests are necessary and their implementation ? | https://api.github.com/repos/huggingface/transformers/pulls/21800 | 2023-02-25T11:02:29Z | 2023-03-01T12:41:36Z | 2023-03-01T12:41:36Z | 2023-03-01T12:43:58Z | 258 | huggingface/transformers | 12,913 |
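A torch-free sketch of the behavioural change (dtype names as strings stand in for real `torch.dtype` objects):

```python
FLOATING = {"float16", "bfloat16", "float32", "float64"}
COMPLEX = {"complex64", "complex128"}

def old_should_cast(dtype):
    # Before the fix: anything that wasn't int64 got cast to the deepspeed dtype.
    return dtype != "int64"

def new_should_cast(dtype):
    # After the fix: only floating/complex inputs are cast, mirroring
    # torch.is_floating_point(data) or torch.is_complex(data).
    return dtype in FLOATING or dtype in COMPLEX

assert old_should_cast("int32")       # the reported bug: int32 inputs were cast
assert not new_should_cast("int32")   # integer/uint inputs now pass through
assert new_should_cast("float32")
```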
fix import order | diff --git a/tools/infer/predict_cls.py b/tools/infer/predict_cls.py
index ab3f4b04f0..ed2f47c04d 100755
--- a/tools/infer/predict_cls.py
+++ b/tools/infer/predict_cls.py
@@ -16,7 +16,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '../..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer/predict_det.py b/tools/infer/predict_det.py
index 95a099451b..2c389f0e49 100755
--- a/tools/infer/predict_det.py
+++ b/tools/infer/predict_det.py
@@ -16,7 +16,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '../..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer/predict_e2e.py b/tools/infer/predict_e2e.py
index c00d101aa6..fb2859f0c7 100755
--- a/tools/infer/predict_e2e.py
+++ b/tools/infer/predict_e2e.py
@@ -16,7 +16,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '../..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer/predict_rec.py b/tools/infer/predict_rec.py
index 575e1925c9..eebb2b3ba4 100755
--- a/tools/infer/predict_rec.py
+++ b/tools/infer/predict_rec.py
@@ -16,7 +16,7 @@
from PIL import Image
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '../..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer/predict_system.py b/tools/infer/predict_system.py
index b4e316d6a5..63b635c111 100755
--- a/tools/infer/predict_system.py
+++ b/tools/infer/predict_system.py
@@ -17,7 +17,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '../..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_cls.py b/tools/infer_cls.py
index ab6a49120b..4be30bbb3c 100755
--- a/tools/infer_cls.py
+++ b/tools/infer_cls.py
@@ -23,7 +23,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_det.py b/tools/infer_det.py
index 9d2daf13ad..1acecedf3e 100755
--- a/tools/infer_det.py
+++ b/tools/infer_det.py
@@ -23,7 +23,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_e2e.py b/tools/infer_e2e.py
index 96dbac8e83..f3d5712fdd 100755
--- a/tools/infer_e2e.py
+++ b/tools/infer_e2e.py
@@ -23,7 +23,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_kie.py b/tools/infer_kie.py
index 16294e59cc..0cb0b8702c 100755
--- a/tools/infer_kie.py
+++ b/tools/infer_kie.py
@@ -24,7 +24,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_rec.py b/tools/infer_rec.py
index adc3c1c3c4..02b3afd8a1 100755
--- a/tools/infer_rec.py
+++ b/tools/infer_rec.py
@@ -24,7 +24,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_table.py b/tools/infer_table.py
index c73e384046..66c2da4421 100644
--- a/tools/infer_table.py
+++ b/tools/infer_table.py
@@ -24,7 +24,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
diff --git a/tools/infer_vqa_token_ser.py b/tools/infer_vqa_token_ser.py
index 5859c28f92..83ed72b392 100755
--- a/tools/infer_vqa_token_ser.py
+++ b/tools/infer_vqa_token_ser.py
@@ -23,7 +23,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
import cv2
diff --git a/tools/infer_vqa_token_ser_re.py b/tools/infer_vqa_token_ser_re.py
index fd62ace8ae..1e5f6f76d6 100755
--- a/tools/infer_vqa_token_ser_re.py
+++ b/tools/infer_vqa_token_ser_re.py
@@ -23,7 +23,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'
import cv2
diff --git a/tools/train.py b/tools/train.py
index 506e0f7fa8..f6cd0e7d12 100755
--- a/tools/train.py
+++ b/tools/train.py
@@ -21,7 +21,7 @@
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
+sys.path.insert(0, os.path.abspath(os.path.join(__dir__, '..')))
import yaml
import paddle
 | When the environment path contains multiple `tools` directories, we need to make sure the `tools` directory of the current run is at the front of the path, so that its functions and other contents can be imported successfully. | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/5628 | 2022-03-04T07:10:22Z | 2022-03-04T08:13:54Z | 2022-03-04T08:13:54Z | 2022-03-04T08:13:54Z | 1,892 | PaddlePaddle/PaddleOCR | 42,101 |
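A toy model of why `insert(0, ...)` matters here (the directory names below are made up; the import system scans `sys.path` front to back and the first match wins):

```python
def resolve(search_path, providers):
    # Mimic the front-to-back scan Python's import system performs.
    for directory in search_path:
        if directory in providers:
            return providers[directory]
    raise ImportError("module not found")

providers = {"/env/other/tools": "other copy", "/repo/tools": "repo copy"}

# sys.path.append(repo_root): the repo's tools dir is shadowed.
assert resolve(["/env/other/tools", "/repo/tools"], providers) == "other copy"
# sys.path.insert(0, repo_root): the repo's tools dir is found first.
assert resolve(["/repo/tools", "/env/other/tools"], providers) == "repo copy"
```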
Add a `whois` rule | diff --git a/thefuck/rules/whois.py b/thefuck/rules/whois.py
new file mode 100644
index 000000000..f019758ec
--- /dev/null
+++ b/thefuck/rules/whois.py
@@ -0,0 +1,30 @@
+from urllib.parse import urlparse
+
+
+def match(command, settings):
+ """
+ What the `whois` command returns depends on the 'Whois server' it contacted
+ and is not consistent through different servers. But there can be only two
+ types of errors I can think of with `whois`:
+ - `whois https://en.wikipedia.org/` → `whois en.wikipedia.org`;
+ - `whois en.wikipedia.org` → `whois wikipedia.org`.
+ So we match any `whois` command and then:
+ - if there is a slash: keep only the FQDN;
+ - if there is no slash but there is a point: removes the left-most
+ subdomain.
+
+ We cannot either remove all subdomains because we cannot know which part is
+ the subdomains and which is the domain, consider:
+ - www.google.fr → subdomain: www, domain: 'google.fr';
+ - google.co.uk → subdomain: None, domain; 'google.co.uk'.
+ """
+ return 'whois' in command.script
+
+
+def get_new_command(command, settings):
+ url = command.script.split()[1]
+
+ if '/' in command.script:
+ return 'whois ' + urlparse(url).netloc
+ elif '.' in command.script:
+ return 'whois ' + '.'.join(urlparse(url).path.split('.')[1:])
 | What the `whois` command returns depends on the 'Whois server' it contacted and is not consistent across different servers. But there are only two types of errors I can think of with `whois`:
- `whois https://en.wikipedia.org/` → `whois en.wikipedia.org`;
- `whois en.wikipedia.org` → `whois wikipedia.org`.
So we match any `whois` command and then:
- if there is a slash: keep only the FQDN;
- if there is no slash but there is a dot: remove the left-most subdomain.
We cannot simply remove all subdomains either, because we cannot know which part is the subdomain and which is the domain; consider:
- www.google.fr → subdomain: www, domain: 'google.fr';
- google.co.uk → subdomain: None, domain: 'google.co.uk'.
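Both rewrites can be checked directly with `urlparse` from the standard library:

```python
from urllib.parse import urlparse

with_slash = "https://en.wikipedia.org/wiki/Main_Page"
# Slash present: keep only the FQDN.
assert urlparse(with_slash).netloc == "en.wikipedia.org"

no_slash = "en.wikipedia.org"
# No scheme and no slash: urlparse puts the whole string in .path,
# and dropping the left-most label removes one subdomain level.
assert ".".join(urlparse(no_slash).path.split(".")[1:]) == "wikipedia.org"
```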
| https://api.github.com/repos/nvbn/thefuck/pulls/197 | 2015-05-15T16:42:39Z | 2015-05-15T17:13:22Z | 2015-05-15T17:13:22Z | 2015-05-15T17:13:37Z | 385 | nvbn/thefuck | 30,647 |
GitHub Action to lint Python code | diff --git a/.github/workflows/lint_python.yml b/.github/workflows/lint_python.yml
new file mode 100644
index 00000000000..2857808b883
--- /dev/null
+++ b/.github/workflows/lint_python.yml
@@ -0,0 +1,21 @@
+name: lint_python
+on: [pull_request, push]
+jobs:
+ lint_python:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+ - uses: actions/setup-python@v2
+ - run: pip install bandit black codespell flake8 isort mypy pytest pyupgrade safety
+ - run: bandit --recursive --skip B101,B108,B301,B403,B404,B603 .
+ - run: black --check . || true
+ - run: codespell --ignore-words-list=nd,reacher,thist,ths -w
+ - run: flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+ - run: flake8 . --count --exit-zero --max-complexity=10 --max-line-length=88 --show-source --statistics
+ - run: isort --check-only --profile black . || true
+ - run: pip install -e .[all]
+ - run: mypy --install-types --non-interactive . || true
+ - run: pytest . || true
+ - run: pytest --doctest-modules . || true
+ - run: shopt -s globstar && pyupgrade --py36-plus **/*.py || true
+ - run: safety check
| Output: https://github.com/cclauss/gym/actions | https://api.github.com/repos/openai/gym/pulls/2258 | 2021-07-27T05:27:40Z | 2021-07-27T18:16:01Z | 2021-07-27T18:16:00Z | 2021-07-27T18:32:56Z | 377 | openai/gym | 5,334 |
Fix train_mem for the upstream changes | diff --git a/fastchat/train/llama_flash_attn_monkey_patch.py b/fastchat/train/llama_flash_attn_monkey_patch.py
index d16a001ed2..f87b76e19e 100644
--- a/fastchat/train/llama_flash_attn_monkey_patch.py
+++ b/fastchat/train/llama_flash_attn_monkey_patch.py
@@ -14,8 +14,9 @@
def forward(
self,
hidden_states: torch.Tensor,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
attention_mask: Optional[torch.Tensor] = None,
+ position_ids: Optional[torch.Tensor] = None,
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: bool = False,
use_cache: bool = False,
) -> Tuple[torch.Tensor, Optional[torch.Tensor],
@@ -26,30 +27,20 @@ def forward(
"""
bsz, q_len, _ = hidden_states.size()
- query_states = self.q_proj(hidden_states).view(
- bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- key_states = self.k_proj(hidden_states).view(
- bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- value_states = self.v_proj(hidden_states).view(
- bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
+ query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
+ key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
+ value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
# [bsz, q_len, nh, hd]
# [bsz, nh, q_len, hd]
kv_seq_len = key_states.shape[-2]
- offset = 0
- if past_key_value is not None:
- offset = past_key_value[0].shape[-2]
- kv_seq_len += offset
+ assert past_key_value is None, "past_key_value is not supported"
+
cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
- query_states, key_states = apply_rotary_pos_emb(query_states,
- key_states,
- cos,
- sin,
- offset=offset)
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
# [bsz, nh, t, hd]
assert not output_attentions, "output_attentions is not supported"
assert not use_cache, "use_cache is not supported"
- assert past_key_value is None, "past_key_value is not supported"
# Flash attention codes from
# https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/flash_attention.py
 | This fixes the training code after the upstream change to the `LlamaAttention` module.
Fixes #187
Tested:
- [x] `sky launch -c vicuna-7b ./scripts/train-vicuna.yaml --cloud gcp --env WANDB_API_KEY --env MODEL_SIZE=7 --cloud gcp -s` | https://api.github.com/repos/lm-sys/FastChat/pulls/189 | 2023-04-05T00:17:27Z | 2023-04-05T02:38:04Z | 2023-04-05T02:38:04Z | 2023-04-05T02:38:06Z | 693 | lm-sys/FastChat | 41,440 |
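The failure mode here is a signature mismatch between the monkey patch and the new upstream call site, which now passes `position_ids`. A toy reproduction (no transformers involved; names simplified):

```python
class UpstreamDecoderLayer:
    """Stand-in for the upstream code that now forwards position_ids."""
    def __init__(self, attn):
        self.attn = attn

    def run(self):
        return self.attn.forward("hidden", attention_mask=None,
                                 position_ids=[0, 1, 2])

class PatchedAttention:
    # The patched forward must accept the new keyword (and keep the
    # upstream argument order) or every call raises TypeError.
    def forward(self, hidden_states, attention_mask=None, position_ids=None,
                past_key_value=None, output_attentions=False, use_cache=False):
        assert past_key_value is None, "past_key_value is not supported"
        return hidden_states, position_ids

assert UpstreamDecoderLayer(PatchedAttention()).run() == ("hidden", [0, 1, 2])
```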
Change Crossbar from websocket to networking category according to author | diff --git a/README.md b/README.md
index 991f53680..45e3f612e 100644
--- a/README.md
+++ b/README.md
@@ -612,6 +612,7 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by
* [eventlet](http://eventlet.net/) - Asynchronous framework with WSGI support.
* [pyzmq](http://zeromq.github.io/pyzmq/) - A Python wrapper for the 0MQ message library.
* [txZMQ](https://github.com/smira/txZMQ) - Twisted based wrapper for the 0MQ message library.
+* [Crossbar](http://crossbar.io) - Open-source Unified Application Router (Websocket & WAMP for Python on Autobahn).
## WebSocket
@@ -619,7 +620,6 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by
* [AutobahnPython](https://github.com/tavendo/AutobahnPython) - WebSocket & WAMP for Python on Twisted and [asyncio](https://docs.python.org/3/library/asyncio.html).
* [WebSocket-for-Python](https://github.com/Lawouach/WebSocket-for-Python) - WebSocket client and server library for Python 2 and 3 as well as PyPy.
-* [Crossbar](http://crossbar.io) - Open-source Unified Application Router (Websocket & WAMP for Python on Autobahn).
## WSGI Servers
| Change Crossbar from websocket to networking category according to the author.
| https://api.github.com/repos/vinta/awesome-python/pulls/161 | 2014-07-22T17:21:08Z | 2014-07-22T23:00:17Z | 2014-07-22T23:00:17Z | 2014-07-22T23:00:17Z | 332 | vinta/awesome-python | 27,108 |
F.60: Remove C-style cast (T&) from example of invalid C++ | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index f7c7a2e85..ef3a7f1ba 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -3439,7 +3439,7 @@ Sometimes having `nullptr` as an alternative to indicated "no object" is useful,
##### Note
-It is possible, but not valid C++ to construct a reference that is essentially a `nullptr` (e.g., `T* p = nullptr; T& r = (T&)*p;`).
+It is possible, but not valid C++ to construct a reference that is essentially a `nullptr` (e.g., `T* p = nullptr; T& r = *p;`).
That error is very uncommon.
##### Note
 | The C-style cast in the example of constructing a `nullptr` reference, in "F.60: Prefer T* over T& when "no argument" is a valid option", appears unnecessary. The example would also compile without any cast.
Of course, `T* p = nullptr; T& r = *p;` is still not valid C++, but that is in line with the intention of the example.
Fix timeseries_dataset_from_array counts when sequence_stride > 1 | diff --git a/keras/utils/timeseries_dataset.py b/keras/utils/timeseries_dataset.py
index a53860ec98e..60c37b116d9 100644
--- a/keras/utils/timeseries_dataset.py
+++ b/keras/utils/timeseries_dataset.py
@@ -84,7 +84,7 @@ def timeseries_dataset_from_array(
Example 1:
- Consider indices `[0, 1, ... 99]`.
+ Consider indices `[0, 1, ... 98]`.
With `sequence_length=10, sampling_rate=2, sequence_stride=3`,
`shuffle=False`, the dataset will yield batches of sequences
composed of the following indices:
@@ -97,9 +97,9 @@ def timeseries_dataset_from_array(
Last sequence: [78 80 82 84 86 88 90 92 94 96]
```
- In this case the last 3 data points are discarded since no full sequence
+ In this case the last 2 data points are discarded since no full sequence
can be generated to include them (the next sequence would have started
- at index 81, and thus its last step would have gone over 99).
+ at index 81, and thus its last step would have gone over 98).
Example 2: Temporal regression.
@@ -209,7 +209,7 @@ def timeseries_dataset_from_array(
# Determine the lowest dtype to store start positions (to lower memory
# usage).
- num_seqs = end_index - start_index - (sequence_length * sampling_rate) + 1
+ num_seqs = end_index - start_index - (sequence_length - 1) * sampling_rate
if targets is not None:
num_seqs = min(num_seqs, len(targets))
if num_seqs < 2147483647:
diff --git a/keras/utils/timeseries_dataset_test.py b/keras/utils/timeseries_dataset_test.py
index 28fc932dfe5..77f6acd33d3 100644
--- a/keras/utils/timeseries_dataset_test.py
+++ b/keras/utils/timeseries_dataset_test.py
@@ -130,8 +130,8 @@ def test_sampling_rate(self):
if i < 16:
self.assertEqual(inputs.shape, (5, 9))
if i == 16:
- # Last batch: size 3
- self.assertEqual(inputs.shape, (3, 9))
+ # Last batch: size 4
+ self.assertEqual(inputs.shape, (4, 9))
# Check target values
self.assertAllClose(inputs[:, 0] * 2, targets)
for j in range(min(5, len(inputs))):
| Fixes https://github.com/keras-team/tf-keras/issues/314 | https://api.github.com/repos/keras-team/keras/pulls/17396 | 2023-01-05T10:09:49Z | 2023-02-06T18:42:27Z | 2023-02-06T18:42:27Z | 2023-09-22T18:13:05Z | 611 | keras-team/keras | 47,237 |
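The corrected count can be sanity-checked against a brute-force enumeration in plain Python (no TensorFlow needed): a window starting at `s` uses indices `s, s + r, ..., s + (L - 1) * r`, so it fits iff `s + (L - 1) * r < n`.

```python
def count_starts(n, seq_len, rate):
    # Brute force: count every start index whose last sampled step stays
    # inside the data. sequence_stride only thins these starts out later.
    return sum(1 for s in range(n) if s + (seq_len - 1) * rate < n)

n, seq_len, rate = 99, 10, 2
assert count_starts(n, seq_len, rate) == n - (seq_len - 1) * rate   # new formula: 81
assert count_starts(n, seq_len, rate) != n - seq_len * rate + 1     # old formula: 80
```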
Added information about diffusion and total execution time (#463) | diff --git a/modules/async_worker.py b/modules/async_worker.py
index 379b2c88e..5eebd02dc 100644
--- a/modules/async_worker.py
+++ b/modules/async_worker.py
@@ -336,6 +336,8 @@ def callback(step, x0, x, total_steps, y):
outputs.append(['preview', (13, 'Starting tasks ...', None)])
for current_task_id, task in enumerate(tasks):
try:
+ execution_start_time = time.perf_counter()
+
imgs = pipeline.process_diffusion(
positive_cond=task['c'],
negative_cond=task['uc'],
@@ -353,6 +355,9 @@ def callback(step, x0, x, total_steps, y):
if inpaint_worker.current_task is not None:
imgs = [inpaint_worker.current_task.post_process(x) for x in imgs]
+ execution_time = time.perf_counter() - execution_start_time
+ print(f'Diffusion time: {execution_time:.2f} seconds')
+
for x in imgs:
d = [
('Prompt', raw_prompt),
diff --git a/webui.py b/webui.py
index d72c1fa0a..7519784b6 100644
--- a/webui.py
+++ b/webui.py
@@ -15,6 +15,8 @@
def generate_clicked(*args):
+ execution_start_time = time.perf_counter()
+
yield gr.update(visible=True, value=modules.html.make_progress_html(1, 'Initializing ...')), \
gr.update(visible=True, value=None), \
gr.update(visible=False)
@@ -36,6 +38,9 @@ def generate_clicked(*args):
gr.update(visible=False), \
gr.update(visible=True, value=product)
finished = True
+
+ execution_time = time.perf_counter() - execution_start_time
+ print(f'Total time: {execution_time:.2f} seconds')
return
| Fix proposal for issue #463.
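The pattern added here is the usual `perf_counter` bracket around the unit of work (sketch only; `time.sleep` stands in for the diffusion call):

```python
import time

execution_start_time = time.perf_counter()
time.sleep(0.05)  # stand-in for pipeline.process_diffusion(...)
execution_time = time.perf_counter() - execution_start_time
print(f'Total time: {execution_time:.2f} seconds')
```

`perf_counter` is preferred over `time.time` for intervals because it is monotonic and not affected by system clock adjustments.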
| https://api.github.com/repos/lllyasviel/Fooocus/pulls/464 | 2023-09-21T08:57:22Z | 2023-09-21T09:07:17Z | 2023-09-21T09:07:17Z | 2023-09-21T09:09:07Z | 430 | lllyasviel/Fooocus | 7,289 |
[docs] Fix typos and other small stuff | diff --git a/docs/source/progress.rst b/docs/source/progress.rst
index 23693843b..272687d93 100644
--- a/docs/source/progress.rst
+++ b/docs/source/progress.rst
@@ -58,7 +58,7 @@ The ``total`` value associated with a task is the number of steps that must be c
Updating tasks
~~~~~~~~~~~~~~
-When you call :meth:`~rich.progress.Progress.add_task` you get back a `Task ID`. Use this ID to call :meth:`~rich.progress.Progress.update` whenever you have completed some work, or any information has changed. Typically you will need to update ``completed`` every time you have completed a step. You can do this by updated ``completed`` directly or by setting ``advance`` which will add to the current ``completed`` value.
+When you call :meth:`~rich.progress.Progress.add_task` you get back a `Task ID`. Use this ID to call :meth:`~rich.progress.Progress.update` whenever you have completed some work, or any information has changed. Typically you will need to update ``completed`` every time you have completed a step. You can do this by setting ``completed`` directly or by setting ``advance`` which will add to the current ``completed`` value.
The :meth:`~rich.progress.Progress.update` method collects keyword arguments which are also associated with the task. Use this to supply any additional information you would like to render in the progress display. The additional arguments are stored in ``task.fields`` and may be referenced in :ref:`Column classes<Columns>`.
@@ -234,7 +234,7 @@ Here's an example that reads a url from the internet::
If you expect to be reading from multiple files, you can use :meth:`~rich.progress.Progress.open` or :meth:`~rich.progress.Progress.wrap_file` to add a file progress to an existing Progress instance.
-See `cp_progress.py <https://github.com/willmcgugan/rich/blob/master/examples/cp_progress.py>` for a minimal clone of the ``cp`` command which shows a progress bar as the file is copied.
+See `cp_progress.py <https://github.com/willmcgugan/rich/blob/master/examples/cp_progress.py>`_ for a minimal clone of the ``cp`` command which shows a progress bar as the file is copied.
Multiple Progress
diff --git a/docs/source/tables.rst b/docs/source/tables.rst
index fdc6ec043..f573dbc62 100644
--- a/docs/source/tables.rst
+++ b/docs/source/tables.rst
@@ -50,13 +50,13 @@ Table Options
There are a number of keyword arguments on the Table constructor you can use to define how a table should look.
-- ``title`` Sets the title of the table (text show above the table).
-- ``caption`` Sets the table caption (text show below the table).
+- ``title`` Sets the title of the table (text shown above the table).
+- ``caption`` Sets the table caption (text shown below the table).
- ``width`` Sets the desired width of the table (disables automatic width calculation).
- ``min_width`` Sets a minimum width for the table.
- ``box`` Sets one of the :ref:`appendix_box` styles for the table grid, or ``None`` for no grid.
- ``safe_box`` Set to ``True`` to force the table to generate ASCII characters rather than unicode.
-- ``padding`` An integer, or tuple of 1, 2, or 4 values to set the padding on cells.
+- ``padding`` An integer, or tuple of 1, 2, or 4 values to set the padding on cells (see :ref:`Padding`).
- ``collapse_padding`` If True the padding of neighboring cells will be merged.
- ``pad_edge`` Set to False to remove padding around the edge of the table.
- ``expand`` Set to True to expand the table to the full available size.
diff --git a/docs/source/text.rst b/docs/source/text.rst
index fd6851fb4..c5a1add82 100644
--- a/docs/source/text.rst
+++ b/docs/source/text.rst
@@ -28,12 +28,12 @@ Alternatively, you can construct styled text by calling :meth:`~rich.text.Text.a
If you would like to use text that is already formatted with ANSI codes, call :meth:`~rich.text.Text.from_ansi` to convert it to a ``Text`` object::
- text = Text.from_ansi("\033[1mHello, World!\033[0m")
+ text = Text.from_ansi("\033[1;35mHello\033[0m, World!")
console.print(text.spans)
-Since building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The follow example is equivalent to the code above::
+Since building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The following example is equivalent to the ANSI example above::
- text = Text.assemble(("Hello", "bold magenta"), " World!")
+ text = Text.assemble(("Hello", "bold magenta"), ", World!")
console.print(text)
You can apply a style to given words in the text with :meth:`~rich.text.Text.highlight_words` or for ultimate control call :meth:`~rich.text.Text.highlight_regex` to highlight text matching a *regular expression*.
diff --git a/rich/pretty.py b/rich/pretty.py
index 498907f4c..5c48cfd9f 100644
--- a/rich/pretty.py
+++ b/rich/pretty.py
@@ -986,7 +986,7 @@ class StockKeepingUnit(NamedTuple):
from rich import print
- # print(Pretty(data, indent_guides=True, max_string=20))
+ print(Pretty(data, indent_guides=True, max_string=20))
class Thing:
def __repr__(self) -> str:
| ## Type of changes
- [ ] Bug fix
- [ ] New feature
- [x] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
Just some minor mistakes I found while reading the (otherwise excellent!) docs.
| https://api.github.com/repos/Textualize/rich/pulls/3094 | 2023-08-19T18:03:08Z | 2023-11-07T17:37:25Z | 2023-11-07T17:37:25Z | 2024-01-24T13:45:32Z | 1,329 | Textualize/rich | 48,302 |
Pass sort for agg multiple | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 5022beabef76b..fa78c89ed4ee7 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -608,7 +608,7 @@ def _aggregate_multiple_funcs(self, arg, _level, _axis):
raise ValueError("no results")
try:
- return concat(results, keys=keys, axis=1)
+ return concat(results, keys=keys, axis=1, sort=False)
except TypeError:
# we are concatting non-NDFrame objects,
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index af39c8f01cf73..ac46f02d00773 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -908,6 +908,31 @@ def test_demo(self):
index=['max', 'min', 'sum'])
tm.assert_frame_equal(result.reindex_like(expected), expected)
+ def test_agg_multiple_mixed_no_warning(self):
+ # https://github.com/pandas-dev/pandas/issues/20909
+ mdf = pd.DataFrame({'A': [1, 2, 3],
+ 'B': [1., 2., 3.],
+ 'C': ['foo', 'bar', 'baz'],
+ 'D': pd.date_range('20130101', periods=3)})
+ expected = pd.DataFrame({"A": [1, 6], 'B': [1.0, 6.0],
+ "C": ['bar', 'foobarbaz'],
+ "D": [pd.Timestamp('2013-01-01'), pd.NaT]},
+ index=['min', 'sum'])
+ # sorted index
+ with tm.assert_produces_warning(None):
+ result = mdf.agg(['min', 'sum'])
+
+ tm.assert_frame_equal(result, expected)
+
+ with tm.assert_produces_warning(None):
+ result = mdf[['D', 'C', 'B', 'A']].agg(['sum', 'min'])
+
+ # For backwards compatibility, the result's index is
+ # still sorted by function name, so it's ['min', 'sum']
+ # not ['sum', 'min'].
+ expected = expected[['D', 'C', 'B', 'A']]
+ tm.assert_frame_equal(result, expected)
+
def test_agg_dict_nested_renaming_depr(self):
df = pd.DataFrame({'A': range(5), 'B': 5})
| xref https://github.com/pandas-dev/pandas/issues/20909 | https://api.github.com/repos/pandas-dev/pandas/pulls/21062 | 2018-05-15T16:07:43Z | 2018-05-15T20:02:19Z | 2018-05-15T20:02:19Z | 2018-05-15T20:02:33Z | 586 | pandas-dev/pandas | 45,650 |
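For context on what the added flag does (a minimal illustration, assuming pandas >= 0.23, where `concat` gained `sort`): with `axis=1`, `sort` controls whether the row index of the outer-joined result gets sorted, which is the future-sorting warning this patch silences in `_aggregate_multiple_funcs`:

```python
import pandas as pd

s1 = pd.Series([1, 2], index=["b", "a"], name="min")
s2 = pd.Series([3, 4], index=["b", "c"], name="sum")

# With axis=1, `sort` governs the *row* index of the outer-joined result.
kept = pd.concat([s1, s2], axis=1, sort=False)   # keeps appearance order
alpha = pd.concat([s1, s2], axis=1, sort=True)   # sorts the union lexically

print(list(kept.index))   # appearance order, starting with "b"
print(list(alpha.index))  # → ['a', 'b', 'c']
```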
fix(hc): Restructure OrganizationView base classes | diff --git a/src/sentry/web/frontend/base.py b/src/sentry/web/frontend/base.py
index d9ed43005b1eb..5ae4ffda6fbd4 100644
--- a/src/sentry/web/frontend/base.py
+++ b/src/sentry/web/frontend/base.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import abc
import inspect
import logging
from typing import Any, Mapping, Protocol
@@ -418,12 +419,8 @@ def handle_disabled_member(self, organization: Organization) -> HttpResponse:
return self.redirect(redirect_uri)
-class OrganizationView(BaseView):
+class AbstractOrganizationView(BaseView, abc.ABC):
"""
- A deprecated view used by endpoints that act on behalf of an organization.
- In the future, we should move endpoints to either of the subclasses, RegionSilo* or ControlSilo*, and
- move out any ORM specific logic into the correct silo view. This will likely become an ABC that shares some
- common logic.
The 'organization' keyword argument is automatically injected into the resulting dispatch, but currently the
typing of 'organization' will vary based on the subclass. It may either be an RpcOrganization or an orm
Organization based on the subclass. Be mindful during this transition of the typing.
@@ -543,58 +540,45 @@ def needs_sso(self, request: Request, organization: Organization | RpcOrganizati
return True
return False
- def _lookup_orm_org(self) -> Organization | None:
- """
- Used by convert_args to convert the hybrid cloud safe active_organization object into an org ORM.
- This should really only be used by the Region or Monolith silo modes -- calling this in a Control silo
- endpoint or codepath will result in exceptions.
- :return:
- """
- organization: Organization | None = None
- if self.active_organization:
- try:
- organization = Organization.objects.get(id=self.active_organization.organization.id)
- except Organization.DoesNotExist:
- pass
- return organization
+ @abc.abstractmethod
+ def _get_organization(self) -> Organization | RpcOrganization | None:
+ raise NotImplementedError
def convert_args(
self, request: Request, organization_slug: str | None = None, *args: Any, **kwargs: Any
) -> tuple[tuple[Any, ...], dict[str, Any]]:
if "organization" not in kwargs:
- kwargs["organization"] = self._lookup_orm_org()
+ kwargs["organization"] = self._get_organization()
- return args, kwargs
+ return super().convert_args(request, *args, **kwargs)
-class RegionSiloOrganizationView(OrganizationView):
+class OrganizationView(AbstractOrganizationView):
"""
- A view which has direct ORM access to organization objects. In practice, **only endpoints that exist in the
- region silo should use this class**. When All endpoints have been convert / tested against region silo compliance,
- the base class (OrganizationView) will likely disappear and only either ControlSilo* or RegionSilo* classes will
- remain.
+ A view which has direct ORM access to organization objects. Only endpoints that exist in the
+ region silo should use this class.
"""
- def convert_args(
- self, request: Any, organization_slug: str | None = None, *args: Any, **kwargs: Any
- ) -> tuple[tuple[Any, ...], dict[str, Any]]:
- if "organization" not in kwargs:
- kwargs["organization"] = self._lookup_orm_org()
+ def _get_organization(self) -> Organization | None:
+ if not self.active_organization:
+ return None
+ try:
+ return Organization.objects.get(id=self.active_organization.organization.id)
+ except Organization.DoesNotExist:
+ return None
- return args, kwargs
+class ControlSiloOrganizationView(AbstractOrganizationView):
+ """A view which accesses organization objects over RPC.
-class ControlSiloOrganizationView(OrganizationView):
- def convert_args(
- self, request: Any, *args: Any, **kwargs: Any
- ) -> tuple[tuple[Any, ...], dict[str, Any]]:
- kwargs["organization"] = (
- self.active_organization.organization if self.active_organization else None
- )
- return super().convert_args(request, *args, **kwargs)
+ Only endpoints on the control silo should use this class (but it works anywhere).
+ """
+
+ def _get_organization(self) -> RpcOrganization | None:
+ return self.active_organization.organization if self.active_organization else None
-class ProjectView(RegionSiloOrganizationView):
+class ProjectView(OrganizationView):
"""
Any view acting on behalf of a project should inherit from this base and the
matching URL pattern must pass 'org_slug' as well as 'project_slug'.
@@ -640,7 +624,7 @@ def convert_args(self, request: Request, organization_slug: str, project_slug: s
organization: Organization | None = None
active_project: Project | None = None
if self.active_organization:
- organization = self._lookup_orm_org()
+ organization = self._get_organization()
if organization:
active_project = self.get_active_project(
diff --git a/src/sentry/web/frontend/react_page.py b/src/sentry/web/frontend/react_page.py
index 0fb8e2244c9ed..04fa6f7d4e6e7 100644
--- a/src/sentry/web/frontend/react_page.py
+++ b/src/sentry/web/frontend/react_page.py
@@ -14,7 +14,7 @@
from sentry.services.hybrid_cloud.organization import organization_service
from sentry.signals import first_event_pending
from sentry.utils.http import is_using_customer_domain, query_string
-from sentry.web.frontend.base import BaseView, OrganizationView
+from sentry.web.frontend.base import BaseView, ControlSiloOrganizationView
from sentry.web.helpers import render_to_response
# url names that should only be accessible from a non-customer domain hostname.
@@ -98,7 +98,7 @@ def handle_react(self, request: Request, **kwargs) -> Response:
# TODO(dcramer): once we implement basic auth hooks in React we can make this
# generic
-class ReactPageView(OrganizationView, ReactMixin):
+class ReactPageView(ControlSiloOrganizationView, ReactMixin):
def handle_auth_required(self, request: Request, *args, **kwargs) -> Response:
# If user is a superuser (but not active, because otherwise this method would never be called)
# Then allow client to handle the route and respond to any API request errors
| Make `OrganizationView` abstract and rename it to `AbstractOrganizationView`. Refactor `_lookup_orm_org` into an abstract method. Have `RegionSiloOrganizationView` replace `OrganizationView` as the typical case that loads an `Organization` ORM model. Have `ControlSiloOrganizationView` load an `RpcOrganization` instead. | https://api.github.com/repos/getsentry/sentry/pulls/49729 | 2023-05-24T21:23:01Z | 2023-05-25T21:28:44Z | 2023-05-25T21:28:44Z | 2023-06-10T00:02:09Z | 1,473 | getsentry/sentry | 44,178 |
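The refactor follows the classic template-method pattern: a shared base whose `convert_args` calls an abstract `_get_organization` hook that each silo-specific subclass fills in. A toy sketch (class and method names are illustrative stand-ins, not Sentry's actual API):

```python
import abc

class AbstractOrganizationView(abc.ABC):
    def convert_args(self, request, **kwargs):
        # Shared logic lives in the base; only the lookup differs per silo.
        kwargs.setdefault("organization", self._get_organization())
        return kwargs

    @abc.abstractmethod
    def _get_organization(self):
        raise NotImplementedError

class RegionView(AbstractOrganizationView):
    def _get_organization(self):
        return {"source": "ORM"}   # stand-in for Organization.objects.get(...)

class ControlView(AbstractOrganizationView):
    def _get_organization(self):
        return {"source": "RPC"}   # stand-in for the RpcOrganization

print(RegionView().convert_args(None)["organization"]["source"])   # → ORM
```

Instantiating the base directly raises `TypeError`, which is exactly the guarantee the abstract hook buys over the old shared `_lookup_orm_org`.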
Add OnWater API | diff --git a/README.md b/README.md
index 33b1b6279d..79fbc3ed45 100644
--- a/README.md
+++ b/README.md
@@ -278,6 +278,7 @@ API | Description | Auth | HTTPS | Link |
| Mapzen Search | Open Source & Open Data Global Geocoding Service | `apiKey` | Yes | [Go!](https://mapzen.com/products/search/) |
| Mexico | Mexico RESTful zip codes API | No | Yes | [Go!](https://github.com/IcaliaLabs/sepomex) |
| One Map 2.0, Singapore| Singapore Land Authority REST API services for Singapore addresses | `apiKey` | Yes | [Go!](https://docs.onemap.sg/) |
+| OnWater | Determine if a lat/lon is on water or land | No | Yes | [Go!](https://onwater.io/) |
| OpenCage | Forward and reverse geocoding using open data | No | Yes | [Go!](https://geocoder.opencagedata.com) |
| OpenStreetMap | Navigation, geolocation and geographical data | `OAuth` | No | [Go!](http://wiki.openstreetmap.org/wiki/API) |
| PostcodeData.nl | Provide geolocation data based on postcode for Dutch addresses | No | No | [Go!](http://api.postcodedata.nl/v1/postcode/?postcode=1211EP&streetnumber=60&ref=domeinnaam.nl&type=json) |
| https://api.github.com/repos/public-apis/public-apis/pulls/404 | 2017-07-26T16:56:32Z | 2017-07-26T18:12:07Z | 2017-07-26T18:12:07Z | 2017-07-26T18:13:26Z | 331 | public-apis/public-apis | 36,078 | |
Add the ability to exclude dependencies from conda builds of Streamlit | diff --git a/Makefile b/Makefile
index 2822c72e8009..8034a4a87018 100644
--- a/Makefile
+++ b/Makefile
@@ -186,7 +186,7 @@ conda-distribution:
# This can take upwards of 20 minutes to complete in a fresh conda installation! (Dependency solving is slow.)
# NOTE: Running the following command requires both conda and conda-build to
# be installed.
- GIT_HASH=$$(git rev-parse --short HEAD) conda build lib/conda-recipe --output-folder lib/conda-recipe/dist
+ ST_CONDA_BUILD=1 GIT_HASH=$$(git rev-parse --short HEAD) conda build lib/conda-recipe --output-folder lib/conda-recipe/dist
.PHONY: conda-package
# Build lib and frontend, and then run 'conda-distribution'
diff --git a/lib/conda-recipe/meta.yaml b/lib/conda-recipe/meta.yaml
index fde4eebec1df..cd5135215ee1 100644
--- a/lib/conda-recipe/meta.yaml
+++ b/lib/conda-recipe/meta.yaml
@@ -39,6 +39,7 @@ build:
{% endfor %}
script_env:
- GIT_HASH
+ - ST_CONDA_BUILD
requirements:
host:
@@ -55,23 +56,18 @@ requirements:
# by default in our conda distribution due to this.
{% elif 'watchdog' in req %}
- watchdog
- # TODO(vdonato): Possibly remove this check if gitpython and pydeck are
- # moved to extras_require.
- {% elif 'gitpython' not in req and 'pydeck' not in req %}
+ {% else %}
- {{ req }}
{% endif %}
{% endfor %}
-# TODO(vdonato): Uncomment this section once we've figured out what to do with
-# optional dependencies (pip check will currently fail due to gitpython and
-# pydeck being uninstalled).
-# test:
-# imports:
-# - streamlit
-# commands:
-# - pip check
-# requires:
-# - pip
+test:
+ imports:
+ - streamlit
+ commands:
+ - pip check
+ requires:
+ - pip
about:
home: https://streamlit.io
diff --git a/lib/setup.py b/lib/setup.py
index c9e4040e1a6d..41b012172e7f 100644
--- a/lib/setup.py
+++ b/lib/setup.py
@@ -41,7 +41,6 @@
"blinker>=1.0.0",
"cachetools>=4.0",
"click>=7.0",
- "gitpython!=3.1.19",
# 1.4 introduced the functionality found in python 3.8's importlib.metadata module
"importlib-metadata>=1.4",
"numpy",
@@ -67,6 +66,20 @@
"watchdog; platform_system != 'Darwin'",
]
+# We want to exclude some dependencies in our internal conda distribution of
+# Streamlit.
+CONDA_OPTIONAL_DEPENDENCIES = [
+ "gitpython!=3.1.19",
+]
+
+# NOTE: ST_CONDA_BUILD is used here (even though CONDA_BUILD is set
+# automatically when using the `conda build` command) because the
+# `load_setup_py_data()` conda build helper function does not have the
+# CONDA_BUILD environment variable set when it runs to generate our build
+# recipe from meta.yaml.
+if not os.getenv("ST_CONDA_BUILD"):
+ INSTALL_REQUIRES.extend(CONDA_OPTIONAL_DEPENDENCIES)
+
class VerifyVersionCommand(install):
"""Custom command to verify that the git tag matches our version"""
| ## 📚 Context
We want to be able to exclude certain dependencies from our private (that is, SnowPark-specific)
conda builds of Streamlit. This PR allows us to do this by setting some environment variables in the
`make conda-package` target and using them to conditionally add dependencies to `INSTALL_REQUIRES`
in `setup.py`.
Currently, the `ST_CONDA_BUILD` env var is used to exclude the `gitpython` dependency when set.
- What kind of change does this PR introduce?
- [x] Other, please describe: fun with dependencies
| https://api.github.com/repos/streamlit/streamlit/pulls/4991 | 2022-07-19T00:32:51Z | 2022-07-19T18:04:58Z | 2022-07-19T18:04:58Z | 2023-05-26T23:34:02Z | 852 | streamlit/streamlit | 21,729 |
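The mechanism is small enough to sketch in isolation: an environment variable checked when `setup.py` runs decides whether the conda-excluded pins are appended. The names below mirror the diff, but the function wrapper is illustrative:

```python
import os

INSTALL_REQUIRES = ["blinker>=1.0.0", "click>=7.0"]
CONDA_OPTIONAL_DEPENDENCIES = ["gitpython!=3.1.19"]

def resolve_requires(env=None):
    env = os.environ if env is None else env
    reqs = list(INSTALL_REQUIRES)
    # ST_CONDA_BUILD is set via the conda recipe's script_env; a plain
    # `pip install` never sets it, so pip builds keep gitpython.
    if not env.get("ST_CONDA_BUILD"):
        reqs += CONDA_OPTIONAL_DEPENDENCIES
    return reqs

print(resolve_requires({}))                       # pip-style build
print(resolve_requires({"ST_CONDA_BUILD": "1"}))  # conda build: gitpython dropped
```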
Short string optimization example for C.180 | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 22a731fc8..66a1604ff 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -7322,7 +7322,44 @@ But heed the warning: [Avoid "naked" `union`s](#Ru-naked)
##### Example
- ??? short-string optimization; safe union without dscriminant ???
+ // Short string optimization
+
+ constexpr size_t buffer_size = 16; // Slightly larger than the size of a pointer
+
+ class Immutable_string {
+ public:
+ Immutable_string(const char* str) :
+ size(strlen(str))
+ {
+ if (size < buffer_size)
+ strcpy_s(string_buffer, buffer_size, str);
+ else {
+ string_ptr = new char[size + 1];
+ strcpy_s(string_ptr, size + 1, str);
+ }
+ }
+
+ ~Immutable_string()
+ {
+ if (size >= buffer_size)
+ delete string_ptr;
+ }
+
+ const char* get_str() const
+ {
+ return (size < buffer_size) ? string_buffer : string_ptr;
+ }
+
+ private:
+ // If the string is short enough, we store the string itself
+ // instead of a pointer to the string.
+ union {
+ char* string_ptr;
+ char string_buffer[buffer_size];
+ };
+
+ const size_t size;
+ };
##### Enforcement
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/758 | 2016-10-03T21:45:09Z | 2016-10-03T23:41:07Z | 2016-10-03T23:41:07Z | 2016-10-03T23:50:25Z | 360 | isocpp/CppCoreGuidelines | 15,960 | |
add template .gitattributes that fixes language stats | diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 000000000..b2d461810
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,3 @@
+# Override jupyter in Github language stats for more accurate estimate of repo code languages
+# reference: https://github.com/github/linguist/blob/master/docs/overrides.md#generated-code
+*.ipynb linguist-generated
| Tiny PR to fix the annoying jupyter notebook language count stats :)
https://twitter.com/karpathy/status/1620875263700799488?s=20&t=bV2NXNJaUzxcZWsUyDyJEg
Instructions taken from https://github.com/github/linguist/blob/master/docs/overrides.md#generated-code, referenced in discussion under https://github.com/github/linguist/issues/3316 (official github-linguist repo).
I left the rest of the commented template lines in there in case you need anything from there, but could remove them if you really want to keep it "nano" 😉 | https://api.github.com/repos/karpathy/nanoGPT/pulls/115 | 2023-02-03T21:41:14Z | 2023-02-04T01:23:44Z | 2023-02-04T01:23:44Z | 2023-02-04T01:28:26Z | 101 | karpathy/nanoGPT | 40,965 |
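One way to sanity-check an override like this locally (a hedged sketch: assumes a `git` binary on PATH; the throwaway repo and `demo.ipynb` are illustrative) is `git check-attr`, which reports the attribute linguist reads:

```python
import pathlib
import subprocess
import tempfile

# Build a throwaway repo containing the override from the PR above.
tmp = pathlib.Path(tempfile.mkdtemp())
subprocess.run(["git", "init", "-q", str(tmp)], check=True)
(tmp / ".gitattributes").write_text("*.ipynb linguist-generated\n")
(tmp / "demo.ipynb").write_text("{}")

out = subprocess.run(
    ["git", "-C", str(tmp), "check-attr", "linguist-generated", "--", "demo.ipynb"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print(out)  # expected: demo.ipynb: linguist-generated: set
```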
Improve flow for -i flag | diff --git a/README.md b/README.md
index 243692553d..2aaf45a290 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@
GPT Engineer is made to be easy to adapt, extend, and make your agent learn how you want your code to look. It generates an entire codebase based on a prompt.
-- [Demo](https://twitter.com/antonosika/status/1667641038104674306)
+- [Demo](https://twitter.com/antonosika/status/1667641038104674306)
## Project philosophy
diff --git a/gpt_engineer/ai.py b/gpt_engineer/ai.py
index 923fd655fc..db58d943f4 100644
--- a/gpt_engineer/ai.py
+++ b/gpt_engineer/ai.py
@@ -365,17 +365,16 @@ def create_chat_model(self, model: str, temperature) -> BaseChatModel:
)
# Fetch available models from OpenAI API
supported = [model["id"] for model in openai.Model.list()["data"]]
- if model in supported:
- return ChatOpenAI(
- model=model,
- temperature=temperature,
- streaming=True,
- client=openai.ChatCompletion,
- )
- else:
+ if model not in supported:
raise ValueError(
f"Model {model} is not supported, supported models are: {supported}"
)
+ return ChatOpenAI(
+ model=model,
+ temperature=temperature,
+ streaming=True,
+ client=openai.ChatCompletion,
+ )
def get_tokenizer(model: str):
diff --git a/gpt_engineer/chat_to_files.py b/gpt_engineer/chat_to_files.py
index 394a7c18ad..4f8d3d963b 100644
--- a/gpt_engineer/chat_to_files.py
+++ b/gpt_engineer/chat_to_files.py
@@ -86,13 +86,10 @@ def overwrite_files(chat, dbs):
files = parse_chat(chat)
for file_name, file_content in files:
- if file_name.find("../") > -1:
- raise Exception(f"File name {file_name} attempted to access parent path.")
- elif file_name == "README.md":
- dbs.workspace["ExistingCodeModificationsREADME.md"] = file_content
+ if file_name == "README.md":
+ dbs.workspace["LAST_MODIFICATION_README.md"] = file_content
else:
- full_path = os.path.join(dbs.input.path, file_name)
- dbs.workspace[full_path] = file_content
+ dbs.workspace[file_name] = file_content
def get_code_strings(input) -> dict[str, str]:
@@ -115,6 +112,7 @@ def get_code_strings(input) -> dict[str, str]:
with open(full_file_path, "r") as file:
file_data = file.read()
if file_data:
+ # TODO: Should below be the full path?
file_name = os.path.relpath(full_file_path, input.path)
files_dict[file_name] = file_data
return files_dict
diff --git a/gpt_engineer/db.py b/gpt_engineer/db.py
index fdca5e0dcc..eb2bd3e7af 100644
--- a/gpt_engineer/db.py
+++ b/gpt_engineer/db.py
@@ -85,7 +85,7 @@ def get(self, key, default=None):
except KeyError:
return default
- def __setitem__(self, key, val):
+ def __setitem__(self, key: str | Path, val: str):
"""
Set the content of a file in the database.
@@ -101,6 +101,9 @@ def __setitem__(self, key, val):
TypeError
If val is not string.
"""
+ if str(key).startswith("../"):
+ raise ValueError(f"File name {key} attempted to access parent path.")
+
full_path = self.path / key
full_path.parent.mkdir(parents=True, exist_ok=True)
@@ -108,7 +111,7 @@ def __setitem__(self, key, val):
full_path.write_text(val, encoding="utf-8")
else:
# If val is not string, raise an error.
- raise TypeError("val must be either a str or bytes")
+ raise TypeError("val must be str")
# dataclass for all dbs:
diff --git a/gpt_engineer/file_selector.py b/gpt_engineer/file_selector.py
index c00541c9f6..8c44d43d1f 100644
--- a/gpt_engineer/file_selector.py
+++ b/gpt_engineer/file_selector.py
@@ -7,7 +7,8 @@
from pathlib import Path
from typing import List, Union
-IGNORE_FOLDERS = {"site-packages", "node_modules"}
+IGNORE_FOLDERS = {"site-packages", "node_modules", "venv"}
+FILE_LIST_NAME = "file_list.txt"
class DisplayablePath(object):
@@ -235,10 +236,7 @@ def ask_for_files(db_input) -> None:
dict[str, str]: Dictionary where key = file name and value = file path
"""
use_last_string = ""
- is_valid_selection = False
- can_use_last = False
if "file_list.txt" in db_input:
- can_use_last = True
use_last_string = (
"3. Use previous file list (available at "
+ f"{os.path.join(db_input.path, 'file_list.txt')})\n"
@@ -246,12 +244,17 @@ def ask_for_files(db_input) -> None:
selection_number = 3
else:
selection_number = 1
- selection_str = f"""How do you want to select the files?
+ selection_str = "\n".join(
+ [
+ "How do you want to select the files?",
+ "",
+ "1. Use File explorer.",
+ "2. Use Command-Line.",
+ use_last_string if len(use_last_string) > 1 else "",
+ f"Select option and press Enter (default={selection_number}): ",
+ ]
+ )
-1. Use Command-Line.
-2. Use File explorer.
-{use_last_string if len(use_last_string) > 1 else ""}
-Select option and press Enter (default={selection_number}): """
file_path_list = []
selected_number_str = input(selection_str)
if selected_number_str:
@@ -260,30 +263,23 @@ def ask_for_files(db_input) -> None:
except ValueError:
print("Invalid number. Select a number from the list above.\n")
sys.exit(1)
+
if selection_number == 1:
- # Open terminal selection
- file_path_list = terminal_file_selector()
- is_valid_selection = True
- elif selection_number == 2:
# Open GUI selection
file_path_list = gui_file_selector()
- is_valid_selection = True
- else:
- if can_use_last and selection_number == 3:
- # Use previous file list
- is_valid_selection = True
- if not is_valid_selection:
+ elif selection_number == 2:
+ # Open terminal selection
+ file_path_list = terminal_file_selector()
+ if (
+ selection_number <= 0
+ or selection_number > 3
+ or (selection_number == 3 and not use_last_string)
+ ):
print("Invalid number. Select a number from the list above.\n")
sys.exit(1)
- file_list_string = ""
if not selection_number == 3:
- # New files
- for file_path in file_path_list:
- file_list_string += str(file_path) + "\n"
-
- # Write in file_list so the user can edit and remember what was done
- db_input["file_list.txt"] = file_list_string
+ db_input["file_list.txt"] = "\n".join(file_path_list)
def gui_file_selector() -> List[str]:
diff --git a/gpt_engineer/main.py b/gpt_engineer/main.py
index f00a7a1b95..68ed8807d1 100644
--- a/gpt_engineer/main.py
+++ b/gpt_engineer/main.py
@@ -49,7 +49,6 @@ def main(
logging.basicConfig(level=logging.DEBUG if verbose else logging.INFO)
# For the improve option take current project as path and add .gpteng folder
- # By now, ignoring the 'project_path' argument
if improve_option:
# The default option for the --improve is the IMPROVE_CODE, not DEFAULT
if steps_config == StepsConfig.DEFAULT:
@@ -83,9 +82,15 @@ def main(
StepsConfig.EXECUTE_ONLY,
StepsConfig.USE_FEEDBACK,
StepsConfig.EVALUATE,
+ StepsConfig.IMPROVE_CODE,
]:
archive(dbs)
+ if not dbs.input.get("prompt"):
+ dbs.input["prompt"] = input(
+ "\nWhat application do you want gpt-engineer to generate?\n"
+ )
+
steps = STEPS[steps_config]
for step in steps:
messages = step(ai, dbs)
diff --git a/gpt_engineer/steps.py b/gpt_engineer/steps.py
index 7577e5963a..4f669a2412 100644
--- a/gpt_engineer/steps.py
+++ b/gpt_engineer/steps.py
@@ -16,7 +16,7 @@
to_files,
)
from gpt_engineer.db import DBs
-from gpt_engineer.file_selector import ask_for_files
+from gpt_engineer.file_selector import FILE_LIST_NAME, ask_for_files
from gpt_engineer.learning import human_review_input
Message = Union[AIMessage, HumanMessage, SystemMessage]
@@ -325,25 +325,28 @@ def get_improve_prompt(ai: AI, dbs: DBs):
Asks the user what they would like to fix.
"""
- dbs.input["prompt"] = input(
- "\nWhat do you need to improve with the selected files?\n"
- )
-
- confirm_str = f"""
- -----------------------------
- The following files will be used in the improvement process:
- {dbs.input["file_list.txt"]}
-
- The inserted prompt is the following:
- '{dbs.input['prompt']}'
- -----------------------------
-
- You can change these files in .gpteng folder ({dbs.input.path}) in your project
- before proceeding.
-
- Press enter to proceed with modifications.
+ if not dbs.input.get("prompt"):
+ dbs.input["prompt"] = input(
+ "\nWhat do you need to improve with the selected files?\n"
+ )
- """
+ confirm_str = "\n".join(
+ [
+ "-----------------------------",
+ "The following files will be used in the improvement process:",
+ f"{FILE_LIST_NAME}:",
+ str(dbs.input["file_list.txt"]),
+ "",
+ "The inserted prompt is the following:",
+ f"'{dbs.input['prompt']}'",
+ "-----------------------------",
+ "",
+ "You can change these files in your project before proceeding.",
+ "",
+ "Press enter to proceed with modifications.",
+ "",
+ ]
+ )
input(confirm_str)
return []
| https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/652 | 2023-09-02T17:36:57Z | 2023-09-02T18:01:59Z | 2023-09-02T18:01:59Z | 2023-09-02T18:03:13Z | 2,543 | gpt-engineer-org/gpt-engineer | 33,248 | |
Migrate to Anthropic 0.3 | diff --git a/fastchat/llm_judge/README.md b/fastchat/llm_judge/README.md
index caa845ec5f..a9127c6c23 100644
--- a/fastchat/llm_judge/README.md
+++ b/fastchat/llm_judge/README.md
@@ -18,14 +18,14 @@ To automate the evaluation process, we prompt strong LLMs like GPT-4 to act as j
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
pip install -e .
-pip install openai anthropic ray
+pip install openai anthropic==0.3.2 ray
```
## Review Pre-Generated Model Answers and Judgments
We provide pre-generated model answers and judgments for some models.
You can view them at this [demo](https://huggingface.co/spaces/lmsys/mt-bench).
-To download the pre-generated data, use
+To download the pre-generated data, use
```
python3 download_mt_bench_pregenerated.py
```
diff --git a/fastchat/llm_judge/common.py b/fastchat/llm_judge/common.py
index 7ea156f148..6ec9b5eee7 100644
--- a/fastchat/llm_judge/common.py
+++ b/fastchat/llm_judge/common.py
@@ -422,18 +422,18 @@ def chat_compeletion_anthropic(model, conv, temperature, max_tokens):
output = API_ERROR_OUTPUT
for _ in range(API_MAX_RETRY):
try:
- c = anthropic.Client(os.environ["ANTHROPIC_API_KEY"])
+ c = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
prompt = conv.get_prompt()
- response = c.completion(
+ response = c.completions.create(
model=model,
prompt=prompt,
stop_sequences=[anthropic.HUMAN_PROMPT],
max_tokens_to_sample=max_tokens,
temperature=temperature,
)
- output = response["completion"]
+ output = response.completion
break
- except anthropic.ApiException as e:
+ except anthropic.APIError as e:
print(type(e), e)
time.sleep(API_RETRY_SLEEP)
return output.strip()
| - https://github.com/anthropics/anthropic-sdk-python#migration-from-v02x-and-below
## Why are these changes needed?
## Related issue number (if applicable)
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [x] I've made sure the relevant tests are passing (if applicable).
## POC
### Before

### After

| https://api.github.com/repos/lm-sys/FastChat/pulls/1909 | 2023-07-10T01:54:13Z | 2023-07-10T18:10:25Z | 2023-07-10T18:10:25Z | 2023-07-10T18:10:25Z | 504 | lm-sys/FastChat | 41,634 |
ENH: add render warn for None | diff --git a/gym/envs/box2d/bipedal_walker.py b/gym/envs/box2d/bipedal_walker.py
index 392d3277d27..bea56f4c7f7 100644
--- a/gym/envs/box2d/bipedal_walker.py
+++ b/gym/envs/box2d/bipedal_walker.py
@@ -606,6 +606,14 @@ def step(self, action: np.ndarray):
return np.array(state, dtype=np.float32), reward, terminated, False, {}
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/box2d/car_racing.py b/gym/envs/box2d/car_racing.py
index adddb25e5ef..bee7f3acd2f 100644
--- a/gym/envs/box2d/car_racing.py
+++ b/gym/envs/box2d/car_racing.py
@@ -565,7 +565,14 @@ def step(self, action: Union[np.ndarray, int]):
return self.state, step_reward, terminated, truncated, {}
def render(self):
- return self._render(self.render_mode)
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ else:
+ return self._render(self.render_mode)
def _render(self, mode: str):
assert mode in self.metadata["render_modes"]
diff --git a/gym/envs/box2d/lunar_lander.py b/gym/envs/box2d/lunar_lander.py
index 87185c43a70..fb8e8e0f934 100644
--- a/gym/envs/box2d/lunar_lander.py
+++ b/gym/envs/box2d/lunar_lander.py
@@ -600,6 +600,14 @@ def step(self, action):
return np.array(state, dtype=np.float32), reward, terminated, False, {}
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/classic_control/acrobot.py b/gym/envs/classic_control/acrobot.py
index d618078a0ff..4ca31ca138f 100644
--- a/gym/envs/classic_control/acrobot.py
+++ b/gym/envs/classic_control/acrobot.py
@@ -4,7 +4,7 @@
import numpy as np
from numpy import cos, pi, sin
-from gym import core, spaces
+from gym import core, logger, spaces
from gym.error import DependencyNotInstalled
__copyright__ = "Copyright 2013, RLPy http://acl.mit.edu/RLPy"
@@ -277,6 +277,14 @@ def _dsdt(self, s_augmented):
return dtheta1, dtheta2, ddtheta1, ddtheta2, 0.0
def render(self):
+ if self.render_mode is None:
+ logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/classic_control/cartpole.py b/gym/envs/classic_control/cartpole.py
index a1d4045993b..39005d7f877 100644
--- a/gym/envs/classic_control/cartpole.py
+++ b/gym/envs/classic_control/cartpole.py
@@ -207,6 +207,14 @@ def reset(
return np.array(self.state, dtype=np.float32), {}
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/classic_control/continuous_mountain_car.py b/gym/envs/classic_control/continuous_mountain_car.py
index 3d427b6708d..0995b6232ae 100644
--- a/gym/envs/classic_control/continuous_mountain_car.py
+++ b/gym/envs/classic_control/continuous_mountain_car.py
@@ -189,6 +189,14 @@ def _height(self, xs):
return np.sin(3 * xs) * 0.45 + 0.55
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/classic_control/mountain_car.py b/gym/envs/classic_control/mountain_car.py
index f25aeab914b..1bae60fe497 100644
--- a/gym/envs/classic_control/mountain_car.py
+++ b/gym/envs/classic_control/mountain_car.py
@@ -167,6 +167,14 @@ def _height(self, xs):
return np.sin(3 * xs) * 0.45 + 0.55
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/classic_control/pendulum.py b/gym/envs/classic_control/pendulum.py
index 3a1c6edac34..536d57e909d 100644
--- a/gym/envs/classic_control/pendulum.py
+++ b/gym/envs/classic_control/pendulum.py
@@ -163,6 +163,14 @@ def _get_obs(self):
return np.array([np.cos(theta), np.sin(theta), thetadot], dtype=np.float32)
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
from pygame import gfxdraw
diff --git a/gym/envs/mujoco/mujoco_env.py b/gym/envs/mujoco/mujoco_env.py
index 44ddbd72d86..62e01c3a37b 100644
--- a/gym/envs/mujoco/mujoco_env.py
+++ b/gym/envs/mujoco/mujoco_env.py
@@ -228,6 +228,14 @@ def _step_mujoco_simulation(self, ctrl, n_frames):
self.sim.step()
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
width, height = self.width, self.height
camera_name, camera_id = self.camera_name, self.camera_id
if self.render_mode in {"rgb_array", "depth_array"}:
@@ -348,6 +356,14 @@ def _step_mujoco_simulation(self, ctrl, n_frames):
mujoco.mj_rnePostConstraint(self.model, self.data)
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
if self.render_mode in {
"rgb_array",
"depth_array",
diff --git a/gym/envs/toy_text/blackjack.py b/gym/envs/toy_text/blackjack.py
index 8a767e08ec0..4bcce17a086 100644
--- a/gym/envs/toy_text/blackjack.py
+++ b/gym/envs/toy_text/blackjack.py
@@ -190,6 +190,14 @@ def reset(
return self._get_obs(), {}
def render(self):
+ if self.render_mode is None:
+ gym.logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ return
+
try:
import pygame
except ImportError:
diff --git a/gym/envs/toy_text/cliffwalking.py b/gym/envs/toy_text/cliffwalking.py
index 9b543174e29..cc3ed523668 100644
--- a/gym/envs/toy_text/cliffwalking.py
+++ b/gym/envs/toy_text/cliffwalking.py
@@ -5,7 +5,7 @@
import numpy as np
-from gym import Env, spaces
+from gym import Env, logger, spaces
from gym.envs.toy_text.utils import categorical_sample
from gym.error import DependencyNotInstalled
@@ -163,7 +163,13 @@ def reset(self, *, seed: Optional[int] = None, options: Optional[dict] = None):
return int(self.s), {"prob": 1}
def render(self):
- if self.render_mode == "ansi":
+ if self.render_mode is None:
+ logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ elif self.render_mode == "ansi":
return self._render_text()
else:
return self._render_gui(self.render_mode)
diff --git a/gym/envs/toy_text/frozen_lake.py b/gym/envs/toy_text/frozen_lake.py
index 404d2b08af3..65ea6429771 100644
--- a/gym/envs/toy_text/frozen_lake.py
+++ b/gym/envs/toy_text/frozen_lake.py
@@ -5,7 +5,7 @@
import numpy as np
-from gym import Env, spaces, utils
+from gym import Env, logger, spaces, utils
from gym.envs.toy_text.utils import categorical_sample
from gym.error import DependencyNotInstalled
@@ -267,7 +267,13 @@ def reset(
return int(self.s), {"prob": 1}
def render(self):
- if self.render_mode == "ansi":
+ if self.render_mode is None:
+ logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
+ elif self.render_mode == "ansi":
return self._render_text()
else: # self.render_mode in {"human", "rgb_array"}:
return self._render_gui(self.render_mode)
diff --git a/gym/envs/toy_text/taxi.py b/gym/envs/toy_text/taxi.py
index d9b9a817017..ac5ef188174 100644
--- a/gym/envs/toy_text/taxi.py
+++ b/gym/envs/toy_text/taxi.py
@@ -5,7 +5,7 @@
import numpy as np
-from gym import Env, spaces, utils
+from gym import Env, logger, spaces, utils
from gym.envs.toy_text.utils import categorical_sample
from gym.error import DependencyNotInstalled
@@ -278,6 +278,12 @@ def reset(
return int(self.s), {"prob": 1.0, "action_mask": self.action_mask(self.s)}
def render(self):
+ if self.render_mode is None:
+ logger.warn(
+ "You are calling render method without specifying any render mode. "
+ "You can specify the render_mode at initialization, "
+ f'e.g. gym("{self.spec.id}", render_mode="rgb_array")'
+ )
if self.render_mode == "ansi":
return self._render_text()
else: # self.render_mode in {"human", "rgb_array"}:
| Add a warning when `render` is called without specifying `render_mode`, see https://github.com/openai/gym/issues/3108 | https://api.github.com/repos/openai/gym/pulls/3112 | 2022-10-03T20:08:15Z | 2022-10-04T16:12:37Z | 2022-10-04T16:12:37Z | 2022-10-04T16:12:37Z | 3,107 | openai/gym | 5,247 |
Setup tox | diff --git a/.gitignore b/.gitignore
index ac1ef8af..75441b56 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,5 @@
__pycache__
*.pyc
.idea
+*.egg-info/
+.tox/
diff --git a/.travis.yml b/.travis.yml
index 8e5abf9e..ae71d86a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,27 +1,25 @@
-# vim ft=yaml
dist: xenial
language: python
sudo: false
-python:
- - "2.7"
- - "3.6"
- - "3.7"
+matrix:
+ include:
+ - python: "2.7"
+ env: TOXENV=ci27
+ - python: "3.6"
+ env: TOXENV=ci36
+ - python: "3.7"
+ env: TOXENV=ci37
cache:
- pip
install:
- - pip install -r requirements-dev.txt
+ - pip install tox
script:
- - if [ "${TRAVIS_PYTHON_VERSION:0:1}" = 2 ]; then export PYEXCLUDE=3; else export PYEXCLUDE=2; fi
- - flake8 --exclude="*__py${PYEXCLUDE}.py" patterns/
- - pytest --doctest-modules --ignore-glob="*__py${PYEXCLUDE}.py" patterns/
- - pytest -s -vv --cov=. --log-level=INFO tests/
- # Actually run all the scripts, contributing to coverage
- - PYTHONPATH=. ./run_all.sh
+ - tox
after_success:
- codecov
diff --git a/setup.py b/setup.py
index 07c495dc..80930a8b 100644
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,8 @@
-from setuptools import setup
+from setuptools import setup, find_packages
setup(
- name="python-patterns",
+ name="patterns",
+ packages=find_packages(),
description="A collection of design patterns and idioms in Python.",
classifiers=[
"Programming Language :: Python :: 2",
diff --git a/tox.ini b/tox.ini
new file mode 100644
index 00000000..911b7bfd
--- /dev/null
+++ b/tox.ini
@@ -0,0 +1,42 @@
+[tox]
+envlist = ci27,ci36,ci37,cov-report
+
+
+[testenv]
+setenv =
+ COVERAGE_FILE = .coverage.{envname}
+
+[testenv:ci27]
+basepython = python2.7
+deps =
+ -r requirements-dev.txt
+commands =
+ flake8 --exclude="*__py3.py" patterns/
+ pytest --doctest-modules --ignore-glob="*__py3.py" patterns/
+ pytest -s -vv --cov={envsitepackagesdir}/patterns --log-level=INFO tests/
+
+[testenv:ci36]
+basepython = python3.6
+deps =
+ -r requirements-dev.txt
+commands =
+ flake8 --exclude="*__py2.py" patterns/
+ pytest --doctest-modules --ignore-glob="*__py2.py" patterns/
+ pytest -s -vv --cov={envsitepackagesdir}/patterns --log-level=INFO tests/
+
+[testenv:ci37]
+basepython = python3.7
+deps =
+ -r requirements-dev.txt
+commands =
+ flake8 --exclude="*__py2.py" patterns/
+ pytest --doctest-modules --ignore-glob="*__py2.py" patterns/
+ pytest -s -vv --cov={envsitepackagesdir}/patterns --log-level=INFO tests/
+
+[testenv:cov-report]
+setenv =
+ COVERAGE_FILE = .coverage
+deps = coverage
+commands =
+ coverage combine
+ coverage report
| `tox` may help to:
- support several python versions
- run same commands in CI and locally
| https://api.github.com/repos/faif/python-patterns/pulls/289 | 2019-03-12T09:55:15Z | 2019-03-13T20:37:59Z | 2019-03-13T20:37:59Z | 2019-03-13T20:37:59Z | 921 | faif/python-patterns | 33,533 |
Enable mixtral 8x7b autotp | diff --git a/deepspeed/module_inject/auto_tp.py b/deepspeed/module_inject/auto_tp.py
index bf9c2d74c635..88f7086518e8 100644
--- a/deepspeed/module_inject/auto_tp.py
+++ b/deepspeed/module_inject/auto_tp.py
@@ -133,7 +133,7 @@ def is_load_module(module):
load_layers = [nn.Linear, nn.Embedding, nn.LayerNorm]
load_layer_names = [
"LPLayerNorm", "SharedEmbedding", "OPTLearnedPositionalEmbedding", "LlamaRMSNorm", "FalconLinear",
- "MistralRMSNorm", "T5LayerNorm"
+ "MistralRMSNorm", "T5LayerNorm", "MixtralRMSNorm"
]
return module.__class__ in load_layers or module._get_name() in load_layer_names
@@ -303,6 +303,9 @@ def tp_parser(model):
elif 'self_attention.dense' in layer and 'falcon' in str(
type(module)): # this is a hack to get the right linear layer for this model!
gem_list = gem_list + [layer]
+ # Mixtral-7x8b used w2*act(w1*w3) linear. need to replace w2 to linearallreduce.
+ elif 'w2' in layer and 'Mixtral' in str(type(module)):
+ gem_list = gem_list + [layer]
layer_list = []
if gem_list != []:
@@ -322,6 +325,9 @@ def _replace(self, child, name, conv_linear_layer):
return
weight_shape = child.weight.shape
mp_replace = ReplaceWithTensorSlicing(mp_group=self.mp_group)
+ # For mixtral-7x8b, need to skip MoE gate linear replace.
+ if name == "block_sparse_moe.gate":
+ return child
if name in self.all_reduce_linears:
# if conv_linear_layer [weight_shape[1], weight_shape[0] // mp_size]
# else [weight_shape[0], weight_shape[1] // mp_size]
| This PR aims to enable mixtral 8x7b (MoE model) autotp. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/5257 | 2024-03-12T02:14:30Z | 2024-03-27T18:50:14Z | 2024-03-27T18:50:14Z | 2024-03-27T18:50:14Z | 479 | microsoft/DeepSpeed | 10,247 |
untag without `object_hook` | diff --git a/CHANGES.rst b/CHANGES.rst
index 9f79995b46..27c04f7f3e 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -5,6 +5,9 @@ Unreleased
- Correct type for ``path`` argument to ``send_file``. :issue:`5230`
- Fix a typo in an error message for the ``flask run --key`` option. :pr:`5344`
+- Session data is untagged without relying on the built-in ``json.loads``
+ ``object_hook``. This allows other JSON providers that don't implement that.
+ :issue:`5381`
Version 3.0.0
diff --git a/src/flask/json/tag.py b/src/flask/json/tag.py
index 91cc4412d6..069739f264 100644
--- a/src/flask/json/tag.py
+++ b/src/flask/json/tag.py
@@ -305,10 +305,22 @@ def untag(self, value: dict[str, t.Any]) -> t.Any:
return self.tags[key].to_python(value[key])
+ def _untag_scan(self, value: t.Any) -> t.Any:
+ if isinstance(value, dict):
+ # untag each item recursively
+ value = {k: self._untag_scan(v) for k, v in value.items()}
+ # untag the dict itself
+ value = self.untag(value)
+ elif isinstance(value, list):
+ # untag each item recursively
+ value = [self._untag_scan(item) for item in value]
+
+ return value
+
def dumps(self, value: t.Any) -> str:
"""Tag the value and dump it to a compact JSON string."""
return dumps(self.tag(value), separators=(",", ":"))
def loads(self, value: str) -> t.Any:
"""Load data from a JSON string and deserialized any tagged objects."""
- return loads(value, object_hook=self.untag)
+ return self._untag_scan(loads(value))
| Load session JSON without using `object_hook`, then recursively untag the data.
fixes #5381 | https://api.github.com/repos/pallets/flask/pulls/5382 | 2024-01-15T15:50:22Z | 2024-01-15T15:52:35Z | 2024-01-15T15:52:35Z | 2024-01-30T00:05:40Z | 470 | pallets/flask | 20,909 |
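A dependency-free sketch of the same idea, for illustration — recursively untag after a plain `json.loads` instead of relying on `object_hook` (the `" t"` tag and `TAGS` table here are stand-ins, not Flask's real tag set):

```python
import json

TAGS = {" t": tuple}  # tag key -> constructor; illustrative only


def untag_scan(value):
    # Walk the decoded structure ourselves instead of asking the JSON
    # decoder to call an object_hook for every dict it builds.
    if isinstance(value, dict):
        value = {k: untag_scan(v) for k, v in value.items()}
        if len(value) == 1:
            key = next(iter(value))
            if key in TAGS:
                return TAGS[key](value[key])
        return value
    if isinstance(value, list):
        return [untag_scan(item) for item in value]
    return value


decoded = json.loads('{"point": {" t": [1, 2]}, "rows": [{" t": [3, 4]}]}')
print(untag_scan(decoded))  # {'point': (1, 2), 'rows': [(3, 4)]}
```

This is what lets a JSON provider that doesn't implement `object_hook` still be used for session data.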
Add Discord badge | diff --git a/README.md b/README.md
index 496a232a4d1..3a4dbb9d67e 100644
--- a/README.md
+++ b/README.md
@@ -14,6 +14,7 @@
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
+ <a href="https://ultralytics.com/discord"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
<br>
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
| <!--
Thank you 🙏 for your contribution to [Ultralytics](https://ultralytics.com) 🚀! Your effort in enhancing our repositories is greatly appreciated. To streamline the process and assist us in integrating your Pull Request (PR) effectively, please follow these steps:
1. **Check for Existing Contributions**: Before submitting, kindly explore existing PRs to ensure your contribution is unique and complementary.
2. **Link Related Issues**: If your PR addresses an open issue, please link it in your submission. This helps us better understand the context and impact of your contribution.
3. **Elaborate Your Changes**: Clearly articulate the purpose of your PR. Whether it's a bug fix or a new feature, a detailed description aids in a smoother integration process.
4. **Ultralytics Contributor License Agreement (CLA)**: To uphold the quality and integrity of our project, we require all contributors to sign the CLA. Please confirm your agreement by commenting below:
_I have read the CLA Document and I hereby sign the CLA_
For more detailed guidance and best practices on contributing, refer to our ✅ [Contributing Guide](https://docs.ultralytics.com/help/contributing). Your adherence to these guidelines ensures a faster and more effective review process.
--->
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Added Discord badge to README and cosmetic changes to code documentation.
### 📊 Key Changes
- Added Discord community badge to the README for easier access to Ultralytics Discord.
- Re-formatted a block of code in `export.py` for better readability.
- In `models/experimental.py` and `utils/loggers/__init__.py`, inserted blank lines for consistency with the code convention.
- Minor documentation consistency adjustment in `utils/loggers/clearml/clearml_utils.py`.
### 🎯 Purpose & Impact
- 🤝 The new Discord badge on the README.md encourages community engagement by providing a quick link to join discussions and support channels.
- 📑 The code formatting changes improve the readability of the code, making it easier to understand and maintain.
- 🧹 The added blank lines and formatting tweaks demonstrate good coding practices and uphold the project's code quality standards.
- 🚀 No direct impact on functionality or performance; primary benefit is improving developer experience and community involvement. | https://api.github.com/repos/ultralytics/yolov5/pulls/12783 | 2024-03-04T20:09:03Z | 2024-03-04T20:12:53Z | 2024-03-04T20:12:53Z | 2024-03-04T20:12:54Z | 383 | ultralytics/yolov5 | 24,930 |
paddle support stride, fix dy2st check | diff --git a/ppocr/modeling/heads/rec_robustscanner_head.py b/ppocr/modeling/heads/rec_robustscanner_head.py
index 7956059ecf..550836bd40 100644
--- a/ppocr/modeling/heads/rec_robustscanner_head.py
+++ b/ppocr/modeling/heads/rec_robustscanner_head.py
@@ -99,10 +99,11 @@ def forward(self, query, key, value, h, w, valid_ratios=None):
logits = paddle.reshape(logits, [n, c, h, w])
if valid_ratios is not None:
# cal mask of attention weight
- for i, valid_ratio in enumerate(valid_ratios):
- valid_width = min(w, int(w * valid_ratio + 0.5))
- if valid_width < w:
- logits[i, :, :, valid_width:] = float('-inf')
+ with paddle.fluid.framework._stride_in_no_check_dy2st_diff():
+ for i, valid_ratio in enumerate(valid_ratios):
+ valid_width = min(w, int(w * valid_ratio + 0.5))
+ if valid_width < w:
+ logits[i, :, :, valid_width:] = float('-inf')
# reshape to (n, c, h, w)
logits = paddle.reshape(logits, [n, c, t])
| Paddle支持stride后,需要检查动转静不一致的情况。但检查难度大,简单的检查会有误报,但这种方式获得评审会的同意。通过with可以避免误报。 | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/10498 | 2023-07-28T06:36:22Z | 2023-08-01T11:05:11Z | 2023-08-01T11:05:10Z | 2023-08-01T11:05:11Z | 298 | PaddlePaddle/PaddleOCR | 42,471 |
docs: Fix a few typos | diff --git a/src/you_get/extractors/flickr.py b/src/you_get/extractors/flickr.py
index 2535dd1cb7..79fca4ff9a 100644
--- a/src/you_get/extractors/flickr.py
+++ b/src/you_get/extractors/flickr.py
@@ -73,7 +73,7 @@ def get_api_key(page):
match = match1(page, pattern_inline_api_key)
# this happens only when the url points to a gallery page
# that contains no inline api_key(and never makes xhr api calls)
- # in fact this might be a better approch for getting a temporary api key
+ # in fact this might be a better approach for getting a temporary api key
# since there's no place for a user to add custom information that may
# misguide the regex in the homepage
if not match:
diff --git a/src/you_get/extractors/mtv81.py b/src/you_get/extractors/mtv81.py
index b92f74bc2d..ef43215959 100644
--- a/src/you_get/extractors/mtv81.py
+++ b/src/you_get/extractors/mtv81.py
@@ -28,7 +28,7 @@ def mtv81_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
#
# rtmpdump -r 'rtmpe://cp30865.edgefcs.net/ondemand/mtviestor/_!/intlod/MTVInternational/MBUS/GeoLocals/00JP/VIAMTVI/PYC/201304/7122HVAQ4/00JPVIAMTVIPYC7122HVAQ4_640x_360_1200_m30.mp4' -o "title.mp4" --swfVfy http://media.mtvnservices.com/player/prime/mediaplayerprime.1.10.8.swf
#
- # because rtmpdump is unstable,may try serveral times
+ # because rtmpdump is unstable,may try several times
#
if not info_only:
# import pdb
diff --git a/src/you_get/extractors/qingting.py b/src/you_get/extractors/qingting.py
index 9859d4be95..8dd1b14f56 100644
--- a/src/you_get/extractors/qingting.py
+++ b/src/you_get/extractors/qingting.py
@@ -10,7 +10,7 @@
class Qingting(VideoExtractor):
# every resource is described by its channel id and program id
- # so vid is tuple (chaanel_id, program_id)
+ # so vid is tuple (channel_id, program_id)
name = 'Qingting'
stream_types = [
| There are small typos in:
- src/you_get/extractors/flickr.py
- src/you_get/extractors/mtv81.py
- src/you_get/extractors/qingting.py
Fixes:
- Should read `several` rather than `serveral`.
- Should read `channel` rather than `chaanel`.
- Should read `approach` rather than `approch`.
Semi-automated pull request generated by
https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md | https://api.github.com/repos/soimort/you-get/pulls/2909 | 2021-07-30T23:34:55Z | 2021-08-15T04:41:25Z | 2021-08-15T04:41:25Z | 2021-08-15T04:41:32Z | 633 | soimort/you-get | 21,353 |
Fix #928 test_json_dumps_pretty py3 compat. | diff --git a/acme/acme/jose/interfaces_test.py b/acme/acme/jose/interfaces_test.py
index 91e6f4416a2..84dc2a1be7f 100644
--- a/acme/acme/jose/interfaces_test.py
+++ b/acme/acme/jose/interfaces_test.py
@@ -1,6 +1,8 @@
"""Tests for acme.jose.interfaces."""
import unittest
+import six
+
class JSONDeSerializableTest(unittest.TestCase):
# pylint: disable=too-many-instance-attributes
@@ -90,8 +92,9 @@ def test_json_dumps(self):
self.assertEqual('["foo1", "foo2"]', self.seq.json_dumps())
def test_json_dumps_pretty(self):
- self.assertEqual(
- self.seq.json_dumps_pretty(), '[\n "foo1", \n "foo2"\n]')
+ filler = ' ' if six.PY2 else ''
+ self.assertEqual(self.seq.json_dumps_pretty(),
+ '[\n "foo1",{0}\n "foo2"\n]'.format(filler))
def test_json_dump_default(self):
from acme.jose.interfaces import JSONDeSerializable
| https://api.github.com/repos/certbot/certbot/pulls/929 | 2015-10-08T20:32:50Z | 2015-10-09T22:38:30Z | 2015-10-09T22:38:30Z | 2016-05-06T19:21:35Z | 270 | certbot/certbot | 3,399 | |
VW MQB: Add FW for 2017 Škoda Kodiaq | diff --git a/docs/CARS.md b/docs/CARS.md
index 426db94f89b8c6..070d068cab8967 100644
--- a/docs/CARS.md
+++ b/docs/CARS.md
@@ -164,7 +164,7 @@ A supported vehicle is one that just works when you install a comma three. All s
|Škoda|Fabia 2022-23|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Fabia 2022-23">J533</a>[<sup>10</sup>](#footnotes)||
|Škoda|Kamiq 2021[<sup>7</sup>](#footnotes)|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Kamiq 2021">J533</a>[<sup>10</sup>](#footnotes)||
|Škoda|Karoq 2019-21|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Karoq 2019-21">J533</a>||
-|Škoda|Kodiaq 2018-19|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Kodiaq 2018-19">J533</a>||
+|Škoda|Kodiaq 2017-23|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Kodiaq 2017-23">J533</a>||
|Škoda|Octavia 2015, 2018-19|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Octavia 2015, 2018-19">J533</a>||
|Škoda|Octavia RS 2016|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Octavia RS 2016">J533</a>||
|Škoda|Scala 2020|Adaptive Cruise Control (ACC) & Lane Assist|openpilot available[<sup>1,9</sup>](#footnotes)|0 mph|0 mph|[](##)|[](##)|<a href="https://comma.ai/shop/comma-three.html?make=Škoda&model=Scala 2020">J533</a>[<sup>10</sup>](#footnotes)||
diff --git a/selfdrive/car/volkswagen/values.py b/selfdrive/car/volkswagen/values.py
index c5868e16f3ecbd..8e129239831616 100755
--- a/selfdrive/car/volkswagen/values.py
+++ b/selfdrive/car/volkswagen/values.py
@@ -243,7 +243,7 @@ def init_make(self, CP: car.CarParams):
CAR.SKODA_FABIA_MK4: VWCarInfo("Škoda Fabia 2022-23", footnotes=[Footnote.VW_MQB_A0]),
CAR.SKODA_KAMIQ_MK1: VWCarInfo("Škoda Kamiq 2021", footnotes=[Footnote.VW_MQB_A0, Footnote.KAMIQ]),
CAR.SKODA_KAROQ_MK1: VWCarInfo("Škoda Karoq 2019-21"),
- CAR.SKODA_KODIAQ_MK1: VWCarInfo("Škoda Kodiaq 2018-19"),
+ CAR.SKODA_KODIAQ_MK1: VWCarInfo("Škoda Kodiaq 2017-23"),
CAR.SKODA_SCALA_MK1: VWCarInfo("Škoda Scala 2020", footnotes=[Footnote.VW_MQB_A0]),
CAR.SKODA_SUPERB_MK3: VWCarInfo("Škoda Superb 2015-22"),
CAR.SKODA_OCTAVIA_MK3: [
@@ -1071,17 +1071,20 @@ def init_make(self, CP: car.CarParams):
(Ecu.engine, 0x7e0, None): [
b'\xf1\x8704E906027DD\xf1\x893123',
b'\xf1\x8704L906026DE\xf1\x895418',
+ b'\xf1\x8704L906026EJ\xf1\x893661',
b'\xf1\x8704L906026HT\xf1\x893617',
b'\xf1\x875NA907115E \xf1\x890003',
b'\xf1\x875NA907115E \xf1\x890005',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x870D9300043 \xf1\x895202',
+ b'\xf1\x870DL300011N \xf1\x892014',
b'\xf1\x870DL300012M \xf1\x892107',
b'\xf1\x870DL300012N \xf1\x892110',
b'\xf1\x870DL300013G \xf1\x892119',
],
(Ecu.srs, 0x715, None): [
+ b'\xf1\x873Q0959655AP\xf1\x890306\xf1\x82\r11110011110011421111314211',
b'\xf1\x873Q0959655BJ\xf1\x890703\xf1\x82\x0e1213001211001205212111052100',
b'\xf1\x873Q0959655BK\xf1\x890703\xf1\x82\x0e1213001211001244212111442100',
b'\xf1\x873Q0959655CN\xf1\x890720\xf1\x82\x0e1213001211001205212112052100',
@@ -1096,6 +1099,7 @@ def init_make(self, CP: car.CarParams):
(Ecu.fwdRadar, 0x757, None): [
b'\xf1\x872Q0907572Q \xf1\x890342',
b'\xf1\x872Q0907572R \xf1\x890372',
+ b'\xf1\x872Q0907572T \xf1\x890383',
b'\xf1\x872Q0907572AA\xf1\x890396',
],
},
| Add missing firmware for the 2017 Škoda Kodiaq. Expand supported model-year range back to 2017 to include this vehicle, and forward to 2023 since those are supportable as well.
**Route:** `89b596c5edcb6dba|2023-03-08--18-44-07`
Thanks to community Kodiaq owner strom! | https://api.github.com/repos/commaai/openpilot/pulls/27532 | 2023-03-08T18:34:48Z | 2023-03-08T21:06:18Z | 2023-03-08T21:06:18Z | 2024-03-01T23:01:01Z | 1,878 | commaai/openpilot | 8,910 |
🌐 Add Japanese translation for `docs/ja/docs/advanced/websockets.md` | diff --git a/docs/ja/docs/advanced/websockets.md b/docs/ja/docs/advanced/websockets.md
new file mode 100644
index 0000000000000..65e4112a6b29c
--- /dev/null
+++ b/docs/ja/docs/advanced/websockets.md
@@ -0,0 +1,186 @@
+# WebSocket
+
+**FastAPI**で<a href="https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API" class="external-link" target="_blank">WebSocket</a>が使用できます。
+
+## `WebSockets`のインストール
+
+まず `WebSockets`のインストールが必要です。
+
+<div class="termy">
+
+```console
+$ pip install websockets
+
+---> 100%
+```
+
+</div>
+
+## WebSocket クライアント
+
+### 本番環境
+
+本番環境では、React、Vue.js、Angularなどの最新のフレームワークで作成されたフロントエンドを使用しているでしょう。
+
+そして、バックエンドとWebSocketを使用して通信するために、おそらくフロントエンドのユーティリティを使用することになるでしょう。
+
+または、ネイティブコードでWebSocketバックエンドと直接通信するネイティブモバイルアプリケーションがあるかもしれません。
+
+他にも、WebSocketのエンドポイントと通信する方法があるかもしれません。
+
+---
+
+ただし、この例では非常にシンプルなHTML文書といくつかのJavaScriptを、すべてソースコードの中に入れて使用することにします。
+
+もちろん、これは最適な方法ではありませんし、本番環境で使うことはないでしょう。
+
+本番環境では、上記の方法のいずれかの選択肢を採用することになるでしょう。
+
+しかし、これはWebSocketのサーバーサイドに焦点を当て、実用的な例を示す最も簡単な方法です。
+
+```Python hl_lines="2 6-38 41-43"
+{!../../../docs_src/websockets/tutorial001.py!}
+```
+
+## `websocket` を作成する
+
+**FastAPI** アプリケーションで、`websocket` を作成します。
+
+```Python hl_lines="1 46-47"
+{!../../../docs_src/websockets/tutorial001.py!}
+```
+
+!!! note "技術詳細"
+ `from starlette.websockets import WebSocket` を使用しても構いません.
+
+ **FastAPI** は開発者の利便性のために、同じ `WebSocket` を提供します。しかし、こちらはStarletteから直接提供されるものです。
+
+## メッセージの送受信
+
+WebSocketルートでは、 `await` を使ってメッセージの送受信ができます。
+
+```Python hl_lines="48-52"
+{!../../../docs_src/websockets/tutorial001.py!}
+```
+
+バイナリやテキストデータ、JSONデータを送受信できます。
+
+## 試してみる
+
+ファイル名が `main.py` である場合、以下の方法でアプリケーションを実行します。
+
+<div class="termy">
+
+```console
+$ uvicorn main:app --reload
+
+<span style="color: green;">INFO</span>: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
+```
+
+</div>
+
+ブラウザで <a href="http://127.0.0.1:8000" class="external-link" target="_blank">http://127.0.0.1:8000</a> を開きます。
+
+次のようなシンプルなページが表示されます。
+
+<img src="/img/tutorial/websockets/image01.png">
+
+入力ボックスにメッセージを入力して送信できます。
+
+<img src="/img/tutorial/websockets/image02.png">
+
+そして、 WebSocketを使用した**FastAPI**アプリケーションが応答します。
+
+<img src="/img/tutorial/websockets/image03.png">
+
+複数のメッセージを送信(および受信)できます。
+
+<img src="/img/tutorial/websockets/image04.png">
+
+そして、これらの通信はすべて同じWebSocket接続を使用します。
+
+## 依存関係
+
+WebSocketエンドポイントでは、`fastapi` から以下をインポートして使用できます。
+
+* `Depends`
+* `Security`
+* `Cookie`
+* `Header`
+* `Path`
+* `Query`
+
+これらは、他のFastAPI エンドポイント/*path operation* の場合と同じように機能します。
+
+```Python hl_lines="58-65 68-83"
+{!../../../docs_src/websockets/tutorial002.py!}
+```
+
+!!! info "情報"
+ WebSocket で `HTTPException` を発生させることはあまり意味がありません。したがって、WebSocketの接続を直接閉じる方がよいでしょう。
+
+ クロージングコードは、<a href="https://tools.ietf.org/html/rfc6455#section-7.4.1" class="external-link" target="_blank">仕様で定義された有効なコード</a>の中から使用することができます。
+
+ 将来的には、どこからでも `raise` できる `WebSocketException` が用意され、専用の例外ハンドラを追加できるようになる予定です。これは、Starlette の <a href="https://github.com/encode/starlette/pull/527" class="external-link" target="_blank">PR #527</a> に依存するものです。
+
+### 依存関係を用いてWebSocketsを試してみる
+
+ファイル名が `main.py` である場合、以下の方法でアプリケーションを実行します。
+
+<div class="termy">
+
+```console
+$ uvicorn main:app --reload
+
+<span style="color: green;">INFO</span>: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
+```
+
+</div>
+
+ブラウザで <a href="http://127.0.0.1:8000" class="external-link" target="_blank">http://127.0.0.1:8000</a> を開きます。
+
+クライアントが設定できる項目は以下の通りです。
+
+* パスで使用される「Item ID」
+* クエリパラメータとして使用される「Token」
+
+!!! tip "豆知識"
+ クエリ `token` は依存パッケージによって処理されることに注意してください。
+
+これにより、WebSocketに接続してメッセージを送受信できます。
+
+<img src="/img/tutorial/websockets/image05.png">
+
+## 切断や複数クライアントへの対応
+
+WebSocket接続が閉じられると、 `await websocket.receive_text()` は例外 `WebSocketDisconnect` を発生させ、この例のようにキャッチして処理することができます。
+
+```Python hl_lines="81-83"
+{!../../../docs_src/websockets/tutorial003.py!}
+```
+
+試してみるには、
+
+* いくつかのブラウザタブでアプリを開きます。
+* それらのタブでメッセージを記入してください。
+* そして、タブのうち1つを閉じてください。
+
+これにより例外 `WebSocketDisconnect` が発生し、他のすべてのクライアントは次のようなメッセージを受信します。
+
+```
+Client #1596980209979 left the chat
+```
+
+!!! tip "豆知識"
+ 上記のアプリは、複数の WebSocket 接続に対してメッセージを処理し、ブロードキャストする方法を示すための最小限のシンプルな例です。
+
+ しかし、すべての接続がメモリ内の単一のリストで処理されるため、プロセスの実行中にのみ機能し、単一のプロセスでのみ機能することに注意してください。
+
+ もしFastAPIと簡単に統合できて、RedisやPostgreSQLなどでサポートされている、より堅牢なものが必要なら、<a href="https://github.com/encode/broadcaster" class="external-link" target="_blank">encode/broadcaster</a> を確認してください。
+
+## その他のドキュメント
+
+オプションの詳細については、Starletteのドキュメントを確認してください。
+
+* <a href="https://www.starlette.io/websockets/" class="external-link" target="_blank"> `WebSocket` クラス</a>
+* <a href="https://www.starlette.io/endpoints/#websocketendpoint" class="external-link" target="_blank">クラスベースのWebSocket処理</a>
diff --git a/docs/ja/mkdocs.yml b/docs/ja/mkdocs.yml
index b3f18bbdd305f..5bbcce605951a 100644
--- a/docs/ja/mkdocs.yml
+++ b/docs/ja/mkdocs.yml
@@ -86,6 +86,7 @@ nav:
- advanced/response-directly.md
- advanced/custom-response.md
- advanced/nosql-databases.md
+ - advanced/websockets.md
- advanced/conditional-openapi.md
- async.md
- デプロイ:
| Relates to #1572
This PR translates advanced/websockets.md.
I am Japanese and I like this repository, so I translated it.
This is my first pull request for fastapi, so I apologize if there are any problems. I would appreciate it if you could check it. | https://api.github.com/repos/tiangolo/fastapi/pulls/4983 | 2022-06-03T16:13:46Z | 2022-11-13T13:58:31Z | 2022-11-13T13:58:31Z | 2022-11-13T17:58:22Z | 2,194 | tiangolo/fastapi | 22,872 |
fix typo: "Python'd" -> "Python's" | diff --git a/docs/contributing.rst b/docs/contributing.rst
index 7ddbdcf24e8..1398e818c75 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -61,7 +61,7 @@ The following tools are there to help you:
- For debugging, we recommend ``pip install ipdb`` and putting
``import ipdb; ipdb.set_trace()`` statement inside the source
- code. Alternatively, you can use Python'd standard library `pdb`,
+ code. Alternatively, you can use Python's standard library `pdb`,
but you won't get TAB completion...
| https://api.github.com/repos/certbot/certbot/pulls/768 | 2015-09-13T06:48:06Z | 2015-09-13T21:15:12Z | 2015-09-13T21:15:12Z | 2016-05-06T19:21:23Z | 151 | certbot/certbot | 903 | |
[ie/mixch] Fix extractor | diff --git a/yt_dlp/extractor/mixch.py b/yt_dlp/extractor/mixch.py
index 82a7c325724..b980fd01a82 100644
--- a/yt_dlp/extractor/mixch.py
+++ b/yt_dlp/extractor/mixch.py
@@ -1,6 +1,6 @@
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
-from ..utils import ExtractorError, UserNotLive, url_or_none
+from ..utils import ExtractorError, UserNotLive, int_or_none, url_or_none
from ..utils.traversal import traverse_obj
@@ -27,25 +27,23 @@ class MixchIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
- webpage = self._download_webpage(f'https://mixch.tv/u/{video_id}/live', video_id)
-
- initial_js_state = self._parse_json(self._search_regex(
- r'(?m)^\s*window\.__INITIAL_JS_STATE__\s*=\s*(\{.+?\});\s*$', webpage, 'initial JS state'), video_id)
- if not initial_js_state.get('liveInfo'):
+ data = self._download_json(f'https://mixch.tv/api-web/users/{video_id}/live', video_id)
+ if not traverse_obj(data, ('liveInfo', {dict})):
raise UserNotLive(video_id=video_id)
return {
'id': video_id,
- 'title': traverse_obj(initial_js_state, ('liveInfo', 'title')),
- 'comment_count': traverse_obj(initial_js_state, ('liveInfo', 'comments')),
- 'view_count': traverse_obj(initial_js_state, ('liveInfo', 'visitor')),
- 'timestamp': traverse_obj(initial_js_state, ('liveInfo', 'created')),
- 'uploader': traverse_obj(initial_js_state, ('broadcasterInfo', 'name')),
'uploader_id': video_id,
+ **traverse_obj(data, {
+ 'title': ('liveInfo', 'title', {str}),
+ 'comment_count': ('liveInfo', 'comments', {int_or_none}),
+ 'view_count': ('liveInfo', 'visitor', {int_or_none}),
+ 'timestamp': ('liveInfo', 'created', {int_or_none}),
+ 'uploader': ('broadcasterInfo', 'name', {str}),
+ }),
'formats': [{
'format_id': 'hls',
- 'url': (traverse_obj(initial_js_state, ('liveInfo', 'hls'))
- or f'https://d1hd0ww6piyb43.cloudfront.net/hls/torte_{video_id}.m3u8'),
+ 'url': data['liveInfo']['hls'],
'ext': 'mp4',
'protocol': 'm3u8',
}],
| Thanks @nipotan for the API endpoint knowledge
Closes #9536
<details open><summary>Template</summary> <!-- OPEN is intentional -->
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
### What is the purpose of your *pull request*?
- [x] Fix or improvement to an extractor (Make sure to add/update tests)
</details>
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/9608 | 2024-04-03T20:36:37Z | 2024-04-03T22:53:42Z | 2024-04-03T22:53:42Z | 2024-04-03T22:53:43Z | 645 | yt-dlp/yt-dlp | 8,034 |
Add regression/system test for #4719 | diff --git a/tests/boulder-integration.sh b/tests/boulder-integration.sh
index d86a6fb8c78..5c00be0542a 100755
--- a/tests/boulder-integration.sh
+++ b/tests/boulder-integration.sh
@@ -80,6 +80,20 @@ CheckHooks() {
rm "$HOOK_TEST"
}
+# test for regressions of #4719
+get_num_tmp_files() {
+ ls -1 /tmp | wc -l
+}
+num_tmp_files=$(get_num_tmp_files)
+common --csr / && echo expected error && exit 1 || true
+common --help
+common --help all
+common --version
+if [ $(get_num_tmp_files) -ne $num_tmp_files ]; then
+ echo "New files or directories created in /tmp!"
+ exit 1
+fi
+
# We start a server listening on the port for the
# unrequested challenge to prevent regressions in #3601.
python ./tests/run_http_server.py $http_01_port &
| https://api.github.com/repos/certbot/certbot/pulls/4739 | 2017-05-25T19:26:19Z | 2017-06-01T16:57:27Z | 2017-06-01T16:57:27Z | 2017-06-01T16:57:30Z | 232 | certbot/certbot | 744 | |
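The snapshot-and-compare guard in the shell test above (count temp entries, run commands, compare counts) ports directly to Python. This sketch targets a freshly created sandbox directory instead of the shared `/tmp`, so concurrent processes cannot perturb the count:

```python
import os
import tempfile

# Sandbox directory standing in for /tmp, to keep the check deterministic.
sandbox = tempfile.mkdtemp()

def entry_count(path):
    return len(os.listdir(path))

before = entry_count(sandbox)
_ = 2 + 2  # placeholder for the operation that must not leave files behind
after = entry_count(sandbox)
assert after == before, "new files or directories created in the sandbox"
print("sandbox clean")
```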
Fix MPS on PyTorch 2.0.1, Intel Macs | diff --git a/modules/mac_specific.py b/modules/mac_specific.py
index 6fe8dea0726..40ce2101764 100644
--- a/modules/mac_specific.py
+++ b/modules/mac_specific.py
@@ -54,6 +54,11 @@ def cumsum_fix(input, cumsum_func, *args, **kwargs):
CondFunc('torch.cumsum', cumsum_fix_func, None)
CondFunc('torch.Tensor.cumsum', cumsum_fix_func, None)
CondFunc('torch.narrow', lambda orig_func, *args, **kwargs: orig_func(*args, **kwargs).clone(), None)
- if version.parse(torch.__version__) == version.parse("2.0"):
+
# MPS workaround for https://github.com/pytorch/pytorch/issues/96113
- CondFunc('torch.nn.functional.layer_norm', lambda orig_func, x, normalized_shape, weight, bias, eps, **kwargs: orig_func(x.float(), normalized_shape, weight.float() if weight is not None else None, bias.float() if bias is not None else bias, eps).to(x.dtype), lambda *args, **kwargs: len(args) == 6)
+ CondFunc('torch.nn.functional.layer_norm', lambda orig_func, x, normalized_shape, weight, bias, eps, **kwargs: orig_func(x.float(), normalized_shape, weight.float() if weight is not None else None, bias.float() if bias is not None else bias, eps).to(x.dtype), lambda _, input, *args, **kwargs: len(args) == 4 and input.device.type == 'mps')
+
+ # MPS workaround for https://github.com/pytorch/pytorch/issues/92311
+ if platform.processor() == 'i386':
+ for funcName in ['torch.argmax', 'torch.Tensor.argmax']:
+ CondFunc(funcName, lambda _, input, *args, **kwargs: torch.max(input.float() if input.dtype == torch.int64 else input, *args, **kwargs)[1], lambda _, input, *args, **kwargs: input.device.type == 'mps')
\ No newline at end of file
diff --git a/modules/sd_hijack_optimizations.py b/modules/sd_hijack_optimizations.py
index 372555ffaf4..f10865cd1e7 100644
--- a/modules/sd_hijack_optimizations.py
+++ b/modules/sd_hijack_optimizations.py
@@ -256,6 +256,9 @@ def sub_quad_attention_forward(self, x, context=None, mask=None):
k = k.unflatten(-1, (h, -1)).transpose(1,2).flatten(end_dim=1)
v = v.unflatten(-1, (h, -1)).transpose(1,2).flatten(end_dim=1)
+ if q.device.type == 'mps':
+ q, k, v = q.contiguous(), k.contiguous(), v.contiguous()
+
dtype = q.dtype
if shared.opts.upcast_attn:
q, k = q.float(), k.float()
| **Describe what this pull request is trying to achieve.**
- Fix NaNs occurring in sub-quadratic attention by making q, k, and v contiguous when using MPS
- Fix crash that occurs with MPS on PyTorch 2.0.1 due to LayerNorm still not accepting float16 inputs
- Fix generation failing on Intel Macs with k-diffusion and UniPC samplers
**Environment this was tested in**
- OS: macOS 13.3.1
- Browser: Safari
- Graphics card: M1 Max 64 GB, Radeon Pro Vega 20 (4 GB)
Fixes #8555. | https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/10201 | 2023-05-08T21:53:37Z | 2023-05-09T07:28:24Z | 2023-05-09T07:28:24Z | 2023-05-17T23:58:34Z | 658 | AUTOMATIC1111/stable-diffusion-webui | 40,277 |
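The `CondFunc(...)` calls in the diff register a replacement that only runs when a predicate on the call arguments holds (for example, `input.device.type == 'mps'`). A minimal stand-in for that dispatch pattern (hypothetical; the webui's real `CondFunc` also handles patching by attribute path):

```python
def cond_func(orig, replacement, predicate):
    """Return a wrapper that calls `replacement` only when
    `predicate(orig, *args, **kwargs)` is true, else the original."""
    def wrapper(*args, **kwargs):
        if predicate(orig, *args, **kwargs):
            return replacement(orig, *args, **kwargs)
        return orig(*args, **kwargs)
    return wrapper

# Example: upcast the input only for a pretend "mps" device, mirroring
# the float16 layer_norm workaround above.
def norm(x, device):
    return f"norm({x}, {device})"

patched = cond_func(
    norm,
    lambda orig, x, device: orig(float(x), device),
    lambda orig, x, device: device == "mps",
)
print(patched(1, "mps"))  # norm(1.0, mps)
print(patched(1, "cpu"))  # norm(1, cpu)
```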
use version from package init also for sphinx docs, insert toplevel dir ... | diff --git a/docs/conf.py b/docs/conf.py
index fbcd610650b..018d2afedf0 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -12,13 +12,22 @@
# All configuration values have a default; values that are commented out
# serve to show the default.
-import sys
+import codecs
import os
+import re
+import sys
+
+here = os.path.abspath(os.path.dirname(__file__))
+
+# read version number (and other metadata) from package init
+init_fn = os.path.join(here, '..', 'letsencrypt', '__init__.py')
+with codecs.open(init_fn, encoding='utf8') as fd:
+ meta = dict(re.findall(r"""__([a-z]+)__ = "([^"]+)""", fd.read()))
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.insert(0, os.path.abspath('.'))
+sys.path.insert(0, os.path.abspath(os.path.join(here, '..')))
# -- General configuration ------------------------------------------------
@@ -58,9 +67,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.1'
+version = '.'.join(meta['version'].split('.')[:2])
# The full version, including alpha/beta/rc tags.
-release = '0.1'
+release = meta['version']
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/letsencrypt/__init__.py b/letsencrypt/__init__.py
index 9fe93c4db03..b36747b5fe0 100644
--- a/letsencrypt/__init__.py
+++ b/letsencrypt/__init__.py
@@ -1,2 +1,3 @@
"""Let's Encrypt."""
+# version number like 1.2.3a0, must have at least 2 parts, like 1.2
__version__ = "0.1"
| ...into sys.path
| https://api.github.com/repos/certbot/certbot/pulls/218 | 2015-01-31T05:03:15Z | 2015-02-02T23:13:31Z | 2015-02-02T23:13:31Z | 2016-05-06T19:21:50Z | 476 | certbot/certbot | 647 |
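The core trick in the `conf.py` change, reading `__version__` (and any other dunder metadata) out of the package `__init__.py` with a single regex, works standalone:

```python
import re

# Stand-in for the contents of letsencrypt/__init__.py
init_source = '''"""Let's Encrypt."""
__version__ = "0.1"
'''

meta = dict(re.findall(r'__([a-z]+)__ = "([^"]+)"', init_source))
release = meta["version"]                   # full version, e.g. "0.1"
version = ".".join(release.split(".")[:2])  # short X.Y version for Sphinx
print(version, release)  # 0.1 0.1
```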
Apply cpython patch bpo-39492 for the reference counting issue in pickle5 | diff --git a/build.sh b/build.sh
index 19f95b7de3ee5..4a6800741ddc9 100755
--- a/build.sh
+++ b/build.sh
@@ -130,7 +130,7 @@ WORK_DIR=`mktemp -d`
pushd $WORK_DIR
git clone https://github.com/suquark/pickle5-backport
pushd pickle5-backport
- git checkout 43551fbb9add8ac2e8551b96fdaf2fe5a3b5997d
+ git checkout 8ffe41ceba9d5e2ce8a98190f6b3d2f3325e5a72
"$PYTHON_EXECUTABLE" setup.py bdist_wheel
unzip -o dist/*.whl -d "$ROOT_DIR/python/ray/pickle5_files"
popd
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py
index f477092f52326..f800fc50295c8 100644
--- a/python/ray/tests/test_basic.py
+++ b/python/ray/tests/test_basic.py
@@ -10,6 +10,7 @@
import threading
import time
import pickle
+import weakref
import numpy as np
import pytest
@@ -453,6 +454,27 @@ class ClassA:
ray.put(obj)
+def test_reducer_override_no_reference_cycle(ray_start_regular):
+ # bpo-39492: reducer_override used to induce a spurious reference cycle
+ # inside the Pickler object, that could prevent all serialized objects
+ # from being garbage-collected without explicity invoking gc.collect.
+ f = lambda: 4669201609102990671853203821578
+
+ wr = weakref.ref(f)
+
+ bio = io.BytesIO()
+ from ray.cloudpickle import CloudPickler, loads
+ p = CloudPickler(bio, protocol=5)
+ p.dump(f)
+ new_f = loads(bio.getvalue())
+ assert new_f() == 4669201609102990671853203821578
+
+ del p
+ del f
+
+ assert wr() is None
+
+
def test_passing_arguments_by_value_out_of_the_box(ray_start_regular):
@ray.remote
def f(x):
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
## Why are these changes needed?
This should fix https://github.com/cloudpipe/cloudpickle/issues/343 for Python 3.5, 3.6, and 3.7.
It doesn't fix Python 3.8, but there will be an official fix in Python 3.8.2.
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://ray.readthedocs.io/en/latest/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failure rates at https://ray-travis-tracker.herokuapp.com/.
| https://api.github.com/repos/ray-project/ray/pulls/7177 | 2020-02-15T02:07:16Z | 2020-02-16T05:16:14Z | 2020-02-16T05:16:14Z | 2020-02-16T05:16:14Z | 517 | ray-project/ray | 19,622 |
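The new regression test holds a `weakref` to the serialized object and asserts it dies once the object is dropped, proving the pickler kept no hidden reference. The same probe, reduced to the stdlib pickler on a plain class (not cloudpickle or a lambda, since stdlib pickle cannot serialize lambdas):

```python
import gc
import pickle
import weakref


class Payload:
    def __init__(self, value):
        self.value = value


obj = Payload(4669201609102990671853203821578)
wr = weakref.ref(obj)

data = pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL)
assert pickle.loads(data).value == obj.value

del obj
gc.collect()  # break any lingering cycles, as the bpo-39492 test does
assert wr() is None  # the pickler held no extra reference
print("no leaked reference")
```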
Minor refactoring of EventBridge utils, fix location of EVENTS_TMP_DIR | diff --git a/localstack/services/awslambda/lambda_api.py b/localstack/services/awslambda/lambda_api.py
index e54b9a703ac44..cffedb7ad748a 100644
--- a/localstack/services/awslambda/lambda_api.py
+++ b/localstack/services/awslambda/lambda_api.py
@@ -1083,9 +1083,7 @@ def generic_handler(*_):
if not is_local_mount:
# Lambda code must be uploaded in Zip format
if not is_zip_file(zip_file_content):
- raise ClientError(
- "Uploaded Lambda code for runtime ({}) is not in Zip format".format(runtime)
- )
+ raise ClientError(f"Uploaded Lambda code for runtime ({runtime}) is not in Zip format")
# Unzip the Lambda archive contents
unzip(tmp_file, lambda_cwd)
diff --git a/localstack/services/events/events_listener.py b/localstack/services/events/events_listener.py
index 8cb27dfcc41cb..6221de79d048d 100644
--- a/localstack/services/events/events_listener.py
+++ b/localstack/services/events/events_listener.py
@@ -3,6 +3,7 @@
import os
import re
import time
+from typing import Dict, Set
from localstack import config
from localstack.constants import ENV_INTERNAL_TEST_RUN, MOTO_ACCOUNT_ID, TEST_AWS_ACCOUNT_ID
@@ -30,10 +31,13 @@
class EventsBackend(RegionBackend):
+ # maps event bus name to set of event rules - TODO: check if still required, or available upstream?
+ event_rules: Dict[str, Set]
+ # maps rule name to job_id
+ rule_scheduled_jobs: Dict[str, str]
+
def __init__(self):
- # maps event bus name to set of event rules - TODO: check if still required, or available upstream?
self.event_rules = {DEFAULT_EVENT_BUS_NAME: set()}
- # maps rule to job_id
self.rule_scheduled_jobs = {}
@@ -60,10 +64,8 @@ def _create_and_register_temp_dir():
def _dump_events_to_files(events_with_added_uuid):
current_time_millis = int(round(time.time() * 1000))
for event in events_with_added_uuid:
- save_file(
- os.path.join(EVENTS_TMP_DIR, "%s_%s" % (current_time_millis, event["uuid"])),
- json.dumps(event["event"]),
- )
+ target = os.path.join(_get_events_tmp_dir(), "%s_%s" % (current_time_millis, event["uuid"]))
+ save_file(target, json.dumps(event["event"]))
def _get_events_tmp_dir():
@@ -77,8 +79,9 @@ def func(*args, **kwargs):
targets = client.list_targets_by_rule(Rule=rule_name)["Targets"]
if targets:
LOG.debug(
- "Notifying %s targets in response to triggered Events rule %s"
- % (len(targets), rule_name)
+ "Notifying %s targets in response to triggered Events rule %s",
+ len(targets),
+ rule_name,
)
for target in targets:
arn = target.get("Arn")
@@ -115,7 +118,7 @@ def convert_schedule_to_cron(schedule):
return schedule
-def handle_put_rule(data):
+def handle_put_rule(data: Dict):
schedule = data.get("ScheduleExpression")
enabled = data.get("State") != "DISABLED"
@@ -131,7 +134,7 @@ def handle_put_rule(data):
return True
-def handle_delete_rule(rule_name):
+def handle_delete_rule(rule_name: str):
rule_scheduled_jobs = EventsBackend.get().rule_scheduled_jobs
job_id = rule_scheduled_jobs.get(rule_name)
if job_id:
@@ -139,7 +142,7 @@ def handle_delete_rule(rule_name):
JobScheduler.instance().cancel_job(job_id=job_id)
-def handle_disable_rule(rule_name):
+def handle_disable_rule(rule_name: str):
rule_scheduled_jobs = EventsBackend.get().rule_scheduled_jobs
job_id = rule_scheduled_jobs.get(rule_name)
if job_id:
diff --git a/localstack/services/logs/logs_listener.py b/localstack/services/logs/logs_listener.py
index 3856694a5814d..42288dd7cf54e 100644
--- a/localstack/services/logs/logs_listener.py
+++ b/localstack/services/logs/logs_listener.py
@@ -58,9 +58,8 @@ def _fix_next_token_response(response):
def publish_log_metrics_for_events(data):
"""Filter and publish log metrics for matching events"""
- from moto.logs.models import ( # TODO: create separate RegionBackend class to store state
- logs_backends,
- )
+ # TODO: create separate RegionBackend class to store state
+ from moto.logs.models import logs_backends
data = data if isinstance(data, dict) else json.loads(data)
log_events = data.get("logEvents") or []
diff --git a/localstack/utils/testutil.py b/localstack/utils/testutil.py
index 8ff69114b2d73..f5ec4f5ad8577 100644
--- a/localstack/utils/testutil.py
+++ b/localstack/utils/testutil.py
@@ -444,7 +444,7 @@ def list_all_s3_objects():
def delete_all_s3_objects(buckets):
s3_client = aws_stack.connect_to_service("s3")
- buckets = buckets if isinstance(buckets, list) else [buckets]
+ buckets = ensure_list(buckets)
for bucket in buckets:
keys = all_s3_object_keys(bucket)
deletes = [{"Key": key} for key in keys]
@@ -576,19 +576,22 @@ def check_expected_lambda_log_events_length(expected_length, function_name, rege
return events
+def list_all_log_events(log_group_name: str) -> List[Dict]:
+ logs = aws_stack.connect_to_service("logs")
+ return list_all_resources(
+ lambda kwargs: logs.filter_log_events(logGroupName=log_group_name, **kwargs),
+ last_token_attr_name="nextToken",
+ list_attr_name="events",
+ )
+
+
def get_lambda_log_events(
function_name, delay_time=DEFAULT_GET_LOG_EVENTS_DELAY, regex_filter: Optional[str] = None
):
- def get_log_events(function_name, delay_time):
- time.sleep(delay_time)
-
- logs = aws_stack.connect_to_service("logs")
- log_group_name = get_lambda_log_group_name(function_name)
- return list_all_resources(
- lambda kwargs: logs.filter_log_events(logGroupName=log_group_name, **kwargs),
- last_token_attr_name="nextToken",
- list_attr_name="events",
- )
+ def get_log_events(func_name, delay):
+ time.sleep(delay)
+ log_group_name = get_lambda_log_group_name(func_name)
+ return list_all_log_events(log_group_name)
try:
events = get_log_events(function_name, delay_time)
diff --git a/tests/integration/test_events.py b/tests/integration/test_events.py
index 0f998ca084efb..1e92625bb7140 100644
--- a/tests/integration/test_events.py
+++ b/tests/integration/test_events.py
@@ -7,7 +7,7 @@
from localstack import config
from localstack.services.awslambda.lambda_utils import LAMBDA_RUNTIME_PYTHON36
-from localstack.services.events.events_listener import EVENTS_TMP_DIR
+from localstack.services.events.events_listener import _get_events_tmp_dir
from localstack.services.generic_proxy import ProxyListener
from localstack.services.infra import start_proxy
from localstack.utils import testutil
@@ -88,13 +88,14 @@ def test_events_written_to_disk_are_timestamp_prefixed_for_chronological_orderin
]
)
+ events_tmp_dir = _get_events_tmp_dir()
sorted_events_written_to_disk = map(
- lambda filename: json.loads(str(load_file(os.path.join(EVENTS_TMP_DIR, filename)))),
- sorted(os.listdir(EVENTS_TMP_DIR)),
+ lambda filename: json.loads(str(load_file(os.path.join(events_tmp_dir, filename)))),
+ sorted(os.listdir(events_tmp_dir)),
)
sorted_events = list(
filter(
- lambda event: event["DetailType"] == event_type,
+ lambda event: event.get("DetailType") == event_type,
sorted_events_written_to_disk,
)
)
| * minor refactoring of EventBridge utils
* pull out `list_all_log_events()` util to make it reusable from other places
* fix location of `EVENTS_TMP_DIR` (was previously writing to CWD, instead of /tmp) | https://api.github.com/repos/localstack/localstack/pulls/5070 | 2021-12-03T21:53:32Z | 2021-12-03T22:10:32Z | 2021-12-03T22:10:32Z | 2021-12-03T22:32:44Z | 1,847 | localstack/localstack | 28,886 |
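`list_all_log_events` above delegates to `list_all_resources`, a token-driven pagination loop. An assumed sketch of that helper's shape (the real localstack implementation may differ in details), exercised against a fake two-page API:

```python
def list_all_resources(page_function, last_token_attr_name, list_attr_name):
    """Collect items from a paginated API until no next token is returned."""
    items, kwargs = [], {}
    while True:
        page = page_function(kwargs)
        items.extend(page.get(list_attr_name, []))
        token = page.get(last_token_attr_name)
        if not token:
            return items
        kwargs = {last_token_attr_name: token}

# Fake two-page API standing in for logs.filter_log_events
def fake_filter_log_events(kwargs):
    if "nextToken" not in kwargs:
        return {"events": [1, 2], "nextToken": "page2"}
    return {"events": [3]}

print(list_all_resources(fake_filter_log_events, "nextToken", "events"))  # [1, 2, 3]
```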
Fix broken links | diff --git a/README-ja.md b/README-ja.md
index f9a0024244..62d627b1e5 100644
--- a/README-ja.md
+++ b/README-ja.md
@@ -751,7 +751,7 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
### その他の参考資料、ページ
* [スケールするシステムアーキテクチャを設計するためのイントロ](http://lethain.com/introduction-to-architecting-systems-for-scale)
-* [システム設計インタビューを紐解く](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [システム設計インタビューを紐解く](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [サービス指向アーキテクチャ](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Zookeeperのイントロダクション](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [マイクロサービスを作るために知っておきたいこと](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
@@ -1408,7 +1408,7 @@ TCPよりもUDPを使うのは:
<p align="center">
<img src="http://i.imgur.com/iF4Mkb5.png">
<br/>
- <i><a href=http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/>Source: Crack the system design interview</a></i>
+ <i><a href=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>Source: Crack the system design interview</a></i>
</p>
RPCではクライアントがリモートサーバーなどの異なるアドレス空間でプロシージャーが処理されるようにします。プロシージャーはローカルでのコールのように、クライアントからサーバーにどのように通信するかという詳細を省いた状態でコードが書かれます。リモートのコールは普通、ローカルのコールよりも遅く、信頼性に欠けるため、RPCコールをローカルコールと区別させておくことが好ましいでしょう。人気のRPCフレームワークは以下です。[Protobuf](https://developers.google.com/protocol-buffers/)、 [Thrift](https://thrift.apache.org/)、[Avro](https://avro.apache.org/docs/current/)
@@ -1504,7 +1504,7 @@ RESTはデータを公開することに焦点を当てています。クライ
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [Debunking the myths of RPC and REST](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [What are the drawbacks of using REST](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
@@ -1768,7 +1768,7 @@ Special thanks to:
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
-* [Cracking the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
diff --git a/README-zh-Hans.md b/README-zh-Hans.md
index ad852e474f..5a8d01a05f 100644
--- a/README-zh-Hans.md
+++ b/README-zh-Hans.md
@@ -761,7 +761,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
### 来源及延伸阅读
- [可缩放系统构架介绍](http://lethain.com/introduction-to-architecting-systems-for-scale)
-- [破解系统设计面试](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+- [破解系统设计面试](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
- [面向服务架构](https://en.wikipedia.org/wiki/Service-oriented_architecture)
- [Zookeeper 介绍](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
- [构建微服务,你所需要知道的一切](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
@@ -1521,7 +1521,7 @@ REST 关注于暴露数据。它减少了客户端/服务端的耦合程度,
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [揭开 RPC 和 REST 的神秘面纱](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [使用 REST 的缺点是什么](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
-* [破解系统设计面试](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [破解系统设计面试](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [为什么在内部使用 REST 而不是 RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
@@ -1782,7 +1782,7 @@ Notes
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
-* [Cracking the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## 联系方式
diff --git a/README-zh-TW.md b/README-zh-TW.md
index b608ec3277..1477e8bcea 100644
--- a/README-zh-TW.md
+++ b/README-zh-TW.md
@@ -750,7 +750,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 來源與延伸閱讀
* [可擴展式系統架構介紹](http://lethain.com/introduction-to-architecting-systems-for-scale)
-* [破解系統設計面試](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [破解系統設計面試](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [面向服務架構](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Zookeeper 介紹](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [建構微服務系統你所需要知道的一切](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
@@ -1409,7 +1409,7 @@ UDP 的可靠性較低,但適合用在像是網路電話、視訊聊天、串
<p align="center">
<img src="http://i.imgur.com/iF4Mkb5.png">
<br/>
- <i><a href=http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/>資料來源:破解系統設計面試</a></i>
+ <i><a href=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>資料來源:破解系統設計面試</a></i>
</p>
在一個 RPC 中,客戶端會去呼叫另外一個位置空間(通常是在遠端的伺服器)的方法。呼叫的方式就像是呼叫本地端的一個方法一樣,客戶端和伺服器溝通的具體過程被抽象化,而遠端呼叫相較於本地端呼叫來說一般較慢,而且可靠性較差,因此了解如何區別這兩種方法是必要的。熱門的 RPC 框架包含了 [Protobuf](https://developers.google.com/protocol-buffers/)、[Thrift](https://thrift.apache.org/) 和 [Avro](https://avro.apache.org/docs/current/)。
@@ -1505,7 +1505,7 @@ REST 關注於揭露資料,減少客戶端/伺服器之間耦合的程度,
* [REST 和 JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [揭開 RPC 和 REST 的神秘面紗](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [使用 REST 的缺點](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
-* [破解系統設計面試](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [破解系統設計面試](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [為什麼在內部要使用 REST 而不是 RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
@@ -1767,7 +1767,7 @@ Notes
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
-* [Cracking the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## 聯絡資訊
diff --git a/README.md b/README.md
index fd894e4edd..913027493a 100644
--- a/README.md
+++ b/README.md
@@ -749,7 +749,7 @@ Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://
### Source(s) and further reading
* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
@@ -1406,7 +1406,7 @@ Use UDP over TCP when:
<p align="center">
<img src="http://i.imgur.com/iF4Mkb5.png">
<br/>
- <i><a href=http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/>Source: Crack the system design interview</a></i>
+ <i><a href=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>Source: Crack the system design interview</a></i>
</p>
In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/), [Thrift](https://thrift.apache.org/), and [Avro](https://avro.apache.org/docs/current/).
@@ -1502,7 +1502,7 @@ REST is focused on exposing data. It minimizes the coupling between client/serv
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [Debunking the myths of RPC and REST](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [What are the drawbacks of using REST](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
@@ -1768,7 +1768,7 @@ Special thanks to:
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
-* [Cracking the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
| https://api.github.com/repos/donnemartin/system-design-primer/pulls/180 | 2018-07-19T07:57:12Z | 2018-07-25T01:55:55Z | 2018-07-25T01:55:55Z | 2018-07-25T01:56:09Z | 3,567 | donnemartin/system-design-primer | 36,710 | |
Supported customizing kwargs for lr_scheduler | diff --git a/deepspeed/runtime/engine.py b/deepspeed/runtime/engine.py
index 7431b2c892c4..76ba6af78b76 100755
--- a/deepspeed/runtime/engine.py
+++ b/deepspeed/runtime/engine.py
@@ -979,7 +979,7 @@ def clip_fp32_gradients(self):
torch.nn.utils.clip_grad_norm_(parameters=self.module.parameters(),
max_norm=self.gradient_clipping())
- def _take_model_step(self):
+ def _take_model_step(self, lr_kwargs):
if self.gradient_clipping() > 0.0:
if not self.fp16_enabled() and not self.amp_enabled():
self.clip_fp32_gradients()
@@ -1010,14 +1010,14 @@ def _take_model_step(self):
self.skipped_steps += 1
else:
if self.lr_scheduler is not None:
- self.lr_scheduler.step()
+ self.lr_scheduler.step(**(lr_kwargs or {}))
if report_progress and (self.global_steps + 1) % self.steps_per_print() == 0:
self._report_progress(self.global_steps + 1)
self.global_steps += 1
self.global_samples += self.train_batch_size()
- def step(self):
+ def step(self, lr_kwargs=None):
r"""Execute the weight update step after forward and backward propagation
on effective_train_batch.
"""
@@ -1034,7 +1034,7 @@ def step(self):
if self.progressive_layer_drop:
self.progressive_layer_drop.update_state(self.global_steps)
- self._take_model_step()
+ self._take_model_step(lr_kwargs)
self.tput_timer.stop(report_progress)
diff --git a/deepspeed/runtime/pipe/engine.py b/deepspeed/runtime/pipe/engine.py
index 954774e58912..5c5d896dfc0d 100644
--- a/deepspeed/runtime/pipe/engine.py
+++ b/deepspeed/runtime/pipe/engine.py
@@ -940,14 +940,14 @@ def _exec_recv_grads(self, buffer_id):
if self.wall_clock_breakdown():
self.timers('pipe_recv_grad').stop()
- def _exec_optimizer_step(self):
+ def _exec_optimizer_step(self, lr_kwargs=None):
if self.wall_clock_breakdown():
self.timers('step_microstep').start()
self.timers('step').start()
self.mem_status('BEFORE STEP', reset_max=True)
self._force_grad_boundary = True
- self._take_model_step()
+ self._take_model_step(lr_kwargs)
self._force_grad_boundary = False
self.mem_status('AFTER STEP')
| For some common schedulers (e.g. `ReduceLROnPlateau`), we need to pass some extra information (e.g. the current metrics) to the scheduler when stepping it.
This commit should be able to support this with some minor fixes, without changing any default behaviours. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/584 | 2020-12-07T03:32:21Z | 2020-12-11T21:52:07Z | 2020-12-11T21:52:07Z | 2020-12-11T21:52:07Z | 588 | microsoft/DeepSpeed | 10,079 |
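The forwarding pattern in this PR (`self.lr_scheduler.step(**(lr_kwargs or {}))`) can be sketched outside DeepSpeed with a toy scheduler that, like `ReduceLROnPlateau`, requires a `metrics` argument on every step. The class names below are hypothetical stand-ins for illustration, not DeepSpeed code:

```python
class PlateauLikeScheduler:
    """Toy stand-in for a metrics-driven scheduler such as ReduceLROnPlateau."""

    def __init__(self, lr=0.1, patience=1, factor=0.5):
        self.lr = lr
        self.best = float("inf")
        self.bad_steps = 0
        self.patience = patience
        self.factor = factor

    def step(self, metrics):
        # A bare scheduler.step() with no arguments would raise TypeError here,
        # which is exactly why the engine must forward lr_kwargs.
        if metrics < self.best:
            self.best = metrics
            self.bad_steps = 0
        else:
            self.bad_steps += 1
            if self.bad_steps > self.patience:
                self.lr *= self.factor
                self.bad_steps = 0


class ToyEngine:
    """Mirrors the engine.step(lr_kwargs=None) signature added by this PR."""

    def __init__(self, scheduler):
        self.lr_scheduler = scheduler

    def step(self, lr_kwargs=None):
        # `lr_kwargs or {}` keeps the default behaviour (no extra args) unchanged.
        self.lr_scheduler.step(**(lr_kwargs or {}))


sched = PlateauLikeScheduler(lr=0.1)
engine = ToyEngine(sched)
for loss in [1.0, 1.1, 1.2, 1.3]:  # loss stops improving
    engine.step(lr_kwargs={"metrics": loss})
print(sched.lr)  # the learning rate has been halved
```

Calling `engine.step()` with no `lr_kwargs` still works for schedulers whose `step()` takes no arguments, which is why the defaults stay untouched.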
Add deploy command to repos generated by cli template | diff --git a/libs/langchain/langchain/cli/create_repo/templates/pip/Makefile b/libs/langchain/langchain/cli/create_repo/templates/pip/Makefile
index 00cac9fdb9f222..92243442c67c05 100644
--- a/libs/langchain/langchain/cli/create_repo/templates/pip/Makefile
+++ b/libs/langchain/langchain/cli/create_repo/templates/pip/Makefile
@@ -26,6 +26,9 @@ format format_diff:
black $(PYTHON_FILES)
ruff --select I --fix $(PYTHON_FILES)
+deploy_gcp:
+ gcloud run deploy ____project_name_identifier --source . --port 8001 --env-vars-file .env.gcp.yaml --allow-unauthenticated --region us-central1 --min-instances 1
+
######################
# HELP
######################
@@ -36,3 +39,4 @@ help:
@echo 'make format - run code formatters'
@echo 'make lint - run linters'
@echo 'make test - run unit tests'
+ @echo 'make deploy_gcp - deploy to GCP'
diff --git a/libs/langchain/langchain/cli/create_repo/templates/poetry/Makefile b/libs/langchain/langchain/cli/create_repo/templates/poetry/Makefile
index cfc865ce6477e1..f19cf7b4e95a24 100644
--- a/libs/langchain/langchain/cli/create_repo/templates/poetry/Makefile
+++ b/libs/langchain/langchain/cli/create_repo/templates/poetry/Makefile
@@ -26,6 +26,9 @@ format format_diff:
poetry run black $(PYTHON_FILES)
poetry run ruff --select I --fix $(PYTHON_FILES)
+deploy_gcp:
+ gcloud run deploy ____project_name_identifier --source . --port 8001 --env-vars-file .env.gcp.yaml --allow-unauthenticated --region us-central1 --min-instances 1
+
######################
# HELP
######################
@@ -36,3 +39,4 @@ help:
@echo 'make format - run code formatters'
@echo 'make lint - run linters'
@echo 'make test - run unit tests'
+ @echo 'make deploy_gcp - deploy to GCP'
diff --git a/libs/langchain/langchain/cli/create_repo/templates/repo/.env.gcp.yaml b/libs/langchain/langchain/cli/create_repo/templates/repo/.env.gcp.yaml
new file mode 100644
index 00000000000000..54c854b6cd0c02
--- /dev/null
+++ b/libs/langchain/langchain/cli/create_repo/templates/repo/.env.gcp.yaml
@@ -0,0 +1 @@
+OPENAI_API_KEY: your_secret_key_here
diff --git a/libs/langchain/langchain/cli/create_repo/templates/repo/.gitignore b/libs/langchain/langchain/cli/create_repo/templates/repo/.gitignore
index 796dbb448b90b5..794a85127ab4f3 100644
--- a/libs/langchain/langchain/cli/create_repo/templates/repo/.gitignore
+++ b/libs/langchain/langchain/cli/create_repo/templates/repo/.gitignore
@@ -116,6 +116,7 @@ venv/
ENV/
env.bak/
venv.bak/
+.env.gcp.yaml
# Spyder project settings
.spyderproject
diff --git a/libs/langchain/langchain/cli/create_repo/templates/repo/README.md b/libs/langchain/langchain/cli/create_repo/templates/repo/README.md
index 3b3454692b966d..2cb2ae73a5d3e6 100644
--- a/libs/langchain/langchain/cli/create_repo/templates/repo/README.md
+++ b/libs/langchain/langchain/cli/create_repo/templates/repo/README.md
@@ -57,6 +57,15 @@ docker run -p 8001:8001 -e PORT=8001 ____project_name_identifier:latest
Don't forget to add any needed environment variables!
+## Deploy to GCP
+
+You can deploy to GCP Cloud Run using the following command:
+
+First edit `.env.gcp.yaml` file with any environment variables you need. Then run:
+
+```
+make deploy_gcp
+```
## Contributing
|
Update Process Replay Segments | diff --git a/cereal b/cereal
index 513dfc7ee00124..e4130c90558dfb 160000
--- a/cereal
+++ b/cereal
@@ -1 +1 @@
-Subproject commit 513dfc7ee001243cd68a57a9d92fe3170fc49c7d
+Subproject commit e4130c90558dfb491e132992dce36e0e620e070a
diff --git a/selfdrive/test/process_replay/ref_commit b/selfdrive/test/process_replay/ref_commit
index 7390610252eee7..062611316c94a9 100644
--- a/selfdrive/test/process_replay/ref_commit
+++ b/selfdrive/test/process_replay/ref_commit
@@ -1 +1 @@
-ef5395e5f36550d2b485216eee5406bf6062e9c9
\ No newline at end of file
+147410f09f242f05b922c9cc7ac04c3c3366419c
\ No newline at end of file
diff --git a/selfdrive/test/process_replay/test_processes.py b/selfdrive/test/process_replay/test_processes.py
index e8c2e1dc9494e6..0f118971c61bac 100755
--- a/selfdrive/test/process_replay/test_processes.py
+++ b/selfdrive/test/process_replay/test_processes.py
@@ -38,21 +38,21 @@
]
segments = [
- ("BODY", "regen660D86654BA|2022-07-06--14-27-15--0"),
- ("HYUNDAI", "regen114E5FF24D8|2022-07-14--17-08-47--0"),
- ("HYUNDAI", "d824e27e8c60172c|2022-08-19--17-58-07--2"),
- ("TOYOTA", "regenBA97410FBEC|2022-07-06--14-26-49--0"),
- ("TOYOTA2", "regenDEDB1D9C991|2022-07-06--14-54-08--0"),
- ("TOYOTA3", "regenDDC1FE60734|2022-07-06--14-32-06--0"),
- ("HONDA", "regenE62960EEC38|2022-07-14--19-33-24--0"),
- ("HONDA2", "regenC3EBD92F029|2022-07-14--19-29-47--0"),
- ("CHRYSLER", "regen38346FB33D0|2022-07-14--18-05-26--0"),
- ("RAM", "2f4452b03ccb98f0|2022-07-07--08-01-56--3"),
- ("SUBARU", "regen54A1E2BE5AA|2022-07-14--18-07-50--0"),
- ("GM", "regen76027B408B7|2022-08-16--19-56-58--0"),
- ("NISSAN", "regenCA0B0DC946E|2022-07-14--18-10-17--0"),
- ("VOLKSWAGEN", "regen007098CA0EF|2022-07-06--15-01-26--0"),
- ("MAZDA", "regen61BA413D53B|2022-07-06--14-39-42--0"),
+ ("BODY", "regen9D38397D30D|2022-09-09--13-12-48--0"),
+ ("HYUNDAI", "regenB3953B393C0|2022-09-09--14-49-37--0"),
+ ("HYUNDAI", "regen8DB830E5376|2022-09-13--17-24-37--0"),
+ ("TOYOTA", "regen8FCBB6F06F1|2022-09-09--13-14-07--0"),
+ ("TOYOTA2", "regen956BFA75300|2022-09-09--14-51-24--0"),
+ ("TOYOTA3", "regenE909BC2F430|2022-09-09--20-44-49--0"),
+ ("HONDA", "regenD1D10209015|2022-09-09--14-53-09--0"),
+ ("HONDA2", "regen3F7C2EFDC08|2022-09-09--19-41-19--0"),
+ ("CHRYSLER", "regen92783EAE66B|2022-09-09--13-15-44--0"),
+ ("RAM", "regenBE5DAAEF30F|2022-09-13--17-06-24--0"),
+ ("SUBARU", "regen8A363AF7E14|2022-09-13--17-20-39--0"),
+ ("GM", "regen31EA3F9A37C|2022-09-09--21-06-36--0"),
+ ("NISSAN", "regenAA21ADE5921|2022-09-09--19-44-37--0"),
+ ("VOLKSWAGEN", "regenA1BF4D17761|2022-09-09--19-46-24--0"),
+ ("MAZDA", "regen1994C97E977|2022-09-13--16-34-44--0"),
]
# dashcamOnly makes don't need to be tested until a full port is done
| https://api.github.com/repos/commaai/openpilot/pulls/25805 | 2022-09-16T02:37:50Z | 2022-09-16T03:15:57Z | 2022-09-16T03:15:57Z | 2022-09-16T03:15:58Z | 1,330 | commaai/openpilot | 9,599 | |
add a note in loc_kf | diff --git a/selfdrive/locationd/models/loc_kf.py b/selfdrive/locationd/models/loc_kf.py
index c6a92f1683f870..48e309d9c63877 100755
--- a/selfdrive/locationd/models/loc_kf.py
+++ b/selfdrive/locationd/models/loc_kf.py
@@ -50,6 +50,8 @@ class States():
CLOCK_ACCELERATION = slice(28, 29) # clock acceleration in light-meters/s**2,
ACCELEROMETER_SCALE = slice(29, 30) # scale of mems accelerometer
ACCELEROMETER_BIAS = slice(30, 33) # bias of mems accelerometer
+ # We curently do not use ACCELEROMETER_SCALE to avoid instability due to too many free variables (ACCELEROMETER_SCALE, ACCELEROMETER_BIAS, IMU_OFFSET).
+ # From experiments we see that ACCELEROMETER_BIAS is more correct than ACCELEROMETER_SCALE
# Error-state has different slices because it is an ESKF
ECEF_POS_ERR = slice(0, 3)
@@ -159,7 +161,6 @@ def generate_code(generated_dir, N=4):
glonass_bias = state[States.GLONASS_BIAS, :]
glonass_freq_slope = state[States.GLONASS_FREQ_SLOPE, :]
ca = state[States.CLOCK_ACCELERATION, :]
- # accel_scale = state[States.ACCELEROMETER_SCALE, :]
accel_bias = state[States.ACCELEROMETER_BIAS, :]
dt = sp.Symbol('dt')
| https://api.github.com/repos/commaai/openpilot/pulls/23082 | 2021-11-30T18:50:39Z | 2021-11-30T18:51:32Z | 2021-11-30T18:51:32Z | 2021-11-30T18:51:33Z | 362 | commaai/openpilot | 9,594 | |
Add support for connect timeouts | diff --git a/docs/api.rst b/docs/api.rst
index 42f7c5a052..69f138a282 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -5,7 +5,7 @@ Developer Interface
.. module:: requests
-This part of the documentation covers all the interfaces of Requests. For
+This part of the documentation covers all the interfaces of Requests. For
parts where Requests depends on external libraries, we document the most
important right here and provide links to the canonical documentation.
diff --git a/docs/user/advanced.rst b/docs/user/advanced.rst
index 8eb888b108..65970daf5f 100644
--- a/docs/user/advanced.rst
+++ b/docs/user/advanced.rst
@@ -707,3 +707,41 @@ Two excellent examples are `grequests`_ and `requests-futures`_.
.. _`grequests`: https://github.com/kennethreitz/grequests
.. _`requests-futures`: https://github.com/ross/requests-futures
+
+Timeouts
+--------
+
+Most requests to external servers should have a timeout attached, in case the
+server is not responding in a timely manner. Without a timeout, your code may
+hang for minutes or more.
+
+The **connect** timeout is the number of seconds Requests will wait for your
+client to establish a connection to a remote machine (corresponding to the
+`connect()`_) call on the socket. It's a good practice to set connect timeouts
+to slightly larger than a multiple of 3, which is the default `TCP packet
+retransmission window <http://www.hjp.at/doc/rfc/rfc2988.txt>`_.
+
+Once your client has connected to the server and sent the HTTP request, the
+**read** timeout is the number of seconds the client will wait for the server
+to send a response. (Specifically, it's the number of seconds that the client
+will wait *between* bytes sent from the server. In 99.9% of cases, this is the
+time before the server sends the first byte).
+
+If you specify a single value for the timeout, like this::
+
+ r = requests.get('https://github.com', timeout=5)
+
+The timeout value will be applied to both the ``connect`` and the ``read``
+timeouts. Specify a tuple if you would like to set the values separately::
+
+ r = requests.get('https://github.com', timeout=(3.05, 27))
+
+If the remote server is very slow, you can tell Requests to wait forever for
+a response, by passing None as a timeout value and then retrieving a cup of
+coffee.
+
+.. code-block:: python
+
+ r = requests.get('https://github.com', timeout=None)
+
+.. _`connect()`: http://linux.die.net/man/2/connect
diff --git a/requests/adapters.py b/requests/adapters.py
index 1ce54470c3..3c1e979f14 100644
--- a/requests/adapters.py
+++ b/requests/adapters.py
@@ -15,17 +15,19 @@
from .packages.urllib3.poolmanager import PoolManager, proxy_from_url
from .packages.urllib3.response import HTTPResponse
from .packages.urllib3.util import Timeout as TimeoutSauce
-from .compat import urlparse, basestring, urldefrag, unquote
+from .compat import urlparse, basestring, urldefrag
from .utils import (DEFAULT_CA_BUNDLE_PATH, get_encoding_from_headers,
prepend_scheme_if_needed, get_auth_from_url)
from .structures import CaseInsensitiveDict
-from .packages.urllib3.exceptions import MaxRetryError
-from .packages.urllib3.exceptions import TimeoutError
-from .packages.urllib3.exceptions import SSLError as _SSLError
+from .packages.urllib3.exceptions import ConnectTimeoutError
from .packages.urllib3.exceptions import HTTPError as _HTTPError
+from .packages.urllib3.exceptions import MaxRetryError
from .packages.urllib3.exceptions import ProxyError as _ProxyError
+from .packages.urllib3.exceptions import ReadTimeoutError
+from .packages.urllib3.exceptions import SSLError as _SSLError
from .cookies import extract_cookies_to_jar
-from .exceptions import ConnectionError, Timeout, SSLError, ProxyError
+from .exceptions import (ConnectionError, ConnectTimeout, ReadTimeout, SSLError,
+ ProxyError)
from .auth import _basic_auth_str
DEFAULT_POOLBLOCK = False
@@ -315,6 +317,7 @@ def send(self, request, stream=False, timeout=None, verify=True, cert=None, prox
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) The timeout on the request.
+ :type timeout: float or tuple (connect timeout, read timeout), eg (3.1, 20)
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
@@ -328,7 +331,18 @@ def send(self, request, stream=False, timeout=None, verify=True, cert=None, prox
chunked = not (request.body is None or 'Content-Length' in request.headers)
- timeout = TimeoutSauce(connect=timeout, read=timeout)
+ if isinstance(timeout, tuple):
+ try:
+ connect, read = timeout
+ timeout = TimeoutSauce(connect=connect, read=read)
+ except ValueError as e:
+ # this may raise a string formatting error.
+ err = ("Invalid timeout {0}. Pass a (connect, read) "
+ "timeout tuple, or a single float to set "
+ "both timeouts to the same value".format(timeout))
+ raise ValueError(err)
+ else:
+ timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
@@ -390,6 +404,9 @@ def send(self, request, stream=False, timeout=None, verify=True, cert=None, prox
raise ConnectionError(sockerr, request=request)
except MaxRetryError as e:
+ if isinstance(e.reason, ConnectTimeoutError):
+ raise ConnectTimeout(e, request=request)
+
raise ConnectionError(e, request=request)
except _ProxyError as e:
@@ -398,8 +415,8 @@ def send(self, request, stream=False, timeout=None, verify=True, cert=None, prox
except (_SSLError, _HTTPError) as e:
if isinstance(e, _SSLError):
raise SSLError(e, request=request)
- elif isinstance(e, TimeoutError):
- raise Timeout(e, request=request)
+ elif isinstance(e, ReadTimeoutError):
+ raise ReadTimeout(e, request=request)
else:
raise
diff --git a/requests/exceptions.py b/requests/exceptions.py
index a4ee9d630c..6dbd98a931 100644
--- a/requests/exceptions.py
+++ b/requests/exceptions.py
@@ -44,7 +44,22 @@ class SSLError(ConnectionError):
class Timeout(RequestException):
- """The request timed out."""
+ """The request timed out.
+
+ Catching this error will catch both :exc:`ConnectTimeout` and
+ :exc:`ReadTimeout` errors.
+ """
+
+
+class ConnectTimeout(ConnectionError, Timeout):
+ """The request timed out while trying to connect to the server.
+
+ Requests that produce this error are safe to retry
+ """
+
+
+class ReadTimeout(Timeout):
+ """The server did not send any data in the allotted amount of time."""
class URLRequired(RequestException):
diff --git a/requests/structures.py b/requests/structures.py
index 66cdad86e1..3e5f2faa2e 100644
--- a/requests/structures.py
+++ b/requests/structures.py
@@ -23,7 +23,7 @@ class CaseInsensitiveDict(collections.MutableMapping):
case of the last key to be set, and ``iter(instance)``,
``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
will contain case-sensitive keys. However, querying and contains
- testing is case insensitive:
+ testing is case insensitive::
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
diff --git a/test_requests.py b/test_requests.py
index 34ebd8cae5..716c0dcff6 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -18,7 +18,8 @@
from requests.compat import (
Morsel, cookielib, getproxies, str, urljoin, urlparse, is_py3, builtin_str)
from requests.cookies import cookiejar_from_dict, morsel_to_cookie
-from requests.exceptions import InvalidURL, MissingSchema, ConnectionError
+from requests.exceptions import (InvalidURL, MissingSchema, ConnectTimeout,
+ ReadTimeout, ConnectionError, Timeout)
from requests.models import PreparedRequest
from requests.structures import CaseInsensitiveDict
from requests.sessions import SessionRedirectMixin
@@ -38,6 +39,9 @@ def u(s):
return s.decode('unicode-escape')
+# Requests to this URL should always fail with a connection timeout (nothing
+# listening on that port)
+TARPIT = "http://10.255.255.1"
HTTPBIN = os.environ.get('HTTPBIN_URL', 'http://httpbin.org/')
# Issue #1483: Make sure the URL always has a trailing slash
HTTPBIN = HTTPBIN.rstrip('/') + '/'
@@ -1308,10 +1312,53 @@ def test_max_age_invalid_str(self):
class TestTimeout:
def test_stream_timeout(self):
try:
- requests.get('https://httpbin.org/delay/10', timeout=5.0)
+ requests.get('https://httpbin.org/delay/10', timeout=2.0)
except requests.exceptions.Timeout as e:
assert 'Read timed out' in e.args[0].args[0]
+ def test_invalid_timeout(self):
+ with pytest.raises(ValueError) as e:
+ requests.get(httpbin('get'), timeout=(3, 4, 5))
+ assert '(connect, read)' in str(e)
+
+ with pytest.raises(ValueError) as e:
+ requests.get(httpbin('get'), timeout="foo")
+ assert 'must be an int or float' in str(e)
+
+ def test_none_timeout(self):
+ """ Check that you can set None as a valid timeout value.
+
+ To actually test this behavior, we'd want to check that setting the
+ timeout to None actually lets the request block past the system default
+ timeout. However, this would make the test suite unbearably slow.
+ Instead we verify that setting the timeout to None does not prevent the
+ request from succeeding.
+ """
+ r = requests.get(httpbin('get'), timeout=None)
+ assert r.status_code == 200
+
+ def test_read_timeout(self):
+ try:
+ requests.get(httpbin('delay/10'), timeout=(None, 0.1))
+ assert False, "The recv() request should time out."
+ except ReadTimeout:
+ pass
+
+ def test_connect_timeout(self):
+ try:
+ requests.get(TARPIT, timeout=(0.1, None))
+ assert False, "The connect() request should time out."
+ except ConnectTimeout as e:
+ assert isinstance(e, ConnectionError)
+ assert isinstance(e, Timeout)
+
+ def test_total_timeout_connect(self):
+ try:
+ requests.get(TARPIT, timeout=(0.1, 0.1))
+ assert False, "The connect() request should time out."
+ except ConnectTimeout:
+ pass
+
SendCall = collections.namedtuple('SendCall', ('args', 'kwargs'))
| Modifies the timeout interface to also accept a tuple (connect, read) which
would be used to set individual connect and read timeouts for Requests. Adds
Advanced documentation explaining the interface and providing guidance for
timeout values.
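A rough sketch of the two behaviours this patch introduces — the `(connect, read)` tuple parsing in `HTTPAdapter.send` and the new exception hierarchy — using a namedtuple as a hypothetical stand-in for urllib3's `Timeout` object (`TimeoutSauce` in the diff). This illustrates the logic of the diff, not the shipped implementation:

```python
from collections import namedtuple

# Stand-in for urllib3.util.Timeout, which requests imports as TimeoutSauce.
TimeoutSauce = namedtuple("TimeoutSauce", ["connect", "read"])

def build_timeout(timeout):
    """Mirror the tuple-or-scalar parsing added in adapters.py."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
            return TimeoutSauce(connect=connect, read=read)
        except ValueError:
            raise ValueError(
                "Invalid timeout {0}. Pass a (connect, read) timeout tuple, "
                "or a single float to set both timeouts to the same value".format(timeout)
            )
    return TimeoutSauce(connect=timeout, read=timeout)

print(build_timeout(5.0))         # TimeoutSauce(connect=5.0, read=5.0)
print(build_timeout((3.05, 27)))  # TimeoutSauce(connect=3.05, read=27)

# The new exception classes (names mirror requests/exceptions.py) let callers
# catch both timeouts at once, while retry logic keyed on ConnectionError
# still sees only the safe-to-retry connect case:
class RequestException(IOError): pass
class ConnectionError(RequestException): pass  # shadows the builtin, as in requests
class Timeout(RequestException): pass
class ConnectTimeout(ConnectionError, Timeout): pass
class ReadTimeout(Timeout): pass

assert issubclass(ConnectTimeout, Timeout) and issubclass(ConnectTimeout, ConnectionError)
assert issubclass(ReadTimeout, Timeout) and not issubclass(ReadTimeout, ConnectionError)
```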
| https://api.github.com/repos/psf/requests/pulls/2176 | 2014-08-23T19:39:43Z | 2014-08-26T19:13:30Z | 2014-08-26T19:13:30Z | 2021-09-08T11:00:47Z | 2,672 | psf/requests | 33,030 |
Attempt to quote anyway if unquoting fails | diff --git a/requests/utils.py b/requests/utils.py
index 7467941447..29413964d4 100644
--- a/requests/utils.py
+++ b/requests/utils.py
@@ -418,10 +418,18 @@ def requote_uri(uri):
This function passes the given URI through an unquote/quote cycle to
ensure that it is fully and consistently quoted.
"""
- # Unquote only the unreserved characters
- # Then quote only illegal characters (do not quote reserved, unreserved,
- # or '%')
- return quote(unquote_unreserved(uri), safe="!#$%&'()*+,/:;=?@[]~")
+ safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
+ safe_without_percent = "!#$&'()*+,/:;=?@[]~"
+ try:
+ # Unquote only the unreserved characters
+ # Then quote only illegal characters (do not quote reserved,
+ # unreserved, or '%')
+ return quote(unquote_unreserved(uri), safe=safe_with_percent)
+ except InvalidURL:
+ # We couldn't unquote the given URI, so let's try quoting it, but
+ # there may be unquoted '%'s in the URI. We need to make sure they're
+ # properly quoted so they do not cause issues elsewhere.
+ return quote(uri, safe=safe_without_percent)
def address_in_network(ip, net):
diff --git a/test_requests.py b/test_requests.py
index 34348d3e47..9337b0e219 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -1301,6 +1301,22 @@ def test_get_auth_from_url(self):
assert username == percent_encoding_test_chars
assert password == percent_encoding_test_chars
+ def test_requote_uri_with_unquoted_percents(self):
+ """Ensure we handle unquoted percent signs in redirects.
+
+ See: https://github.com/kennethreitz/requests/issues/2356
+ """
+ from requests.utils import requote_uri
+ bad_uri = 'http://example.com/fiz?buz=%ppicture'
+ quoted = 'http://example.com/fiz?buz=%25ppicture'
+ assert quoted == requote_uri(bad_uri)
+
+ def test_requote_uri_properly_requotes(self):
+ """Ensure requoting doesn't break expectations."""
+ from requests.utils import requote_uri
+ quoted = 'http://example.com/fiz?buz=%25ppicture'
+ assert quoted == requote_uri(quoted)
+
class TestMorselToCookieExpires(unittest.TestCase):
| Fixes #2356
TODO
- [x] Add test
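The fallback can be sketched as follows — a simplified reconstruction of `requote_uri` and `unquote_unreserved` (the real versions live in `requests/utils.py`; treat this as an approximation, not the exact shipped code):

```python
from urllib.parse import quote

class InvalidURL(ValueError):
    pass

UNRESERVED_SET = frozenset(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-._~"
)

def unquote_unreserved(uri):
    """Decode only %xx escapes of unreserved characters; a malformed
    escape such as '%pp' raises InvalidURL."""
    parts = uri.split('%')
    for i in range(1, len(parts)):
        h = parts[i][0:2]
        if len(h) == 2 and h.isalnum():
            try:
                c = chr(int(h, 16))
            except ValueError:
                raise InvalidURL("Invalid percent-escape sequence: '%s'" % h)
            parts[i] = (c + parts[i][2:]) if c in UNRESERVED_SET else '%' + parts[i]
        else:
            parts[i] = '%' + parts[i]
    return ''.join(parts)

def requote_uri(uri):
    safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
    safe_without_percent = "!#$&'()*+,/:;=?@[]~"
    try:
        # Normal path: unquote unreserved chars, then re-quote illegal ones.
        return quote(unquote_unreserved(uri), safe=safe_with_percent)
    except InvalidURL:
        # New fallback: the URI has a bad escape (e.g. a bare '%'), so quote it
        # with '%' removed from the safe set, turning stray percents into '%25'.
        return quote(uri, safe=safe_without_percent)

print(requote_uri('http://example.com/fiz?buz=%ppicture'))
# -> http://example.com/fiz?buz=%25ppicture
```

Running an already-quoted URI through it again leaves it unchanged, which is what the second new test checks.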
| https://api.github.com/repos/psf/requests/pulls/2393 | 2014-12-27T02:06:03Z | 2015-01-27T18:24:34Z | 2015-01-27T18:24:34Z | 2021-09-08T09:00:59Z | 594 | psf/requests | 32,995 |
Fix typo | diff --git a/doc/doc_en/recognition_en.md b/doc/doc_en/recognition_en.md
index 51857ba16b..3ec6f198e6 100644
--- a/doc/doc_en/recognition_en.md
+++ b/doc/doc_en/recognition_en.md
@@ -1,7 +1,7 @@
# Text Recognition
- [1. Data Preparation](#DATA_PREPARATION)
- - [1.1 Costom Dataset](#Costom_Dataset)
+ - [1.1 Custom Dataset](#Custom_Dataset)
- [1.2 Dataset Download](#Dataset_download)
- [1.3 Dictionary](#Dictionary)
- [1.4 Add Space Category](#Add_space_category)
@@ -35,8 +35,8 @@ ln -sf <path/to/dataset> <path/to/paddle_ocr>/train_data/dataset
mklink /d <path/to/paddle_ocr>/train_data/dataset <path/to/dataset>
```
-<a name="Costom_Dataset"></a>
-### 1.1 Costom Dataset
+<a name="Custom_Dataset"></a>
+### 1.1 Custom Dataset
If you want to use your own data for training, please refer to the following to organize your data.
| Fixed typo in `recognition_en.md`. Changed "costom" to "custom" | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/5540 | 2022-02-21T17:25:39Z | 2022-02-22T05:21:21Z | 2022-02-22T05:21:21Z | 2022-02-22T05:21:21Z | 279 | PaddlePaddle/PaddleOCR | 42,376 |
improve system prediction and remove some hard code | diff --git a/configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_enhanced_ctc_loss.yml b/configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_enhanced_ctc_loss.yml
index 7161203035..5be96969fd 100644
--- a/configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_enhanced_ctc_loss.yml
+++ b/configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_enhanced_ctc_loss.yml
@@ -62,8 +62,7 @@ Loss:
weight: 0.05
num_classes: 6625
feat_dim: 96
- init_center: false
- center_file_path: "./train_center.pkl"
+ center_file_path:
# you can also try to add ace loss on your own dataset
# - ACELoss:
# weight: 0.1
diff --git a/doc/doc_ch/models_list.md b/doc/doc_ch/models_list.md
index 31ab6a2c1c..8f1a53bcca 100644
--- a/doc/doc_ch/models_list.md
+++ b/doc/doc_ch/models_list.md
@@ -33,8 +33,8 @@ PaddleOCR提供的可下载模型包括`推理模型`、`训练模型`、`预训
|模型名称|模型简介|配置文件|推理模型大小|下载地址|
| --- | --- | --- | --- | --- |
-|ch_PP-OCRv2_det_slim|【最新】slim量化+蒸馏版超轻量模型,支持中英文、多语种文本检测|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCR_det_cml.yml)| 3M |[推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)|
-|ch_PP-OCRv2_det|【最新】原始超轻量模型,支持中英文、多语种文本检测|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCR_det_cml.yml)|3M|[推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar)|
+|ch_PP-OCRv2_det_slim|【最新】slim量化+蒸馏版超轻量模型,支持中英文、多语种文本检测|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCRv2_det_cml.yml)| 3M |[推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)|
+|ch_PP-OCRv2_det|【最新】原始超轻量模型,支持中英文、多语种文本检测|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCRv2_det_cml.yml)|3M|[推理模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar)|
|ch_ppocr_mobile_slim_v2.0_det|slim裁剪版超轻量模型,支持中英文、多语种文本检测|[ch_det_mv3_db_v2.0.yml](../../configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml)| 2.6M |[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar)|
|ch_ppocr_mobile_v2.0_det|原始超轻量模型,支持中英文、多语种文本检测|[ch_det_mv3_db_v2.0.yml](../../configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml)|3M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|
|ch_ppocr_server_v2.0_det|通用模型,支持中英文、多语种文本检测,比超轻量模型更大,但效果更好|[ch_det_res18_db_v2.0.yml](../../configs/det/ch_ppocr_v2.0/ch_det_res18_db_v2.0.yml)|47M|[推理模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar)|
diff --git a/doc/doc_en/models_list_en.md b/doc/doc_en/models_list_en.md
index dbb4860279..e3cf251c34 100644
--- a/doc/doc_en/models_list_en.md
+++ b/doc/doc_en/models_list_en.md
@@ -29,8 +29,8 @@ Relationship of the above models is as follows.
|model name|description|config|model size|download|
| --- | --- | --- | --- | --- |
-|ch_PP-OCRv2_det_slim|[New] slim quantization with distillation lightweight model, supporting Chinese, English, multilingual text detection|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCR_det_cml.yml)| 3M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)|
-|ch_PP-OCRv2_det|[New] Original lightweight model, supporting Chinese, English, multilingual text detection|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCR_det_cml.yml)|3M|[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar)|
+|ch_PP-OCRv2_det_slim|[New] slim quantization with distillation lightweight model, supporting Chinese, English, multilingual text detection|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCRv2_det_cml.yml)| 3M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)|
+|ch_PP-OCRv2_det|[New] Original lightweight model, supporting Chinese, English, multilingual text detection|[ch_PP-OCRv2_det_cml.yml](../../configs/det/ch_PP-OCRv2/ch_PP-OCRv2_det_cml.yml)|3M|[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar)|
|ch_ppocr_mobile_slim_v2.0_det|Slim pruned lightweight model, supporting Chinese, English, multilingual text detection|[ch_det_mv3_db_v2.0.yml](../../configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml)|2.6M |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/slim/ch_ppocr_mobile_v2.0_det_prune_infer.tar)|
|ch_ppocr_mobile_v2.0_det|Original lightweight model, supporting Chinese, English, multilingual text detection|[ch_det_mv3_db_v2.0.yml](../../configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml)|3M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|
|ch_ppocr_server_v2.0_det|General model, which is larger than the lightweight model, but achieved better performance|[ch_det_res18_db_v2.0.yml](../../configs/det/ch_ppocr_v2.0/ch_det_res18_db_v2.0.yml)|47M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar)|
diff --git a/ppocr/losses/center_loss.py b/ppocr/losses/center_loss.py
index f8c57fdd5c..f62b8af373 100644
--- a/ppocr/losses/center_loss.py
+++ b/ppocr/losses/center_loss.py
@@ -30,21 +30,17 @@ class CenterLoss(nn.Layer):
Reference: Wen et al. A Discriminative Feature Learning Approach for Deep Face Recognition. ECCV 2016.
"""
- def __init__(self,
- num_classes=6625,
- feat_dim=96,
- init_center=False,
- center_file_path=None):
+ def __init__(self, num_classes=6625, feat_dim=96, center_file_path=None):
super().__init__()
self.num_classes = num_classes
self.feat_dim = feat_dim
self.centers = paddle.randn(
shape=[self.num_classes, self.feat_dim]).astype("float64")
- if init_center:
+ if center_file_path is not None:
assert os.path.exists(
center_file_path
- ), f"center path({center_file_path}) must exist when init_center is set as True."
+ ), f"center path({center_file_path}) must exist when it is not None."
with open(center_file_path, 'rb') as f:
char_dict = pickle.load(f)
for key in char_dict.keys():
diff --git a/tools/infer/predict_system.py b/tools/infer/predict_system.py
index b5edd01589..8d674809a5 100755
--- a/tools/infer/predict_system.py
+++ b/tools/infer/predict_system.py
@@ -49,11 +49,19 @@ def __init__(self, args):
if self.use_angle_cls:
self.text_classifier = predict_cls.TextClassifier(args)
- def print_draw_crop_rec_res(self, img_crop_list, rec_res):
+ self.args = args
+ self.crop_image_res_index = 0
+
+ def draw_crop_rec_res(self, output_dir, img_crop_list, rec_res):
+ os.makedirs(output_dir, exist_ok=True)
bbox_num = len(img_crop_list)
for bno in range(bbox_num):
- cv2.imwrite("./output/img_crop_%d.jpg" % bno, img_crop_list[bno])
- logger.info(bno, rec_res[bno])
+ cv2.imwrite(
+ os.path.join(output_dir,
+ f"mg_crop_{bno+self.crop_image_res_index}.jpg"),
+ img_crop_list[bno])
+ logger.debug(f"{bno}, {rec_res[bno]}")
+ self.crop_image_res_index += bbox_num
def __call__(self, img, cls=True):
ori_im = img.copy()
@@ -80,7 +88,9 @@ def __call__(self, img, cls=True):
rec_res, elapse = self.text_recognizer(img_crop_list)
logger.debug("rec_res num : {}, elapse : {}".format(
len(rec_res), elapse))
- # self.print_draw_crop_rec_res(img_crop_list, rec_res)
+ if self.args.save_crop_res:
+ self.draw_crop_rec_res(self.args.crop_res_save_dir, img_crop_list,
+ rec_res)
filter_boxes, filter_rec_res = [], []
for box, rec_reuslt in zip(dt_boxes, rec_res):
text, score = rec_reuslt
@@ -135,17 +145,17 @@ def main(args):
if not flag:
img = cv2.imread(image_file)
if img is None:
- logger.info("error in loading image:{}".format(image_file))
+ logger.debug("error in loading image:{}".format(image_file))
continue
starttime = time.time()
dt_boxes, rec_res = text_sys(img)
elapse = time.time() - starttime
total_time += elapse
- logger.info(
+ logger.debug(
str(idx) + " Predict time of %s: %.3fs" % (image_file, elapse))
for text, score in rec_res:
- logger.info("{}, {:.3f}".format(text, score))
+ logger.debug("{}, {:.3f}".format(text, score))
if is_visualize:
image = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
@@ -160,19 +170,17 @@ def main(args):
scores,
drop_score=drop_score,
font_path=font_path)
- draw_img_save = "./inference_results/"
- if not os.path.exists(draw_img_save):
- os.makedirs(draw_img_save)
+ draw_img_save_dir = args.draw_img_save_dir
+ os.makedirs(draw_img_save_dir, exist_ok=True)
if flag:
image_file = image_file[:-3] + "png"
cv2.imwrite(
- os.path.join(draw_img_save, os.path.basename(image_file)),
+ os.path.join(draw_img_save_dir, os.path.basename(image_file)),
draw_img[:, :, ::-1])
- logger.info("The visualized image saved in {}".format(
- os.path.join(draw_img_save, os.path.basename(image_file))))
+ logger.debug("The visualized image saved in {}".format(
+ os.path.join(draw_img_save_dir, os.path.basename(image_file))))
logger.info("The predict total time is {}".format(time.time() - _st))
- logger.info("\nThe predict total time is {}".format(total_time))
if args.benchmark:
text_sys.text_detector.autolog.report()
text_sys.text_recognizer.autolog.report()
diff --git a/tools/infer/utility.py b/tools/infer/utility.py
index cab918419a..85f68d9bdb 100755
--- a/tools/infer/utility.py
+++ b/tools/infer/utility.py
@@ -110,7 +110,13 @@ def init_args():
parser.add_argument("--enable_mkldnn", type=str2bool, default=False)
parser.add_argument("--cpu_threads", type=int, default=10)
parser.add_argument("--use_pdserving", type=str2bool, default=False)
- parser.add_argument("--warmup", type=str2bool, default=True)
+ parser.add_argument("--warmup", type=str2bool, default=False)
+
+ #
+ parser.add_argument(
+ "--draw_img_save_dir", type=str, default="./inference_results")
+ parser.add_argument("--save_crop_res", type=str2bool, default=False)
+ parser.add_argument("--crop_res_save_dir", type=str, default="./output")
# multi-process
parser.add_argument("--use_mp", type=str2bool, default=False)
| att | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/4643 | 2021-11-13T03:36:20Z | 2021-11-17T04:32:05Z | 2021-11-17T04:32:05Z | 2021-11-17T04:32:05Z | 3,642 | PaddlePaddle/PaddleOCR | 42,619 |
Bump actions/cache from 2.1.7 to 3 | diff --git a/.github/workflows/diff_shades.yml b/.github/workflows/diff_shades.yml
index 51fcebcff63..ade71e7aa8d 100644
--- a/.github/workflows/diff_shades.yml
+++ b/.github/workflows/diff_shades.yml
@@ -68,7 +68,7 @@ jobs:
- name: Attempt to use cached baseline analysis
id: baseline-cache
- uses: actions/cache@v2.1.7
+ uses: actions/cache@v3
with:
path: ${{ matrix.baseline-analysis }}
key: ${{ matrix.baseline-cache-key }}
| Bumps [actions/cache](https://github.com/actions/cache) from 2.1.7 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/cache/releases">actions/cache's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>
<p>This change adds a minimum runner version(node12 -> node16), which can break users using an out-of-date/fork of the runner. This would be most commonly affecting users on GHES 3.3 or before, as those runners do not support node16 actions and they can use actions from github.com via <a href="https://docs.github.com/en/enterprise-server@3.0/admin/github-actions/managing-access-to-actions-from-githubcom/enabling-automatic-access-to-githubcom-actions-using-github-connect">github connect</a> or manually copying the repo to their GHES instance.</p>
</li>
<li>
<p>Few dependencies and cache action usage examples have also been updated.</p>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/actions/cache/commit/4b0cf6cc4619e737324ddfcec08fff2413359514"><code>4b0cf6c</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/actions/cache/issues/769">#769</a> from actions/users/ashwinsangem/bump_major_version</li>
<li><a href="https://github.com/actions/cache/commit/60c606a2b4c5358e11c2ca7b4694e59049d008d1"><code>60c606a</code></a> Update licensed files</li>
<li><a href="https://github.com/actions/cache/commit/b6e9a919a7da3606e9b2db756823ee1c39c7b48d"><code>b6e9a91</code></a> Revert "Updated to the latest version."</li>
<li><a href="https://github.com/actions/cache/commit/c8425035834f98c304ecf92f5d50f41d433885c1"><code>c842503</code></a> Updated to the latest version.</li>
<li><a href="https://github.com/actions/cache/commit/2b7da2a62c3af9fa2692cd8d2d117da76faf31ac"><code>2b7da2a</code></a> Bumped up to a major version.</li>
<li><a href="https://github.com/actions/cache/commit/deae296ab340574da1ec86242984dfc91f0a7b81"><code>deae296</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/actions/cache/issues/651">#651</a> from magnetikonline/fix-golang-windows-example</li>
<li><a href="https://github.com/actions/cache/commit/c7c46bcb6db3c571021a3a2dc2d2557b512ecace"><code>c7c46bc</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/actions/cache/issues/707">#707</a> from duxtland/main</li>
<li><a href="https://github.com/actions/cache/commit/6535c5fb5fe2870754afba7bd4e514867ac9cb98"><code>6535c5f</code></a> Regenerated <code>examples.md</code> TOC</li>
<li><a href="https://github.com/actions/cache/commit/3fdafa472e0db16435add384585aa138ffdd16d3"><code>3fdafa4</code></a> Update GitHub Actions status badge markdown in <code>README.md</code></li>
<li><a href="https://github.com/actions/cache/commit/341e6d75d9826beb2fa659263d862f6aec63a064"><code>341e6d7</code></a> Merge branch 'actions:main' into fix-golang-windows-example</li>
<li>Additional commits viewable in <a href="https://github.com/actions/cache/compare/v2.1.7...v3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/psf/black/pulls/2962 | 2022-03-28T06:14:03Z | 2022-03-29T01:42:53Z | 2022-03-29T01:42:53Z | 2022-03-29T01:42:58Z | 146 | psf/black | 24,194 |
Remove broken links from Plino and Sentim-API | diff --git a/README.md b/README.md
index 934a799430..4a9888d931 100644
--- a/README.md
+++ b/README.md
@@ -285,7 +285,6 @@ API | Description | Auth | HTTPS | CORS |
| [Nationalize.io](https://nationalize.io) | Estimate the nationality of a first name | No | Yes | Yes |
| [OOPSpam](https://oopspam.com/) | Multiple spam filtering service | No | Yes | Yes |
| [PageCDN](https://pagecdn.com/docs/public-api) | Public API for javascript, css and font libraries on PageCDN | `apiKey` | Yes | Yes |
-| [Plino](https://plino.herokuapp.com/) | Spam filtering system | No | Yes | Unknown |
| [Postman](https://docs.api.getpostman.com/) | Tool for testing APIs | `apiKey` | Yes | Unknown |
| [ProxyCrawl](https://proxycrawl.com) | Scraping and crawling anticaptcha service | `apiKey` | Yes | Unknown |
| [Public APIs](https://github.com/davemachado/public-api) | A collective list of free JSON APIs for use in web development | No | Yes | Unknown |
@@ -579,7 +578,6 @@ API | Description | Auth | HTTPS | CORS |
| [Dialogflow](https://dialogflow.com) | Natural Language Processing | `apiKey` | Yes | Unknown |
| [EXUDE-API](http://uttesh.com/exude-api/) | Used for the primary ways for filtering the stopping, stemming words from the text data | No | Yes | Yes |
| [Keen IO](https://keen.io/) | Data Analytics | `apiKey` | Yes | Unknown |
-| [Sentim-API](https://sentim-api.herokuapp.com) | Text sentiment analysis | No | Yes | Yes |
| [Time Door](https://timedoor.io) | A time series analysis API | `apiKey` | Yes | Yes |
| [Unplugg](https://unplu.gg/test_api.html) | Forecasting API for timeseries data | `apiKey` | Yes | Unknown |
| [Wit.ai](https://wit.ai/) | Natural Language Processing | `OAuth` | Yes | Unknown |
| The links https://plino.herokuapp.com and https://sentim-api.herokuapp.com were broken
| https://api.github.com/repos/public-apis/public-apis/pulls/1608 | 2021-03-31T05:09:33Z | 2021-03-31T05:10:36Z | 2021-03-31T05:10:36Z | 2021-03-31T05:10:36Z | 488 | public-apis/public-apis | 35,668 |
Allow passing kwargs through to TFBertTokenizer | diff --git a/src/transformers/models/bert/tokenization_bert_tf.py b/src/transformers/models/bert/tokenization_bert_tf.py
index e0e38d68a58c3..281d222fbdaaa 100644
--- a/src/transformers/models/bert/tokenization_bert_tf.py
+++ b/src/transformers/models/bert/tokenization_bert_tf.py
@@ -48,7 +48,9 @@ class TFBertTokenizer(tf.keras.layers.Layer):
return_attention_mask (`bool`, *optional*, defaults to `True`):
Whether to return the attention_mask.
use_fast_bert_tokenizer (`bool`, *optional*, defaults to `True`):
- If set to false will use standard TF Text BertTokenizer, making it servable by TF Serving.
+ If True, will use the FastBertTokenizer class from Tensorflow Text. If False, will use the BertTokenizer
+ class instead. BertTokenizer supports some additional options, but is slower and cannot be exported to
+ TFLite.
"""
def __init__(
@@ -65,11 +67,12 @@ def __init__(
return_token_type_ids: bool = True,
return_attention_mask: bool = True,
use_fast_bert_tokenizer: bool = True,
+ **tokenizer_kwargs,
):
super().__init__()
if use_fast_bert_tokenizer:
self.tf_tokenizer = FastBertTokenizer(
- vocab_list, token_out_type=tf.int64, lower_case_nfd_strip_accents=do_lower_case
+ vocab_list, token_out_type=tf.int64, lower_case_nfd_strip_accents=do_lower_case, **tokenizer_kwargs
)
else:
lookup_table = tf.lookup.StaticVocabularyTable(
@@ -81,7 +84,9 @@ def __init__(
),
num_oov_buckets=1,
)
- self.tf_tokenizer = BertTokenizerLayer(lookup_table, token_out_type=tf.int64, lower_case=do_lower_case)
+ self.tf_tokenizer = BertTokenizerLayer(
+ lookup_table, token_out_type=tf.int64, lower_case=do_lower_case, **tokenizer_kwargs
+ )
self.vocab_list = vocab_list
self.do_lower_case = do_lower_case
| There are some kwargs like `preserve_unused_tokens` in the underlying TF tokenizer layers that might be useful to expose to users. This PR exposes them by passing through any unrecognized `kwargs` in the model `__init__` to the TF tokenizer layer.
Fixes #23798 | https://api.github.com/repos/huggingface/transformers/pulls/24324 | 2023-06-16T16:28:11Z | 2023-06-20T11:49:06Z | 2023-06-20T11:49:06Z | 2023-06-20T11:49:36Z | 490 | huggingface/transformers | 12,740 |
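A rough sketch of the passthrough pattern in isolation (the toy classes below are illustrative stand-ins, not the real `FastBertTokenizer` / TensorFlow Text API):

```python
class ToyBertTokenizerLayer:
    """Stand-in for a TF Text tokenizer layer (illustrative only)."""

    def __init__(self, vocab_list, lower_case=True, **kwargs):
        self.vocab_list = vocab_list
        self.lower_case = lower_case
        # Extra options (e.g. preserve_unused_tokens) end up here.
        self.options = kwargs


class ToyTFBertTokenizer:
    """Stand-in for TFBertTokenizer, forwarding unrecognized kwargs."""

    def __init__(self, vocab_list, do_lower_case=True, **tokenizer_kwargs):
        # Unrecognized kwargs are passed through to the underlying layer,
        # mirroring the change in this PR.
        self.tf_tokenizer = ToyBertTokenizerLayer(
            vocab_list, lower_case=do_lower_case, **tokenizer_kwargs
        )


tok = ToyTFBertTokenizer(["[UNK]", "hello"], preserve_unused_tokens=True)
print(tok.tf_tokenizer.options)  # {'preserve_unused_tokens': True}
```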
Fix various Sphinx warnings, errors | diff --git a/acme/acme/challenges.py b/acme/acme/challenges.py
index cfd6f8533c6..a2235b61ee5 100644
--- a/acme/acme/challenges.py
+++ b/acme/acme/challenges.py
@@ -116,7 +116,7 @@ def uri(self, domain, chall):
def gen_resource(self, chall):
"""Generate provisioned resource.
- :param .SimpleHTTP chall:
+ :param challenges.SimpleHTTP chall:
:rtype: SimpleHTTPProvisionedResource
"""
@@ -125,7 +125,7 @@ def gen_resource(self, chall):
def gen_validation(self, chall, account_key, alg=jose.RS256, **kwargs):
"""Generate validation.
- :param .SimpleHTTP chall:
+ :param challenges.SimpleHTTP chall:
:param .JWK account_key: Private account key.
:param .JWA alg:
@@ -142,14 +142,14 @@ def check_validation(self, validation, chall, account_public_key):
"""Check validation.
:param .JWS validation:
- :param .SimpleHTTP chall:
+ :param challenges.SimpleHTTP chall:
:type account_public_key:
`~cryptography.hazmat.primitives.asymmetric.rsa.RSAPublicKey`
or
`~cryptography.hazmat.primitives.asymmetric.dsa.DSAPublicKey`
or
`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey`
- wrapped in `.ComparableKey
+ wrapped in `.ComparableKey`
:rtype: bool
@@ -173,7 +173,7 @@ def simple_verify(self, chall, domain, account_public_key, port=None):
ignore the certificate provided by the HTTPS server", so
``requests.get`` is called with ``verify=False``.
- :param .SimpleHTTP chall: Corresponding challenge.
+ :param challenges.SimpleHTTP chall: Corresponding challenge.
:param unicode domain: Domain name being verified.
:param account_public_key: Public key for the key pair
being authorized. If ``None`` key verification is not
@@ -184,7 +184,7 @@ def simple_verify(self, chall, domain, account_public_key, port=None):
`~cryptography.hazmat.primitives.asymmetric.dsa.DSAPublicKey`
or
`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey`
- wrapped in `.ComparableKey
+ wrapped in `.ComparableKey`
:param int port: Port used in the validation.
:returns: ``True`` iff validation is successful, ``False``
@@ -306,7 +306,7 @@ def z_domain(self):
def chall(self):
"""Get challenge encoded in the `validation` payload.
- :rtype: DVSNI
+ :rtype: challenges.DVSNI
"""
# pylint: disable=no-member
@@ -370,7 +370,7 @@ def simple_verify(self, chall, domain, account_public_key,
`~cryptography.hazmat.primitives.asymmetric.dsa.DSAPublicKey`
or
`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey`
- wrapped in `.ComparableKey
+ wrapped in `.ComparableKey`
:param OpenSSL.crypto.X509 cert: Optional certificate. If not
provided (``None``) certificate will be retrieved using
`probe_cert`.
diff --git a/acme/acme/messages.py b/acme/acme/messages.py
index 33157899ef2..0855ae00811 100644
--- a/acme/acme/messages.py
+++ b/acme/acme/messages.py
@@ -231,7 +231,7 @@ class ChallengeBody(ResourceBody):
call ``challb.x`` to get ``challb.chall.x`` contents.
:ivar acme.messages.Status status:
:ivar datetime.datetime validated:
- :ivar Error error:
+ :ivar messages.Error error:
"""
__slots__ = ('chall',)
diff --git a/docs/contributing.rst b/docs/contributing.rst
index 5aa6e3e76ff..a663c942afa 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -82,7 +82,7 @@ If you would like to test `letsencrypt_nginx` plugin (highly
encouraged) make sure to install prerequisites as listed in
``tests/integration/nginx.sh``:
-.. include:: ../tests/integration/nginx.sh
+.. include:: ../letsencrypt-nginx/tests/boulder-integration.sh
:start-line: 1
:end-line: 2
:code: shell
diff --git a/letsencrypt-apache/letsencrypt_apache/configurator.py b/letsencrypt-apache/letsencrypt_apache/configurator.py
index 01c9d4f3045..8403b974c5b 100644
--- a/letsencrypt-apache/letsencrypt_apache/configurator.py
+++ b/letsencrypt-apache/letsencrypt_apache/configurator.py
@@ -953,9 +953,10 @@ def enable_site(self, vhost):
"""Enables an available site, Apache restart required.
.. note:: Does not make sure that the site correctly works or that all
- modules are enabled appropriately.
+ modules are enabled appropriately.
.. todo:: This function should number subdomains before the domain vhost
+
.. todo:: Make sure link is not broken...
:param vhost: vhost to enable
@@ -1034,8 +1035,9 @@ def restart(self):
.. todo:: This function will be converted to using reload
- :raises .errors.MisconfigurationError: If unable to restart due to a
- configuration problem, or if the restart subprocess cannot be run.
+ :raises .errors.MisconfigurationError: If unable to restart due
+ to a configuration problem, or if the restart subprocess
+ cannot be run.
"""
return apache_restart(self.conf("init-script"))
diff --git a/letsencrypt-apache/letsencrypt_apache/obj.py b/letsencrypt-apache/letsencrypt_apache/obj.py
index c0dcc6c438e..8cd2378a41a 100644
--- a/letsencrypt-apache/letsencrypt_apache/obj.py
+++ b/letsencrypt-apache/letsencrypt_apache/obj.py
@@ -41,21 +41,24 @@ def _rank_specific_addr(self):
return 2
def conflicts(self, addr):
- """Returns if address could conflict with correct function of self.
+ r"""Returns if address could conflict with correct function of self.
Could addr take away service provided by self within Apache?
.. note::IP Address is more important than wildcard.
Connection from 127.0.0.1:80 with choices of *:80 and 127.0.0.1:*
- chooses 127.0.0.1:*
+ chooses 127.0.0.1:\*
.. todo:: Handle domain name addrs...
Examples:
- 127.0.0.1:*.conflicts(127.0.0.1:443) - True
- 127.0.0.1:443.conflicts(127.0.0.1:*) - False
- *:443.conflicts(*:80) - False
- _default_:443.conflicts(*:443) - True
+
+ ========================================= =====
+ ``127.0.0.1:\*.conflicts(127.0.0.1:443)`` True
+ ``127.0.0.1:443.conflicts(127.0.0.1:\*)`` False
+ ``\*:443.conflicts(\*:80)`` False
+ ``_default_:443.conflicts(\*:443)`` True
+ ========================================= =====
"""
if self._addr_less_specific(addr):
@@ -72,9 +75,10 @@ def is_wildcard(self):
def get_sni_addr(self, port):
"""Returns the least specific address that resolves on the port.
- Example:
- 1.2.3.4:443 -> 1.2.3.4:<port>
- 1.2.3.4:* -> 1.2.3.4:*
+ Examples:
+
+ - ``1.2.3.4:443`` -> ``1.2.3.4:<port>``
+ - ``1.2.3.4:*`` -> ``1.2.3.4:*``
:param str port: Desired port
@@ -100,8 +104,9 @@ class VirtualHost(object): # pylint: disable=too-few-public-methods
:ivar bool enabled: Virtual host is enabled
https://httpd.apache.org/docs/2.4/vhosts/details.html
+
.. todo:: Any vhost that includes the magic _default_ wildcard is given the
- same ServerName as the main server.
+ same ServerName as the main server.
"""
# ?: is used for not returning enclosed characters
diff --git a/letsencrypt/crypto_util.py b/letsencrypt/crypto_util.py
index b7d9987fca4..279330f0c54 100644
--- a/letsencrypt/crypto_util.py
+++ b/letsencrypt/crypto_util.py
@@ -267,5 +267,5 @@ def asn1_generalizedtime_to_dt(timestamp):
def pyopenssl_x509_name_as_text(x509name):
- """Convert `OpenSSL.crypto.X509Name to text."""
+ """Convert `OpenSSL.crypto.X509Name` to text."""
return "/".join("{0}={1}" for key, value in x509name.get_components())
| https://api.github.com/repos/certbot/certbot/pulls/703 | 2015-08-24T08:08:04Z | 2015-08-24T21:08:53Z | 2015-08-24T21:08:53Z | 2016-05-06T19:21:33Z | 2,200 | certbot/certbot | 938 | |
A few dashboard fixes | diff --git a/website/src/components/Widgets/LeaderboardWidget.tsx b/website/src/components/Dashboard/LeaderboardTable.tsx
similarity index 91%
rename from website/src/components/Widgets/LeaderboardWidget.tsx
rename to website/src/components/Dashboard/LeaderboardTable.tsx
index 5f680179a2..b958d4b7c2 100644
--- a/website/src/components/Widgets/LeaderboardWidget.tsx
+++ b/website/src/components/Dashboard/LeaderboardTable.tsx
@@ -1,6 +1,6 @@
import { Badge, Box, Image, Link, Stack, StackDivider, Text, useColorModeValue } from "@chakra-ui/react";
-export function LeaderboardWidget() {
+export function LeaderboardTable() {
const backgroundColor = useColorModeValue("white", "gray.700");
const accentColor = useColorModeValue("gray.200", "gray.900");
@@ -54,7 +54,7 @@ export function LeaderboardWidget() {
<div className="flex flex-col gap-4">
<div className="flex items-end justify-between">
<Text className="text-2xl font-bold">Top 5 Contributors</Text>
- <Link key="Leaderboard" href="#" _hover={{ textDecoration: "none" }}>
+ <Link href="#" _hover={{ textDecoration: "none" }}>
<Text color="blue.400" className="text-sm font-bold">
View All ->
</Text>
@@ -74,8 +74,8 @@ export function LeaderboardWidget() {
<p>Score</p>
</div>
</div>
- {leaderInfo.map((item) => (
- <div key="User" className="grid grid-cols-4 items-center">
+ {leaderInfo.map((item, itemIndex) => (
+ <div key={itemIndex} className="grid grid-cols-4 items-center">
<div className="flex items-center gap-3">
<Image alt="Profile Picture" src={item.image} boxSize="7" borderRadius="full"></Image>
<p>{item.name}</p>
diff --git a/website/src/components/Widgets/SideMenu.tsx b/website/src/components/Dashboard/SideMenu.tsx
similarity index 93%
rename from website/src/components/Widgets/SideMenu.tsx
rename to website/src/components/Dashboard/SideMenu.tsx
index 8b21b71f0a..30a45777d0 100644
--- a/website/src/components/Widgets/SideMenu.tsx
+++ b/website/src/components/Dashboard/SideMenu.tsx
@@ -37,15 +37,15 @@ export function SideMenu() {
className="grid grid-cols-4 gap-2 sm:flex sm:flex-col sm:justify-between p-4 h-full"
>
<nav className="grid grid-cols-3 col-span-3 sm:flex sm:flex-col gap-2">
- {buttonOptions.map((item) => (
+ {buttonOptions.map((item, itemIndex) => (
<Tooltip
- key="Tooltip"
+ key={itemIndex}
fontFamily="inter"
label={item.label}
placement="right"
className="hidden lg:hidden sm:block"
>
- <Link key="{item.label}" href={item.pathname} style={{ textDecoration: "none" }}>
+ <Link key={`${item.label}-${itemIndex}`} href={item.pathname} style={{ textDecoration: "none" }}>
<Button
justifyContent={["center", "center", "center", "left"]}
gap="3"
diff --git a/website/src/components/Widgets/TaskOption.tsx b/website/src/components/Dashboard/TaskOption.tsx
similarity index 94%
rename from website/src/components/Widgets/TaskOption.tsx
rename to website/src/components/Dashboard/TaskOption.tsx
index f807f3914a..6b17a0792f 100644
--- a/website/src/components/Widgets/TaskOption.tsx
+++ b/website/src/components/Dashboard/TaskOption.tsx
@@ -46,8 +46,8 @@ export const TaskOption = () => {
<div>
<Text className="text-2xl font-bold pb-4">Create</Text>
<SimpleGrid columns={[1, 2, 2, 3, 4]} gap={4}>
- {crTasks.map((item) => (
- <Link key="Create Option" href={item.pathname}>
+ {crTasks.map((item, itemIndex) => (
+ <Link key={itemIndex} href={item.pathname}>
<GridItem
bg={backgroundColor}
borderRadius="xl"
@@ -82,8 +82,8 @@ export const TaskOption = () => {
<div>
<Text className="text-2xl font-bold pb-4">Evaluate</Text>
<SimpleGrid columns={[1, 2, 2, 3, 4]} gap={4}>
- {evTasks.map((item) => (
- <Link key="Evaluate Option" href={item.pathname}>
+ {evTasks.map((item, itemIndex) => (
+ <Link key={itemIndex} href={item.pathname}>
<GridItem
bg={backgroundColor}
borderRadius="xl"
diff --git a/website/src/components/Widgets/index.ts b/website/src/components/Dashboard/index.ts
similarity index 58%
rename from website/src/components/Widgets/index.ts
rename to website/src/components/Dashboard/index.ts
index 48a40e7419..0b4ff49aaf 100644
--- a/website/src/components/Widgets/index.ts
+++ b/website/src/components/Dashboard/index.ts
@@ -1,3 +1,3 @@
-export { LeaderboardWidget } from "./LeaderboardWidget";
+export { LeaderboardTable } from "./LeaderboardTable";
export { SideMenu } from "./SideMenu";
export { TaskOption } from "./TaskOption";
diff --git a/website/src/pages/dashboard.tsx b/website/src/pages/dashboard.tsx
index 018227af7b..dfc5cb0326 100644
--- a/website/src/pages/dashboard.tsx
+++ b/website/src/pages/dashboard.tsx
@@ -1,7 +1,7 @@
import { Box, useColorMode } from "@chakra-ui/react";
import Head from "next/head";
import { Header } from "src/components/Header";
-import { LeaderboardWidget, SideMenu, TaskOption } from "src/components/Widgets";
+import { LeaderboardTable, SideMenu, TaskOption } from "src/components/Dashboard";
import { colors } from "styles/Theme/colors";
const Dashboard = () => {
@@ -19,7 +19,7 @@ const Dashboard = () => {
</Box>
<Box className="flex flex-col overflow-auto p-6 sm:pl-0 gap-14">
<TaskOption />
- <LeaderboardWidget />
+ <LeaderboardTable />
</Box>
</Box>
</Box>
| - A few quick fixes on use of the 'key' attribute.
- Also renaming "widgets" folder which contained dashboard components to "dashboard" and removing "widget" from component names. | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/370 | 2023-01-04T11:30:34Z | 2023-01-04T11:37:40Z | 2023-01-04T11:37:40Z | 2023-01-04T12:02:01Z | 1,532 | LAION-AI/Open-Assistant | 37,712 |
🌐 Add Portuguese translation for `docs/pt/docs/tutorial/body-nested-models.md` | diff --git a/docs/pt/docs/tutorial/body-nested-models.md b/docs/pt/docs/tutorial/body-nested-models.md
new file mode 100644
index 0000000000000..8ab77173e96d0
--- /dev/null
+++ b/docs/pt/docs/tutorial/body-nested-models.md
@@ -0,0 +1,248 @@
+# Corpo - Modelos aninhados
+
+Com o **FastAPI**, você pode definir, validar, documentar e usar modelos profundamente aninhados de forma arbitrária (graças ao Pydantic).
+
+## Campos do tipo Lista
+
+Você pode definir um atributo como um subtipo. Por exemplo, uma `list` do Python:
+
+```Python hl_lines="14"
+{!../../../docs_src/body_nested_models/tutorial001.py!}
+```
+
+Isso fará com que `tags` seja uma lista de itens, mesmo sem declarar o tipo dos elementos desta lista.
+
+## Campos do tipo Lista com um parâmetro de tipo
+
+Mas o Python tem uma maneira específica de declarar listas com tipos internos ou "parâmetros de tipo":
+
+### Importe `List` do typing
+
+Primeiramente, importe `List` do módulo `typing` que já vem por padrão no Python:
+
+```Python hl_lines="1"
+{!../../../docs_src/body_nested_models/tutorial002.py!}
+```
+
+### Declare a `List` com um parâmetro de tipo
+
+Para declarar tipos que têm parâmetros de tipo(tipos internos), como `list`, `dict`, `tuple`:
+
+* Importe-os do módulo `typing`
+* Passe o(s) tipo(s) interno(s) como "parâmetros de tipo" usando colchetes: `[` e `]`
+
+```Python
+from typing import List
+
+my_list: List[str]
+```
+
+Essa é a sintaxe padrão do Python para declarações de tipo.
+
+Use a mesma sintaxe padrão para atributos de modelo com tipos internos.
+
+Portanto, em nosso exemplo, podemos fazer com que `tags` sejam especificamente uma "lista de strings":
+
+
+```Python hl_lines="14"
+{!../../../docs_src/body_nested_models/tutorial002.py!}
+```
+
+## Tipo "set"
+
+
+Mas então, quando pensamos um pouco mais, percebemos que as tags não devem se repetir; elas provavelmente devem ser strings únicas.
+
+E que o Python tem um tipo de dados especial para conjuntos de itens únicos, o `set`.
+
+Então podemos importar `Set` e declarar `tags` como um `set` de `str`s:
+
+
+```Python hl_lines="1 14"
+{!../../../docs_src/body_nested_models/tutorial003.py!}
+```
+
+Com isso, mesmo que você receba uma requisição contendo dados duplicados, ela será convertida em um conjunto de itens exclusivos.
+
+E sempre que você enviar esses dados como resposta, mesmo se a fonte tiver duplicatas, eles serão gerados como um conjunto de itens exclusivos.
+
+E também teremos anotações/documentação em conformidade.
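+Em um esboço isolado (hipotético, apenas para ilustrar a conversão), o comportamento é o seguinte:
+
+```Python
+from typing import Set
+
+from pydantic import BaseModel
+
+
+class Item(BaseModel):
+    name: str
+    tags: Set[str] = set()
+
+
+# Tags duplicadas na entrada viram um conjunto de itens únicos
+item = Item(name="Foo", tags=["rock", "metal", "rock"])
+print(item.tags)  # {"rock", "metal"} (a ordem de um set não é garantida)
+```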
+
+## Modelos aninhados
+
+Cada atributo de um modelo Pydantic tem um tipo.
+
+Mas esse tipo pode ser outro modelo Pydantic.
+
+Portanto, você pode declarar "objects" JSON profundamente aninhados com nomes, tipos e validações de atributos específicos.
+
+Tudo isso, aninhado arbitrariamente.
+
+### Defina um sub-modelo
+
+Por exemplo, nós podemos definir um modelo `Image`:
+
+```Python hl_lines="9-11"
+{!../../../docs_src/body_nested_models/tutorial004.py!}
+```
+
+### Use o sub-modelo como um tipo
+
+E então podemos usá-lo como o tipo de um atributo:
+
+```Python hl_lines="20"
+{!../../../docs_src/body_nested_models/tutorial004.py!}
+```
+
+Isso significa que o **FastAPI** vai esperar um corpo similar a:
+
+```JSON
+{
+ "name": "Foo",
+ "description": "The pretender",
+ "price": 42.0,
+ "tax": 3.2,
+ "tags": ["rock", "metal", "bar"],
+ "image": {
+ "url": "http://example.com/baz.jpg",
+ "name": "The Foo live"
+ }
+}
+```
+
+Novamente, apenas fazendo essa declaração, com o **FastAPI**, você ganha:
+
+* Suporte do editor de texto (completação, etc), inclusive para modelos aninhados
+* Conversão de dados
+* Validação de dados
+* Documentação automática
+
+## Tipos especiais e validação
+
+Além dos tipos singulares normais como `str`, `int`, `float`, etc., você também pode usar tipos singulares mais complexos que herdam de `str`.
+
+Para ver todas as opções possíveis, cheque a documentação dos <a href="https://pydantic-docs.helpmanual.io/usage/types/" class="external-link" target="_blank">tipos exóticos do Pydantic</a>. Você verá alguns exemplos no próximo capítulo.
+
+Por exemplo, no modelo `Image` nós temos um campo `url`, então nós podemos declará-lo como um `HttpUrl` do Pydantic em vez de uma `str`:
+
+```Python hl_lines="4 10"
+{!../../../docs_src/body_nested_models/tutorial005.py!}
+```
+
+A string será verificada para se tornar uma URL válida e documentada no JSON Schema/OpenAPI como tal.
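+Em um esboço hipotético (apenas para ilustrar a validação), uma URL inválida gera um erro:
+
+```Python
+from pydantic import BaseModel, HttpUrl, ValidationError
+
+
+class Image(BaseModel):
+    url: HttpUrl
+    name: str
+
+
+try:
+    # "foo" não é uma URL válida, então a validação falha
+    Image(url="foo", name="The Foo live")
+except ValidationError as err:
+    print(err)
+```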
+
+## Attributes with lists of submodels
+
+You can also use Pydantic models as subtypes of `list`, `set`, etc.:
+
+```Python hl_lines="20"
+{!../../../docs_src/body_nested_models/tutorial006.py!}
+```
+
+This will expect (convert, validate, document, etc.) a JSON body like:
+
+```JSON hl_lines="11"
+{
+ "name": "Foo",
+ "description": "The pretender",
+ "price": 42.0,
+ "tax": 3.2,
+ "tags": [
+ "rock",
+ "metal",
+ "bar"
+ ],
+ "images": [
+ {
+ "url": "http://example.com/baz.jpg",
+ "name": "The Foo live"
+ },
+ {
+ "url": "http://example.com/dave.jpg",
+ "name": "The Baz"
+ }
+ ]
+}
+```
+
+!!! info
+    Notice how the `images` key now has a list of image objects.
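The tutorial file isn't shown inline, but the `images` attribute above can be sketched standalone with plain Pydantic (hypothetical reconstruction matching the JSON body):

```python
from typing import List, Optional

from pydantic import BaseModel


class Image(BaseModel):
    url: str
    name: str


class Item(BaseModel):
    name: str
    price: float
    images: Optional[List[Image]] = None  # list of submodels


# Each dict in the list is converted into an Image instance.
item = Item(
    name="Foo",
    price=42.0,
    images=[
        {"url": "http://example.com/baz.jpg", "name": "The Foo live"},
        {"url": "http://example.com/dave.jpg", "name": "The Baz"},
    ],
)
```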
+
+## Deeply nested models
+
+You can define arbitrarily deeply nested models:
+
+```Python hl_lines="9 14 20 23 27"
+{!../../../docs_src/body_nested_models/tutorial007.py!}
+```
+
+!!! info
+    Notice how `Offer` has a list of `Item`s, which in turn have an optional list of `Image`s.
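The `Offer` → `Item` → `Image` nesting can be sketched in plain Pydantic (again a hypothetical sketch, since the tutorial file isn't shown inline):

```python
from typing import List, Optional

from pydantic import BaseModel


class Image(BaseModel):
    url: str
    name: str


class Item(BaseModel):
    name: str
    price: float
    images: Optional[List[Image]] = None


class Offer(BaseModel):
    name: str
    price: float
    items: List[Item]  # each item may itself contain a list of images


offer = Offer(
    name="Bundle",
    price=50.0,
    items=[
        {
            "name": "Foo",
            "price": 42.0,
            "images": [{"url": "http://example.com/baz.jpg", "name": "The Foo live"}],
        },
        {"name": "Bar", "price": 8.0},  # images is optional
    ],
)
```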
+
+## Bodies of pure lists
+
+If the top level value of the JSON body you expect is a JSON `array` (a Python `list`), you can declare the type in the parameter of the function, the same as in Pydantic models:
+
+
+```Python
+images: List[Image]
+```
+
+as in:
+
+```Python hl_lines="15"
+{!../../../docs_src/body_nested_models/tutorial008.py!}
+```
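Outside of FastAPI, the same top-level-list idea can be sketched by validating each element of the array into the model — FastAPI does the equivalent for you from the `List[Image]` annotation (hypothetical sketch):

```python
from typing import List

from pydantic import BaseModel


class Image(BaseModel):
    url: str
    name: str


# A pure JSON array as the top-level value of the body.
payload = [
    {"url": "http://example.com/baz.jpg", "name": "The Foo live"},
    {"url": "http://example.com/dave.jpg", "name": "The Baz"},
]

# Validate each element into the model.
images: List[Image] = [Image(**raw) for raw in payload]
```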
+
+## Editor support everywhere
+
+And you get editor support everywhere.
+
+Even for items inside of lists:
+
+<img src="/img/tutorial/body-nested-models/image01.png">
+
+You couldn't get this kind of editor support if you were working directly with `dict` instead of Pydantic models.
+
+But you don't have to worry about them either, incoming dicts are converted automatically and your output is converted automatically to JSON too.
+
+## Bodies of arbitrary `dict`s
+
+You can also declare a body as a `dict` with keys of some type and values of some other type.
+
+This way, you don't have to know beforehand what the valid field/attribute names are (as would be the case with Pydantic models).
+
+This would be useful if you want to receive keys that you don't already know.
+
+---
+
+Another useful case is when you want to have keys of another type, e.g. `int`.
+
+That's what we are going to see here.
+
+In this case, you would accept any `dict` as long as it has `int` keys with `float` values:
+
+```Python hl_lines="9"
+{!../../../docs_src/body_nested_models/tutorial009.py!}
+```
+
+!!! tip
+    Keep in mind that JSON only supports `str` as keys.
+
+    But Pydantic has automatic data conversion.
+
+    This means that, even though your API clients can only send strings as keys, as long as those strings contain pure integers, Pydantic will convert them and validate them.
+
+    And the `dict` you receive as `weights` will actually have `int` keys and `float` values.
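That key conversion can be sketched directly with Pydantic (the model name here is an illustrative assumption, only the `weights` field mirrors the tutorial):

```python
from typing import Dict

from pydantic import BaseModel


class Index(BaseModel):
    weights: Dict[int, float]


# JSON keys are always strings; Pydantic coerces "1" -> 1 and "2" -> 2
# because the annotation says the keys are ints.
model = Index(weights={"1": 2.3, "2": 4.5})
```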
+
+## Recap
+
+With **FastAPI** you have the maximum flexibility provided by Pydantic models, while keeping your code simple, short and elegant.
+
+But with all the benefits:
+
+* Editor support (completion everywhere!)
+* Data conversion (a.k.a. parsing/serialization)
+* Data validation
+* Schema documentation
+* Automatic docs
diff --git a/docs/pt/mkdocs.yml b/docs/pt/mkdocs.yml
index a8ab4cb329908..658e6ae496710 100644
--- a/docs/pt/mkdocs.yml
+++ b/docs/pt/mkdocs.yml
@@ -71,6 +71,7 @@ nav:
- tutorial/body.md
- tutorial/body-multiple-params.md
- tutorial/body-fields.md
+ - tutorial/body-nested-models.md
- tutorial/extra-data-types.md
- tutorial/query-params-str-validations.md
- tutorial/path-params-numeric-validations.md
| https://api.github.com/repos/tiangolo/fastapi/pulls/4053 | 2021-10-14T20:50:19Z | 2023-04-13T18:15:35Z | 2023-04-13T18:15:35Z | 2023-04-13T18:15:35Z | 2,462 | tiangolo/fastapi | 23,184 | |
fix(actionable-items): Remove source map debug from actionable items | diff --git a/src/sentry/api/endpoints/actionable_items.py b/src/sentry/api/endpoints/actionable_items.py
index 010cd4c1e439f..845352040ae7f 100644
--- a/src/sentry/api/endpoints/actionable_items.py
+++ b/src/sentry/api/endpoints/actionable_items.py
@@ -14,12 +14,9 @@
ActionPriority,
deprecated_event_errors,
errors_to_hide,
- find_debug_frames,
priority_ranking,
- sourcemap_sdks,
)
-from sentry.api.helpers.source_map_helper import source_map_debug
-from sentry.models import EventError, Organization, Project, SourceMapProcessingIssue
+from sentry.models import EventError, Organization, Project
class ActionableItemResponse(TypedDict):
@@ -59,21 +56,6 @@ def get(self, request: Request, project: Project, event_id: str) -> Response:
raise NotFound(detail="Event not found")
actions = []
- debug_frames = []
-
- sdk_info = event.data.get("sdk")
- # Find debug frames if event has frontend js sdk
- if sdk_info and sdk_info["name"] in sourcemap_sdks:
- debug_frames = find_debug_frames(event)
-
- for frame_idx, exception_idx in debug_frames:
- debug_response = source_map_debug(project, event.event_id, exception_idx, frame_idx)
- issue, data = debug_response.issue, debug_response.data
-
- if issue:
- response = SourceMapProcessingIssue(issue, data=data).get_api_context()
- actions.append(response)
-
event_errors = event.data.get("errors", [])
# Add event errors to actionable items
diff --git a/tests/sentry/api/endpoints/test_actionable_items.py b/tests/sentry/api/endpoints/test_actionable_items.py
index fd09c484ec6ab..0286c7edf8450 100644
--- a/tests/sentry/api/endpoints/test_actionable_items.py
+++ b/tests/sentry/api/endpoints/test_actionable_items.py
@@ -1,15 +1,6 @@
-from django.core.files.base import ContentFile
from rest_framework import status
-from sentry.api.helpers.actionable_items_helper import get_file_extension, is_frame_filename_valid
-from sentry.models import (
- Distribution,
- EventError,
- File,
- Release,
- ReleaseFile,
- SourceMapProcessingIssue,
-)
+from sentry.models import EventError
from sentry.testutils.cases import APITestCase
from sentry.testutils.helpers import with_feature
from sentry.testutils.silo import region_silo_test
@@ -25,84 +16,10 @@ class ActionableItemsEndpointTestCase(APITestCase):
# and how event errors are handled.
endpoint = "sentry-api-0-event-actionable-items"
- base_data = {
- "event_id": "a" * 32,
- "sdk": {
- "name": "sentry.javascript.browser",
- "version": "7.3.0",
- },
- "exception": {
- "values": [
- {
- "type": "Error",
- "stacktrace": {
- "frames": [
- {
- "abs_path": "https://app.example.com/static/js/main.fa8fe19f.js",
- "filename": "/static/js/main.fa8fe19f.js",
- "lineno": 1,
- "colno": 39,
- "context_line": "function foo() {",
- "in_app": True,
- }
- ]
- },
- },
- ]
- },
- }
-
- class TestFrame:
- def __init__(self, abs_path, filename=None, in_app=None, function=None):
- self.abs_path = abs_path
- self.filename = filename
- self.in_app = in_app
- self.function = function
-
def setUp(self) -> None:
self.login_as(self.user)
return super().setUp()
- def test_get_file_extension(self):
- cases = [("foo.js", "js"), ("foo.spec.js", "js"), ("foo", None)]
- for filename, expected in cases:
- assert get_file_extension(filename) == expected
-
- def test_is_frame_filename_valid(self):
- cases = [
- (
- self.TestFrame(
- abs_path="https://app.example.com/static/js/main.fa8fe19f.js",
- filename="<anonymous>",
- in_app=True,
- ),
- False,
- ),
- (
- self.TestFrame(
- abs_path="https://app.example.com/static/js/main.fa8fe19f.js",
- function="@webkit-masked-url",
- in_app=True,
- ),
- False,
- ),
- (
- self.TestFrame(
- abs_path="https://app.example.com/static/js/main",
- ),
- False,
- ),
- (
- self.TestFrame(
- abs_path="https://app.example.com/static/js/main.fa8fe19f.js",
- ),
- True,
- ),
- ]
-
- for frame, expected in cases:
- assert is_frame_filename_valid(frame) == expected
-
def test_no_feature_flag(self):
event = self.store_event(
data={"event_id": "a" * 32},
@@ -129,183 +46,6 @@ def test_missing_event(self):
)
assert resp.data["detail"] == "Event not found"
- @with_feature("organizations:actionable-items")
- def test_event_is_not_javascript(self):
- data = {
- "event_id": "a" * 32,
- "sdk": {
- "name": "sentry.python",
- "version": "1.29.2",
- },
- "exception": {
- "values": [
- {
- "type": "Error",
- "stacktrace": {
- "frames": [
- {
- "abs_path": "https://app.example.com/static/py/main.py",
- "filename": "/static/py/main.py",
- "lineno": 1,
- "colno": 39,
- "context_line": "return results",
- "in_app": True,
- }
- ]
- },
- },
- ]
- },
- }
-
- event = self.store_event(
- data=data,
- project_id=self.project.id,
- )
-
- resp = self.get_success_response(
- self.organization.slug,
- self.project.slug,
- event.event_id,
- )
-
- assert resp.data["errors"] == []
-
- @with_feature("organizations:actionable-items")
- def test_event_has_no_release(self):
- event = self.store_event(
- data=self.base_data,
- project_id=self.project.id,
- )
-
- resp = self.get_success_response(
- self.organization.slug,
- self.project.slug,
- event.event_id,
- )
-
- error = resp.data["errors"][0]
- assert error["type"] == "no_release_on_event"
- assert error["message"] == "The event is missing a release"
-
- @with_feature("organizations:actionable-items")
- def test_multiple_source_map_errors(self):
- data = {
- "event_id": "a" * 32,
- "sdk": {
- "name": "sentry.javascript.browser",
- "version": "7.3.0",
- },
- "exception": {
- "values": [
- {
- "type": "Error",
- "stacktrace": {
- "frames": [
- {
- "abs_path": "https://app.example.com/static/js/main.fa8fe19f.js",
- "filename": "/static/js/main.fa8fe19f.js",
- "lineno": 1,
- "colno": 39,
- "context_line": "function foo() {",
- "in_app": True,
- },
- {
- "abs_path": "https://app.example.com/static/js/main.fa8fe19f.js",
- "filename": "/static/js/main.fa8fe19f.js",
- "lineno": 10,
- "colno": 15,
- "context_line": "function baz() {",
- "in_app": True,
- },
- {
- "abs_path": "https://app.example.com/static/js/main.a1b2c3.js",
- "filename": "/static/js/main.a1b2c3.js",
- "lineno": 2,
- "colno": 50,
- "context_line": "function bar() {",
- "in_app": True,
- },
- ]
- },
- },
- ]
- },
- }
- event = self.store_event(
- data=data,
- project_id=self.project.id,
- )
-
- resp = self.get_success_response(
- self.organization.slug,
- self.project.slug,
- event.event_id,
- )
-
- errors = resp.data["errors"]
-
- # Should have 2 errors, one path is repeated so it shouldn't have an error
- assert len(errors) == 2
-
- assert errors[0]["type"] == "no_release_on_event"
- assert errors[1]["type"] == "no_release_on_event"
-
- @with_feature("organizations:actionable-items")
- def test_event_has_no_release_with_event_error(self):
- data = {
- "event_id": "a" * 32,
- "sdk": {
- "name": "sentry.javascript.browser",
- "version": "7.3.0",
- },
- "exception": {
- "values": [
- {
- "type": "Error",
- "stacktrace": {
- "frames": [
- {
- "abs_path": "https://app.example.com/static/js/main.fa8fe19f.js",
- "filename": "/static/js/main.fa8fe19f.js",
- "lineno": 1,
- "colno": 39,
- "context_line": "function foo() {",
- "in_app": True,
- }
- ]
- },
- },
- ]
- },
- "errors": [
- {"type": EventError.JS_MISSING_SOURCES_CONTENT, "url": "http://example.com"}
- ],
- }
-
- event = self.store_event(
- data=data,
- project_id=self.project.id,
- assert_no_errors=False,
- )
-
- resp = self.get_success_response(
- self.organization.slug,
- self.project.slug,
- event.event_id,
- )
-
- errors = resp.data["errors"]
-
- assert len(errors) == 2
-
- # Sourcemap error should be first
- sourcemap_error = errors[0]
- event_error = errors[1]
-
- assert sourcemap_error["type"] == SourceMapProcessingIssue.MISSING_RELEASE
- assert event_error["type"] == EventError.JS_MISSING_SOURCES_CONTENT
-
@with_feature("organizations:actionable-items")
def test_orders_event_errors_by_priority(self):
event = self.store_event(
@@ -342,36 +82,6 @@ def test_orders_event_errors_by_priority(self):
project_id=self.project.id,
assert_no_errors=False,
)
- release = Release.objects.get(organization=self.organization, version=event.release)
- release.update(user_agent="test_user_agent")
-
- dist = Distribution.objects.get(
- organization_id=self.organization.id, name="my-dist", release_id=release.id
- )
-
- file = File.objects.create(name="application.js", type="release.file")
- fileobj = ContentFile(b"a\n//# sourceMappingURL=application.js.map")
- file.putfile(fileobj)
-
- ReleaseFile.objects.create(
- organization_id=self.project.organization_id,
- release_id=release.id,
- file=file,
- name="~/application.js",
- dist_id=dist.id,
- )
-
- ReleaseFile.objects.create(
- organization_id=self.project.organization_id,
- release_id=release.id,
- file=file,
- name="~/application.js.map",
- dist_id=dist.id,
- )
-
- sourcemapfile = File.objects.create(name="application.js.map", type="release.file")
- sourcemapfileobj = ContentFile(b"mapping code")
- sourcemapfile.putfile(sourcemapfileobj)
resp = self.get_success_response(
self.organization.slug,
| this pr removes source map debugging from actionable items since we are no longer showing those alerts on the issue details page. | https://api.github.com/repos/getsentry/sentry/pulls/56405 | 2023-09-18T17:58:33Z | 2023-09-19T15:31:54Z | 2023-09-19T15:31:54Z | 2023-10-05T00:03:58Z | 2,837 | getsentry/sentry | 44,086 |