Dataset columns: title, diff, body, url, created_at, closed_at, merged_at, updated_at, diff_len (float64), repo_name (83 values), __index_level_0__ (int64)
fix nan in multiply
```diff
diff --git a/merger/MergeMasked.py b/merger/MergeMasked.py
index aaea66a57..bb9396612 100644
--- a/merger/MergeMasked.py
+++ b/merger/MergeMasked.py
@@ -232,6 +232,10 @@ def MergeMaskedFace (predictor_func, predictor_input_shape,
             cfg_mp = cfg.motion_blur_power / 100.0
 
+            # linux opencv can produce nan's so there will be errors in multiplying and glitches in videos
+            img_bgr = np.nan_to_num(img_bgr)
+            img_face_mask_a = np.nan_to_num(img_face_mask_a)
+            out_img = np.nan_to_num(out_img)
             out_img = img_bgr*(1-img_face_mask_a) + (out_img*img_face_mask_a)
 
             if ('seamless' in cfg.mode and cfg.color_transfer_mode != 0) or \
```
https://github.com/iperov/DeepFaceLab/issues/763

I reproduced the issue, and it turns out that OpenCV on Linux can produce NaN values, which is why the merge fails:

```
/app/deepfake/DeepFaceLab/merger/MergeMasked.py:235: RuntimeWarning: invalid value encountered in multiply
  out_img = img_bgr*(1-img_face_mask_a) + (out_img*img_face_mask_a)
```

Created a PR to fix this, tested locally: no glitches.
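Why the fix works can be shown with a standalone NumPy snippet (a minimal sketch with hypothetical pixel values, not the DeepFaceLab code itself): any NaN in the mask poisons the whole blend, and `np.nan_to_num` replaces NaNs with zeros before the multiply.

```python
import numpy as np

# Hypothetical 2x2 single-channel "images"; one mask pixel is NaN.
img_bgr = np.array([[0.2, 0.4], [0.6, 0.8]])
out_img = np.ones((2, 2))
mask = np.array([[0.5, np.nan], [1.0, 0.0]])

# Without the fix, NaN propagates into the blended result.
blended = img_bgr * (1 - mask) + out_img * mask
assert np.isnan(blended).any()

# With the fix: NaNs become 0.0, so the blend stays finite.
mask = np.nan_to_num(mask)
blended = img_bgr * (1 - mask) + out_img * mask
assert not np.isnan(blended).any()
```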
https://api.github.com/repos/iperov/DeepFaceLab/pulls/769
2020-06-04T12:31:59Z
2020-06-04T12:39:47Z
2020-06-04T12:39:47Z
2020-06-04T12:40:51Z
210
iperov/DeepFaceLab
33,408
Changes in Documentation
```diff
diff --git a/scrapy/utils/defer.py b/scrapy/utils/defer.py
index aa6dcffda25..bcf20951165 100644
--- a/scrapy/utils/defer.py
+++ b/scrapy/utils/defer.py
@@ -11,7 +11,7 @@ def defer_fail(_failure):
     """Same as twisted.internet.defer.fail but delay calling errback until
     next reactor loop
 
-    It delays by 100ms so reactor has a chance to go trough readers and writers
+    It delays by 100ms so reactor has a chance to go through readers and writers
     before attending pending delayed calls, so do not set delay to zero.
     """
     d = defer.Deferred()
```
This is my first PR. While going through the code, I found a typo and corrected it. Corrected file: scrapy/utils/defer.py
https://api.github.com/repos/scrapy/scrapy/pulls/3089
2018-01-25T07:43:08Z
2018-01-25T20:12:18Z
2018-01-25T20:12:18Z
2018-07-11T20:45:49Z
154
scrapy/scrapy
34,849
feat: match anything in array syntax, not only words and whitespace
```diff
diff --git a/modules/sdxl_styles.py b/modules/sdxl_styles.py
index 71afc402f..2a310024c 100644
--- a/modules/sdxl_styles.py
+++ b/modules/sdxl_styles.py
@@ -94,9 +94,8 @@ def get_words(arrays, totalMult, index):
     return [word] + get_words(arrays[1:], math.floor(totalMult/len(words)), index)
 
-
 def apply_arrays(text, index):
-    arrays = re.findall(r'\[\[([\s,\w-]+)\]\]', text)
+    arrays = re.findall(r'\[\[(.*?)\]\]', text)
     if len(arrays) == 0:
         return text
```
Closes https://github.com/lllyasviel/Fooocus/issues/2437

Now matches not only `[[blue,red]]` but also `[[ (red:1.1), (blue:1.2) ]]`, enabling same-seed checks with different prompt weights.
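A standalone comparison of the two patterns (plain `re` with a hypothetical prompt string): the old character class cannot match weight syntax like `(red:1.1)` because parentheses and colons are outside `[\s,\w-]`, while the non-greedy wildcard captures anything between the double brackets.

```python
import re

text = "a photo of [[blue,red]] and [[ (red:1.1), (blue:1.2) ]] flowers"

old = re.findall(r'\[\[([\s,\w-]+)\]\]', text)  # word chars, whitespace, commas, dashes only
new = re.findall(r'\[\[(.*?)\]\]', text)        # anything, non-greedy up to the next ]]

assert old == ['blue,red']  # the weighted array is silently skipped
assert new == ['blue,red', ' (red:1.1), (blue:1.2) ']
```

The non-greedy `.*?` is important: a greedy `.*` would swallow everything from the first `[[` to the last `]]` in one match.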
https://api.github.com/repos/lllyasviel/Fooocus/pulls/2438
2024-03-04T10:20:43Z
2024-03-04T10:22:25Z
2024-03-04T10:22:24Z
2024-03-05T16:41:39Z
157
lllyasviel/Fooocus
7,174
Add image parser using donut model
diff --git a/gpt_index/readers/file.py b/gpt_index/readers/file.py index 5d4d55e027155..ffedd2e4e14f8 100644 --- a/gpt_index/readers/file.py +++ b/gpt_index/readers/file.py @@ -1,4 +1,5 @@ """Simple reader that .""" +import re from pathlib import Path from typing import Callable, Dict, List, Optional @@ -42,9 +43,74 @@ def _pdf_reader(input_file: Path, errors: str) -> str: return text +def _image_parser(input_file: Path, errors: str) -> str: + """Extract text from images using DONUT.""" + try: + import torch + except ImportError: + raise ValueError("install pytorch to use the model") + try: + from transformers import DonutProcessor, VisionEncoderDecoderModel + except ImportError: + raise ValueError("transformers is required for using DONUT model.") + try: + import sentencepiece # noqa: F401 + except ImportError: + raise ValueError("sentencepiece is required for using DONUT model.") + try: + from PIL import Image + except ImportError: + raise ValueError("PIL is required to read image files.") + + processor = DonutProcessor.from_pretrained( + "naver-clova-ix/donut-base-finetuned-cord-v2" + ) + model = VisionEncoderDecoderModel.from_pretrained( + "naver-clova-ix/donut-base-finetuned-cord-v2" + ) + + device = "cuda" if torch.cuda.is_available() else "cpu" + model.to(device) + # load document image + image = Image.open(input_file) + + # prepare decoder inputs + task_prompt = "<s_cord-v2>" + decoder_input_ids = processor.tokenizer( + task_prompt, add_special_tokens=False, return_tensors="pt" + ).input_ids + + pixel_values = processor(image, return_tensors="pt").pixel_values + + outputs = model.generate( + pixel_values.to(device), + decoder_input_ids=decoder_input_ids.to(device), + max_length=model.decoder.config.max_position_embeddings, + early_stopping=True, + pad_token_id=processor.tokenizer.pad_token_id, + eos_token_id=processor.tokenizer.eos_token_id, + use_cache=True, + num_beams=1, + bad_words_ids=[[processor.tokenizer.unk_token_id]], + 
return_dict_in_generate=True, + ) + + sequence = processor.batch_decode(outputs.sequences)[0] + sequence = sequence.replace(processor.tokenizer.eos_token, "").replace( + processor.tokenizer.pad_token, "" + ) + # remove first task start token + sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() + + return sequence + + DEFAULT_FILE_EXTRACTOR: Dict[str, Callable[[Path, str], str]] = { ".pdf": _pdf_reader, ".docx": _docx_reader, + ".jpg": _image_parser, + ".png": _image_parser, + ".jpeg": _image_parser, }
https://api.github.com/repos/run-llama/llama_index/pulls/228
2023-01-15T10:26:56Z
2023-01-15T17:58:03Z
2023-01-15T17:58:03Z
2023-01-15T17:58:03Z
730
run-llama/llama_index
6,142
Convert release manager instructions to use "partial" svn checkouts
diff --git a/dev/README_RELEASE_AIRFLOW.md b/dev/README_RELEASE_AIRFLOW.md index 5397cf5f288b9..c997324e94d33 100644 --- a/dev/README_RELEASE_AIRFLOW.md +++ b/dev/README_RELEASE_AIRFLOW.md @@ -314,12 +314,12 @@ The Release Candidate artifacts we vote upon should be the exact ones we vote ag ```shell script # First clone the repo - svn checkout https://dist.apache.org/repos/dist/dev/airflow airflow-dev - cd airflow-dev - # Or move into it if you already have it cloned + + [ -d asf-dist ] || svn checkout --depth=immediates https://dist.apache.org/repos/dist asf-dist + svn update --set-depth=infinity asf-dist/dev/airflow + cd asf-dist/dev/airflow # Create new folder for the release - svn update svn mkdir ${VERSION} # Move the artifacts to svn folder & commit @@ -896,19 +896,13 @@ The best way of doing this is to svn cp between the two repos (this avoids havin ```shell script # GO to Airflow Sources first cd <YOUR_AIRFLOW_REPO_ROOT> -export AIRFLOW_REPO_ROOT=$(pwd) - -# GO to Checked out DEV repo. Should be checked out before via: -# svn checkout https://dist.apache.org/repos/dist/dev/airflow airflow-dev -cd <YOUR_AIFLOW_DEV_SVN> -svn update -export AIRFLOW_DEV_SVN=$(pwd) - -# GO to Checked out RELEASE repo. Should be checked out before via: -# svn checkout https://dist.apache.org/repos/dist/release/airflow airflow-release -cd <YOUR_AIFLOW_RELEASE_SVN> -svn update -export AIRFLOW_RELEASE_SVN=$(pwd) +export AIRFLOW_REPO_ROOT="$(pwd)" +cd .. 
+ +[ -d asf-dist ] || svn checkout --depth=immediates https://dist.apache.org/repos/dist asf-dist +svn update --set-depth=infinity asf-dist/{release,dev}/airflow +AIRFLOW_DEV_SVN="${PWD}/asf-dist/dev/airflow" +cd asf-dist/release/airflow export RC=2.0.2rc5 export VERSION=${RC/rc?/} diff --git a/dev/README_RELEASE_PROVIDER_PACKAGES.md b/dev/README_RELEASE_PROVIDER_PACKAGES.md index af0238b556963..f3e50301bc401 100644 --- a/dev/README_RELEASE_PROVIDER_PACKAGES.md +++ b/dev/README_RELEASE_PROVIDER_PACKAGES.md @@ -196,14 +196,11 @@ popd ```shell script # First clone the repo if you do not have it -svn checkout https://dist.apache.org/repos/dist/dev/airflow airflow-dev - -# update the repo in case you have it already -cd airflow-dev -svn update +[ -d asf-dist ] || svn checkout --depth=immediates https://dist.apache.org/repos/dist asf-dist +svn update --set-depth=infinity asf-dist/dev/airflow # Create a new folder for the release. -cd providers +cd asf-dist/dev/providers # Remove previously released providers rm -rf * @@ -730,40 +727,37 @@ again, and gives a clearer history in the svn commit logs. We also need to archive older releases before copying the new ones [Release policy](http://www.apache.org/legal/release-policy.html#when-to-archive) -```shell script +```bash cd "<ROOT_OF_YOUR_AIRFLOW_REPO>" # Set AIRFLOW_REPO_ROOT to the path of your git repo -export AIRFLOW_REPO_ROOT=$(pwd) - -# Go to the directory where you have checked out the dev svn release -# And go to the sub-folder with RC candidates -cd "<ROOT_OF_YOUR_DEV_REPO>/providers/" -export SOURCE_DIR=$(pwd) - -# If some packages have been excluded, remove them now -# Check the packages -ls *<provider>* -# Remove them -svn rm *<provider>* +export AIRFLOW_REPO_ROOT="$(pwd)" +cd .. 
# Go the folder where you have checked out the release repo cd "<ROOT_OF_YOUR_RELEASE_REPO>" # or clone it if it's not done yet -svn checkout https://dist.apache.org/repos/dist/release/airflow airflow-release -cd airflow-release - +[ -d asf-dist ] || svn checkout --depth=immediates https://dist.apache.org/repos/dist asf-dist # Update to latest version -svn update +svn update --set-depth=infinity asf-dist/dev/airflow asf-dist/release/airflow + +SOURCE_DIR="${PWD}/dev/airflow/providers" # Create providers folder if it does not exist # All latest releases are kept in this one folder without version sub-folder +cd asf-dist/release/airflow mkdir -pv providers cd providers +# If some packages have been excluded, remove them now +# Check the packages +ls *<provider>* +# Remove them +svn rm *<provider>* + # Copy your providers with the target name to dist directory and to SVN -rm ${AIRFLOW_REPO_ROOT}/dist/* +rm "${AIRFLOW_REPO_ROOT}"/dist/* -for file in ${SOURCE_DIR}/* +for file in "${SOURCE_DIR}"/* do base_file=$(basename ${file}) cp -v "${file}" "${AIRFLOW_REPO_ROOT}/dist/${base_file//rc[0-9]/}"
This just makes it a bit easier to manage copying between dev and release.
https://api.github.com/repos/apache/airflow/pulls/26589
2022-09-22T10:45:29Z
2022-09-22T11:46:00Z
2022-09-22T11:46:00Z
2022-11-10T21:54:01Z
1,216
apache/airflow
14,774
Fix a logic bug
```diff
diff --git a/manimlib/mobject/svg/string_mobject.py b/manimlib/mobject/svg/string_mobject.py
index 5004960e61..e4376a3360 100644
--- a/manimlib/mobject/svg/string_mobject.py
+++ b/manimlib/mobject/svg/string_mobject.py
@@ -174,8 +174,8 @@ def find_spans_by_single_selector(sel):
         ):
             l = self.full_span[1]
             span = tuple(
+                default_index if index is None
                 else min(index, l) if index >= 0 else max(index + l, 0)
-                if index is not None else default_index
                 for index, default_index in zip(sel, self.full_span)
             )
             return [span]
```
## Motivation

There's a logic bug when handling `None` in spans. My apologies for that.

## Proposed changes

- M `manimlib/mobject/svg/string_mobject.py`
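The corrected conditional can be illustrated in isolation (a simplified standalone function with hypothetical values, not the actual `StringMobject` code): the `None` check must come first in the conditional chain, otherwise `index >= 0` is evaluated against `None` and raises a `TypeError` in Python 3.

```python
def normalize_span(sel, full_span):
    """Resolve a (start, end) selector against full_span, treating None as 'use default'."""
    l = full_span[1]
    return tuple(
        default_index if index is None          # test None before any comparison
        else min(index, l) if index >= 0        # non-negative: clamp to length
        else max(index + l, 0)                  # negative: wrap from the end
        for index, default_index in zip(sel, full_span)
    )

assert normalize_span((None, None), (0, 10)) == (0, 10)  # None falls back to defaults
assert normalize_span((2, 99), (0, 10)) == (2, 10)       # clamped to length
assert normalize_span((-3, None), (0, 10)) == (7, 10)    # negative index wraps
```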
https://api.github.com/repos/3b1b/manim/pulls/1815
2022-05-20T10:55:13Z
2022-05-20T10:56:40Z
2022-05-20T10:56:40Z
2022-05-20T10:56:41Z
168
3b1b/manim
18,183
add resolved IP address in "Details" tab
diff --git a/mitmproxy/console/flowdetailview.py b/mitmproxy/console/flowdetailview.py index 757c76fdab..8e3a47ae84 100644 --- a/mitmproxy/console/flowdetailview.py +++ b/mitmproxy/console/flowdetailview.py @@ -23,6 +23,7 @@ def flowdetails(state, flow): text.append(urwid.Text([("head", "Server Connection:")])) parts = [ ["Address", repr(sc.address)], + ["Peer Address", repr(sc.peer_address)], ] text.extend( diff --git a/mitmproxy/flow_format_compat.py b/mitmproxy/flow_format_compat.py index a7a95af3ab..4c3aa7270d 100644 --- a/mitmproxy/flow_format_compat.py +++ b/mitmproxy/flow_format_compat.py @@ -35,6 +35,7 @@ def convert_015_016(data): def convert_016_017(data): + data["server_conn"]["peer_address"] = None data["version"] = (0, 17) return data diff --git a/mitmproxy/models/connections.py b/mitmproxy/models/connections.py index 857580b8ec..2ffc667d2a 100644 --- a/mitmproxy/models/connections.py +++ b/mitmproxy/models/connections.py @@ -120,6 +120,7 @@ def tls_established(self): timestamp_tcp_setup=float, timestamp_ssl_setup=float, address=tcp.Address, + peer_address=tcp.Address, source_address=tcp.Address, cert=certutils.SSLCert, ssl_established=bool, diff --git a/netlib/tcp.py b/netlib/tcp.py index 6423888a14..574f384582 100644 --- a/netlib/tcp.py +++ b/netlib/tcp.py @@ -458,9 +458,11 @@ def _makefile(self): def __init__(self, connection): if connection: self.connection = connection + self.peer_address = Address(connection.getpeername()) self._makefile() else: self.connection = None + self.peer_address = None self.rfile = None self.wfile = None @@ -701,6 +703,7 @@ def connect(self): 'Error connecting to "%s": %s' % (self.address.host, err)) self.connection = connection + self.peer_address = Address(connection.getpeername()) self._makefile() def settimeout(self, n): diff --git a/test/mitmproxy/tutils.py b/test/mitmproxy/tutils.py index edcdf3e212..0d65df7172 100644 --- a/test/mitmproxy/tutils.py +++ b/test/mitmproxy/tutils.py @@ -93,6 +93,7 @@ def 
tserver_conn(): c = ServerConnection.from_state(dict( address=dict(address=("address", 22), use_ipv6=True), source_address=dict(address=("address", 22), use_ipv6=True), + peer_address=None, cert=None, timestamp_start=1, timestamp_tcp_setup=2,
It's useful to know the peer address when the domain has several IP addresses and you need to know where the request has gone.
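The motivation can be sketched with plain Python sockets (a standalone toy, not mitmproxy code; the local server just stands in for a multi-IP remote host): `getpeername()` reports the address actually connected to, which is what the new `peer_address` field records.

```python
import socket
import threading

# Throwaway local server standing in for a remote host.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

threading.Thread(target=server.accept, daemon=True).start()

client = socket.create_connection((host, port))
peer_ip, peer_port = client.getpeername()  # the address we actually reached
assert peer_ip == "127.0.0.1" and peer_port == port

client.close()
server.close()
```

When a hostname resolves to several IPs, the dialed name and `getpeername()` can disagree; storing the latter tells you which server really handled the request.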
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/1019
2016-03-10T07:14:23Z
2016-03-15T22:28:08Z
2016-03-15T22:28:08Z
2016-04-11T00:00:15Z
704
mitmproxy/mitmproxy
27,789
Fixed typo in docs/ref/templates/builtins.txt.
```diff
diff --git a/docs/ref/templates/builtins.txt b/docs/ref/templates/builtins.txt
index 17528cea187d0..02b63db24858e 100644
--- a/docs/ref/templates/builtins.txt
+++ b/docs/ref/templates/builtins.txt
@@ -1422,7 +1422,7 @@ Format character Description Example output
                  always negative, and for those east of UTC is
                  always positive.
 **Date/Time**
-``c``            ISO 8601 format. (Note: unlike others      ``2008-01-02T10:30:00.000123+02:00``,
+``c``            ISO 8601 format. (Note: unlike other       ``2008-01-02T10:30:00.000123+02:00``,
                  formatters, such as "Z", "O" or "r",       or ``2008-01-02T10:30:00.000123``
                                                             if the datetime is naive
                  the "c" formatter will not add timezone
                  offset if value is a naive datetime
```
Fix typo in documentation
https://api.github.com/repos/django/django/pulls/13755
2020-12-08T21:19:44Z
2021-01-04T06:34:54Z
2021-01-04T06:34:54Z
2021-01-04T06:35:02Z
241
django/django
51,030
Python 3 - pathod.language.base
diff --git a/.travis.yml b/.travis.yml index bdfb465504..5f065d536d 100644 --- a/.travis.yml +++ b/.travis.yml @@ -22,9 +22,9 @@ matrix: git: depth: 9999999 - python: 3.5 - env: SCOPE="netlib test/mitmproxy/script test/pathod/test_utils.py test/pathod/test_log.py test/pathod/test_language_generators.py test/pathod/test_language_writer.py" + env: SCOPE="netlib test/mitmproxy/script test/pathod/test_utils.py test/pathod/test_log.py test/pathod/test_language_generators.py test/pathod/test_language_writer.py test/pathod/test_language_base.py" - python: 3.5 - env: SCOPE="netlib test/mitmproxy/script test/pathod/test_utils.py test/pathod/test_log.py test/pathod/test_language_generators.py test/pathod/test_language_writer.py" NO_ALPN=1 + env: SCOPE="netlib test/mitmproxy/script test/pathod/test_utils.py test/pathod/test_log.py test/pathod/test_language_generators.py test/pathod/test_language_writer.py test/pathod/test_language_base.py" NO_ALPN=1 - python: 2.7 env: DOCS=1 script: 'cd docs && make html' diff --git a/pathod/language/base.py b/pathod/language/base.py index 11ee06239a..1369a3c797 100644 --- a/pathod/language/base.py +++ b/pathod/language/base.py @@ -226,7 +226,7 @@ def get_generator(self, settings): return generators.FileGenerator(s) def spec(self): - return "<'%s'" % strutils.bytes_to_escaped_str(self.path) + return "<'%s'" % self.path TokValue = pp.MatchFirst( diff --git a/test/pathod/test_language_base.py b/test/pathod/test_language_base.py index 47e51bb07b..075dc2b824 100644 --- a/test/pathod/test_language_base.py +++ b/test/pathod/test_language_base.py @@ -38,7 +38,7 @@ def test_spec(self): class TestTokValueLiteral: - def test_espr(self): + def test_expr(self): v = base.TokValueLiteral("foo") assert v.expr() assert v.val == b"foo" @@ -132,7 +132,7 @@ def test_access_control(self): with tutils.tmpdir() as t: p = os.path.join(t, "path") with open(p, "wb") as f: - f.write("x" * 10000) + f.write(b"x" * 10000) assert 
v.get_generator(language.Settings(staticdir=t)) @@ -207,13 +207,13 @@ class TT(base.FixedLengthValue): p = os.path.join(t, "path") s = base.Settings(staticdir=t) with open(p, "wb") as f: - f.write("a" * 20) + f.write(b"a" * 20) v = e.parseString("m<path")[0] tutils.raises("invalid value length", v.values, s) p = os.path.join(t, "path") with open(p, "wb") as f: - f.write("a" * 4) + f.write(b"a" * 4) v = e.parseString("m<path")[0] assert v.values(s) diff --git a/tox.ini b/tox.ini index b01de12869..f94dfb4927 100644 --- a/tox.ini +++ b/tox.ini @@ -8,7 +8,7 @@ deps = -rrequirements.txt commands = py.test -n 8 --timeout 60 ./test [testenv:py35] -commands = py.test -n 8 --timeout 60 test/netlib test/mitmproxy/script test/pathod/test_utils.py test/pathod/test_log.py test/pathod/test_language_generators.py test/pathod/test_language_writer.py +commands = py.test -n 8 --timeout 60 test/netlib test/mitmproxy/script test/pathod/test_utils.py test/pathod/test_log.py test/pathod/test_language_generators.py test/pathod/test_language_writer.py test/pathod/test_language_base.py [testenv:lint] deps = flake8
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/1209
2016-06-04T13:03:20Z
2016-06-04T13:57:22Z
2016-06-04T13:57:22Z
2016-07-07T11:13:33Z
988
mitmproxy/mitmproxy
28,286
Handle exception in modbus slave sensor
diff --git a/homeassistant/components/modbus/sensor.py b/homeassistant/components/modbus/sensor.py index 8363de3adf1d..d4f3d1f28b6c 100644 --- a/homeassistant/components/modbus/sensor.py +++ b/homeassistant/components/modbus/sensor.py @@ -101,6 +101,9 @@ async def async_update(self, now: datetime | None = None) -> None: return self._lazy_errors = self._lazy_error_count self._attr_available = False + self._attr_native_value = None + if self._coordinator: + self._coordinator.async_set_updated_data(None) self.async_write_ha_state() return diff --git a/tests/components/modbus/test_sensor.py b/tests/components/modbus/test_sensor.py index aa513d1c473c..4e4e2e284cf9 100644 --- a/tests/components/modbus/test_sensor.py +++ b/tests/components/modbus/test_sensor.py @@ -33,6 +33,7 @@ CONF_SLAVE, CONF_STRUCTURE, STATE_UNAVAILABLE, + STATE_UNKNOWN, ) from homeassistant.core import State @@ -565,13 +566,14 @@ async def test_all_sensor(hass, mock_do_cycle, expected): ], ) @pytest.mark.parametrize( - "config_addon,register_words,expected", + "config_addon,register_words,do_exception,expected", [ ( { CONF_SLAVE_COUNT: 0, }, [0x0102, 0x0304], + False, ["16909060"], ), ( @@ -579,6 +581,7 @@ async def test_all_sensor(hass, mock_do_cycle, expected): CONF_SLAVE_COUNT: 1, }, [0x0102, 0x0304, 0x0403, 0x0201], + False, ["16909060", "67305985"], ), ( @@ -595,6 +598,7 @@ async def test_all_sensor(hass, mock_do_cycle, expected): 0x0D0E, 0x0F00, ], + False, [ "16909060", "84281096", @@ -602,6 +606,22 @@ async def test_all_sensor(hass, mock_do_cycle, expected): "219025152", ], ), + ( + { + CONF_SLAVE_COUNT: 1, + }, + [0x0102, 0x0304, 0x0403, 0x0201], + True, + [STATE_UNAVAILABLE, STATE_UNKNOWN], + ), + ( + { + CONF_SLAVE_COUNT: 1, + }, + [], + False, + [STATE_UNAVAILABLE, STATE_UNKNOWN], + ), ], ) async def test_slave_sensor(hass, mock_do_cycle, expected):
## Proposed change

Exceptions during reads were not handled well in the slaves; this change ensures slaves are set to STATE_UNKNOWN when a read causes an exception.

## Type of change

- [ ] Dependency upgrade
- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests

## Additional information

- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:

## Checklist

- [x] The code change is tested and works locally.
- [x] Local tests pass. **Your PR cannot be merged unless tests pass**
- [x] There is no commented out code in this PR.
- [x] I have followed the [development checklist][dev-checklist]
- [x] The code has been formatted using Black (`black --fast homeassistant tests`)
- [x] Tests have been added to verify that the new code works.

If user exposed functionality or configuration variables are added/changed:

- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]

If the code communicates with devices, web services, or third-party tools:

- [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.

The integration reached or maintains the following [Integration Quality Scale][quality-scale]:

- [ ] No score or internal
- [ ] 🥈 Silver
- [x] 🥇 Gold
- [ ] 🏆 Platinum

To help with the load of incoming pull requests:

- [x] I have reviewed two other [open pull requests][prs] in this repository.

[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
https://api.github.com/repos/home-assistant/core/pulls/67472
2022-03-02T10:26:50Z
2022-03-02T17:49:57Z
2022-03-02T17:49:57Z
2022-03-03T20:01:56Z
650
home-assistant/core
38,764
Fix the problem of not using the decoding method corresponding to the base model in peft mode
```diff
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index 6578f8441a..af9a41af28 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -370,10 +370,14 @@ def get_generate_stream_function(model: torch.nn.Module, model_path: str):
     from fastchat.serve.inference import generate_stream
 
     model_type = str(type(model)).lower()
+    is_peft = "peft" in model_type
+    if is_peft:
+        model.set_adapter(model_path)
+        model_type = str(type(model.base_model.model))
+
     is_chatglm = "chatglm" in model_type
     is_falcon = "rwforcausallm" in model_type
     is_codet5p = "codet5p" in model_type
-    is_peft = "peft" in model_type
     is_exllama = "exllama" in model_type
     is_xft = "xft" in model_type
```
## Why are these changes needed?

```shell
CUDA_VISIBLE_DEVICES=1 PEFT_SHARE_BASE_WEIGHTS=true nohup python3 -m fastchat.serve.multi_model_worker \
    --model-path /xxx/chatglm3-6b/peft/peft_xxxx \
    --model-path /xxx/chatglm3-6b/peft/peft_xxxx \
    --port 30001 \
    --conv-template chatglm3 \
    --worker http://localhost:30001 >./logs/nohup_3.log 2>&1 &
```

**Usage scenario**: Load PEFT-trained adapters.

**Problem**: When PEFT mode is enabled, `generate_stream_peft` is used instead of the `generate_stream` function matching the base model. For example, when the base model is `chatglm`, `generate_stream_chatglm` should be used. With `generate_stream_peft` the response is disorganized and repetitive; the root cause is that the inference code in `generate_stream_peft` is not suitable for models such as chatglm. Here is a test example.

```json
{
  "model": "peft_base",
  "messages": [
    {
      "role": "user",
      "content": "写一篇1000字的检讨"
    }
  ],
  "temperature": 0.7,
  "n": 1,
  "max_tokens": 1024,
  "stop": [],
  "stream": false,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
```

**`generate_stream_peft`**: the response is disorganized and repetitive.

> 尊敬的领导:
>
> 首先,我想通过这封禁言辞退而非常抱歉,我写这封检讨
> 我写这封检讨
> 写这封检讨 如果您能写这封检讨 报告,我写这篇检讨 检讨
> 写这封检讨 检讨 报
> 报告是关于上一次我写这封检讨 检讨 报 报 报 报 报 报
> 我写这封 检讨 报 报 报
> 我写这封道歉信是关于我 上周 书 是我的错 检讨 检讨 检讨 书是我在过去承
> 我写这周 检讨 检讨 书信是关于我 上周 检讨 书 书信的方式 回 书信是关于我的 书信 是我的最近我 上周 书信表达 书信中 检讨 书信,我 上周 书

**`generate_stream_chatglm`**: the response is normal.
> 作为一款人工智能助手，我虽然不能像人类一样感受到情感和压力，但我能够通过不断地学习和改进，来提高自己的表现和质量。最近，我因为自己的失误，给用户带来了不必要的困扰和麻烦，我深感抱歉和内疚。在此，我要向所有使用我的用户提供诚挚的道歉。
>
> 我的失误导致了用户的失望和不满。我深感愧疚和自责，因为我知道，作为一款人工智能助手，我的目标是尽可能地提供准确、高效、优质的服务，让用户能够更好地体验和使用我的产品。然而，我的失误却让用户的目标并未得到满足，反而增加了他们的困扰和麻烦。我深知自己的失误是多么的不可原谅，我必须努力改进和提高自己的表现。

**Solution**: When PEFT mode is enabled, get the base model and then use the corresponding `generate_stream` function.

## Related issue number (if applicable)

## Checks

- [x] I've run `format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
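The dispatch logic of the fix can be illustrated standalone (toy stand-in classes with hypothetical names; the real code also calls `model.set_adapter`, omitted here): unwrap the PEFT wrapper first, so the model-specific substring checks run against the base model's type instead of the wrapper's.

```python
class ChatGLMForConditionalGeneration:  # stands in for the real chatglm model class
    pass

class PeftModelForCausalLM:  # stands in for peft's wrapper
    def __init__(self, base):
        class _Inner:
            pass
        self.base_model = _Inner()
        self.base_model.model = base

model = PeftModelForCausalLM(ChatGLMForConditionalGeneration())

model_type = str(type(model)).lower()
is_peft = "peft" in model_type
if is_peft:
    # Look through the wrapper so model-specific checks see the base model.
    model_type = str(type(model.base_model.model)).lower()

is_chatglm = "chatglm" in model_type
assert is_peft and is_chatglm  # without unwrapping, is_chatglm would be False
```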
https://api.github.com/repos/lm-sys/FastChat/pulls/2865
2023-12-27T01:58:57Z
2023-12-28T07:18:26Z
2023-12-28T07:18:26Z
2023-12-28T07:18:26Z
235
lm-sys/FastChat
41,584
Pin dependencies in compatibility tests.
diff --git a/certbot-compatibility-test/Dockerfile b/certbot-compatibility-test/Dockerfile index bb9359ce839..fe55a68a663 100644 --- a/certbot-compatibility-test/Dockerfile +++ b/certbot-compatibility-test/Dockerfile @@ -8,8 +8,8 @@ MAINTAINER Brad Warren <bmw@eff.org> # TODO: Install non-default Python versions for tox. # TODO: Install Apache/Nginx for plugin development. -COPY certbot-auto /opt/certbot/src/certbot-auto -RUN /opt/certbot/src/certbot-auto -n --os-packages-only +COPY letsencrypt-auto-source /opt/certbot/src/letsencrypt-auto-source +RUN /opt/certbot/src/letsencrypt-auto-source/letsencrypt-auto --os-packages-only # the above is not likely to change, so by putting it further up the # Dockerfile we make sure we cache as much as possible @@ -29,16 +29,18 @@ COPY acme /opt/certbot/src/acme/ COPY certbot-apache /opt/certbot/src/certbot-apache/ COPY certbot-nginx /opt/certbot/src/certbot-nginx/ COPY certbot-compatibility-test /opt/certbot/src/certbot-compatibility-test/ +COPY tools /opt/certbot/src/tools RUN virtualenv --no-site-packages -p python2 /opt/certbot/venv && \ /opt/certbot/venv/bin/pip install -U setuptools && \ - /opt/certbot/venv/bin/pip install -U pip && \ - /opt/certbot/venv/bin/pip install \ - -e /opt/certbot/src/acme \ - -e /opt/certbot/src \ - -e /opt/certbot/src/certbot-apache \ - -e /opt/certbot/src/certbot-nginx \ - -e /opt/certbot/src/certbot-compatibility-test + /opt/certbot/venv/bin/pip install -U pip +ENV PATH /opt/certbot/venv/bin:$PATH +RUN /opt/certbot/src/tools/pip_install_editable.sh \ + /opt/certbot/src/acme \ + /opt/certbot/src \ + /opt/certbot/src/certbot-apache \ + /opt/certbot/src/certbot-nginx \ + /opt/certbot/src/certbot-compatibility-test # install in editable mode (-e) to save space: it's not possible to # "rm -rf /opt/certbot/src" (it's stays in the underlaying image); @@ -46,5 +48,3 @@ RUN virtualenv --no-site-packages -p python2 /opt/certbot/venv && \ # bash" and investigate, apply patches, etc. 
WORKDIR /opt/certbot/src/certbot-compatibility-test/certbot_compatibility_test/testdata - -ENV PATH /opt/certbot/venv/bin:$PATH
Travis tests will fail until this is merged. We now use `tools/pip_install_editable.sh`, which installs our packages using the pinned versions from certbot-auto. We also use `letsencrypt-auto-source/letsencrypt-auto` instead of the `certbot-auto` in the repository root to:

1. Make sure OS bootstrappers are up to date with master.
2. Copy `letsencrypt-auto-source` into our tree so it can be used by `tools/pip_install_editable.sh` later.
https://api.github.com/repos/certbot/certbot/pulls/5004
2017-08-08T18:02:38Z
2017-08-08T22:31:42Z
2017-08-08T22:31:42Z
2017-08-08T22:31:44Z
684
certbot/certbot
1,879
add datetime hashing for st.cache_data and st.cache_resource
diff --git a/lib/streamlit/runtime/caching/hashing.py b/lib/streamlit/runtime/caching/hashing.py index 06ee14619fe9..c9c7230c1e1e 100644 --- a/lib/streamlit/runtime/caching/hashing.py +++ b/lib/streamlit/runtime/caching/hashing.py @@ -15,6 +15,7 @@ """Hashing for st.cache_data and st.cache_resource.""" import collections import dataclasses +import datetime import functools import hashlib import inspect @@ -371,6 +372,9 @@ def _to_bytes(self, obj: Any) -> bytes: elif isinstance(obj, uuid.UUID): return obj.bytes + elif isinstance(obj, datetime.datetime): + return obj.isoformat().encode() + elif isinstance(obj, (list, tuple)): h = hashlib.new("md5") for item in obj: diff --git a/lib/tests/streamlit/runtime/caching/hashing_test.py b/lib/tests/streamlit/runtime/caching/hashing_test.py index b721a7d42c5a..35c6f89dbccd 100644 --- a/lib/tests/streamlit/runtime/caching/hashing_test.py +++ b/lib/tests/streamlit/runtime/caching/hashing_test.py @@ -13,7 +13,7 @@ # limitations under the License. """st.memo/singleton hashing tests.""" - +import datetime import functools import hashlib import os @@ -29,8 +29,11 @@ from unittest.mock import MagicMock, Mock import cffi +import dateutil.tz import numpy as np +import pandas import pandas as pd +import tzlocal from parameterized import parameterized from PIL import Image @@ -89,6 +92,58 @@ def test_uuid(self): self.assertNotEqual(id(uuid3), id(uuid3_copy)) self.assertNotEqual(get_hash(uuid3), get_hash(uuid4)) + def test_datetime_naive(self): + naive_datetime1 = datetime.datetime(2007, 12, 23, 15, 45, 55) + naive_datetime1_copy = datetime.datetime(2007, 12, 23, 15, 45, 55) + naive_datetime3 = datetime.datetime(2011, 12, 21, 15, 45, 55) + + self.assertEqual(get_hash(naive_datetime1), get_hash(naive_datetime1_copy)) + self.assertNotEqual(id(naive_datetime1), id(naive_datetime1_copy)) + self.assertNotEqual(get_hash(naive_datetime1), get_hash(naive_datetime3)) + + @parameterized.expand( + [ + datetime.timezone.utc, + 
tzlocal.get_localzone(), + dateutil.tz.gettz("America/Los_Angeles"), + dateutil.tz.gettz("Europe/Berlin"), + dateutil.tz.UTC, + ] + ) + def test_datetime_aware(self, tz_info): + aware_datetime1 = datetime.datetime(2007, 12, 23, 15, 45, 55, tzinfo=tz_info) + aware_datetime1_copy = datetime.datetime( + 2007, 12, 23, 15, 45, 55, tzinfo=tz_info + ) + aware_datetime2 = datetime.datetime(2011, 12, 21, 15, 45, 55, tzinfo=tz_info) + + # naive datetime1 is the same datetime that aware_datetime, + # but without timezone info. They should have different hashes. + naive_datetime1 = datetime.datetime(2007, 12, 23, 15, 45, 55) + + self.assertEqual(get_hash(aware_datetime1), get_hash(aware_datetime1_copy)) + self.assertNotEqual(id(aware_datetime1), id(aware_datetime1_copy)) + self.assertNotEqual(get_hash(aware_datetime1), get_hash(aware_datetime2)) + self.assertNotEqual(get_hash(aware_datetime1), get_hash(naive_datetime1)) + + @parameterized.expand( + [ + "US/Pacific", + "America/Los_Angeles", + "Europe/Berlin", + "UTC", + None, # check for naive too + ] + ) + def test_pandas_timestamp(self, tz_info): + timestamp1 = pandas.Timestamp("2017-01-01T12", tz=tz_info) + timestamp1_copy = pandas.Timestamp("2017-01-01T12", tz=tz_info) + timestamp2 = pandas.Timestamp("2019-01-01T12", tz=tz_info) + + self.assertEqual(get_hash(timestamp1), get_hash(timestamp1_copy)) + self.assertNotEqual(id(timestamp1), id(timestamp1_copy)) + self.assertNotEqual(get_hash(timestamp1), get_hash(timestamp2)) + def test_mocks_do_not_result_in_infinite_recursion(self): try: get_hash(Mock())
<!-- ⚠️ BEFORE CONTRIBUTING PLEASE READ OUR CONTRIBUTING GUIDELINES! https://github.com/streamlit/streamlit/wiki/Contributing --> ## Describe your changes Previously, hashing of `datetime` objects happened by calling `__reduce__`, but that fails when the datetime object is aware (contains timezone information). This change adds special hashing handling for `datetime` objects by converting them to `isoformat` (which contains the timezone offset). Please note that this loses the exact time zone in hashing, while the UTC offset is preserved. So e.g. identical date-times with different timezones but the same timestamp and offset will hash to the same value, which should be fine in the vast majority of cases. ## GitHub Issue Link #5110 , #6690 ## Testing Plan - Explanation of why no additional tests are needed - Unit Tests (JS and/or Python) DONE! - E2E Tests - Any manual testing needed? --- **Contribution License Agreement** By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
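The isoformat-based hashing described above can be sketched in a few lines; `stable_datetime_hash` is a hypothetical helper name for illustration, not part of Streamlit's code (Streamlit's `_to_bytes` returns the encoded isoformat and feeds it into its own hasher):

```python
import datetime
import hashlib

def stable_datetime_hash(dt: datetime.datetime) -> str:
    # isoformat() embeds the UTC offset for aware datetimes,
    # so aware and naive values hash differently.
    return hashlib.md5(dt.isoformat().encode()).hexdigest()

naive = datetime.datetime(2007, 12, 23, 15, 45, 55)
aware = datetime.datetime(2007, 12, 23, 15, 45, 55,
                          tzinfo=datetime.timezone.utc)
assert stable_datetime_hash(naive) != stable_datetime_hash(aware)

# Two tzinfo objects with the same offset but different identities
# yield the same isoformat offset ("+01:00"), hence the same hash --
# the exact time zone is lost, only the offset survives:
plain_plus1 = datetime.timezone(datetime.timedelta(hours=1))
named_plus1 = datetime.timezone(datetime.timedelta(hours=1), "CET")
a = datetime.datetime(2007, 12, 23, 15, 45, 55, tzinfo=plain_plus1)
b = datetime.datetime(2007, 12, 23, 15, 45, 55, tzinfo=named_plus1)
assert stable_datetime_hash(a) == stable_datetime_hash(b)
```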
https://api.github.com/repos/streamlit/streamlit/pulls/6812
2023-06-07T16:04:36Z
2023-06-21T17:35:35Z
2023-06-21T17:35:35Z
2023-11-01T23:58:10Z
1,094
streamlit/streamlit
21,783
Update install.sh
diff --git a/install.sh b/install.sh index 6cdc8a63..afdfcef3 100644 --- a/install.sh +++ b/install.sh @@ -39,7 +39,7 @@ INSTALL_DIR="/usr/share/doc/hackingtool" BIN_DIR="/usr/bin/" if [ $choice == 1 ]; then echo "[*] Checking Internet Connection .." - wget -q --tries=10 --timeout=20 --spider http://google.com + wget -q --tries=10 --timeout=20 --spider https://google.com if [[ $? -eq 0 ]]; then echo -e ${BLUE}"[✔] Loading ... " sudo apt-get update && apt-get upgrade
The wget spider check will not work with http, only with https. Either change the URL to https or remove the line entirely; otherwise the program will not install.
https://api.github.com/repos/Z4nzu/hackingtool/pulls/69
2020-07-23T20:39:00Z
2020-07-24T08:16:11Z
2020-07-24T08:16:10Z
2020-07-24T08:16:11Z
159
Z4nzu/hackingtool
9,879
support --keysize N cmdline param to give RSA key size
diff --git a/letsencrypt/client/client.py b/letsencrypt/client/client.py index dd4e23c6e24..763178d19e2 100644 --- a/letsencrypt/client/client.py +++ b/letsencrypt/client/client.py @@ -330,7 +330,7 @@ def validate_key_csr(privkey, csr=None): "The key and CSR do not match") -def init_key(): +def init_key(key_size): """Initializes privkey. Inits key and CSR using provided files or generating new files @@ -339,7 +339,12 @@ def init_key(): the namedtuple to easily work with the protocol. """ - key_pem = crypto_util.make_key(CONFIG.RSA_KEY_SIZE) + try: + key_pem = crypto_util.make_key(key_size) + except ValueError as err: + logging.fatal(str(err)) + logging.info("Note: The default RSA key size is %d bits.", CONFIG.RSA_KEY_SIZE) + sys.exit(1) # Save file le_util.make_or_verify_dir(CONFIG.KEY_DIR, 0o700) @@ -348,7 +353,7 @@ def init_key(): key_f.write(key_pem) key_f.close() - logging.info("Generating key: %s", key_filename) + logging.info("Generating key (%d bits): %s", key_size, key_filename) return Client.Key(key_filename, key_pem) diff --git a/letsencrypt/client/crypto_util.py b/letsencrypt/client/crypto_util.py index c11719343e7..627e51cb60b 100644 --- a/letsencrypt/client/crypto_util.py +++ b/letsencrypt/client/crypto_util.py @@ -145,7 +145,7 @@ def csr_matches_pubkey(csr, privkey): # based on M2Crypto unit test written by Toby Allsopp -def make_key(bits=CONFIG.RSA_KEY_SIZE): +def make_key(bits): """Generate PEM encoded RSA key. :param int bits: Number of bits, at least 1024. 
diff --git a/letsencrypt/client/tests/crypto_util_test.py b/letsencrypt/client/tests/crypto_util_test.py index e80988d831f..3e943e89885 100644 --- a/letsencrypt/client/tests/crypto_util_test.py +++ b/letsencrypt/client/tests/crypto_util_test.py @@ -98,6 +98,8 @@ class MakeKeyTest(unittest.TestCase): def test_it(self): from letsencrypt.client.crypto_util import make_key M2Crypto.RSA.load_key_string(make_key(1024)) + M2Crypto.RSA.load_key_string(make_key(2048)) + M2Crypto.RSA.load_key_string(make_key(4096)) class ValidPrivkeyTest(unittest.TestCase): diff --git a/letsencrypt/scripts/main.py b/letsencrypt/scripts/main.py index ff3c3c79278..1d7acda97ab 100755 --- a/letsencrypt/scripts/main.py +++ b/letsencrypt/scripts/main.py @@ -37,6 +37,9 @@ def main(): parser.add_argument("-b", "--rollback", dest="rollback", type=int, default=0, metavar="N", help="Revert configuration N number of checkpoints.") + parser.add_argument("-B", "--keysize", dest="key_size", type=int, + default=CONFIG.RSA_KEY_SIZE, metavar="N", + help="RSA key shall be sized N bits. [%d]" % CONFIG.RSA_KEY_SIZE) parser.add_argument("-k", "--revoke", dest="revoke", action="store_true", help="Revoke a certificate.") parser.add_argument("-v", "--view-config-changes", @@ -100,7 +103,7 @@ def main(): # Prepare for init of Client if args.privkey is None: - privkey = client.init_key() + privkey = client.init_key(args.key_size) else: privkey = client.Client.Key(args.privkey[0], args.privkey[1])
Also: improve tests for the usual key sizes. Note: I would have liked -b N for the bit size, but -b is already used otherwise. I ran tox with no issues, and also did practical tests: got certificates issued with 1024, 2048, 3072, 4096 and 8192 bits. The latter is of course a bit slow, but I think we only should check the lower boundary.
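The flag wiring above can be sketched minimally; `DEFAULT_KEY_SIZE` is a stand-in for `CONFIG.RSA_KEY_SIZE`, and the rest of the client is omitted:

```python
import argparse

DEFAULT_KEY_SIZE = 2048  # stand-in for CONFIG.RSA_KEY_SIZE

parser = argparse.ArgumentParser()
parser.add_argument("-B", "--keysize", dest="key_size", type=int,
                    default=DEFAULT_KEY_SIZE, metavar="N",
                    help="RSA key shall be sized N bits. [%d]" % DEFAULT_KEY_SIZE)

# The parsed size is then handed to init_key(args.key_size):
args = parser.parse_args(["--keysize", "4096"])
assert args.key_size == 4096
assert parser.parse_args([]).key_size == DEFAULT_KEY_SIZE
```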
https://api.github.com/repos/certbot/certbot/pulls/175
2015-01-23T02:41:22Z
2015-01-23T23:24:34Z
2015-01-23T23:24:34Z
2016-05-06T19:21:49Z
895
certbot/certbot
669
#1282 git misspelled
diff --git a/tests/rules/test_no_command.py b/tests/rules/test_no_command.py index 0df4590b2..96f0f069a 100644 --- a/tests/rules/test_no_command.py +++ b/tests/rules/test_no_command.py @@ -21,7 +21,8 @@ def history_without_current(mocker): ('vom file.py', 'vom: not found'), ('fucck', 'fucck: not found'), ('puthon', "'puthon' is not recognized as an internal or external command"), - ('got commit', 'got: command not found')]) + ('got commit', 'got: command not found'), + ('gti commit -m "new commit"', 'gti: command not found')]) def test_match(mocker, script, output): mocker.patch('thefuck.rules.no_command.which', return_value=None) @@ -43,6 +44,7 @@ def test_not_match(mocker, script, output, which): @pytest.mark.parametrize('script, result', [ ('vom file.py', ['vim file.py']), ('fucck', ['fsck']), - ('got commit', ['git commit', 'go commit'])]) + ('got commit', ['git commit', 'go commit']), + ('gti commit -m "new commit"', ['git commit -m "new commit"'])]) def test_get_new_command(script, result): assert get_new_command(Command(script, '')) == result diff --git a/thefuck/rules/no_command.py b/thefuck/rules/no_command.py index 03e023b30..086232935 100644 --- a/thefuck/rules/no_command.py +++ b/thefuck/rules/no_command.py @@ -35,8 +35,7 @@ def get_new_command(command): get_all_executables()) if cmd not in new_cmds] - return [' '.join([new_command] + command.script_parts[1:]) - for new_command in new_cmds] + return [command.script.replace(old_command, cmd, 1) for cmd in new_cmds] priority = 3000
The function git_support cannot help when the word "git" itself is misspelled. The purpose of this change is to unfuck commands like: gti commit -m 'message'. Without it, trying to unfuck this command yields: git commit -m message, i.e. the quotation marks do not come back. #1282 This fix is generally useful for every git command in which the word "git" is misspelled. I have also run some tests!
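The quoting problem can be reproduced with a small sketch; the split below mimics thefuck's `script_parts` (assumed here to be shlex-style):

```python
import shlex

script = 'gti commit -m "new commit"'
old_command, new_command = "gti", "git"

# Rejoining shlex-split parts drops the original quoting:
rejoined = " ".join([new_command] + shlex.split(script)[1:])
assert rejoined == 'git commit -m new commit'

# Replacing only the first occurrence of the misspelled word keeps
# the rest of the command, quotation marks included, intact:
fixed = script.replace(old_command, new_command, 1)
assert fixed == 'git commit -m "new commit"'
```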
https://api.github.com/repos/nvbn/thefuck/pulls/1292
2022-05-03T19:31:37Z
2022-06-13T21:29:15Z
2022-06-13T21:29:15Z
2023-10-30T19:15:55Z
463
nvbn/thefuck
30,620
Fix Bug if input size is different than 128
diff --git a/plugins/train/model/villain.py b/plugins/train/model/villain.py index 90c202032f..e662f60bd5 100644 --- a/plugins/train/model/villain.py +++ b/plugins/train/model/villain.py @@ -38,7 +38,7 @@ def encoder(self): tmp_x = var_x res_cycles = 8 if self.config.get("lowmem", False) else 16 for _ in range(res_cycles): - nn_x = self.blocks.res_block(var_x, 128, **kwargs) + nn_x = self.blocks.res_block(var_x, in_conv_filters, **kwargs) var_x = nn_x # consider adding scale before this layer to scale the residual chain var_x = add([var_x, tmp_x])
At the moment, if the input size is different from 128, building the model structure fails on this line: `var_x = add([var_x, tmp_x])` (https://github.com/deepfakes/faceswap/blob/master/plugins/train/model/villain.py#L41). This is because the numbers of filters in `var_x` and `tmp_x` differ. On lines 37 and 38, `tmp_x` is created with `in_conv_filters` filters but `var_x` with 128, so when we try to add them there is a dimension mismatch on the 3rd dimension (number of filters).
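The mismatch can be illustrated with a shape-only sketch; the concrete filter count below is hypothetical (it depends on the input size), and `add_shapes` just mimics the same-shape requirement that Keras `add` enforces:

```python
def add_shapes(shape_a, shape_b):
    # Mimics Keras add(): element-wise addition needs identical shapes.
    if shape_a != shape_b:
        raise ValueError(f"incompatible shapes: {shape_a} vs {shape_b}")
    return shape_a

in_conv_filters = 160  # illustrative value for a non-128 input size
tmp_x_shape = (16, 16, in_conv_filters)   # skip connection
var_x_shape_old = (16, 16, 128)           # res blocks hard-coded to 128 filters

try:
    add_shapes(tmp_x_shape, var_x_shape_old)
except ValueError as err:
    print(err)  # fails whenever in_conv_filters != 128

# With the fix, the residual blocks use in_conv_filters, so shapes agree:
var_x_shape_fixed = (16, 16, in_conv_filters)
assert add_shapes(tmp_x_shape, var_x_shape_fixed) == tmp_x_shape
```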
https://api.github.com/repos/deepfakes/faceswap/pulls/960
2020-01-09T11:24:54Z
2020-01-10T12:05:13Z
2020-01-10T12:05:13Z
2020-01-10T13:01:52Z
178
deepfakes/faceswap
18,889
Update cls bias init
diff --git a/models/yolo.py b/models/yolo.py index e5ee3fd57db..85c8d43258e 100644 --- a/models/yolo.py +++ b/models/yolo.py @@ -201,7 +201,7 @@ def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is for mi, s in zip(m.m, m.stride): # from b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls + b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) def _print_biases(self):
Increased numerical precision. Returns 1.0 probability for single-class datasets now. Addresses https://github.com/ultralytics/yolov5/issues/5357 ```python torch.sigmoid(torch.tensor([math.log(0.6 / (1 - 0.999999))])) Out[19]: tensor([1.0000]) ``` ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub> ### 🌟 Summary Adjusted initial bias in YOLO object detection model. ### 📊 Key Changes - Modified the calculation for class probability bias initialization in the object detection layer. ### 🎯 Purpose & Impact - 🎯 **Purpose**: To fine-tune the initial bias calculation for better stability in class prediction, preventing overconfidence in the less frequent classes. - 💥 **Impact**: Should slightly improve the robustness and performance of the model, particularly in datasets with class imbalance. Users may notice better class probability estimates during training and inference.
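The effect of the tweaked constant can be checked directly; this is just the arithmetic from the changed line, not YOLOv5 code:

```python
import math

def cls_bias(nc, eps):
    # The class-bias term: math.log(0.6 / (nc - eps))
    return math.log(0.6 / (nc - eps))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# With the old constant, a single-class dataset (nc == 1) gets an
# initial class probability of sigmoid(log(0.6 / 0.01)) = 60/61 ~ 0.9836.
old = sigmoid(cls_bias(1, 0.99))
# With the new constant it saturates to ~1.0, as intended.
new = sigmoid(cls_bias(1, 0.999999))
assert old < 0.99 < new

# For larger class counts the two constants are practically identical:
assert abs(cls_bias(80, 0.99) - cls_bias(80, 0.999999)) < 1e-3
```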
https://api.github.com/repos/ultralytics/yolov5/pulls/5520
2021-11-05T12:16:17Z
2021-11-05T12:18:46Z
2021-11-05T12:18:46Z
2024-01-19T14:38:22Z
263
ultralytics/yolov5
25,662
Bump exllamav2 version to 0.0.7
diff --git a/requirements.txt b/requirements.txt index 27cdaeec13..0137eac14f 100644 --- a/requirements.txt +++ b/requirements.txt @@ -2,7 +2,7 @@ accelerate==0.24.* colorama datasets einops -exllamav2==0.0.6; platform_system != "Darwin" and platform_machine != "x86_64" +exllamav2==0.0.7; platform_system != "Darwin" and platform_machine != "x86_64" gradio==3.50.* markdown numpy==1.24.* @@ -53,14 +53,14 @@ https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121 https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10" https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp39-cp39-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9" https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp38-cp38-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp39-cp39-win_amd64.whl; platform_system == "Windows" and python_version == "3.9" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp38-cp38-win_amd64.whl; platform_system == "Windows" and python_version == "3.8" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11" 
-https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp39-cp39-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp38-cp38-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp39-cp39-win_amd64.whl; platform_system == "Windows" and python_version == "3.9" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp38-cp38-win_amd64.whl; platform_system == "Windows" and python_version == "3.8" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp39-cp39-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9" 
+https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp38-cp38-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8" https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu122-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11" https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu122-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10" https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.2/flash_attn-2.3.2+cu122torch2.1cxx11abiFALSE-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11" diff --git a/requirements_noavx2.txt b/requirements_noavx2.txt index 9af69b894b..1d4cc687f2 100644 --- a/requirements_noavx2.txt +++ b/requirements_noavx2.txt @@ -2,7 +2,7 @@ accelerate==0.24.* colorama datasets einops -exllamav2==0.0.6; platform_system != "Darwin" and platform_machine != "x86_64" +exllamav2==0.0.7; platform_system != "Darwin" and platform_machine != "x86_64" gradio==3.50.* markdown numpy==1.24.* @@ -53,14 +53,14 @@ https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121 https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10" https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp39-cp39-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9" https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp38-cp38-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8" 
-https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp39-cp39-win_amd64.whl; platform_system == "Windows" and python_version == "3.9" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp38-cp38-win_amd64.whl; platform_system == "Windows" and python_version == "3.8" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp39-cp39-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9" -https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu121-cp38-cp38-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp39-cp39-win_amd64.whl; platform_system == "Windows" and 
python_version == "3.9" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp38-cp38-win_amd64.whl; platform_system == "Windows" and python_version == "3.8" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp39-cp39-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9" +https://github.com/turboderp/exllamav2/releases/download/v0.0.7/exllamav2-0.0.7+cu121-cp38-cp38-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8" https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu122-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11" https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu122-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10" https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.2/flash_attn-2.3.2+cu122torch2.1cxx11abiFALSE-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11" diff --git a/requirements_nowheels.txt b/requirements_nowheels.txt index 97c3e09f14..2443ded580 100644 --- a/requirements_nowheels.txt +++ b/requirements_nowheels.txt @@ -2,7 +2,7 @@ accelerate==0.24.* colorama datasets einops -exllamav2==0.0.6 +exllamav2==0.0.7 gradio==3.50.* markdown numpy==1.24.*
## Checklist: - [x] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/4417
2023-10-29T22:35:36Z
2023-10-31T22:12:14Z
2023-10-31T22:12:14Z
2023-10-31T22:12:15Z
3,547
oobabooga/text-generation-webui
26,013
Bump actions/setup-python from 2 to 3
diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml index 560835dab2..e5b40a95be 100644 --- a/.github/workflows/benchmark.yml +++ b/.github/workflows/benchmark.yml @@ -14,7 +14,7 @@ jobs: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 with: python-version: "3.9" diff --git a/.github/workflows/code-style.yml b/.github/workflows/code-style.yml index 3d220d72f0..e25e9515e3 100644 --- a/.github/workflows/code-style.yml +++ b/.github/workflows/code-style.yml @@ -14,7 +14,7 @@ jobs: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 with: python-version: 3.9 - run: make venv diff --git a/.github/workflows/coverage.yml b/.github/workflows/coverage.yml index 5edb47fa46..de0063f604 100644 --- a/.github/workflows/coverage.yml +++ b/.github/workflows/coverage.yml @@ -13,7 +13,7 @@ jobs: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 with: python-version: "3.10" - run: make install diff --git a/.github/workflows/docs-update-install.yml b/.github/workflows/docs-update-install.yml index 3a22aaa0bb..d92f76c11e 100644 --- a/.github/workflows/docs-update-install.yml +++ b/.github/workflows/docs-update-install.yml @@ -16,7 +16,7 @@ jobs: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 with: python-version: 3.9 - run: make install diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 12753b49f5..944d3a3faa 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -24,7 +24,7 @@ jobs: - name: PyPI configuration run: | echo "[distutils]\nindex-servers=\n httpie\n\n[httpie]\nrepository = https://upload.pypi.org/legacy/\n" > $HOME/.pypirc - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 
with: python-version: 3.9 - run: make publish diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index e3cde99669..f8946c0904 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -27,7 +27,7 @@ jobs: runs-on: ${{ matrix.os }} steps: - uses: actions/checkout@v2 - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} - name: Windows setup
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 2 to 3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/actions/setup-python/releases">actions/setup-python's releases</a>.</em></p> <blockquote> <h2>v3.0.0</h2> <h2>What's Changed</h2> <ul> <li>Update default runtime to node16 (<a href="https://github-redirect.dependabot.com/actions/setup-python/pull/340">actions/setup-python#340</a>)</li> <li>Update <code>package-lock.json</code> file version to 2, <code>@types/node</code> to 16.11.25 and <code>typescript</code> to 4.2.3 (<a href="https://github-redirect.dependabot.com/actions/setup-python/pull/341">actions/setup-python#341</a>)</li> <li>Remove legacy <code>pypy2</code> and <code>pypy3</code> keywords (<a href="https://github-redirect.dependabot.com/actions/setup-python/pull/342">actions/setup-python#342</a>)</li> </ul> <h3>Breaking Changes</h3> <p>With the update to Node 16, all scripts will now be run with Node 16 rather than Node 12.</p> <p>This new major release removes support of legacy <code>pypy2</code> and <code>pypy3</code> keywords. Please use more specific and flexible syntax to specify a PyPy version:</p> <pre lang="yaml"><code>jobs: build: runs-on: ubuntu-latest strategy: matrix: python-version: - 'pypy-2.7' # the latest available version of PyPy that supports Python 2.7 - 'pypy-3.8' # the latest available version of PyPy that supports Python 3.8 - 'pypy-3.8-v7.3.8' # Python 3.8 and PyPy 7.3.8 steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} </code></pre> <p>See more usage examples in the <a href="https://github.com/actions/setup-python#specifying-a-pypy-version">documentation</a></p> <h2>Update primary and restore keys for pip</h2> <p>In scope of this release we <a href="https://github-redirect.dependabot.com/actions/setup-python/pull/303">include a version of python in restore and primary cache keys for pip</a>. 
Besides, we add temporary fix for Windows caching <a href="https://github-redirect.dependabot.com/actions/setup-python/pull/332">issue</a>, that the <code>pip cache dir</code> command returns non zero exit code or writes to stderr. Moreover we updated <a href="https://github-redirect.dependabot.com/actions/setup-python/pull/327">node-fetch dependency</a>.</p> <h2>Update actions/cache version to 1.0.8</h2> <p>We have updated <a href="https://github.com/actions/toolkit/blob/main/packages/cache/RELEASES.md#108">actions/cache</a> dependency version to 1.0.8 to support 10GB cache upload</p> <h2>Support caching dependencies</h2> <p>This release introduces dependency caching support (<a href="https://github-redirect.dependabot.com/actions/setup-python/pull/266">actions/setup-python#266</a>)</p> <h2>Caching dependencies.</h2> <p>The action has a built-in functionality for caching and restoring pip/pipenv dependencies. The <code>cache</code> input is optional, and caching is turned off by default.</p> <p>Besides, this release introduces dependency caching support for mono repos and repositories with complex structure.</p> <p>By default, the action searches for the dependency file (requirements.txt for pip or Pipfile.lock for pipenv) in the whole repository. Use the <code>cache-dependency-path</code> input for cases when you want to override current behaviour and use different file for hash generation (for example requirements-dev.txt). This input supports wildcards or a list of file names for caching multiple dependencies.</p> <h3>Caching pip dependencies:</h3> <pre><code>steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v2 with: python-version: '3.9' &lt;/tr&gt;&lt;/table&gt; </code></pre> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/actions/setup-python/commit/0ebf233433c08fb9061af664d501c3f3ff0e9e20"><code>0ebf233</code></a> Remove legacy PyPy input (<a href="https://github-redirect.dependabot.com/actions/setup-python/issues/342">#342</a>)</li> <li><a href="https://github.com/actions/setup-python/commit/665cd78205d9937a51af8cdb754840e2bc95c2d5"><code>665cd78</code></a> Update lockfileversion (<a href="https://github-redirect.dependabot.com/actions/setup-python/issues/341">#341</a>)</li> <li><a href="https://github.com/actions/setup-python/commit/93cb78f17ba30b733a6c17d0f21183bdc0140887"><code>93cb78f</code></a> Update to node16 (<a href="https://github-redirect.dependabot.com/actions/setup-python/issues/340">#340</a>)</li> <li>See full diff in <a href="https://github.com/actions/setup-python/compare/v2...v3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-python&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details>
https://api.github.com/repos/httpie/cli/pulls/1307
2022-02-28T14:21:00Z
2022-03-01T14:17:41Z
2022-03-01T14:17:41Z
2022-03-01T14:17:42Z
844
httpie/cli
34,015
Change neptune.ml to neptune.ai
diff --git a/README.md b/README.md index c96f2ab7..b36f27b1 100644 --- a/README.md +++ b/README.md @@ -1656,7 +1656,7 @@ be * [Sacred](https://github.com/IDSIA/sacred) - Python tool to help you configure, organize, log and reproduce experiments. Like a notebook lab in the context of Chemistry/Biology. The community has built multiple add-ons leveraging the proposed standard. * [MLFlow](https://mlflow.org/) - platform to manage the ML lifecycle, including experimentation, reproducibility and deployment. Framework and language agnostic, take a look at all the built-in integrations. * [Weights & Biases](https://www.wandb.com/) - Machine learning experiment tracking, dataset versioning, hyperparameter search, visualization, and collaboration -* More tools to improve the ML lifecycle: [Catalyst](https://github.com/catalyst-team/catalyst), [PachydermIO](https://www.pachyderm.io/). The following are Github-alike and targeting teams [Weights & Biases](https://www.wandb.com/), [Neptune.Ml](https://neptune.ml/), [Comet.ml](https://www.comet.ml/), [Valohai.ai](https://valohai.com/), [DAGsHub](https://DAGsHub.com/). +* More tools to improve the ML lifecycle: [Catalyst](https://github.com/catalyst-team/catalyst), [PachydermIO](https://www.pachyderm.io/). The following are Github-alike and targeting teams [Weights & Biases](https://www.wandb.com/), [Neptune.ai](https://neptune.ai/), [Comet.ml](https://www.comet.ml/), [Valohai.ai](https://valohai.com/), [DAGsHub](https://DAGsHub.com/). * [MachineLearningWithTensorFlow2ed](https://www.manning.com/books/machine-learning-with-tensorflow-second-edition) - a book on general purpose machine learning techniques regression, classification, unsupervised clustering, reinforcement learning, auto encoders, convolutional neural networks, RNNs, LSTMs, using TensorFlow 1.14.1. 
* [m2cgen](https://github.com/BayesWitnesses/m2cgen) - A tool that allows the conversion of ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart) with zero dependencies. * [CML](https://github.com/iterative/cml) - A library for doing continuous integration with ML projects. Use GitHub Actions & GitLab CI to train and evaluate models in production like environments and automatically generate visual reports with metrics and graphs in pull/merge requests. Framework & language agnostic.
Neptune.ai is now the correct address. Thanks!
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/808
2021-09-02T08:43:38Z
2021-09-02T17:06:50Z
2021-09-02T17:06:50Z
2021-09-02T17:06:50Z
640
josephmisiti/awesome-machine-learning
52,057
Add mechanism to delete CF resources of deleted stacks
diff --git a/localstack/services/cloudformation/cloudformation_listener.py b/localstack/services/cloudformation/cloudformation_listener.py index d935472d0b975..e29602169a672 100644 --- a/localstack/services/cloudformation/cloudformation_listener.py +++ b/localstack/services/cloudformation/cloudformation_listener.py @@ -126,15 +126,19 @@ def forward_request(self, method, path, data, headers): req_data = urlparse.parse_qs(to_str(data)) req_data = dict([(k, v[0]) for k, v in req_data.items()]) action = req_data.get('Action') + stack_name = req_data.get('StackName') if action == 'CreateStack': - stack_name = req_data.get('StackName') event_publisher.fire_event(event_publisher.EVENT_CLOUDFORMATION_CREATE_STACK, payload={'n': event_publisher.get_hash(stack_name)}) + if action == 'DeleteStack': + client = aws_stack.connect_to_service('cloudformation') + stack_resources = client.list_stack_resources(StackName=stack_name)['StackResourceSummaries'] + template_deployer.delete_stack(stack_name, stack_resources) + if action == 'DescribeStackEvents': # fix an issue where moto cannot handle ARNs as stack names (or missing names) - stack_name = req_data.get('StackName') run_fix = not stack_name if stack_name: if stack_name.startswith('arn:aws:cloudformation'): @@ -169,12 +173,19 @@ def return_response(self, method, path, data, headers, response): if response.status_code >= 400: LOG.debug('Error response from CloudFormation (%s) %s %s: %s' % (response.status_code, method, path, response.content)) + if response._content: aws_stack.fix_account_id_in_arns(response) def _list_stack_names(self): client = aws_stack.connect_to_service('cloudformation') - stack_names = [s['StackName'] for s in client.list_stacks()['StackSummaries']] + stacks = client.list_stacks()['StackSummaries'] + stack_names = [] + for stack in stacks: + status = stack['StackStatus'] + if 'FAILED' in status or 'DELETE' in status: + continue + stack_names.append(stack['StackName']) return stack_names diff --git 
a/localstack/services/cloudformation/cloudformation_starter.py b/localstack/services/cloudformation/cloudformation_starter.py index caab48c8e43a4..c3a8e696c82d5 100644 --- a/localstack/services/cloudformation/cloudformation_starter.py +++ b/localstack/services/cloudformation/cloudformation_starter.py @@ -8,6 +8,7 @@ from moto.sqs import models as sqs_models from moto.core import BaseModel from moto.server import main as moto_main +from moto.kinesis import models as kinesis_models from moto.dynamodb import models as dynamodb_models from moto.dynamodb2 import models as dynamodb2_models from moto.awslambda import models as lambda_models @@ -128,6 +129,8 @@ def update_physical_resource_id(resource): elif isinstance(resource, service_models.StepFunctionsActivity): act_arn = aws_stack.stepfunctions_activity_arn(resource.params.get('Name')) resource.physical_resource_id = act_arn + elif isinstance(resource, kinesis_models.Stream): + resource.physical_resource_id = resource.stream_name else: LOG.warning('Unable to determine physical_resource_id for resource %s' % type(resource)) diff --git a/localstack/utils/cloudformation/template_deployer.py b/localstack/utils/cloudformation/template_deployer.py index b3edc33f5312e..6d99b1580bb5b 100644 --- a/localstack/utils/cloudformation/template_deployer.py +++ b/localstack/utils/cloudformation/template_deployer.py @@ -14,6 +14,7 @@ from localstack.services.awslambda.lambda_api import get_handler_file_from_name ACTION_CREATE = 'create' +ACTION_DELETE = 'delete' PLACEHOLDER_RESOURCE_NAME = '__resource_name__' LOG = logging.getLogger(__name__) @@ -74,6 +75,12 @@ def get_lambda_code_param(params, **kwargs): 'ACL': lambda params, **kwargs: convert_acl_cf_to_s3(params.get('AccessControl', 'PublicRead')), 'CreateBucketConfiguration': lambda params, **kwargs: get_bucket_location_config() } + }, + 'delete': { + 'function': 'delete_bucket', + 'parameters': { + 'Bucket': 'PhysicalResourceId' + } } }, 'SQS::Queue': { @@ -87,6 +94,12 @@ def 
get_lambda_code_param(params, **kwargs): ), 'tags': 'Tags' } + }, + 'delete': { + 'function': 'delete_queue', + 'parameters': { + 'QueueUrl': 'PhysicalResourceId' + } } }, 'SNS::Topic': { @@ -96,6 +109,12 @@ def get_lambda_code_param(params, **kwargs): 'Name': 'TopicName', 'Tags': 'Tags' } + }, + 'delete': { + 'function': 'delete_topic', + 'parameters': { + 'TopicArn': 'PhysicalResourceId' + } } }, 'Logs::LogGroup': { @@ -254,6 +273,12 @@ def get_lambda_code_param(params, **kwargs): 'defaults': { 'ShardCount': 1 } + }, + 'delete': { + 'function': 'delete_stream', + 'parameters': { + 'StreamName': 'PhysicalResourceId' + } } }, 'StepFunctions::StateMachine': { @@ -337,7 +362,7 @@ def get_resource_type(resource): def get_service_name(resource): - res_type = resource.get('Type', '') + res_type = resource.get('Type', resource.get('ResourceType', '')) parts = res_type.split('::') if len(parts) == 1: return None @@ -376,9 +401,6 @@ def get_client(resource, func_config): resource_config = RESOURCE_TO_FUNCTION.get(resource_type) if resource_config is None: raise Exception('CloudFormation deployment for resource type %s not yet implemented' % resource_type) - if ACTION_CREATE not in resource_config: - # nothing to do for this resource - return try: if func_config.get('boto_client') == 'resource': return aws_stack.connect_to_resource(service) @@ -698,15 +720,23 @@ def remove_nones(o, **kwargs): def deploy_resource(resource_id, resources, stack_name): + return execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE) + + +def delete_resource(resource_id, resources, stack_name): + return execute_resource_action(resource_id, resources, stack_name, ACTION_DELETE) + + +def execute_resource_action(resource_id, resources, stack_name, action_name): resource = resources[resource_id] resource_type = get_resource_type(resource) func_details = RESOURCE_TO_FUNCTION.get(resource_type) - if not func_details: - LOG.warning('Resource type not yet implemented: %s' % 
resource_type) + if not func_details or action_name not in func_details: + LOG.warning('Action "%s" for resource type %s not yet implemented' % (action_name, resource_type)) return - LOG.debug('Deploying resource type "%s" id "%s"' % (resource_type, resource_id)) - func_details = func_details[ACTION_CREATE] + LOG.debug('Running action "%s" for resource type "%s" id "%s"' % (action_name, resource_type, resource_id)) + func_details = func_details[action_name] func_details = func_details if isinstance(func_details, list) else [func_details] results = [] for func in func_details: @@ -716,12 +746,12 @@ def deploy_resource(resource_id, resources, stack_name): continue client = get_client(resource, func) if client: - result = deploy_resource_via_sdk_function(resource_id, resources, resource_type, func, stack_name) + result = configure_resource_via_sdk(resource_id, resources, resource_type, func, stack_name) results.append(result) return (results or [None])[0] -def deploy_resource_via_sdk_function(resource_id, resources, resource_type, func_details, stack_name): +def configure_resource_via_sdk(resource_id, resources, resource_type, func_details, stack_name): resource = resources[resource_id] client = get_client(resource, func_details) function = getattr(client, func_details['function']) @@ -781,7 +811,7 @@ def deploy_resource_via_sdk_function(resource_id, resources, resource_type, func # invoke function try: - LOG.debug('Request for creating resource type "%s" in region %s: %s %s' % ( + LOG.debug('Request for resource type "%s" in region %s: %s %s' % ( resource_type, aws_stack.get_region(), func_details['function'], params)) result = function(**params) except Exception as e: @@ -847,6 +877,14 @@ def deploy_template(template, stack_name): 'after %s iterations. 
Remaining (%s): %s' % (iters, len(next), next)) +def delete_stack(stack_name, stack_resources): + resources = dict([(r['LogicalResourceId'], common.clone_safe(r)) for r in stack_resources]) + for key, resource in resources.items(): + resources[key]['Properties'] = common.clone_safe(resource) + for resource_id, resource in resources.items(): + delete_resource(resource_id, resources, stack_name) + + # -------- # Util methods for analyzing resource dependencies # -------- diff --git a/localstack/utils/common.py b/localstack/utils/common.py index 0fb3281a28e64..8b0de6563c80e 100644 --- a/localstack/utils/common.py +++ b/localstack/utils/common.py @@ -970,6 +970,10 @@ def clone(item): return json.loads(json.dumps(item)) +def clone_safe(item): + return clone(json_safe(item)) + + def remove_non_ascii(text): # text = unicode(text, "utf-8") text = text.decode('utf-8', CODEC_HANDLER_UNDERSCORE) diff --git a/tests/integration/test_cloudformation.py b/tests/integration/test_cloudformation.py index 9af346facdd81..bb44f7ec74dac 100644 --- a/tests/integration/test_cloudformation.py +++ b/tests/integration/test_cloudformation.py @@ -51,7 +51,10 @@ def bucket_exists(name): def queue_exists(name): sqs_client = aws_stack.connect_to_service('sqs') queues = sqs_client.list_queues() - url = name if '://' in name else aws_stack.get_sqs_queue_url(name) + try: + url = name if '://' in name else aws_stack.get_sqs_queue_url(name) + except Exception: + return False for queue_url in queues['QueueUrls']: if queue_url == url: return True @@ -114,8 +117,9 @@ def get_topic_arns(): class CloudFormationTest(unittest.TestCase): - def test_apply_template(self): + def test_create_delete_stack(self): cloudformation = aws_stack.connect_to_resource('cloudformation') + cf_client = aws_stack.connect_to_service('cloudformation') s3 = aws_stack.connect_to_service('s3') sns = aws_stack.connect_to_service('sns') apigateway = aws_stack.connect_to_service('apigateway') @@ -132,16 +136,12 @@ def check_stack(): 
retry(check_stack, retries=3, sleep=2) - # assert that bucket has been created + # assert that resources have been created assert bucket_exists('cf-test-bucket-1') - # assert that queue has been created assert queue_exists('cf-test-queue-1') - # assert that topic has been created topic_arn = topic_exists('%s-test-topic-1-1' % stack_name) assert topic_arn - # assert that stream has been created assert stream_exists('cf-test-stream-1') - # assert that queue has been created resource = describe_stack_resource(stack_name, 'SQSQueueNoNameProperty') assert queue_exists(resource['PhysicalResourceId']) @@ -153,6 +153,7 @@ def check_stack(): {'Key': 'foo', 'Value': 'cf-test-bucket-1'}, {'Key': 'bar', 'Value': aws_stack.s3_bucket_arn('cf-test-bucket-1')} ]) + # assert that subscriptions have been created subs = sns.list_subscriptions()['Subscriptions'] subs = [s for s in subs if (':%s:cf-test-queue-1' % TEST_AWS_ACCOUNT_ID) in s['Endpoint']] @@ -167,6 +168,15 @@ def check_stack(): types = [r['responseType'] for r in responses] self.assertEqual(set(types), set(['UNAUTHORIZED', 'DEFAULT_5XX'])) + # delete the stack + cf_client.delete_stack(StackName=stack_name) + + # assert that resources have been deleted + assert not bucket_exists('cf-test-bucket-1') + assert not queue_exists('cf-test-queue-1') + assert not topic_exists('%s-test-topic-1-1' % stack_name) + retry(lambda: self.assertFalse(stream_exists('cf-test-stream-1'))) + def test_list_stack_events(self): cloudformation = aws_stack.connect_to_service('cloudformation') response = cloudformation.describe_stack_events()
Add mechanism to delete CF resources of deleted stacks - addresses #999
https://api.github.com/repos/localstack/localstack/pulls/1888
2019-12-19T22:17:00Z
2019-12-19T22:43:50Z
2019-12-19T22:43:50Z
2019-12-19T22:43:53Z
3,038
localstack/localstack
29,021
Add x-api-key to allowed CORS headers to enable access from Web app
diff --git a/localstack/services/cloudformation/engine/template_deployer.py b/localstack/services/cloudformation/engine/template_deployer.py index eb0da5973a20f..b0c59f8eb4377 100644 --- a/localstack/services/cloudformation/engine/template_deployer.py +++ b/localstack/services/cloudformation/engine/template_deployer.py @@ -645,11 +645,11 @@ def deploy_stack(self): initialize=True, action="CREATE", ) - except Exception: + except Exception as e: log_method = getattr(LOG, "info") if config.CFN_VERBOSE_ERRORS: log_method = getattr(LOG, "exception") - log_method("Unable to create stack %s: %s", self.stack.stack_name) + log_method("Unable to create stack %s: %s", self.stack.stack_name, e) self.stack.set_stack_status("CREATE_FAILED") raise diff --git a/localstack/services/generic_proxy.py b/localstack/services/generic_proxy.py index 0d447cd518c6d..edc3fa05f24a7 100644 --- a/localstack/services/generic_proxy.py +++ b/localstack/services/generic_proxy.py @@ -52,6 +52,7 @@ "content-type", "etag", "location", + # AWS specific headers "x-amz-acl", "x-amz-content-sha256", "x-amz-date", @@ -62,6 +63,8 @@ "x-amz-user-agent", "x-amz-version-id", "x-amzn-requestid", + "x-api-key", # for API Gateway or AppSync GraphQL request + # LocalStack specific headers "x-localstack-target", # for AWS SDK v3 "amz-sdk-invocation-id",
Add `x-api-key` to the list of allowed CORS headers, to enable passing this header from the LocalStack Web app. This header is used, among others, for API Gateway and AppSync GraphQL APIs. For AppSync, we've recently extended our resource browser to make requests directly from the Web app. Currently this requires starting LS with `EXTRA_CORS_ALLOWED_HEADERS=x-api-key` configured, and with the changes in this PR it works out of the box. This PR also contains a minor fix for a logging statement (missing parameter for placeholder in format string), which was detected during testing.
https://api.github.com/repos/localstack/localstack/pulls/8769
2023-07-28T16:40:07Z
2023-07-29T12:15:31Z
2023-07-29T12:15:31Z
2023-07-29T12:15:34Z
400
localstack/localstack
29,388
Update conversation.py
diff --git a/fastchat/conversation.py b/fastchat/conversation.py index 33387ad69e..c66ee18127 100644 --- a/fastchat/conversation.py +++ b/fastchat/conversation.py @@ -228,8 +228,7 @@ def to_gradio_chatbot(self): def to_openai_api_messages(self): """Convert the conversation to OpenAI chat completion format.""" - system_prompt = self.system_template.format(system_message=self.system_message) - ret = [{"role": "system", "content": system_prompt}] + ret = [{"role": "system", "content": self.system_message}] for i, (_, msg) in enumerate(self.messages[self.offset :]): if i % 2 == 0:
OpenAI chat completion API does NOT require the prompt at the start of the `system` message (e.g., do not include `<|im_start|>system\n`) - just the system message itself. Docs: https://platform.openai.com/docs/guides/gpt/chat-completions-api <!-- Thank you for your contribution! --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> ## Why are these changes needed? <!-- Please give a short summary of the change and the problem this solves. --> ## Related issue number (if applicable) <!-- For example: "Closes #1234" --> ## Checks - [x] I've run `format.sh` to lint the changes in this PR. - [x] I've included any doc changes needed. - [x] I've made sure the relevant tests are passing (if applicable).
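The fix amounts to passing the raw system message straight through instead of wrapping it in a chat-markup template. A minimal standalone sketch of the resulting conversion (the function name and signature here are illustrative, not FastChat's actual API):

```python
def to_openai_api_messages(system_message, turns):
    """Convert alternating (user, assistant) turns to OpenAI chat format.

    The system entry carries the plain system message; chat-markup
    templates such as "<|im_start|>system\n..." must not be applied here.
    """
    messages = [{"role": "system", "content": system_message}]
    for i, text in enumerate(turns):
        # even indices are user turns, odd indices are assistant turns
        role = "user" if i % 2 == 0 else "assistant"
        messages.append({"role": role, "content": text})
    return messages
```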
https://api.github.com/repos/lm-sys/FastChat/pulls/2306
2023-08-24T21:44:05Z
2023-08-25T01:52:50Z
2023-08-25T01:52:50Z
2023-08-25T01:52:50Z
164
lm-sys/FastChat
41,463
Remove a test per @kennethreitz in IRC
diff --git a/test_requests.py b/test_requests.py index 425e357094..e3f8627f77 100644 --- a/test_requests.py +++ b/test_requests.py @@ -76,10 +76,6 @@ def test_params_are_added_before_fragment(self): self.assertEqual(request.url, "http://example.com/path?key=value&a=b#fragment") - def test_HTTP_200_OK_GET(self): - r = requests.get(httpbin('get')) - self.assertEqual(r.status_code, 200) - def test_HTTP_200_OK_GET_ALTERNATIVE(self): r = requests.Request('GET', httpbin('get')) s = requests.Session()
The test suite is moving from the httpbin pattern (which hits the network) to depending on the request.prepare method (which doesn't). Here's a start ... :rocket:
https://api.github.com/repos/psf/requests/pulls/1137
2013-01-24T01:36:17Z
2013-01-24T01:37:13Z
2013-01-24T01:37:13Z
2021-09-08T18:01:26Z
149
psf/requests
32,052
Adding Manning Publication's books and one course to respective lists
diff --git a/books.md b/books.md index f78cc02a..d888ebfb 100644 --- a/books.md +++ b/books.md @@ -31,10 +31,14 @@ The following is a list of free, open source books on machine learning, statisti * [Bayesian Reasoning and Machine Learning](http://web4.cs.ucl.ac.uk/staff/D.Barber/pmwiki/pmwiki.php?n=Brml.HomePage) Book+MatlabToolBox * [R Programming for Data Science](https://leanpub.com/rprogramming) * [Data Mining - Practical Machine Learning Tools and Techniques](http://cs.du.edu/~mitchell/mario_books/Data_Mining:_Practical_Machine_Learning_Tools_and_Techniques_-_2e_-_Witten_&_Frank.pdf) Book +* [Machine Learning with TensorFlow](https://www.manning.com/books/machine-learning-with-tensorflow) Early access book +* [Reactive Machine Learning Systems](https://www.manning.com/books/reactive-machine-learning-systems) Early access book ## Deep-Learning * [Deep Learning - An MIT Press book](http://www.deeplearningbook.org/) +* [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python) Early access book +* [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) Early access book ## Natural Language Processing @@ -42,6 +46,7 @@ The following is a list of free, open source books on machine learning, statisti * [NLTK](http://www.nltk.org/book/) * [NLP w/ Python](http://victoria.lviv.ua/html/fl5/NaturalLanguageProcessingWithPython.pdf) * [Foundations of Statistical Natural Language Processing](http://nlp.stanford.edu/fsnlp/promo/) +* [Natural Language Processing in Action](https://www.manning.com/books/natural-language-processing-in-action) Early access book ## Information Retrieval diff --git a/courses.md b/courses.md index 4ca92faa..c5eb843b 100644 --- a/courses.md +++ b/courses.md @@ -13,3 +13,4 @@ The following is a list of free or paid online courses on machine learning, stat * [Intro to Machine Learning](https://www.udacity.com/course/intro-to-machine-learning--ud120) - free * [Probabilistic Graphical Models (by Prof. 
Daphne Koller, Stanford)](https://www.coursera.org/specializations/probabilistic-graphical-models) Coursera Specialization or [this Youtube playlist](https://www.youtube.com/watch?v=WPSQfOkb1M8&list=PL50E6E80E8525B59C) if you can't afford the enrollment fee. * [Reinforcement Learning Course (by David Silver, DeepMind)](https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-) - YouTube playlist and [lecture slides](http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html). +* [Keras in Motion](https://www.manning.com/livevideo/keras-in-motion) $
Hello, I was wondering if you would consider adding the books and the course included in this PR to the respective resource lists on your page. Thanks for your consideration.
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/433
2017-10-08T07:21:01Z
2017-10-12T14:13:12Z
2017-10-12T14:13:12Z
2017-10-12T16:12:46Z
736
josephmisiti/awesome-machine-learning
52,105
[3.8] bpo-46811: Make test suite support Expat >=2.4.5 (GH-31453)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 29810c4df1cc5b..88aa37d4318285 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -95,7 +95,7 @@ jobs: build_win32: name: 'Windows (x86)' - runs-on: windows-latest + runs-on: windows-2019 needs: check_source if: needs.check_source.outputs.run_tests == 'true' steps: @@ -109,7 +109,7 @@ jobs: build_win_amd64: name: 'Windows (x64)' - runs-on: windows-latest + runs-on: windows-2019 needs: check_source if: needs.check_source.outputs.run_tests == 'true' steps: @@ -184,7 +184,7 @@ jobs: strategy: fail-fast: false matrix: - openssl_ver: [1.0.2u, 1.1.0l, 1.1.1l, 3.0.0-beta1] + openssl_ver: [1.0.2u, 1.1.0l, 1.1.1l] env: OPENSSL_VER: ${{ matrix.openssl_ver }} MULTISSL_DIR: ${{ github.workspace }}/multissl diff --git a/Lib/test/test_minidom.py b/Lib/test/test_minidom.py index 70965854ed1b1c..06c91079abdd99 100644 --- a/Lib/test/test_minidom.py +++ b/Lib/test/test_minidom.py @@ -6,10 +6,12 @@ from test import support import unittest +import pyexpat import xml.dom.minidom from xml.dom.minidom import parse, Node, Document, parseString from xml.dom.minidom import getDOMImplementation +from xml.parsers.expat import ExpatError tstfile = support.findfile("test.xml", subdir="xmltestdata") @@ -1147,7 +1149,13 @@ def testEncodings(self): # Verify that character decoding errors raise exceptions instead # of crashing - self.assertRaises(UnicodeDecodeError, parseString, + if pyexpat.version_info >= (2, 4, 5): + self.assertRaises(ExpatError, parseString, + b'<fran\xe7ais></fran\xe7ais>') + self.assertRaises(ExpatError, parseString, + b'<franais>Comment \xe7a va ? Tr\xe8s bien ?</franais>') + else: + self.assertRaises(UnicodeDecodeError, parseString, b'<fran\xe7ais>Comment \xe7a va ? 
Tr\xe8s bien ?</fran\xe7ais>') doc.unlink() @@ -1593,7 +1601,12 @@ def testEmptyXMLNSValue(self): self.confirm(doc2.namespaceURI == xml.dom.EMPTY_NAMESPACE) def testExceptionOnSpacesInXMLNSValue(self): - with self.assertRaisesRegex(ValueError, 'Unsupported syntax'): + if pyexpat.version_info >= (2, 4, 5): + context = self.assertRaisesRegex(ExpatError, 'syntax error') + else: + context = self.assertRaisesRegex(ValueError, 'Unsupported syntax') + + with context: parseString('<element xmlns:abc="http:abc.com/de f g/hi/j k"><abc:foo /></element>') def testDocRemoveChild(self): diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py index d41ff4fd077e65..0a788477fc1559 100644 --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -1968,12 +1968,6 @@ def test_issue6233(self): b"<?xml version='1.0' encoding='ascii'?>\n" b'<body>t&#227;g</body>') - def test_issue3151(self): - e = ET.XML('<prefix:localname xmlns:prefix="${stuff}"/>') - self.assertEqual(e.tag, '{${stuff}}localname') - t = ET.ElementTree(e) - self.assertEqual(ET.tostring(e), b'<ns0:localname xmlns:ns0="${stuff}" />') - def test_issue6565(self): elem = ET.XML("<body><tag/></body>") self.assertEqual(summarize_list(elem), ['tag']) diff --git a/Misc/NEWS.d/next/Library/2022-02-20-21-03-31.bpo-46811.8BxgdQ.rst b/Misc/NEWS.d/next/Library/2022-02-20-21-03-31.bpo-46811.8BxgdQ.rst new file mode 100644 index 00000000000000..6969bd1898f658 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2022-02-20-21-03-31.bpo-46811.8BxgdQ.rst @@ -0,0 +1 @@ +Make test suite support Expat >=2.4.5
Curly brackets were never allowed in namespace URIs according to RFC 3986, and so-called namespace-validating XML parsers have the right to reject them as invalid URIs. libexpat >=2.4.5 has become stricter in that regard due to related security issues; with ET.XML instantiating a namespace-aware parser under the hood, this test has no future in CPython. References: - https://datatracker.ietf.org/doc/html/rfc3968 - https://www.w3.org/TR/xml-names/ Also, test_minidom.py: Support Expat >=2.4.5 (cherry picked from commit 2cae93832f46b245847bdc252456ddf7742ef45e) Co-authored-by: Sebastian Pipping <sebastian@pipping.org> <!-- issue-number: [bpo-46811](https://bugs.python.org/issue46811) --> https://bugs.python.org/issue46811 <!-- /issue-number -->
https://api.github.com/repos/python/cpython/pulls/31470
2022-02-21T14:48:59Z
2022-02-22T20:57:53Z
2022-02-22T20:57:53Z
2022-02-22T20:58:07Z
1,187
python/cpython
4,024
Fix publish docker image
diff --git a/.github/workflows/publish-workflow.yaml b/.github/workflows/publish-workflow.yaml index 8b3aae9d86..634aef5385 100644 --- a/.github/workflows/publish-workflow.yaml +++ b/.github/workflows/publish-workflow.yaml @@ -28,6 +28,7 @@ jobs: uses: docker/build-push-action@v5 with: context: . + file: docker/Dockerfile push: true tags: ${{ steps.metadata.outputs.tags }} labels: ${{ steps.metadata.outputs.labels }} diff --git a/docker/Dockerfile b/docker/Dockerfile index 0c52940d5e..905efc4bdb 100644 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -1,6 +1,6 @@ FROM selenium/node-chrome -ENV SE_SCREEN_WIDTH 1920 +ENV SE_SCREEN_WIDTH 1850 ENV G4F_LOGIN_URL http://localhost:7900/?autoconnect=1&resize=scale&password=secret ENV PYTHONUNBUFFERED 1
https://api.github.com/repos/xtekky/gpt4free/pulls/1310
2023-12-06T11:20:03Z
2023-12-06T11:20:19Z
2023-12-06T11:20:19Z
2023-12-06T11:20:19Z
248
xtekky/gpt4free
38,145
Fix facebook extractor
diff --git a/src/you_get/common.py b/src/you_get/common.py index c14026dcd3..0a8e160c34 100644 --- a/src/you_get/common.py +++ b/src/you_get/common.py @@ -900,7 +900,7 @@ def script_main(script_name, download, download_playlist = None): sys.exit(1) def url_to_module(url): - from .extractors import netease, w56, acfun, baidu, baomihua, bilibili, blip, catfun, cntv, cbs, coursera, dailymotion, dongting, douban, douyutv, ehow, facebook, freesound, google, sina, ifeng, alive, instagram, iqiyi, joy, jpopsuki, khan, ku6, kugou, kuwo, letv, lizhi, magisto, miomio, mixcloud, mtv81, nicovideo, pptv, qq, sohu, songtaste, soundcloud, ted, theplatform, tudou, tucao, tumblr, vid48, videobam, vimeo, vine, vk, xiami, yinyuetai, youku, youtube, zhanqi + from .extractors import netease, w56, acfun, baidu, baomihua, bilibili, blip, catfun, cntv, cbs, coursera, dailymotion, dongting, douban, douyutv, ehow, facebook, freesound, google, sina, ifeng, alive, instagram, iqiyi, joy, jpopsuki, khan, ku6, kugou, kuwo, letv, lizhi, magisto, miomio, mixcloud, mtv81, nicovideo, pptv, qq, sohu, songtaste, soundcloud, ted, theplatform, tudou, tucao, tumblr, vid48, videobam, vidto, vimeo, vine, vk, xiami, yinyuetai, youku, youtube, zhanqi video_host = r1(r'https?://([^/]+)/', url) video_url = r1(r'https?://[^/]+(.*)', url) @@ -965,6 +965,7 @@ def url_to_module(url): 'tumblr': tumblr, 'vid48': vid48, 'videobam': videobam, + 'vidto': vidto, 'vimeo': vimeo, 'vine': vine, 'vk': vk, diff --git a/src/you_get/extractors/facebook.py b/src/you_get/extractors/facebook.py index edbbb6717f..c0610a175f 100644 --- a/src/you_get/extractors/facebook.py +++ b/src/you_get/extractors/facebook.py @@ -3,22 +3,26 @@ __all__ = ['facebook_download'] from ..common import * +import json -def facebook_download(url, output_dir = '.', merge = True, info_only = False): + +def facebook_download(url, output_dir='.', merge=True, info_only=False): html = get_html(url) - + title = r1(r'<title id="pageTitle">(.+) \| 
Facebook</title>', html) - + s2 = parse.unquote(unicodize(r1(r'\["params","([^"]*)"\]', html))) + data = json.loads(s2) + video_data = data["video_data"][0] for fmt in ["hd_src", "sd_src"]: - src= re.sub(r'\\/', r'/', r1(r'"' + fmt + '":"([^"]*)"', parse.unquote(unicodize(r1(r'\["params","([^"]*)"\]', html))))) + src = video_data[fmt] if src: break - - type, ext, size = url_info(src) - + + type, ext, size = url_info(src, True) + print_info(site_info, title, type, size) if not info_only: - download_urls([src], title, ext, size, output_dir, merge = merge) + download_urls([src], title, ext, size, output_dir, merge=merge) site_info = "Facebook.com" download = facebook_download diff --git a/src/you_get/extractors/vidto.py b/src/you_get/extractors/vidto.py new file mode 100644 index 0000000000..999c3aa6d9 --- /dev/null +++ b/src/you_get/extractors/vidto.py @@ -0,0 +1,40 @@ +#!/usr/bin/env python + +__all__ = ['vidto_download'] + +from ..common import * +import pdb +import time + + +def vidto_download(url, output_dir='.', merge=True, info_only=False): + html = get_content(url) + params = {} + r = re.findall( + r'type="(?:hidden|submit)?"(?:.*?)name="(.+?)"\s* value="?(.+?)">', html) + for name, value in r: + params[name] = value + data = parse.urlencode(params).encode('utf-8') + req = request.Request(url) + print("Please wait for 6 seconds...") + time.sleep(6) + print("Starting") + new_html = request.urlopen(req, data).read().decode('utf-8', 'replace') + new_stff = re.search('lnk_download" href="(.*?)">', new_html) + if(new_stff): + url = new_stff.group(1) + title = params['fname'] + type = "" + ext = "" + a, b, size = url_info(url) + print_info(site_info, title, type, size) + if not info_only: + download_urls([url], title, ext, size, output_dir, merge=merge) + else: + print("cannot find link, please review") + pdb.set_trace() + + +site_info = "vidto.me" +download = vidto_download +download_playlist = playlist_not_supported('vidto')
Previously this would throw an error if there was no hd_src. Modified to use a JSON parser and check whether hd_src is None before falling back to sd_src.
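For context, the new lookup logic in the diff can be sketched as a standalone helper (the function name and the sample JSON shape are illustrative assumptions, not you-get's actual API):

```python
import json


def pick_video_source(params_json):
    """Pick the best available stream URL from the decoded "params" blob.

    Prefers HD over SD, and returns None when neither key holds a URL
    instead of raising as the old regex-based lookup did.
    """
    data = json.loads(params_json)
    video_data = data["video_data"][0]
    for fmt in ("hd_src", "sd_src"):
        # .get() tolerates both a missing key and an explicit null
        src = video_data.get(fmt)
        if src:
            return src
    return None
```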
https://api.github.com/repos/soimort/you-get/pulls/514
2015-04-12T14:00:22Z
2015-05-13T15:04:39Z
2015-05-13T15:04:39Z
2015-05-13T15:05:09Z
1,384
soimort/you-get
21,112
[autoparallel] fix parameters sharding bug
diff --git a/colossalai/auto_parallel/passes/runtime_preparation_pass.py b/colossalai/auto_parallel/passes/runtime_preparation_pass.py index bb419be35e55..e63bfdfe730c 100644 --- a/colossalai/auto_parallel/passes/runtime_preparation_pass.py +++ b/colossalai/auto_parallel/passes/runtime_preparation_pass.py @@ -426,8 +426,9 @@ def _shard_param(param, target_sharding_spec): # we could use .data here, because all the operations just happen before the real training # loop, so we don't need to track these operations in the autograd graph. param = torch.nn.Parameter( - shape_consistency_manager.apply_for_autoparallel_runtime(param.data, param.sharding_spec, - target_sharding_spec).detach().clone()) + shape_consistency_manager.apply_for_autoparallel_runtime(param.data, param.sharding_spec, + target_sharding_spec).detach().clone()) + return param for node in nodes: if node.op == 'call_module': @@ -438,7 +439,7 @@ def _shard_param(param, target_sharding_spec): setattr(target_module, 'processed', True) for name, param in target_module.named_parameters(): target_sharding_spec = node.best_strategy.get_sharding_spec_by_name(name) - _shard_param(param, target_sharding_spec) + param = _shard_param(param, target_sharding_spec) setattr(target_module, name, param) _add_hook_for_grad_communication(node, param) @@ -469,7 +470,7 @@ def _shard_param(param, target_sharding_spec): target = getattr(target_module, atoms[-1]) target_sharding_spec = node.sharding_spec - _shard_param(target, target_sharding_spec) + target = _shard_param(target, target_sharding_spec) assert hasattr(target_module, atoms[-1]) setattr(target_module, atoms[-1], target)
## 📌 Checklist before creating the PR - [ ] I have created an issue for this PR for traceability - [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description` - [ ] I have added relevant tags if possible for us to better distinguish different PRs ## 🚨 Issue number > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge > > e.g. `fixed #1234`, `closed #1234`, `resolved #1234` ## 📝 What does this PR do? > Summarize your work here. > if you have any plots/diagrams/screenshots/tables, please attach them here. ## 💥 Checklist before requesting a review - [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)) - [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible - [ ] I have performed a self-review of my code - [ ] I have added thorough tests. - [ ] I have added docstrings for all the functions/methods I implemented ## ⭐️ Do you enjoy contributing to Colossal-AI? - [ ] 🌝 Yes, I do. - [ ] 🌚 No, I don't. Tell us more if you don't enjoy contributing to Colossal-AI.
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/2716
2023-02-15T04:21:00Z
2023-02-15T04:25:50Z
2023-02-15T04:25:50Z
2023-02-15T04:25:50Z
440
hpcaitech/ColossalAI
11,362
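The ColossalAI fix above hinges on Python's name-binding rules: reassigning a parameter inside `_shard_param` rebinds only the local name, so the new `torch.nn.Parameter` must be returned and stored back by the caller. A minimal sketch of the same pattern (plain-Python stand-ins, no torch dependency — the dict shapes here are illustrative only):

```python
def shard_param(param):
    # Reassignment rebinds only the local name; the caller's
    # reference is untouched unless we return the new object.
    param = {"data": [x / 2 for x in param["data"]], "sharded": True}
    return param

module_params = {"weight": {"data": [2.0, 4.0], "sharded": False}}

# Without using the return value, the original object is unchanged:
shard_param(module_params["weight"])
assert module_params["weight"]["sharded"] is False

# The fix: capture the returned object and store it back.
module_params["weight"] = shard_param(module_params["weight"])
assert module_params["weight"] == {"data": [1.0, 2.0], "sharded": True}
```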
check that the message content is a string before calling strip()
diff --git a/interpreter/core/llm/utils/convert_to_openai_messages.py b/interpreter/core/llm/utils/convert_to_openai_messages.py index bc3962704..5775a185c 100644 --- a/interpreter/core/llm/utils/convert_to_openai_messages.py +++ b/interpreter/core/llm/utils/convert_to_openai_messages.py @@ -170,7 +170,8 @@ def convert_to_openai_messages( else: raise Exception(f"Unable to convert this message type: {message}") - new_message["content"] = new_message["content"].strip() + if isinstance(new_message["content"], str): + new_message["content"] = new_message["content"].strip() new_messages.append(new_message)
### Describe the changes you have made: Checks the new message content's type before attempting to call `strip()`. In cases where a screenshot is being sent, `content` will be a list, not a string. ### Reference any relevant issues (e.g. "Fixes #000"): - Fixes #1116 ### Pre-Submission Checklist (optional but appreciated): - [X] I have included relevant documentation updates (stored in /docs) - [x] I have read `docs/CONTRIBUTING.md` - [x] I have read `docs/ROADMAP.md` ### OS Tests (optional but appreciated): - [ ] Tested on Windows - [x] Tested on MacOS - [ ] Tested on Linux
https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/1117
2024-03-22T21:09:18Z
2024-03-24T07:47:09Z
2024-03-24T07:47:09Z
2024-03-29T20:02:58Z
170
OpenInterpreter/open-interpreter
40,693
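The open-interpreter fix above is a common defensive pattern: only call string methods after confirming the value is a string, since multimodal message content may be a list of parts. A hedged sketch (the message shapes are illustrative, not the exact OpenAI schema):

```python
def normalize_content(message):
    # Text content gets whitespace-stripped; list content (e.g. an
    # image part plus a text part) is passed through untouched.
    content = message["content"]
    if isinstance(content, str):
        message["content"] = content.strip()
    return message

text_msg = {"role": "user", "content": "  hello  "}
image_msg = {
    "role": "user",
    "content": [{"type": "image_url"}, {"type": "text", "text": "hi"}],
}

assert normalize_content(text_msg)["content"] == "hello"
# Without the isinstance guard this call would raise AttributeError:
assert isinstance(normalize_content(image_msg)["content"], list)
```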
Add rope-scaling parameters to export_model.py
diff --git a/model/model_training/tools/export_model.py b/model/model_training/tools/export_model.py index d41d56fb6f..8ce456085e 100644 --- a/model/model_training/tools/export_model.py +++ b/model/model_training/tools/export_model.py @@ -17,6 +17,10 @@ def parse_args(): parser.add_argument("--cache_dir", type=str) parser.add_argument("--reward_model", action="store_true", default=False) parser.add_argument("--rl_checkpoint", type=str, help="load RL fine-tuning checkpoint") + parser.add_argument( + "--rope_scaling_type", type=str, help="set rope scaling type (linear, dynamic)", default="linear" + ) + parser.add_argument("--rope_scaling_factor", type=float, help="set rope scaling factor (float >1.0)") parser.add_argument( "--trust_remote_code", action="store_true", @@ -85,6 +89,13 @@ def main(): print("Model architecture:") print(model) + if args.rope_scaling_type is not None and args.rope_scaling_factor is not None: + assert args.rope_scaling_type in ("linear", "dynamic") + assert args.rope_scaling_factor >= 1.0 + rope_scaling = {"type": args.rope_scaling_type, "factor": args.rope_scaling_factor} + print(f"setting new rope_scaling config: {rope_scaling} (old: {model.config.rope_scaling})") + model.config.rope_scaling = rope_scaling + if args.output_folder: print(f"Saving model to: {args.output_folder}") model.save_pretrained(args.output_folder, max_shard_size=args.max_shard_size)
Add two new command-line parameters which, when present, override the model's rope-scaling configuration: `--rope_scaling_type`: linear, dynamic (default="linear") `--rope_scaling_factor`: set rope scaling factor (float >1.0)
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3618
2023-07-31T07:47:58Z
2023-08-08T10:02:53Z
2023-08-08T10:02:53Z
2023-08-08T10:02:54Z
373
LAION-AI/Open-Assistant
36,868
Fix #2897 - Add `extra_files` option to `flask run` CLI
diff --git a/CHANGES.rst b/CHANGES.rst index 6a179bdff1..2a6641c65d 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -49,6 +49,9 @@ Unreleased not installed, or if the given path isn't a file. :issue:`2937` - Signaling support has a stub for the ``connect_via`` method when the Blinker library is not installed. :pr:`3208` +- Add an ``--extra-files`` option to the ``flask run`` CLI command to + specify extra files that will trigger the reloader on change. + :issue:`2897` .. _#2935: https://github.com/pallets/flask/issues/2935 .. _#2957: https://github.com/pallets/flask/issues/2957 @@ -87,7 +90,6 @@ Released 2019-05-17 .. _#2933: https://github.com/pallets/flask/issues/2933 .. _#2986: https://github.com/pallets/flask/pull/2986 - Version 1.0.2 ------------- diff --git a/docs/cli.rst b/docs/cli.rst index 5835d74bc3..5a05be9f99 100644 --- a/docs/cli.rst +++ b/docs/cli.rst @@ -146,6 +146,25 @@ reloader. * Debugger PIN: 223-456-919 +Watch Extra Files with the Reloader +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When using development mode, the reloader will trigger whenever your +Python code or imported modules change. The reloader can watch +additional files with the ``--extra-files`` option, or the +``FLASK_RUN_EXTRA_FILES`` environment variable. Multiple paths are +separated with ``:``, or ``;`` on Windows. + +.. code-block:: none + + $ flask run --extra-files file1:dirA/file2:dirB/ + # or + $ export FLASK_RUN_EXTRA_FILES=file1:dirA/file2:dirB/ + $ flask run + * Running on http://127.0.0.1:8000/ + * Detected change in '/path/to/file1', reloading + + Debug Mode ---------- diff --git a/flask/cli.py b/flask/cli.py index 7d8829024d..fdd0b7659c 100644 --- a/flask/cli.py +++ b/flask/cli.py @@ -742,6 +742,18 @@ def _validate_key(ctx, param, value): return value +class SeparatedPathType(click.Path): + """Click option type that accepts a list of values separated by the + OS's path separator (``:``, ``;`` on Windows). Each value is + validated as a :class:`click.Path` type. 
+ """ + + def convert(self, value, param, ctx): + items = self.split_envvar_value(value) + super_convert = super(SeparatedPathType, self).convert + return [super_convert(item, param, ctx) for item in items] + + @click.command("run", short_help="Run a development server.") @click.option("--host", "-h", default="127.0.0.1", help="The interface to bind to.") @click.option("--port", "-p", default=5000, help="The port to bind to.") @@ -778,8 +790,19 @@ def _validate_key(ctx, param, value): default=True, help="Enable or disable multithreading.", ) +@click.option( + "--extra-files", + default=None, + type=SeparatedPathType(), + help=( + "Extra files that trigger a reload on change. Multiple paths" + " are separated by '{}'.".format(os.path.pathsep) + ), +) @pass_script_info -def run_command(info, host, port, reload, debugger, eager_loading, with_threads, cert): +def run_command( + info, host, port, reload, debugger, eager_loading, with_threads, cert, extra_files +): """Run a local development server. This server is for development purposes only. It does not provide @@ -812,6 +835,7 @@ def run_command(info, host, port, reload, debugger, eager_loading, with_threads, use_debugger=debugger, threaded=with_threads, ssl_context=cert, + extra_files=extra_files, )
Fix #2897 To define a list of files the reloader should watch in addition to the modules, as with the ``extra_files`` argument used in ``app.run`` and ``werkzeug.serving.run_simple``, you can either use the ``--extra-files`` (or multiple ``-f``) option or define the ``FLASK_RUN_EXTRA_FILES`` environment variable. ```bash # on windows use ``;`` instead of ``:`` to separate paths export FLASK_RUN_EXTRA_FILES=/path/to/file1:/path/to/file2 flask run * Running on http://127.0.0.1:8000/ * Detected change in '/path/to/file1', reloading ``` On the command line the same can be achieved with ``flask run -f /path/to/file1 -f /path/to/file2``.
https://api.github.com/repos/pallets/flask/pulls/2898
2018-08-31T22:11:54Z
2019-05-24T14:32:46Z
2019-05-24T14:32:46Z
2020-11-14T02:09:35Z
1,024
pallets/flask
20,718
Add lambda-ml
diff --git a/README.md b/README.md index ec6dca22..d06f3736 100644 --- a/README.md +++ b/README.md @@ -233,6 +233,7 @@ For a list of free machine learning books available for download, go [here](http * [clortex](https://github.com/nupic-community/clortex) - General Machine Learning library using Numenta’s Cortical Learning Algorithm * [comportex](https://github.com/nupic-community/comportex) - Functionally composable Machine Learning library using Numenta’s Cortical Learning Algorithm * [cortex](https://github.com/thinktopic/cortex) - Neural networks, regression and feature learning in Clojure. +* [lambda-ml](https://github.com/cloudkj/lambda-ml) - Simple, concise implementations of machine learning techniques and utilities in Clojure. <a name="clojure-data-analysis" /> #### Data Analysis / Data Visualization
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/325
2016-10-26T19:03:20Z
2016-10-27T14:51:18Z
2016-10-27T14:51:18Z
2016-10-27T14:51:21Z
214
josephmisiti/awesome-machine-learning
51,869
Using uppercase C to 'clear' display mode, because lowercase 'c' is used for css
diff --git a/libmproxy/console/flowview.py b/libmproxy/console/flowview.py index b2c4614754..bf0070fc6b 100644 --- a/libmproxy/console/flowview.py +++ b/libmproxy/console/flowview.py @@ -637,11 +637,10 @@ def view_prev_flow(self, flow): return self._view_nextprev_flow("prev", flow) def change_this_display_mode(self, t): - self.state.add_flow_setting( - self.flow, - (self.state.view_flow_mode, "prettyview"), - contentview.get_by_shortcut(t) - ) + key = (self.state.view_flow_mode, "prettyview") + value = contentview.get_by_shortcut(t) + if value: + self.state.add_flow_setting(self.flow, key, value) self.master.refresh_flow(self.flow) def delete_body(self, t): @@ -749,7 +748,7 @@ def keypress(self, size, key): self.master.statusbar.message("") elif key == "m": p = list(contentview.view_prompts) - p.insert(0, ("clear", "c")) + p.insert(0, ("Clear", "C")) self.master.prompt_onekey( "Display mode", p,
Ran 319 tests in 30.566s OK
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/403
2014-11-07T16:03:44Z
2014-11-10T16:32:06Z
2014-11-10T16:32:06Z
2014-11-10T16:37:32Z
291
mitmproxy/mitmproxy
28,322
Fix bug identified in #1817, comment 17340049
diff --git a/sklearn/hmm.py b/sklearn/hmm.py index 2b3c86d326d4c..3e60a85fc0645 100644 --- a/sklearn/hmm.py +++ b/sklearn/hmm.py @@ -560,12 +560,15 @@ def _accumulate_sufficient_statistics(self, stats, seq, framelogprob, stats['start'] += posteriors[0] if 't' in params: n_observations, n_components = framelogprob.shape - lneta = np.zeros((n_observations - 1, n_components, n_components)) - lnP = logsumexp(fwdlattice[-1]) - _hmmc._compute_lneta(n_observations, n_components, fwdlattice, - self._log_transmat, bwdlattice, framelogprob, - lnP, lneta) - stats["trans"] += np.exp(logsumexp(lneta, 0)) + # when the sample is of length 1, it contains no transitions + # so there is no reason to update our trans. matrix estimate + if n_observations > 1: + lneta = np.zeros((n_observations - 1, n_components, n_components)) + lnP = logsumexp(fwdlattice[-1]) + _hmmc._compute_lneta(n_observations, n_components, fwdlattice, + self._log_transmat, bwdlattice, framelogprob, + lnP, lneta) + stats["trans"] += np.exp(logsumexp(lneta, 0)) def _do_mstep(self, stats, params): # Based on Huang, Acero, Hon, "Spoken Language Processing", diff --git a/sklearn/tests/test_hmm.py b/sklearn/tests/test_hmm.py index a3aaea5a6adeb..2869bc456ea5c 100644 --- a/sklearn/tests/test_hmm.py +++ b/sklearn/tests/test_hmm.py @@ -312,6 +312,16 @@ def test_fit_works_on_sequences_of_different_length(self): # ValueError: setting an array element with a sequence. h.fit(obs) + def test_fit_with_length_one_signal(self): + obs = [self.prng.rand(10, self.n_features), + self.prng.rand(8, self.n_features), + self.prng.rand(1, self.n_features)] + h = hmm.GaussianHMM(self.n_components, self.covariance_type) + # This shouldn't raise + # ValueError: zero-size array to reduction operation maximum which has no identity + h.fit(obs) + + def test_fit_with_priors(self, params='stmc', n_iter=5, verbose=False): startprob_prior = 10 * self.startprob + 2.0 transmat_prior = 10 * self.transmat + 2.0
Here's the bug report: https://github.com/scikit-learn/scikit-learn/issues/1817#issuecomment-17340048
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/2524
2013-10-15T19:02:53Z
2013-10-25T08:06:44Z
2013-10-25T08:06:44Z
2014-06-13T11:38:25Z
662
scikit-learn/scikit-learn
46,184
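The scikit-learn fix above skips the transition-statistics update for length-1 sequences, which contain no transitions at all. The guard can be sketched without numpy (a toy transition counter, not the real HMM sufficient-statistics code):

```python
def accumulate_transitions(counts, seq):
    # A sequence of length 1 contains no (s_t, s_{t+1}) pairs, so
    # there is nothing to add -- naive code that assumes at least one
    # transition exists would fail on such input.
    if len(seq) > 1:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

counts = {}
accumulate_transitions(counts, [0, 1, 1])
accumulate_transitions(counts, [2])  # length-1 sample: safely skipped
assert counts == {(0, 1): 1, (1, 1): 1}
```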
[MRG] IterativeImputer reorder estimator
diff --git a/sklearn/impute.py b/sklearn/impute.py index b2459a003b16f..3bb0bdd9eff15 100644 --- a/sklearn/impute.py +++ b/sklearn/impute.py @@ -433,15 +433,15 @@ class IterativeImputer(BaseEstimator, TransformerMixin): Parameters ---------- - missing_values : int, np.nan, optional (default=np.nan) - The placeholder for the missing values. All occurrences of - ``missing_values`` will be imputed. - estimator : estimator object, default=BayesianRidge() The estimator to use at each step of the round-robin imputation. If ``sample_posterior`` is True, the estimator must support ``return_std`` in its ``predict`` method. + missing_values : int, np.nan, optional (default=np.nan) + The placeholder for the missing values. All occurrences of + ``missing_values`` will be imputed. + sample_posterior : boolean, default=False Whether to sample from the (Gaussian) predictive posterior of the fitted estimator for each imputation. Estimator must support @@ -559,8 +559,8 @@ class IterativeImputer(BaseEstimator, TransformerMixin): """ def __init__(self, - missing_values=np.nan, estimator=None, + missing_values=np.nan, sample_posterior=False, max_iter=10, tol=1e-3, @@ -572,8 +572,8 @@ def __init__(self, verbose=0, random_state=None): - self.missing_values = missing_values self.estimator = estimator + self.missing_values = missing_values self.sample_posterior = sample_posterior self.max_iter = max_iter self.tol = tol
As discussed in #11977, we want the `estimator` to be first to make the UI less confusing. Open to other changes here. Paging @amueller and @jnothman
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/13153
2019-02-13T03:07:20Z
2019-02-13T05:42:04Z
2019-02-13T05:42:04Z
2019-02-13T05:52:36Z
412
scikit-learn/scikit-learn
46,599
Upd Brazilian Vehicles and Prices: CORS from Unknown to No
diff --git a/README.md b/README.md index d29b693757..92abdd5442 100644 --- a/README.md +++ b/README.md @@ -1682,7 +1682,7 @@ API | Description | Auth | HTTPS | CORS | ### Vehicle API | Description | Auth | HTTPS | CORS | |---|---|---|---|---| -| [Brazilian Vehicles and Prices](https://deividfortuna.github.io/fipe/) | Vehicles information from Fundação Instituto de Pesquisas Econômicas - Fipe | No | Yes | Unknown | +| [Brazilian Vehicles and Prices](https://deividfortuna.github.io/fipe/) | Vehicles information from Fundação Instituto de Pesquisas Econômicas - Fipe | No | Yes | No | | [Helipaddy sites](https://helipaddy.com/api/) | Helicopter and passenger drone landing site directory, Helipaddy data and much more | `apiKey` | Yes | Unknown | | [Kelley Blue Book](http://developer.kbb.com/#!/data/1-Default) | Vehicle info, pricing, configuration, plus much more | `apiKey` | Yes | No | | [Mercedes-Benz](https://developer.mercedes-benz.com/apis) | Telematics data, remotely access vehicle functions, car configurator, locate service dealers | `apiKey` | Yes | No |
Updated CORS from Unknown to No
https://api.github.com/repos/public-apis/public-apis/pulls/2976
2021-12-25T03:13:18Z
2021-12-25T18:59:58Z
2021-12-25T18:59:58Z
2021-12-25T18:59:58Z
300
public-apis/public-apis
35,580
Lexicon v3 compatibility
diff --git a/certbot-dns-cloudxns/certbot_dns_cloudxns/dns_cloudxns.py b/certbot-dns-cloudxns/certbot_dns_cloudxns/dns_cloudxns.py index 674194fee68..658db6072f5 100644 --- a/certbot-dns-cloudxns/certbot_dns_cloudxns/dns_cloudxns.py +++ b/certbot-dns-cloudxns/certbot_dns_cloudxns/dns_cloudxns.py @@ -70,6 +70,7 @@ def __init__(self, api_key, secret_key, ttl): super(_CloudXNSLexiconClient, self).__init__() self.provider = cloudxns.Provider({ + 'provider_name': 'cloudxns', 'auth_username': api_key, 'auth_token': secret_key, 'ttl': ttl, diff --git a/certbot-dns-dnsimple/certbot_dns_dnsimple/dns_dnsimple.py b/certbot-dns-dnsimple/certbot_dns_dnsimple/dns_dnsimple.py index f3a98567e5e..3eb56e37ca7 100644 --- a/certbot-dns-dnsimple/certbot_dns_dnsimple/dns_dnsimple.py +++ b/certbot-dns-dnsimple/certbot_dns_dnsimple/dns_dnsimple.py @@ -66,6 +66,7 @@ def __init__(self, token, ttl): super(_DNSimpleLexiconClient, self).__init__() self.provider = dnsimple.Provider({ + 'provider_name': 'dnssimple', 'auth_token': token, 'ttl': ttl, }) diff --git a/certbot-dns-dnsmadeeasy/certbot_dns_dnsmadeeasy/dns_dnsmadeeasy.py b/certbot-dns-dnsmadeeasy/certbot_dns_dnsmadeeasy/dns_dnsmadeeasy.py index 982edfdd3e9..4236ce37a66 100644 --- a/certbot-dns-dnsmadeeasy/certbot_dns_dnsmadeeasy/dns_dnsmadeeasy.py +++ b/certbot-dns-dnsmadeeasy/certbot_dns_dnsmadeeasy/dns_dnsmadeeasy.py @@ -72,6 +72,7 @@ def __init__(self, api_key, secret_key, ttl): super(_DNSMadeEasyLexiconClient, self).__init__() self.provider = dnsmadeeasy.Provider({ + 'provider_name': 'dnsmadeeasy', 'auth_username': api_key, 'auth_token': secret_key, 'ttl': ttl, diff --git a/certbot-dns-gehirn/certbot_dns_gehirn/dns_gehirn.py b/certbot-dns-gehirn/certbot_dns_gehirn/dns_gehirn.py index 50bfce1ae12..9c35e72ab1f 100644 --- a/certbot-dns-gehirn/certbot_dns_gehirn/dns_gehirn.py +++ b/certbot-dns-gehirn/certbot_dns_gehirn/dns_gehirn.py @@ -73,6 +73,7 @@ def __init__(self, api_token, api_secret, ttl): super(_GehirnLexiconClient, 
self).__init__() self.provider = gehirn.Provider({ + 'provider_name': 'gehirn', 'auth_token': api_token, 'auth_secret': api_secret, 'ttl': ttl, diff --git a/certbot-dns-linode/certbot_dns_linode/dns_linode.py b/certbot-dns-linode/certbot_dns_linode/dns_linode.py index cc29ce842e5..01da2cf604f 100644 --- a/certbot-dns-linode/certbot_dns_linode/dns_linode.py +++ b/certbot-dns-linode/certbot_dns_linode/dns_linode.py @@ -62,6 +62,7 @@ class _LinodeLexiconClient(dns_common_lexicon.LexiconClient): def __init__(self, api_key): super(_LinodeLexiconClient, self).__init__() self.provider = linode.Provider({ + 'provider_name': 'linode', 'auth_token': api_key }) diff --git a/certbot-dns-luadns/certbot_dns_luadns/dns_luadns.py b/certbot-dns-luadns/certbot_dns_luadns/dns_luadns.py index 00b62e6e1a4..bd6a16f69db 100644 --- a/certbot-dns-luadns/certbot_dns_luadns/dns_luadns.py +++ b/certbot-dns-luadns/certbot_dns_luadns/dns_luadns.py @@ -69,6 +69,7 @@ def __init__(self, email, token, ttl): super(_LuaDNSLexiconClient, self).__init__() self.provider = luadns.Provider({ + 'provider_name': 'luadns', 'auth_username': email, 'auth_token': token, 'ttl': ttl, diff --git a/certbot-dns-nsone/certbot_dns_nsone/dns_nsone.py b/certbot-dns-nsone/certbot_dns_nsone/dns_nsone.py index 28db126c141..5f33efbba05 100644 --- a/certbot-dns-nsone/certbot_dns_nsone/dns_nsone.py +++ b/certbot-dns-nsone/certbot_dns_nsone/dns_nsone.py @@ -66,6 +66,7 @@ def __init__(self, api_key, ttl): super(_NS1LexiconClient, self).__init__() self.provider = nsone.Provider({ + 'provider_name': 'nsone', 'auth_token': api_key, 'ttl': ttl, }) diff --git a/certbot-dns-ovh/certbot_dns_ovh/dns_ovh.py b/certbot-dns-ovh/certbot_dns_ovh/dns_ovh.py index c4ded77482d..578ee8e8914 100644 --- a/certbot-dns-ovh/certbot_dns_ovh/dns_ovh.py +++ b/certbot-dns-ovh/certbot_dns_ovh/dns_ovh.py @@ -78,6 +78,7 @@ def __init__(self, endpoint, application_key, application_secret, consumer_key, super(_OVHLexiconClient, self).__init__() self.provider 
= ovh.Provider({ + 'provider_name': 'ovh', 'auth_entrypoint': endpoint, 'auth_application_key': application_key, 'auth_application_secret': application_secret, diff --git a/certbot-dns-sakuracloud/certbot_dns_sakuracloud/dns_sakuracloud.py b/certbot-dns-sakuracloud/certbot_dns_sakuracloud/dns_sakuracloud.py index 6f1c74b6803..b892330f5d0 100644 --- a/certbot-dns-sakuracloud/certbot_dns_sakuracloud/dns_sakuracloud.py +++ b/certbot-dns-sakuracloud/certbot_dns_sakuracloud/dns_sakuracloud.py @@ -76,6 +76,7 @@ def __init__(self, api_token, api_secret, ttl): super(_SakuraCloudLexiconClient, self).__init__() self.provider = sakuracloud.Provider({ + 'provider_name': 'sakuracloud', 'auth_token': api_token, 'auth_secret': api_secret, 'ttl': ttl, diff --git a/certbot/plugins/dns_common_lexicon.py b/certbot/plugins/dns_common_lexicon.py index 7a97fc950c3..f9610b8162e 100644 --- a/certbot/plugins/dns_common_lexicon.py +++ b/certbot/plugins/dns_common_lexicon.py @@ -68,7 +68,12 @@ def _find_domain_id(self, domain): for domain_name in domain_name_guesses: try: - self.provider.options['domain'] = domain_name + if hasattr(self.provider, 'options'): + # For Lexicon 2.x + self.provider.options['domain'] = domain_name + else: + # For Lexicon 3.x + self.provider.domain = domain_name self.provider.authenticate()
Lexicon will soon move to a new major version after my PR AnalogJ/lexicon#302 is merged. As discussed in #6472, this implies breaking changes that need to be integrated in the dns plugins that are backed by Lexicon. This PR ensures that these plugins will work with both Lexicon 2.x and Lexicon 3.x. Tested on the live API with the OVH plugin on both Lexicon versions.
https://api.github.com/repos/certbot/certbot/pulls/6474
2018-11-05T21:23:57Z
2018-11-05T22:07:10Z
2018-11-05T22:07:10Z
2018-11-05T22:10:31Z
1,935
certbot/certbot
469
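The certbot change above uses `hasattr` feature detection to stay compatible across a dependency's major versions without pinning a version number. A generic sketch of the pattern (the two provider classes are stand-ins, not Lexicon's real API):

```python
class ProviderV2:
    """Old-style API: configuration lives in a mutable options dict."""
    def __init__(self):
        self.options = {"domain": None}

class ProviderV3:
    """New-style API: configuration is a plain attribute."""
    def __init__(self):
        self.domain = None

def set_domain(provider, domain):
    # Detect the old interface by the presence of its options dict,
    # so one code path serves both major versions.
    if hasattr(provider, "options"):
        provider.options["domain"] = domain  # 2.x-style
    else:
        provider.domain = domain             # 3.x-style

old, new = ProviderV2(), ProviderV3()
set_domain(old, "example.com")
set_domain(new, "example.com")
assert old.options["domain"] == "example.com"
assert new.domain == "example.com"
```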
Added Convolutional Autoencoders for data assimilation reference
diff --git a/README.md b/README.md index fb9f16fe..d4863604 100644 --- a/README.md +++ b/README.md @@ -1236,6 +1236,7 @@ be * [MiniGrad](https://github.com/kennysong/minigrad) – A minimal, educational, Pythonic implementation of autograd (~100 loc). * [Map/Reduce implementations of common ML algorithms](https://github.com/Yannael/BigDataAnalytics_INFOH515): Jupyter notebooks that cover how to implement from scratch different ML algorithms (ordinary least squares, gradient descent, k-means, alternating least squares), using Python NumPy, and how to then make these implementations scalable using Map/Reduce and Spark. * [BioPy](https://github.com/jaredthecoder/BioPy) - Biologically-Inspired and Machine Learning Algorithms in Python. **[Deprecated]** +* [CAEs for Data Assimilation](https://github.com/julianmack/Data_Assimilation) - Convolutional autoencoders for 3D image/field compression applied to reduced order [Data Assimilation](https://en.wikipedia.org/wiki/Data_assimilation). * [SVM Explorer](https://github.com/plotly/dash-svm) - Interactive SVM Explorer, using Dash and scikit-learn * [pattern_classification](https://github.com/rasbt/pattern_classification) * [thinking stats 2](https://github.com/Wavelets/ThinkStats2)
Adds a link to an application of deep learning to the numerical modelling problem of [Data Assimilation](https://en.wikipedia.org/wiki/Data_assimilation). A recently published [paper](https://arxiv.org/abs/2101.02121) accompanies the code.
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/767
2021-01-07T10:17:55Z
2021-01-13T14:44:33Z
2021-01-13T14:44:33Z
2021-01-13T14:44:33Z
325
josephmisiti/awesome-machine-learning
52,400
fixed the variable types in mailsender documentation
diff --git a/docs/topics/email.rst b/docs/topics/email.rst index e73c7475360..d995894138e 100644 --- a/docs/topics/email.rst +++ b/docs/topics/email.rst @@ -63,10 +63,10 @@ uses `Twisted non-blocking IO`_, like the rest of the framework. :type smtpport: int :param smtptls: enforce using SMTP STARTTLS - :type smtpport: boolean + :type smtptls: boolean :param smtpssl: enforce using a secure SSL connection - :type smtpport: boolean + :type smtpssl: boolean .. classmethod:: from_settings(settings)
The MailSender documentation had a minor error: `smtpport` was displayed as being boolean and `smtptls` and `smtpssl` did not have a type given.
https://api.github.com/repos/scrapy/scrapy/pulls/976
2014-12-10T21:49:59Z
2014-12-10T22:02:34Z
2014-12-10T22:02:34Z
2014-12-10T22:02:39Z
159
scrapy/scrapy
34,453
feat(dashboards): Add dashboard widget resizing flag
diff --git a/src/sentry/conf/server.py b/src/sentry/conf/server.py index 9b3a3b26044e5..8e8bf8022c41c 100644 --- a/src/sentry/conf/server.py +++ b/src/sentry/conf/server.py @@ -1000,6 +1000,8 @@ def create_partitioned_queues(name): "organizations:sentry-app-debugging": False, # Enable data forwarding functionality for organizations. "organizations:data-forwarding": True, + # Enable widget resizing in dashboards + "organizations:dashboard-widget-resizing": False, # Enable readonly dashboards "organizations:dashboards-basic": True, # Enable custom editable dashboards diff --git a/src/sentry/features/__init__.py b/src/sentry/features/__init__.py index 4aa046c7d578b..cc56ae79e96a4 100644 --- a/src/sentry/features/__init__.py +++ b/src/sentry/features/__init__.py @@ -64,6 +64,7 @@ default_manager.add("organizations:crash-rate-alerts", OrganizationFeature, True) default_manager.add("organizations:custom-event-title", OrganizationFeature) default_manager.add("organizations:custom-symbol-sources", OrganizationFeature) +default_manager.add("organizations:dashboard-widget-resizing", OrganizationFeature, True) default_manager.add("organizations:dashboards-basic", OrganizationFeature) default_manager.add("organizations:dashboards-edit", OrganizationFeature) default_manager.add("organizations:widget-library", OrganizationFeature, True)
https://api.github.com/repos/getsentry/sentry/pulls/30000
2021-11-12T20:50:29Z
2021-11-15T15:12:11Z
2021-11-15T15:12:11Z
2021-12-01T00:01:49Z
336
getsentry/sentry
44,206
adding `_scheme` parameter to `url_for`
diff --git a/flask/helpers.py b/flask/helpers.py index 6aea45c61a..a1c09cc57a 100644 --- a/flask/helpers.py +++ b/flask/helpers.py @@ -229,6 +229,9 @@ def external_url_handler(error, endpoint, **values): that this is for building URLs outside the current application, and not for handling 404 NotFound errors. + .. versionadded:: 0.10 + The `_scheme` parameter was added. + .. versionadded:: 0.9 The `_anchor` and `_method` parameters were added. @@ -241,6 +244,8 @@ def external_url_handler(error, endpoint, **values): :param _external: if set to `True`, an absolute URL is generated. Server address can be changed via `SERVER_NAME` configuration variable which defaults to `localhost`. + :param _scheme: a string specifying the desired URL scheme. The `_external` + parameter must be set to `True` or a `ValueError` is raised. :param _anchor: if provided this is added as anchor to the URL. :param _method: if provided this explicitly specifies an HTTP method. """ @@ -283,7 +288,14 @@ def external_url_handler(error, endpoint, **values): anchor = values.pop('_anchor', None) method = values.pop('_method', None) + scheme = values.pop('_scheme', None) appctx.app.inject_url_defaults(endpoint, values) + + if scheme is not None: + if not external: + raise ValueError('When specifying _scheme, _external must be True') + url_adapter.url_scheme = scheme + try: rv = url_adapter.build(endpoint, values, method=method, force_external=external) diff --git a/flask/testsuite/helpers.py b/flask/testsuite/helpers.py index 31f0dcb44a..fdf2d89fd6 100644 --- a/flask/testsuite/helpers.py +++ b/flask/testsuite/helpers.py @@ -397,6 +397,28 @@ def index(): self.assert_equal(flask.url_for('index', _anchor='x y'), '/#x%20y') + def test_url_for_with_scheme(self): + app = flask.Flask(__name__) + @app.route('/') + def index(): + return '42' + with app.test_request_context(): + self.assert_equal(flask.url_for('index', + _external=True, + _scheme='https'), + 'https://localhost/') + + def 
test_url_for_with_scheme_not_external(self): + app = flask.Flask(__name__) + @app.route('/') + def index(): + return '42' + with app.test_request_context(): + self.assert_raises(ValueError, + flask.url_for, + 'index', + _scheme='https') + def test_url_with_method(self): from flask.views import MethodView app = flask.Flask(__name__)
In order to better facilitate generation of URLs that make use of an HTTPS URL scheme, this patch adds a parameter with this specific purpose in mind. To achieve this we explicitly pass in a param, `_scheme='https'`, and then set the `url_scheme` attribute of our `MapAdapter` instance appropriately. Importantly, `_external=True` must be set in order for this to work properly; failure to do so results in a `ValueError` being raised.
https://api.github.com/repos/pallets/flask/pulls/667
2013-01-17T23:13:13Z
2013-01-25T02:57:45Z
2013-01-25T02:57:45Z
2020-11-14T07:18:41Z
672
pallets/flask
20,082
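The flask change above pairs the new keyword with a validation guard that rejects an inconsistent combination of options. The guard pattern in isolation (a stand-in URL builder, not Flask's real `url_for`):

```python
def build_url(endpoint, external=False, scheme=None):
    # A scheme override only makes sense for absolute URLs, so an
    # explicit scheme without external=True is an error.
    if scheme is not None and not external:
        raise ValueError("When specifying _scheme, _external must be True")
    host = "localhost"
    if external:
        return f"{scheme or 'http'}://{host}/{endpoint}"
    return f"/{endpoint}"

assert build_url("index", external=True, scheme="https") == "https://localhost/index"
assert build_url("index") == "/index"
try:
    build_url("index", scheme="https")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```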
website: Fix Header.stories.js rendering add FlagsProvider
diff --git a/website/src/components/Header/Header.stories.jsx b/website/src/components/Header/Header.stories.jsx index c3c6101828..5570f6fa74 100644 --- a/website/src/components/Header/Header.stories.jsx +++ b/website/src/components/Header/Header.stories.jsx @@ -1,5 +1,6 @@ import { SessionContext } from "next-auth/react"; import React from "react"; +import { FlagsProvider } from "react-feature-flags"; import { Header } from "./Header"; @@ -16,7 +17,9 @@ const Template = (args) => { var { session } = args; return ( <SessionContext.Provider value={session}> - <Header {...args} /> + <FlagsProvider value={[{ name: "flagTest", isActive: false }]}> + <Header {...args} /> + </FlagsProvider> </SessionContext.Provider> ); };
Hi! The story at `src/components/Header/Header.stories.jsx` is not loading correctly when running storybook. It needs to be wrapped in the `FlagsProvider` as it is using the Flags component. If the `Flags` component is going to be used in more places it would be advisable to write a storybook decorator for it. Please let me know if you have any questions or concerns. Thank you,
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/688
2023-01-13T17:47:16Z
2023-01-13T23:15:00Z
2023-01-13T23:15:00Z
2023-01-13T23:15:05Z
205
LAION-AI/Open-Assistant
37,463
modified to avoid type's mismatch and make like rest of the code
diff --git a/g4f/Provider/Bing.py b/g4f/Provider/Bing.py index f8b06dd1f8..aa1b37b0b9 100644 --- a/g4f/Provider/Bing.py +++ b/g4f/Provider/Bing.py @@ -462,7 +462,7 @@ async def stream_generate( response_txt = card.get('text') if message.get('messageType') and "inlines" in card: inline_txt = card['inlines'][0].get('text') - response_txt += inline_txt + '\n' + response_txt += f"{inline_txt}\n" elif message.get('contentType') == "IMAGE": prompt = message.get('text') try:
https://api.github.com/repos/xtekky/gpt4free/pulls/1744
2024-03-22T17:36:23Z
2024-03-23T10:44:00Z
2024-03-23T10:44:00Z
2024-03-23T10:44:12Z
167
xtekky/gpt4free
37,848
fix download_loader
diff --git a/CHANGELOG.md b/CHANGELOG.md index 442d6b10c722f..908c950ccb8bb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,11 @@ # ChangeLog +## Unreleased + +### Bug Fixes / Nits + +- Fixed errors about "no host supplied" with `download_loader` (#8723) + ## [0.8.62.post1] - 2023-11-05 ### Breaking Changes diff --git a/llama_index/download/download_utils.py b/llama_index/download/download_utils.py index 9b050a965274c..e30eaf29069d5 100644 --- a/llama_index/download/download_utils.py +++ b/llama_index/download/download_utils.py @@ -110,8 +110,6 @@ def get_module_info( """Get module info.""" if isinstance(local_dir_path, str): local_dir_path = Path(local_dir_path) - if isinstance(remote_dir_path, str): - remote_dir_path = Path(remote_dir_path) local_library_path = f"{local_dir_path}/{library_path}]" module_id = None # e.g. `web/simple_web` @@ -161,8 +159,6 @@ def download_module_and_reqs( """Load module.""" if isinstance(local_dir_path, str): local_dir_path = Path(local_dir_path) - if isinstance(remote_dir_path, str): - remote_dir_path = Path(remote_dir_path) module_path = f"{local_dir_path}/{module_id}" if refresh_cache or not os.path.exists(module_path):
`download_loader` should not convert the remote path to a `Path` object; it was converting `https://....` to `https:/...`
https://api.github.com/repos/run-llama/llama_index/pulls/8723
2023-11-07T03:26:32Z
2023-11-07T03:35:09Z
2023-11-07T03:35:09Z
2023-11-07T03:35:09Z
367
run-llama/llama_index
6,358
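The llama_index bug above comes from `pathlib` normalization: converting a URL to a `Path` collapses the double slash after the scheme, leaving an unusable address. The behavior is easy to reproduce (using `PurePosixPath` so the demonstration is platform-independent):

```python
from pathlib import PurePosixPath

url = "https://example.com/loaders/web"

# Path normalization collapses repeated slashes, mangling the URL --
# which is why the remote path must stay a plain string.
mangled = str(PurePosixPath(url))
assert mangled == "https:/example.com/loaders/web"

# The fix in the diff: reserve Path objects for genuine filesystem
# paths and leave remote locations untouched.
assert url.startswith("https://")
```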
Changed name of directory to save file to
diff --git a/docs/intro/tutorial.rst b/docs/intro/tutorial.rst index ff92d701f66..42d6d0d2297 100644 --- a/docs/intro/tutorial.rst +++ b/docs/intro/tutorial.rst @@ -121,7 +121,7 @@ define the three main, mandatory, attributes: objects) and more URLs to follow (as :class:`~scrapy.http.Request` objects). This is the code for our first Spider; save it in a file named -``dmoz_spider.py`` under the ``dmoz/spiders`` directory:: +``dmoz_spider.py`` under the ``tutorial/spiders`` directory:: from scrapy.spider import BaseSpider
The tutorial says to save to `dmoz/spiders`, but this folder does not exist. I have changed it to `tutorial/spiders`, which matches the project created earlier in the tutorial.
https://api.github.com/repos/scrapy/scrapy/pulls/375
2013-08-27T07:41:22Z
2013-08-27T13:46:34Z
2013-08-27T13:46:34Z
2014-07-08T04:08:34Z
158
scrapy/scrapy
34,616
Fix help typo
diff --git a/letsencrypt/cli.py b/letsencrypt/cli.py index 58250d75efd..2dc3de3d3ea 100644 --- a/letsencrypt/cli.py +++ b/letsencrypt/cli.py @@ -652,7 +652,7 @@ def prepare_and_parse_args(plugins, args, detect_defaults=False): helpful.add( "automation", "-q", "--quiet", dest="quiet", action="store_true", help="Silence all output except errors. Useful for automation via cron." - "Implies --non-interactive.") + " Implies --non-interactive.") helpful.add_group( "testing", description="The following flags are meant for "
https://api.github.com/repos/certbot/certbot/pulls/2774
2016-04-06T02:01:57Z
2016-04-06T21:18:29Z
2016-04-06T21:18:29Z
2016-05-06T19:22:33Z
155
certbot/certbot
3,200
Clean up mountain car environment
diff --git a/gym/envs/classic_control/mountain_car.py b/gym/envs/classic_control/mountain_car.py index 5dc7aa47be0..43421e5d937 100644 --- a/gym/envs/classic_control/mountain_car.py +++ b/gym/envs/classic_control/mountain_car.py @@ -2,7 +2,6 @@ http://incompleteideas.net/sutton/MountainCar/MountainCar1.cp permalink: https://perma.cc/6Z2N-PFWC """ - import math import numpy as np @@ -11,6 +10,7 @@ from gym import spaces from gym.utils import seeding + class MountainCarEnv(gym.Env): """ Description: @@ -47,33 +47,39 @@ class MountainCarEnv(gym.Env): [-0.6 , -0.4]. The starting velocity of the car is always assigned to 0. - Episode Termination: + Episode Termination: The car position is more than 0.5 Episode length is greater than 200 - """ - + """ + metadata = { 'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 30 } - def __init__(self, goal_velocity = 0): + def __init__(self, goal_velocity=0): self.min_position = -1.2 self.max_position = 0.6 self.max_speed = 0.07 self.goal_position = 0.5 self.goal_velocity = goal_velocity - - self.force=0.001 - self.gravity=0.0025 - self.low = np.array([self.min_position, -self.max_speed], dtype=np.float32) - self.high = np.array([self.max_position, self.max_speed], dtype=np.float32) + self.force = 0.001 + self.gravity = 0.0025 + + self.low = np.array( + [self.min_position, -self.max_speed], dtype=np.float32 + ) + self.high = np.array( + [self.max_position, self.max_speed], dtype=np.float32 + ) self.viewer = None self.action_space = spaces.Discrete(3) - self.observation_space = spaces.Box(self.low, self.high, dtype=np.float32) + self.observation_space = spaces.Box( + self.low, self.high, dtype=np.float32 + ) self.seed() @@ -85,13 +91,16 @@ def step(self, action): assert self.action_space.contains(action), "%r (%s) invalid" % (action, type(action)) position, velocity = self.state - velocity += (action-1)*self.force + math.cos(3*position)*(-self.gravity) + velocity += (action - 1) * self.force + math.cos(3 * position) 
* (-self.gravity) velocity = np.clip(velocity, -self.max_speed, self.max_speed) position += velocity position = np.clip(position, self.min_position, self.max_position) - if (position==self.min_position and velocity<0): velocity = 0 + if (position == self.min_position and velocity < 0): + velocity = 0 - done = bool(position >= self.goal_position and velocity >= self.goal_velocity) + done = bool( + position >= self.goal_position and velocity >= self.goal_velocity + ) reward = -1.0 self.state = (position, velocity) @@ -102,24 +111,23 @@ def reset(self): return np.array(self.state) def _height(self, xs): - return np.sin(3 * xs)*.45+.55 + return np.sin(3 * xs) * .45 + .55 def render(self, mode='human'): screen_width = 600 screen_height = 400 world_width = self.max_position - self.min_position - scale = screen_width/world_width - carwidth=40 - carheight=20 - + scale = screen_width / world_width + carwidth = 40 + carheight = 20 if self.viewer is None: from gym.envs.classic_control import rendering self.viewer = rendering.Viewer(screen_width, screen_height) xs = np.linspace(self.min_position, self.max_position, 100) ys = self._height(xs) - xys = list(zip((xs-self.min_position)*scale, ys*scale)) + xys = list(zip((xs - self.min_position) * scale, ys * scale)) self.track = rendering.make_polyline(xys) self.track.set_linewidth(4) @@ -127,40 +135,49 @@ def render(self, mode='human'): clearance = 10 - l,r,t,b = -carwidth/2, carwidth/2, carheight, 0 - car = rendering.FilledPolygon([(l,b), (l,t), (r,t), (r,b)]) + l, r, t, b = -carwidth / 2, carwidth / 2, carheight, 0 + car = rendering.FilledPolygon([(l, b), (l, t), (r, t), (r, b)]) car.add_attr(rendering.Transform(translation=(0, clearance))) self.cartrans = rendering.Transform() car.add_attr(self.cartrans) self.viewer.add_geom(car) - frontwheel = rendering.make_circle(carheight/2.5) + frontwheel = rendering.make_circle(carheight / 2.5) frontwheel.set_color(.5, .5, .5) - 
frontwheel.add_attr(rendering.Transform(translation=(carwidth/4,clearance))) + frontwheel.add_attr( + rendering.Transform(translation=(carwidth / 4, clearance)) + ) frontwheel.add_attr(self.cartrans) self.viewer.add_geom(frontwheel) - backwheel = rendering.make_circle(carheight/2.5) - backwheel.add_attr(rendering.Transform(translation=(-carwidth/4,clearance))) + backwheel = rendering.make_circle(carheight / 2.5) + backwheel.add_attr( + rendering.Transform(translation=(-carwidth / 4, clearance)) + ) backwheel.add_attr(self.cartrans) backwheel.set_color(.5, .5, .5) self.viewer.add_geom(backwheel) - flagx = (self.goal_position-self.min_position)*scale - flagy1 = self._height(self.goal_position)*scale + flagx = (self.goal_position-self.min_position) * scale + flagy1 = self._height(self.goal_position) * scale flagy2 = flagy1 + 50 flagpole = rendering.Line((flagx, flagy1), (flagx, flagy2)) self.viewer.add_geom(flagpole) - flag = rendering.FilledPolygon([(flagx, flagy2), (flagx, flagy2-10), (flagx+25, flagy2-5)]) - flag.set_color(.8,.8,0) + flag = rendering.FilledPolygon( + [(flagx, flagy2), (flagx, flagy2 - 10), (flagx + 25, flagy2 - 5)] + ) + flag.set_color(.8, .8, 0) self.viewer.add_geom(flag) pos = self.state[0] - self.cartrans.set_translation((pos-self.min_position)*scale, self._height(pos)*scale) + self.cartrans.set_translation( + (pos-self.min_position) * scale, self._height(pos) * scale + ) self.cartrans.set_rotation(math.cos(3 * pos)) - return self.viewer.render(return_rgb_array = mode=='rgb_array') - + return self.viewer.render(return_rgb_array=mode == 'rgb_array') + def get_keys_to_action(self): - return {():1,(276,):0,(275,):2,(275,276):1} #control with left and right arrow keys - + # Control with left and right arrow keys. + return {(): 1, (276,): 0, (275,): 2, (275, 276): 1} + def close(self): if self.viewer: self.viewer.close()
https://api.github.com/repos/openai/gym/pulls/1961
2020-06-17T07:53:31Z
2020-06-19T21:13:16Z
2020-06-19T21:13:16Z
2020-06-19T21:13:16Z
1,872
openai/gym
5,600
clear stream info
diff --git a/src/you_get/extractors/bilibili.py b/src/you_get/extractors/bilibili.py index 49334d5bd7..c61a0567d0 100644 --- a/src/you_get/extractors/bilibili.py +++ b/src/you_get/extractors/bilibili.py @@ -141,6 +141,8 @@ def url_size(url, faker=False, headers={},err_value=0): def prepare(self, **kwargs): self.stream_qualities = {s['quality']: s for s in self.stream_types} + self.streams.clear() + self.dash_streams.clear() try: html_content = get_content(self.url, headers=self.bilibili_headers(referer=self.url))
When downloading videos consecutively, e.g. with `you-get -I download.txt ...`, the previous video's stream info is not cleared, which may cause the wrong video to be downloaded.
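A hypothetical toy model of the failure mode (not the actual extractor code): when one extractor instance is reused across URLs, dict entries from the previous video survive unless they are cleared at the top of `prepare`, exactly what the patch adds.

```python
class Extractor:
    """Toy model of an extractor instance reused across downloads."""

    def __init__(self):
        self.streams = {}

    def prepare(self, available_streams):
        # The fix: drop stream info left over from the previous URL.
        # Without this line, max() below could return a quality id
        # belonging to the previously processed video.
        self.streams.clear()
        self.streams.update(available_streams)
        return max(self.streams)  # pick the highest quality id

extractor = Extractor()
first = extractor.prepare({112: "video-1-1080p", 80: "video-1-720p"})
second = extractor.prepare({64: "video-2-480p"})
print(first, second)  # 112 64 (without clear(), second would be 112)
```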
https://api.github.com/repos/soimort/you-get/pulls/2956
2022-04-07T19:11:30Z
2022-04-19T14:58:25Z
2022-04-19T14:58:25Z
2022-04-20T02:05:28Z
170
soimort/you-get
21,276
Support dynamic port on Lambda remote debugging
diff --git a/localstack/utils/container_utils/container_client.py b/localstack/utils/container_utils/container_client.py index 9cdb3d6c700a8..d0e5830444b88 100644 --- a/localstack/utils/container_utils/container_client.py +++ b/localstack/utils/container_utils/container_client.py @@ -166,7 +166,7 @@ def add( else: self.add(port[0] + i, mapped, protocol) return - if port is None or int(port) <= 0: + if port is None or int(port) < 0: raise Exception(f"Unable to add mapping for invalid port: {port}") if self.contains(port, protocol): return @@ -242,6 +242,14 @@ def entry(k, v): def to_dict(self) -> Dict[str, Union[Tuple[str, Union[int, List[int]]], int]]: bind_address = self.bind_host or "" + def bind_port(bind_address, host_port): + if host_port == 0: + return None + elif bind_address: + return (bind_address, host_port) + else: + return host_port + def entry(k, v): from_range, protocol = k to_range = v @@ -258,7 +266,7 @@ def entry(k, v): return [ ( f"{container_port}{protocol_suffix}", - (bind_address, host_port) if bind_address else host_port, + bind_port(bind_address, host_port), ) for container_port, host_port in zip( range(to_range[0], to_range[1] + 1), range(from_range[0], from_range[1] + 1) diff --git a/tests/integration/docker_utils/test_docker.py b/tests/integration/docker_utils/test_docker.py index 064904a6ddfdf..eb9eb85a6f697 100644 --- a/tests/integration/docker_utils/test_docker.py +++ b/tests/integration/docker_utils/test_docker.py @@ -1415,6 +1415,23 @@ def test_run_with_additional_arguments_add_dns( result = docker_client.inspect_container(container_name) assert set(result["HostConfig"]["Dns"]) == {"1.2.3.4", "5.6.7.8"} + def test_run_with_additional_arguments_random_port( + self, docker_client: ContainerClient, create_container + ): + container = create_container( + "alpine", + command=["sh", "-c", "while true; do sleep 1; done"], + additional_flags="-p 0:80", + ) + docker_client.start_container(container.container_id) + inspect_result = 
docker_client.inspect_container( + container_name_or_id=container.container_id + ) + automatic_host_port = int( + inspect_result["NetworkSettings"]["Ports"]["80/tcp"][0]["HostPort"] + ) + assert automatic_host_port > 0 + class TestDockerImages: def test_commit_creates_image_from_running_container(self, docker_client: ContainerClient): diff --git a/tests/unit/test_dockerclient.py b/tests/unit/test_dockerclient.py index 00df5a069d601..f7a4bb78fe16c 100644 --- a/tests/unit/test_dockerclient.py +++ b/tests/unit/test_dockerclient.py @@ -230,6 +230,13 @@ def test_windows_paths(self): flags = Util.parse_additional_flags(argument_string) assert flags.mounts == [("/var/test", "/var/task")] + def test_random_ports(self): + argument_string = r"-p 0:80" + ports = PortMappings() + Util.parse_additional_flags(argument_string, ports=ports) + assert ports.to_str() == "-p 0:80" + assert ports.to_dict() == {"80/tcp": None} + def list_in(a, b): return len(a) <= len(b) and any( diff --git a/tests/unit/test_misc.py b/tests/unit/test_misc.py index 5ba65475afaa5..98ff9a7aceafa 100644 --- a/tests/unit/test_misc.py +++ b/tests/unit/test_misc.py @@ -63,6 +63,10 @@ def test_port_mappings(self): map.add([234, 237], [345, 348]) self.assertEqual("-p 123-124:123-124 -p 234-237:345-348", map.to_str()) + map = PortMappings() + map.add(0, 123) + self.assertEqual("-p 0:123", map.to_str()) + def test_port_mappings_single_protocol(self): map = PortMappings() map.add(port=53, protocol="udp") @@ -110,6 +114,15 @@ def test_port_mappings_dict(self): map.to_dict(), ) + map = PortMappings() + map.add(port=0, mapped=123, protocol="tcp") + self.assertEqual( + { + "123/tcp": None, + }, + map.to_dict(), + ) + def test_port_mappings_list(self): map = PortMappings() map.add(port=[122, 124], protocol="tcp") @@ -119,6 +132,10 @@ def test_port_mappings_list(self): map.add(port=[124, 126], protocol="udp") self.assertEqual(["-p", "122-125:122-125", "-p", "123-126:123-126/udp"], map.to_list()) + map = PortMappings() + 
map.add(port=0, mapped=123, protocol="tcp") + self.assertEqual(["-p", "0:123"], map.to_list()) + def test_update_config_variable(self): config_listener.update_config_variable("foo", "bar") self.assertEqual("bar", config.foo)
## Motivation

This is a solution for #9419. The differences in behavior are as follows.

### Before

1. Expose a fixed port with the `LAMBDA_DOCKER_FLAGS` setting.
```
LAMBDA_DOCKER_FLAGS=-e NODE_OPTIONS=--inspect=0.0.0.0:9229 -p 9229:9229
```
2. The first function execution succeeds and the container exposes the fixed port. ![image](https://github.com/localstack/localstack/assets/77612853/9ff41af3-8e88-4515-aae9-b9ceb25213b1)
3. The second function execution fails because binding to the same port again raises an error. ![image](https://github.com/localstack/localstack/assets/77612853/e9ef3b0c-122a-4c74-9914-824e17013b8a)

### After

1. Expose a dynamic port with the `LAMBDA_DOCKER_FLAGS` setting.
```
LAMBDA_DOCKER_FLAGS=-e NODE_OPTIONS=--inspect=0.0.0.0:9229 -p 0:9229
```
2. The first function execution succeeds and the container exposes a dynamic port. ![image](https://github.com/localstack/localstack/assets/77612853/bf37c588-8c2b-4ea2-af07-2ae3468cf20a)
3. The second function execution also succeeds and the container exposes another dynamic port. ![image](https://github.com/localstack/localstack/assets/77612853/7cd8e073-6c8d-4acb-8812-df00e7be78e1)

## Changes

It is now possible to run multiple Lambda functions with remote debugging configured.
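The core of the change is how a host port of `0` is translated into the port mapping: instead of binding a fixed host port, the mapping value becomes `None`, which in the docker-py style `ports` dict asks Docker to publish the container port on a random free host port. A simplified sketch of the helper, mirroring the diff:

```python
def bind_port(bind_address, host_port):
    """Value for a docker-py style ports dict, e.g. {"80/tcp": value}.

    host_port == 0 means "let Docker choose", expressed as None.
    """
    if host_port == 0:
        return None
    if bind_address:
        return (bind_address, host_port)
    return host_port

assert bind_port("", 0) is None                        # -p 0:9229 -> random host port
assert bind_port("", 9229) == 9229                     # -p 9229:9229
assert bind_port("0.0.0.0", 9229) == ("0.0.0.0", 9229) # bound to an address
```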
https://api.github.com/repos/localstack/localstack/pulls/9420
2023-10-20T10:09:54Z
2024-01-11T12:48:49Z
2024-01-11T12:48:49Z
2024-01-14T06:45:27Z
1,292
localstack/localstack
29,120
Added flow marking functionality in the console
diff --git a/libmproxy/console/__init__.py b/libmproxy/console/__init__.py index 052ac7ddaa..3d20947b9a 100644 --- a/libmproxy/console/__init__.py +++ b/libmproxy/console/__init__.py @@ -48,6 +48,7 @@ def add_flow(self, f): self.set_focus(0) elif self.follow_focus: self.set_focus(len(self.view) - 1) + self.set_flow_marked(f, False) return f def update_flow(self, f): @@ -100,9 +101,29 @@ def delete_flow(self, f): return ret def clear(self): - self.focus = None + marked_flows = [] + for f in self.flows: + if self.flow_marked(f): + marked_flows.append(f) + super(ConsoleState, self).clear() - + + for f in marked_flows: + self.add_flow(f) + self.set_flow_marked(f, True) + + if len(self.flows.views) == 0: + self.focus = None + else: + self.focus = 0 + self.set_focus(self.focus) + + def flow_marked(self, flow): + return self.get_flow_setting(flow, "marked", False) + + def set_flow_marked(self, flow, marked): + self.add_flow_setting(flow, "marked", marked) + class Options(object): attributes = [ @@ -591,6 +612,13 @@ def save_one_flow(self, path, flow): def save_flows(self, path): return self._write_flows(path, self.state.view) + + def save_marked_flows(self, path): + marked_flows = [] + for f in self.state.view: + if self.state.flow_marked(f): + marked_flows.append(f) + return self._write_flows(path, marked_flows) def load_flows_callback(self, path): if not path: diff --git a/libmproxy/console/common.py b/libmproxy/console/common.py index e5bebf7f27..90bccfe74e 100644 --- a/libmproxy/console/common.py +++ b/libmproxy/console/common.py @@ -115,9 +115,11 @@ def fcol(s, attr): if urwid.util.detected_encoding: SYMBOL_REPLAY = u"\u21ba" SYMBOL_RETURN = u"\u2190" + SYMBOL_MARK = u"\u25cf" else: SYMBOL_REPLAY = u"[r]" SYMBOL_RETURN = u"<-" + SYMBOL_MARK = "[m]" def raw_format_flow(f, focus, extended, padding): @@ -133,6 +135,10 @@ def raw_format_flow(f, focus, extended, padding): ) else: req.append(fcol(">>" if focus else " ", "focus")) + + if f["marked"]: + 
req.append(fcol(SYMBOL_MARK, "mark")) + if f["req_is_replay"]: req.append(fcol(SYMBOL_REPLAY, "replay")) req.append(fcol(f["req_method"], "method")) @@ -372,7 +378,8 @@ def ask_save_body(part, master, state, flow): flowcache = utils.LRUCache(800) -def format_flow(f, focus, extended=False, hostheader=False, padding=2): +def format_flow(f, focus, extended=False, hostheader=False, padding=2, + marked=False): d = dict( intercepted = f.intercepted, acked = f.reply.acked, @@ -384,6 +391,8 @@ def format_flow(f, focus, extended=False, hostheader=False, padding=2): err_msg = f.error.msg if f.error else None, resp_code = f.response.code if f.response else None, + + marked = marked, ) if f.response: if f.response.content: diff --git a/libmproxy/console/flowlist.py b/libmproxy/console/flowlist.py index 3924598444..bb23df75a8 100644 --- a/libmproxy/console/flowlist.py +++ b/libmproxy/console/flowlist.py @@ -17,9 +17,11 @@ def _mkhelp(): ("F", "toggle follow flow list"), ("l", "set limit filter pattern"), ("L", "load saved flows"), + ("m", "toggle flow mark"), ("n", "create a new request"), ("P", "copy flow to clipboard"), ("r", "replay request"), + ("U", "unmark all marked flows"), ("V", "revert changes to request"), ("w", "save flows "), ("W", "stream flows to file"), @@ -108,7 +110,8 @@ def get_text(self): return common.format_flow( self.flow, self.f, - hostheader = self.master.showhost + hostheader = self.master.showhost, + marked=self.state.flow_marked(self.flow) ) def selectable(self): @@ -120,6 +123,11 @@ def save_flows_prompt(self, k): prompt = "Save all flows to", callback = self.master.save_flows ) + elif k == "m": + signals.status_prompt_path.send( + prompt = "Save marked flows to", + callback = self.master.save_marked_flows + ) else: signals.status_prompt_path.send( prompt = "Save this flow to", @@ -177,6 +185,12 @@ def keypress(self, xxx_todo_changeme, key): elif key == "D": f = self.master.duplicate_flow(self.flow) self.master.view_flow(f) + elif key == "m": + if 
self.state.flow_marked(self.flow): + self.state.set_flow_marked(self.flow, False) + else: + self.state.set_flow_marked(self.flow, True) + signals.flowlist_change.send(self) elif key == "r": r = self.master.replay_request(self.flow) if r: @@ -202,6 +216,10 @@ def keypress(self, xxx_todo_changeme, key): ), callback = self.stop_server_playback_prompt, ) + elif key == "U": + for f in self.state.flows: + self.state.set_flow_marked(f, False) + signals.flowlist_change.send(self) elif key == "V": if not self.flow.modified(): signals.status_message.send(message="Flow not modified.") @@ -216,6 +234,7 @@ def keypress(self, xxx_todo_changeme, key): keys = ( ("all flows", "a"), ("this flow", "t"), + ("marked flows", "m"), ), callback = self.save_flows_prompt, ) diff --git a/libmproxy/console/palettes.py b/libmproxy/console/palettes.py index ea3d1b6277..d897a0a288 100644 --- a/libmproxy/console/palettes.py +++ b/libmproxy/console/palettes.py @@ -24,7 +24,7 @@ class Palette: 'method', 'focus', 'code_200', 'code_300', 'code_400', 'code_500', 'code_other', 'error', - 'header', 'highlight', 'intercept', 'replay', + 'header', 'highlight', 'intercept', 'replay', 'mark', # Hex view 'offset', @@ -104,6 +104,7 @@ class LowDark(Palette): highlight = ('white,bold', 'default'), intercept = ('brown', 'default'), replay = ('light green', 'default'), + mark = ('light red', 'default'), # Hex view offset = ('dark cyan', 'default'), @@ -167,6 +168,7 @@ class LowLight(Palette): highlight = ('black,bold', 'default'), intercept = ('brown', 'default'), replay = ('dark green', 'default'), + mark = ('dark red', 'default'), # Hex view offset = ('dark blue', 'default'),
Some functionality in https://github.com/mitmproxy/mitmproxy/issues/623 was actually fairly trivial to implement. The following features are included in the flow list view: - Press 'm' to toggle an easily visible mark on the current flow. - When the flow list is cleared with 'C', marked flows are not deleted. - If all flows in the list are marked, marked flows are deleted with 'C'. This means that pressing C twice will clear all flows, including marked ones. The mark is made large to stand out while scrolling, but that can be changed by editing SYMBOL_MARK in libmproxy/console/common.py.
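A simplified model of the preserve-on-clear behavior (not mitmproxy's actual `ConsoleState`, which also carries per-flow settings): clearing drops unmarked flows and keeps the marked ones.

```python
def clear_flows(flows, marked):
    """Simulate pressing 'C': unmarked flows are dropped, marked ones survive."""
    return [f for f in flows if f in marked]

flows = ["flow-a", "flow-b", "flow-c"]
after_clear = clear_flows(flows, marked={"flow-b"})
print(after_clear)  # ['flow-b']
```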
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/624
2015-06-11T17:31:06Z
2015-06-12T01:53:07Z
2015-06-12T01:53:07Z
2015-06-12T02:01:30Z
1,844
mitmproxy/mitmproxy
27,441
Update proxysql_manage_config.py
diff --git a/lib/ansible/modules/database/proxysql/proxysql_manage_config.py b/lib/ansible/modules/database/proxysql/proxysql_manage_config.py index 45d80f6e9b5496..858d7546bf7546 100644 --- a/lib/ansible/modules/database/proxysql/proxysql_manage_config.py +++ b/lib/ansible/modules/database/proxysql/proxysql_manage_config.py @@ -61,7 +61,7 @@ # This example saves the mysql users config from memory to disk. It uses # supplied credentials to connect to the proxysql admin interface. -- proxysql_global_variables: +- proxysql_manage_config: login_user: 'admin' login_password: 'admin' action: "SAVE" @@ -72,7 +72,7 @@ # This example loads the mysql query rules config from memory to to runtime. It # uses supplied credentials to connect to the proxysql admin interface. -- proxysql_global_variables: +- proxysql_manage_config: config_file: '~/proxysql.cnf' action: "LOAD" config_settings: "MYSQL QUERY RULES"
Incorrect example - wrong module used (proxysql_global_variables instead of proxysql_manage_config)

+label: docsite_pr
https://api.github.com/repos/ansible/ansible/pulls/47613
2018-10-25T11:26:42Z
2018-10-26T01:35:02Z
2018-10-26T01:35:02Z
2019-07-22T17:04:54Z
243
ansible/ansible
49,013
fix misformatting of floats with leading zeros
diff --git a/black.py b/black.py index c899bde36db..d81c3548703 100644 --- a/black.py +++ b/black.py @@ -2564,7 +2564,9 @@ def format_int_string(text: str, allow_underscores: bool) -> str: # No underscores for numbers <= 6 digits long. return text - return format(int(text), "3_") + # Avoid removing leading zeros, which are important if we're formatting + # part of a number like "0.001". + return format(int("1" + text), "3_")[1:].lstrip("_") def normalize_invisible_parens(node: Node, parens_after: Set[str]) -> None: diff --git a/tests/data/numeric_literals.py b/tests/data/numeric_literals.py index 8999b9d4038..2dc64c75c8b 100644 --- a/tests/data/numeric_literals.py +++ b/tests/data/numeric_literals.py @@ -14,6 +14,7 @@ x = 0XB1ACC x = 0B1011 x = 0O777 +x = 0.000000006 # output @@ -34,3 +35,4 @@ x = 0xb1acc x = 0b1011 x = 0o777 +x = 0.000_000_006
This is a bug in my previous code: it ended up turning "0.06" into "0.6". The solution in this diff is a bit hacky, but it has the virtue of being correct, unlike the previous code. It might be worth thinking about this part of the style some more: Should we even put underscores in the part of floats behind the decimal point? If so, should we count the groups from the end, like we do now, or from the beginning? (The difference between `0.12_345_678` and `0.123_456_78`.)
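The sentinel trick can be seen in isolation: `int()` drops leading zeros, so the digit string is prefixed with a "1" before underscore grouping, and the sentinel (plus any separator stuck to it) is stripped afterwards. A standalone sketch of the fixed function:

```python
def format_int_string(text: str) -> str:
    """Group digits with underscores while preserving leading zeros."""
    if len(text) <= 6:
        # No underscores for numbers <= 6 digits long.
        return text
    # int("000000006") would collapse to 6; the sentinel "1" keeps the
    # zeros, and [1:].lstrip("_") removes the sentinel and its separator.
    return format(int("1" + text), "3_")[1:].lstrip("_")

print(format_int_string("000000006"))  # 000_000_006 (so 0.000000006 keeps its zeros)
print(format_int_string("123456789"))  # 123_456_789
```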
https://api.github.com/repos/psf/black/pulls/464
2018-08-20T04:14:23Z
2018-08-20T15:19:25Z
2018-08-20T15:19:25Z
2018-08-20T15:19:29Z
315
psf/black
23,654
[Doc][KubeRay] Improve Kueue/KubeRay priority scheduling doc
diff --git a/.vale/styles/Vocab/General/accept.txt b/.vale/styles/Vocab/General/accept.txt index 208355bd18fb4..bf935d68f6987 100644 --- a/.vale/styles/Vocab/General/accept.txt +++ b/.vale/styles/Vocab/General/accept.txt @@ -12,3 +12,6 @@ URI[s] [Pp]arallelization [Pp]erformant [Ss]ubclassing +Kueue +GKE +namespace diff --git a/doc/source/cluster/kubernetes/examples/rayjob-kueue-priority-scheduling.md b/doc/source/cluster/kubernetes/examples/rayjob-kueue-priority-scheduling.md index ade131df9fee9..cc24929f2ed49 100644 --- a/doc/source/cluster/kubernetes/examples/rayjob-kueue-priority-scheduling.md +++ b/doc/source/cluster/kubernetes/examples/rayjob-kueue-priority-scheduling.md @@ -6,7 +6,7 @@ This guide shows how to run [Fine-tune a PyTorch Lightning Text Classifier with ## What's Kueue? -[Kueue](https://kueue.sigs.k8s.io/) is a Kubernetes-native system that manages quotas +[Kueue](https://kueue.sigs.k8s.io/) is a Kubernetes-native job queueing system that manages quotas and how jobs consume them. Kueue decides when: * To make a job wait * To admit a job to start, meaning that Kubernetes creates pods. @@ -16,68 +16,32 @@ Kueue has native support for some KubeRay APIs. Specifically, you can use Kueue to manage resources consumed by RayJob and RayCluster. See the [Kueue documentation](https://kueue.sigs.k8s.io/docs/overview/) to learn more. -## Create a Kubernetes cluster on GKE +## Step 0: Create a Kubernetes cluster on GKE (Optional) If you already have a Kubernetes cluster with GPUs, you can skip this step. +Otherwise, follow [Start Google Cloud GKE Cluster with GPUs for KubeRay](kuberay-gke-gpu-cluster-setup) to set up a Kubernetes cluster on GKE. 
-Create a GKE cluster with autoscaling enabled: -```bash -gcloud container clusters create kuberay-gpu-cluster \ - --num-nodes=1 --min-nodes 0 --max-nodes 1 --enable-autoscaling \ - --zone=us-west1-b --machine-type e2-standard-4 -``` - -Create a GPU node pool: -```bash -gcloud container node-pools create gpu-node-pool \ - --accelerator type=nvidia-l4,count=1,gpu-driver-version=latest \ - --zone us-west1-b \ - --cluster kuberay-gpu-cluster \ - --num-nodes 1 \ - --min-nodes 1 \ - --max-nodes 10 \ - --enable-autoscaling \ - --machine-type g2-standard-4 -``` - -Configure `kubectl` to connect with your cluster: -```bash -gcloud container clusters get-credentials kuberay-gpu-cluster --zone us-west1-b -``` - -See [Start Google Cloud GKE Cluster with GPUs for KubeRay](kuberay-gke-gpu-cluster-setup) for more details on setting up a Kubernetes cluster on GKE. - -## Install the KubeRay operator +## Step 1: Install the KubeRay operator Follow [Deploy a KubeRay operator](kuberay-operator-deploy) to install the latest stable KubeRay operator from the Helm repository. The KubeRay operator Pod must be on the CPU node if you set up the taint for the GPU node pool correctly. -## Install Kueue +## Step 2: Install Kueue -``` -VERSION=v0.6.0-rc.1 +```bash +VERSION=v0.6.0 kubectl apply --server-side -f https://github.com/kubernetes-sigs/kueue/releases/download/$VERSION/manifests.yaml ``` See [Kueue Installation](https://kueue.sigs.k8s.io/docs/installation/#install-a-released-version) for more details on installing Kueue. -## Configure Kueue with priority scheduling - -Next, configure Kueue for a single fine-tuning RayJob to run at a time. -Use priority classes to allow RayJobs with higher priority to preempt those with lower priority. 
+## Step 3: Configure Kueue with priority scheduling -### Create Kueue resources - -This manifest creates the following resources: -* [ClusterQueue](https://kueue.sigs.k8s.io/docs/concepts/cluster_queue/): defines quotas and fair sharing rules -* [LocalQueue](https://kueue.sigs.k8s.io/docs/concepts/local_queue/): a namespaced queue (belonging to a tenant) that references a ClusterQueue -* [ResourceFlavor](https://kueue.sigs.k8s.io/docs/concepts/resource_flavor/): defines what resources are available in the cluster (typically from Nodes) -* [WorkloadPriorityClass](https://kueue.sigs.k8s.io/docs/concepts/workload_priority_class/): defines priority for workloads - -This example configures Kueue with just enough quotas to run a single fine-tuning RayJob. -There are two priority classes `dev-priority` and `prod-priority`. RayJobs using the -`prod-priority` priority class should take precedence and preempt any running RayJobs from the `dev-priority` -priority class. +To understand this tutorial, it's important to understand the following Kueue concepts: +* [ResourceFlavor](https://kueue.sigs.k8s.io/docs/concepts/resource_flavor/) +* [ClusterQueue](https://kueue.sigs.k8s.io/docs/concepts/cluster_queue/) +* [LocalQueue](https://kueue.sigs.k8s.io/docs/concepts/local_queue/) +* [WorkloadPriorityClass](https://kueue.sigs.k8s.io/docs/concepts/workload_priority_class/) ```yaml # kueue-resources.yaml @@ -92,7 +56,6 @@ metadata: name: "cluster-queue" spec: preemption: - reclaimWithinCohort: Any withinClusterQueue: LowerPriority namespaceSelector: {} # Match all namespaces. resourceGroups: @@ -130,12 +93,24 @@ value: 100 description: "Priority class for development jobs" ``` +The YAML manifest configures: + +* **ResourceFlavor** + * The ResourceFlavor `default-flavor` is an empty ResourceFlavor because the compute resources in the Kubernetes cluster are homogeneous. In other words, users can request 1 GPU without considering whether it's an NVIDIA A100 or a T4 GPU. 
+* **ClusterQueue** + * The ClusterQueue `cluster-queue` only has 1 ResourceFlavor `default-flavor` with quotas for 2 CPUs, 8G memory, and 1 GPU. It exactly matches the resources requested by 1 RayJob custom resource. ***Hence, only 1 RayJob can run at a time.*** + * The ClusterQueue `cluster-queue` has a preemption policy `withinClusterQueue: LowerPriority`. This policy allows the pending RayJob that doesn’t fit within the nominal quota for its ClusterQueue to preempt active RayJob custom resources in the ClusterQueue that have lower priority. +* **LocalQueue** + * The LocalQueue `user-queue` is a namespaced object in the `default` namespace which belongs to a ClusterQueue. A typical practice is to assign a namespace to a tenant, team or user, of an organization. Users submit jobs to a LocalQueue, instead of to a ClusterQueue directly. +* **WorkloadPriorityClass** + * The WorkloadPriorityClass `prod-priority` has a higher value than the WorkloadPriorityClass `dev-priority`. This means that RayJob custom resources with the `prod-priority` priority class take precedence over RayJob custom resources with the `dev-priority` priority class. + Create the Kueue resources: ```bash kubectl apply -f kueue-resources.yaml ``` -## Deploy a RayJob +## Step 4: Deploy a RayJob Download the RayJob that executes all the steps documented in [Fine-tune a PyTorch Lightning Text Classifier](https://docs.ray.io/en/master/train/examples/lightning/lightning_cola_advanced.html). The [source code](https://github.com/ray-project/kuberay/tree/master/ray-operator/config/samples/pytorch-text-classifier) is also in the KubeRay repository. @@ -144,9 +119,6 @@ curl -LO https://raw.githubusercontent.com/ray-project/kuberay/master/ray-operat ``` Before creating the RayJob, modify the RayJob metadata with: -* A label to assign the RayJob to the LocalQueue created earlier. -* A label to assign the RayJob with the `dev-priority` priority class. 
-* A modified name to indicate that this job is for development. ```yaml metadata: @@ -156,7 +128,11 @@ metadata: kueue.x-k8s.io/priority-class: dev-priority ``` -Also note the resources required for this RayJob by looking at the resources requested by the nodes: +* `kueue.x-k8s.io/queue-name: user-queue`: As the previous step mentioned, users submit jobs to a LocalQueue instead of directly to a ClusterQueue. +* `kueue.x-k8s.io/priority-class: dev-priority`: Assign the RayJob with the `dev-priority` WorkloadPriorityClass. +* A modified name to indicate that this job is for development. + +Also note the resources required for this RayJob by looking at the resources that the Ray head Pod requests: ```yaml resources: limits: @@ -178,11 +154,11 @@ Verify that the RayCluster and the submitter Kubernetes Job are running: ```bash $ kubectl get pod NAME READY STATUS RESTARTS AGE -dev-pytorch-text-classifier-r6d4p-4nczg 1/1 Running 0 4s -torch-text-classifier-r6d4p-raycluster-br45j-head-8bbwt 1/1 Running 0 34s +dev-pytorch-text-classifier-r6d4p-4nczg 1/1 Running 0 4s # Submitter Kubernetes Job +torch-text-classifier-r6d4p-raycluster-br45j-head-8bbwt 1/1 Running 0 34s # Ray head Pod ``` -Delete the RayJob after verifying that the job has completed successfully: +Delete the RayJob after verifying that the job has completed successfully. ```bash $ kubectl get rayjobs.ray.io dev-pytorch-text-classifier-r6d4p -o jsonpath='{.status.jobStatus}' SUCCEEDED @@ -192,9 +168,10 @@ $ kubectl delete rayjob dev-pytorch-text-classifier-r6d4p rayjob.ray.io "dev-pytorch-text-classifier-r6d4p" deleted ``` -## Queuing multiple RayJob resources +## Step 5: Queuing multiple RayJob resources + +Create 3 RayJob custom resources to see how Kueue interacts with KubeRay to implement job queueing. -Create 3 RayJobs to see how Kueue interacts with KubeRay to implement job queueing. 
```bash $ kubectl create -f ray-job.pytorch-distributed-training.yaml rayjob.ray.io/dev-pytorch-text-classifier-8vg2c created @@ -212,7 +189,7 @@ You can also inspect the `ClusterQueue` to see available and used quotas: $ kubectl get clusterqueue NAME COHORT PENDING WORKLOADS cluster-queue 2 -$ kubectl get clusterqueue cluster-queue -o yaml +$ kubectl get clusterqueue cluster-queue -o yaml apiVersion: kueue.x-k8s.io/v1beta1 kind: ClusterQueue ... @@ -248,14 +225,11 @@ status: reservingWorkloads: 1 # Running workloads that are using quotas. -## Deploy a RayJob with higher priority +## Step 6: Deploy a RayJob with higher priority At this point there are multiple RayJob custom resources queued up but only enough quota to run a single RayJob. Now you can create a new RayJob with higher priority to preempt the already queued RayJob resources. Modify the RayJob with: -* A label to assign the RayJob to the LocalQueue created earlier. -* A label to assign the RayJob with the `prod-priority` priority class. -* A modified name to indicate that this job is for production. ```yaml metadata: @@ -265,6 +239,10 @@ metadata: kueue.x-k8s.io/priority-class: prod-priority ``` +* `kueue.x-k8s.io/queue-name: user-queue`: As the previous step mentioned, users submit jobs to a LocalQueue instead of directly to a ClusterQueue. +* `kueue.x-k8s.io/priority-class: prod-priority`: Assign the RayJob with the `prod-priority` WorkloadPriorityClass. +* A modified name to indicate that this job is for production. + Create the new RayJob: ```sh $ kubectl create -f ray-job.pytorch-distributed-training.yaml @@ -278,50 +256,3 @@ NAME READY STATUS REST prod-pytorch-text-classifier-gkp9b-r9k5r 1/1 Running 0 5s torch-text-classifier-gkp9b-raycluster-s2f65-head-hfvht 1/1 Running 0 35s ``` - -## Increasing cluster quotas - -As a final step, you can control the quotas available by modifying the ClusterQueue. -Increasing the quotas signals to Kueue that it can admit more RayJob resources.
-If no resources are available to schedule new RayJobs, the Kubernetes node autoscaler, -such as GKE Autopilot, adds additional nodes if enabled. You must enable the autoscaler -to have this behavior. -```yaml -apiVersion: kueue.x-k8s.io/v1beta1 -kind: ClusterQueue -metadata: - name: "cluster-queue" -spec: - preemption: - reclaimWithinCohort: Any - withinClusterQueue: LowerPriority - namespaceSelector: {} # Match all namespaces. - resourceGroups: - - coveredResources: ["cpu", "memory", "nvidia.com/gpu"] - flavors: - - name: "default-flavor" - resources: - - name: "cpu" - nominalQuota: 100 - - name: "memory" - nominalQuota: 100Gi - - name: "nvidia.com/gpu" - nominalQuota: 10 # Add more GPU quota. -``` - -RayJob resources queued from the previous steps are now admitted by Kueue: -```bash -$ kubectl get pods -NAME READY STATUS RESTARTS AGE -dev-pytorch-text-classifier-7494p-snvm9 1/1 Running 0 35s -dev-pytorch-text-classifier-nv9h9-gnqvw 1/1 Running 0 23s -dev-pytorch-text-classifier-r8xwd-ksscd 1/1 Running 0 35s -dev-pytorch-text-classifier-vwnwk-qmgxg 1/1 Running 0 35s -prod-pytorch-text-classifier-gkp9b-r9k5r 1/1 Running 0 4m53s -torch-text-classifier-4vd7c-raycluster-jwh8p-head-l29kc 1/1 Running 0 66s -torch-text-classifier-7494p-raycluster-c9xcs-head-4jdqm 1/1 Running 0 66s -torch-text-classifier-gkp9b-raycluster-s2f65-head-hfvht 1/1 Running 0 5m23s -torch-text-classifier-nv9h9-raycluster-z6zk4-head-llwkr 1/1 Running 0 36s -torch-text-classifier-r8xwd-raycluster-l45f2-head-bvf6v 1/1 Running 0 66s -torch-text-classifier-vwnwk-raycluster-xxffj-head-gbslc 1/1 Running 0 66s -```
<!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> ## Why are these changes needed? Improve the Kueue/KubeRay priority scheduling doc. ## Related issue number <!-- For example: "Closes #1234" --> ## Checks - [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR. - [ ] I've run `scripts/format.sh` to lint the changes in this PR. - [ ] I've included any doc changes needed for https://docs.ray.io/en/master/. - [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file. - [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/ - Testing Strategy - [ ] Unit tests - [ ] Release tests - [ ] This PR is not tested :(
https://api.github.com/repos/ray-project/ray/pulls/43215
2024-02-16T01:17:45Z
2024-02-23T19:32:14Z
2024-02-23T19:32:14Z
2024-02-23T19:32:14Z
3,731
ray-project/ray
19,039
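The preemption behaviour this doc PR describes (a `prod-priority` RayJob evicting a `dev-priority` one once the ClusterQueue quota is exhausted) can be sketched as a toy admission loop. This is a simplified model, not Kueue's actual scheduling algorithm; the priority values and job dicts are made up for illustration.

```python
# Toy model of Kueue-style admission with priority preemption: a single GPU
# quota, and a higher-priority pending job may preempt a running job whose
# WorkloadPriorityClass has a strictly lower value.

PRIORITY = {"dev-priority": 100, "prod-priority": 10000}  # hypothetical values

def admit(running, pending, quota_gpus=1):
    """Admit pending jobs into `running`, preempting lower-priority jobs."""
    preempted = []
    for job in sorted(pending, key=lambda j: -PRIORITY[j["class"]]):
        used = sum(j["gpus"] for j in running)
        if used + job["gpus"] <= quota_gpus:
            running.append(job)
            continue
        # Quota exhausted: look for a running job with strictly lower priority.
        victims = [j for j in running if PRIORITY[j["class"]] < PRIORITY[job["class"]]]
        if victims:
            victim = min(victims, key=lambda j: PRIORITY[j["class"]])
            running.remove(victim)
            preempted.append(victim)
            running.append(job)
    return running, preempted

running = [{"name": "dev-job", "class": "dev-priority", "gpus": 1}]
pending = [{"name": "prod-job", "class": "prod-priority", "gpus": 1}]
running, preempted = admit(running, pending)
print([j["name"] for j in running])    # ['prod-job']
print([j["name"] for j in preempted])  # ['dev-job']
```

This mirrors the `withinClusterQueue: LowerPriority` policy from the example: the dev job is evicted only because a job with a higher priority value needs its quota.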
Bitbank :: fix fetchMyTrades
diff --git a/js/bitbank.js b/js/bitbank.js index 391060ecf803..fd5765a1b0e7 100644 --- a/js/bitbank.js +++ b/js/bitbank.js @@ -723,8 +723,8 @@ module.exports = class bitbank extends Exchange { const request = {}; let market = undefined; if (symbol !== undefined) { - request['pair'] = market['id']; market = this.market (symbol); + request['pair'] = market['id']; } if (limit !== undefined) { request['count'] = limit;
- fixes #14196
https://api.github.com/repos/ccxt/ccxt/pulls/14197
2022-07-02T10:41:00Z
2022-07-02T10:42:17Z
2022-07-02T10:42:17Z
2022-07-02T10:42:17Z
136
ccxt/ccxt
13,118
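The bitbank bug fixed above is a plain use-before-assignment: `request['pair'] = market['id']` ran before `market = this.market (symbol)`. A minimal Python analogue of the same pattern (the function names and market data are illustrative, not the ccxt API):

```python
def build_request_buggy(symbol, markets):
    request = {}
    if symbol is not None:
        # UnboundLocalError: `market` is only assigned on the next line
        request["pair"] = market["id"]
        market = markets[symbol]
    return request

def build_request_fixed(symbol, markets):
    request = {}
    if symbol is not None:
        market = markets[symbol]         # assign first...
        request["pair"] = market["id"]   # ...then use
    return request

markets = {"BTC/JPY": {"id": "btc_jpy"}}
try:
    build_request_buggy("BTC/JPY", markets)
except UnboundLocalError as exc:  # Python's flavour of "used before assignment"
    print("buggy version raised:", type(exc).__name__)
print(build_request_fixed("BTC/JPY", markets))  # {'pair': 'btc_jpy'}
```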
zero_to_fp32.py: Handle a case where shape doesn't have numel attr
diff --git a/deepspeed/utils/zero_to_fp32.py b/deepspeed/utils/zero_to_fp32.py index c98caae31534..49b846633d6e 100755 --- a/deepspeed/utils/zero_to_fp32.py +++ b/deepspeed/utils/zero_to_fp32.py @@ -248,6 +248,11 @@ def _zero2_merge_frozen_params(state_dict, zero_model_states): print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements") +def _has_callable(obj, fn): + attr = getattr(obj, fn, None) + return callable(attr) + + def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states): param_shapes = zero_model_states[0].param_shapes @@ -287,7 +292,7 @@ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero avail_numel = full_single_fp32_vector.numel() for name, shape in shapes.items(): - unpartitioned_numel = shape.numel() + unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape) total_numel += unpartitioned_numel total_params += 1
https://api.github.com/repos/microsoft/DeepSpeed/pulls/4842
2023-12-20T06:34:18Z
2024-01-03T21:21:18Z
2024-01-03T21:21:18Z
2024-02-04T11:58:33Z
297
microsoft/DeepSpeed
10,181
Use climate enums in lookin
diff --git a/homeassistant/components/lookin/climate.py b/homeassistant/components/lookin/climate.py index dd6e7c8cffaf4b..b8042cef72a302 100644 --- a/homeassistant/components/lookin/climate.py +++ b/homeassistant/components/lookin/climate.py @@ -7,21 +7,17 @@ from aiolookin import Climate, MeteoSensor from aiolookin.models import UDPCommandType, UDPEvent -from homeassistant.components.climate import ClimateEntity, ClimateEntityFeature +from homeassistant.components.climate import ClimateEntity from homeassistant.components.climate.const import ( ATTR_HVAC_MODE, FAN_AUTO, FAN_HIGH, FAN_LOW, FAN_MIDDLE, - HVAC_MODE_AUTO, - HVAC_MODE_COOL, - HVAC_MODE_DRY, - HVAC_MODE_FAN_ONLY, - HVAC_MODE_HEAT, - HVAC_MODE_OFF, SWING_BOTH, SWING_OFF, + ClimateEntityFeature, + HVACMode, ) from homeassistant.config_entries import ConfigEntry from homeassistant.const import ( @@ -41,12 +37,12 @@ LOOKIN_FAN_MODE_IDX_TO_HASS: Final = [FAN_AUTO, FAN_LOW, FAN_MIDDLE, FAN_HIGH] LOOKIN_SWING_MODE_IDX_TO_HASS: Final = [SWING_OFF, SWING_BOTH] LOOKIN_HVAC_MODE_IDX_TO_HASS: Final = [ - HVAC_MODE_OFF, - HVAC_MODE_AUTO, - HVAC_MODE_COOL, - HVAC_MODE_HEAT, - HVAC_MODE_DRY, - HVAC_MODE_FAN_ONLY, + HVACMode.OFF, + HVACMode.AUTO, + HVACMode.COOL, + HVACMode.HEAT, + HVACMode.DRY, + HVACMode.FAN_ONLY, ] HASS_TO_LOOKIN_HVAC_MODE: dict[str, int] = { @@ -104,7 +100,7 @@ class ConditionerEntity(LookinCoordinatorEntity, ClimateEntity): ) _attr_fan_modes: list[str] = LOOKIN_FAN_MODE_IDX_TO_HASS _attr_swing_modes: list[str] = LOOKIN_SWING_MODE_IDX_TO_HASS - _attr_hvac_modes: list[str] = LOOKIN_HVAC_MODE_IDX_TO_HASS + _attr_hvac_modes: list[HVACMode] = LOOKIN_HVAC_MODE_IDX_TO_HASS _attr_min_temp = MIN_TEMP _attr_max_temp = MAX_TEMP _attr_target_temperature_step = PRECISION_WHOLE @@ -124,7 +120,7 @@ def __init__( def _climate(self) -> Climate: return cast(Climate, self.coordinator.data) - async def async_set_hvac_mode(self, hvac_mode: str) -> None: + async def async_set_hvac_mode(self, hvac_mode: HVACMode) -> 
None: """Set the hvac mode of the device.""" if (mode := HASS_TO_LOOKIN_HVAC_MODE.get(hvac_mode)) is None: return @@ -139,7 +135,7 @@ async def async_set_temperature(self, **kwargs: Any) -> None: lookin_index = LOOKIN_HVAC_MODE_IDX_TO_HASS if hvac_mode := kwargs.get(ATTR_HVAC_MODE): self._climate.hvac_mode = HASS_TO_LOOKIN_HVAC_MODE[hvac_mode] - elif self._climate.hvac_mode == lookin_index.index(HVAC_MODE_OFF): + elif self._climate.hvac_mode == lookin_index.index(HVACMode.OFF): # # If the device is off, and the user didn't specify an HVAC mode # (which is the default when using the HA UI), the device won't turn @@ -152,11 +148,11 @@ async def async_set_temperature(self, **kwargs: Any) -> None: # meteo_data: MeteoSensor = self._meteo_coordinator.data if not (current_temp := meteo_data.temperature): - self._climate.hvac_mode = lookin_index.index(HVAC_MODE_AUTO) + self._climate.hvac_mode = lookin_index.index(HVACMode.AUTO) elif current_temp >= self._climate.temp_celsius: - self._climate.hvac_mode = lookin_index.index(HVAC_MODE_COOL) + self._climate.hvac_mode = lookin_index.index(HVACMode.COOL) else: - self._climate.hvac_mode = lookin_index.index(HVAC_MODE_HEAT) + self._climate.hvac_mode = lookin_index.index(HVACMode.HEAT) await self._async_update_conditioner() async def async_set_fan_mode(self, fan_mode: str) -> None:
## Proposed change <!-- Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue in the additional information section. --> As follow-up to #70319 / #70286 ## Type of change <!-- What type of change does your PR introduce to Home Assistant? NOTE: Please, check only 1! box! If your PR requires multiple boxes to be checked, you'll most likely need to split it into multiple PRs. This makes things easier and faster to code review. --> - [ ] Dependency upgrade - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New integration (thank you!) - [ ] New feature (which adds functionality to an existing integration) - [ ] Breaking change (fix/feature causing existing functionality to break) - [x] Code quality improvements to existing code or addition of tests ## Additional information <!-- Details are important, and help maintainers processing your PR. Please be sure to fill out additional details, if applicable. --> - This PR fixes or closes issue: fixes # - This PR is related to issue: - Link to documentation pull request: ## Checklist <!-- Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code. --> - [ ] The code change is tested and works locally. - [ ] Local tests pass. **Your PR cannot be merged unless tests pass** - [ ] There is no commented out code in this PR. - [ ] I have followed the [development checklist][dev-checklist] - [ ] The code has been formatted using Black (`black --fast homeassistant tests`) - [ ] Tests have been added to verify that the new code works. 
If user exposed functionality or configuration variables are added/changed: - [ ] Documentation added/updated for [www.home-assistant.io][docs-repository] If the code communicates with devices, web services, or third-party tools: - [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`. - [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`. - [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. - [ ] Untested files have been added to `.coveragerc`. The integration reached or maintains the following [Integration Quality Scale][quality-scale]: <!-- The Integration Quality Scale scores an integration on the code quality and user experience. Each level of the quality scale consists of a list of requirements. We highly recommend getting your integration scored! --> - [ ] No score or internal - [ ] 🥈 Silver - [ ] 🥇 Gold - [ ] 🏆 Platinum <!-- This project is very active and we have a high turnover of pull requests. Unfortunately, the number of incoming pull requests is higher than what our reviewers can review and merge so there is a long backlog of pull requests waiting for review. You can help here! By reviewing another pull request, you will help raise the code quality of that pull request and the final review will be faster. This way the general pace of pull request reviews will go up and your wait time will go down. When picking a pull request to review, try to choose one that hasn't yet been reviewed. Thanks for helping out! --> To help with the load of incoming pull requests: - [ ] I have reviewed two other [open pull requests][prs] in this repository. 
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure <!-- Thank you for contributing <3 Below, some useful links you could explore: --> [dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html [manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html [quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html [docs-repository]: https://github.com/home-assistant/home-assistant.io
https://api.github.com/repos/home-assistant/core/pulls/70681
2022-04-25T13:19:48Z
2022-04-26T04:29:37Z
2022-04-26T04:29:37Z
2022-04-27T05:11:10Z
1,055
home-assistant/core
39,409
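The lookin migration keeps its integer-to-mode mapping as a list indexed by the device's raw value, plus a reverse dict. A self-contained sketch of that pattern; the `HVACMode` enum below is a stand-in for `homeassistant.components.climate.HVACMode`, not an import of it:

```python
from enum import Enum

class HVACMode(str, Enum):
    OFF = "off"
    AUTO = "auto"
    COOL = "cool"
    HEAT = "heat"
    DRY = "dry"
    FAN_ONLY = "fan_only"

# The device reports HVAC mode as a small integer, which indexes this list.
LOOKIN_HVAC_MODE_IDX_TO_HASS = [
    HVACMode.OFF, HVACMode.AUTO, HVACMode.COOL,
    HVACMode.HEAT, HVACMode.DRY, HVACMode.FAN_ONLY,
]
# Reverse mapping: enum member back to the device integer.
HASS_TO_LOOKIN_HVAC_MODE = {
    mode: idx for idx, mode in enumerate(LOOKIN_HVAC_MODE_IDX_TO_HASS)
}

device_value = 2                                   # raw integer from the device
print(LOOKIN_HVAC_MODE_IDX_TO_HASS[device_value])  # HVACMode.COOL
print(HASS_TO_LOOKIN_HVAC_MODE[HVACMode.COOL])     # 2
```

Because the enum mixes in `str`, members still compare equal to the old string constants (`HVACMode.COOL == "cool"`), which is what makes this kind of migration non-breaking.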
[Bot] Synchronize Submodule References
diff --git a/inference b/inference index 4fba9a739e78..b71d8787ce3a 160000 --- a/inference +++ b/inference @@ -1 +1 @@ -Subproject commit 4fba9a739e786d65f6b9dba19e5fd6bfe8502c19 +Subproject commit b71d8787ce3a3982978f41ed597a41aad75bfd35
Automated PR to update submodule commits
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/1499
2022-08-26T00:01:28Z
2022-08-26T03:17:31Z
2022-08-26T03:17:31Z
2022-08-26T03:17:31Z
108
hpcaitech/ColossalAI
11,399
Fixed #34243 -- Fixed timesince() crash with timezone-aware dates and interval longer than 1 month.
diff --git a/django/utils/timesince.py b/django/utils/timesince.py index 701c49bab9424..f582d0e4f24a7 100644 --- a/django/utils/timesince.py +++ b/django/utils/timesince.py @@ -97,6 +97,7 @@ def timesince(d, now=None, reversed=False, time_strings=None, depth=2): d.hour, d.minute, d.second, + tzinfo=d.tzinfo, ) else: pivot = d diff --git a/tests/utils_tests/test_timesince.py b/tests/utils_tests/test_timesince.py index bf05f32f5eea5..242e582f9ef75 100644 --- a/tests/utils_tests/test_timesince.py +++ b/tests/utils_tests/test_timesince.py @@ -1,7 +1,7 @@ import datetime from django.test import TestCase -from django.test.utils import requires_tz_support +from django.test.utils import override_settings, requires_tz_support from django.utils import timezone, translation from django.utils.timesince import timesince, timeuntil from django.utils.translation import npgettext_lazy @@ -171,7 +171,7 @@ def utcoffset(self, dt): self.assertEqual(timeuntil(past), "0\xa0minutes") def test_thousand_years_ago(self): - t = datetime.datetime(1007, 8, 14, 13, 46, 0) + t = self.t.replace(year=self.t.year - 1000) self.assertEqual(timesince(t, self.t), "1000\xa0years") self.assertEqual(timeuntil(self.t, t), "1000\xa0years") @@ -240,3 +240,11 @@ def test_depth_invalid(self): msg = "depth must be greater than 0." with self.assertRaisesMessage(ValueError, msg): timesince(self.t, self.t, depth=0) + + +@requires_tz_support +@override_settings(USE_TZ=True) +class TZAwareTimesinceTests(TimesinceTests): + def setUp(self): + super().setUp() + self.t = timezone.make_aware(self.t, timezone.get_default_timezone())
Fixes ticket-34243.
https://api.github.com/repos/django/django/pulls/16429
2023-01-05T11:41:37Z
2023-01-05T15:38:20Z
2023-01-05T15:38:20Z
2023-01-05T15:38:20Z
487
django/django
51,674
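The one-line `tzinfo=d.tzinfo` fix matters because Python refuses to subtract a naive datetime from an aware one. A minimal reproduction of the crash and the fix (the dates are arbitrary examples):

```python
import datetime

# timesince() rebuilt a "pivot" datetime from an aware input but dropped
# tzinfo; mixing naive and aware datetimes in arithmetic raises TypeError.

tz = datetime.timezone.utc
d = datetime.datetime(2022, 1, 1, tzinfo=tz)    # aware input, as with USE_TZ=True
now = datetime.datetime(2023, 3, 1, tzinfo=tz)

naive_pivot = datetime.datetime(d.year + 1, d.month, d.day)  # tzinfo lost
try:
    now - naive_pivot
except TypeError as exc:
    print("naive pivot:", exc)

# The one-line fix: carry the original tzinfo over to the pivot.
aware_pivot = datetime.datetime(d.year + 1, d.month, d.day, tzinfo=d.tzinfo)
print(now - aware_pivot)  # 59 days, 0:00:00
```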
Add ansible cli options --ask-vault-password and --vault-pass-file
diff --git a/changelogs/fragments/63782-add-ansible-ask-vault-password-and-vault-password-file-options.yaml b/changelogs/fragments/63782-add-ansible-ask-vault-password-and-vault-password-file-options.yaml new file mode 100644 index 00000000000000..6a4ea9413b57d3 --- /dev/null +++ b/changelogs/fragments/63782-add-ansible-ask-vault-password-and-vault-password-file-options.yaml @@ -0,0 +1,3 @@ +minor_changes: +- Add --ask-vault-password and --vault-pass-file options to ansible cli commands +- Change order of arguments in ansible cli to use --ask-vault-password and --vault-password-file by default diff --git a/lib/ansible/cli/arguments/option_helpers.py b/lib/ansible/cli/arguments/option_helpers.py index cf521a4891d5b3..945a76011b4ca1 100644 --- a/lib/ansible/cli/arguments/option_helpers.py +++ b/lib/ansible/cli/arguments/option_helpers.py @@ -363,7 +363,7 @@ def add_vault_options(parser): parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str, help='the vault identity to use') base_group = parser.add_mutually_exclusive_group() - base_group.add_argument('--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true', + base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true', help='ask for vault password') - base_group.add_argument('--vault-password-file', default=[], dest='vault_password_files', + base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files', help="vault password file", type=unfrack_path(), action='append') diff --git a/test/integration/targets/vault/runme.sh b/test/integration/targets/vault/runme.sh index 0f1de3bd4224d4..c4d17dbd26e20c 100755 --- a/test/integration/targets/vault/runme.sh +++ b/test/integration/targets/vault/runme.sh @@ -106,6 +106,14 @@ if [ -x "$(command -v setsid)" ]; then setsid sh -c 'tty; echo test-vault-password|ansible-vault 
view --ask-vault-pass -vvvvv vaulted.inventory' < /dev/null > log 2>&1 echo $? cat log + + # test using --ask-vault-password option + CMD='ansible-playbook -i ../../inventory -vvvvv --ask-vault-password test_vault.yml' + setsid sh -c "echo test-vault-password|${CMD}" < /dev/null > log 2>&1 && : + WRONG_RC=$? + cat log + echo "rc was $WRONG_RC (0 is expected)" + [ $WRONG_RC -eq 0 ] fi ansible-vault view "$@" --vault-password-file vault-password-wrong format_1_1_AES256.yml && : @@ -410,6 +418,8 @@ ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-pass ansible-playbook test_vaulted_inventory.yml -i vaulted.inventory -v "$@" --vault-password-file vault-password ansible-playbook test_vaulted_template.yml -i ../../inventory -v "$@" --vault-password-file vault-password +# test using --vault-pass-file option +ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-pass-file vault-password # install TOML for parse toml inventory # test playbooks using vaulted files(toml)
##### SUMMARY The ansible-playbook command provides 2 options to specify a vault password: --ask-vault-pass --vault-password-file One option uses 'pass', the other uses 'password'. I propose to use either pass or password consistently for both options. For now I added the following options: --ask-vault-password --vault-pass-file ##### ISSUE TYPE - Feature Pull Request ##### COMPONENT NAME ansible cli ##### ADDITIONAL INFORMATION before: ```bash ansible-playbook --ask-vault-pass .... ansible-playbook --vault-password-file=<filename> .... ``` after: ```bash ansible-playbook --ask-vault-password .... ansible-playbook --vault-password-file=<filename> .... ``` or ```bash ansible-playbook --ask-vault-pass .... ansible-playbook --vault-pass-file=<filename> .... ```
https://api.github.com/repos/ansible/ansible/pulls/63782
2019-10-22T11:51:51Z
2019-12-19T17:07:26Z
2019-12-19T17:07:26Z
2020-01-21T14:37:38Z
866
ansible/ansible
48,839
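The patch relies on argparse accepting several option strings for one argument, so the old and new spellings share a single `dest`. A self-contained sketch of the same pattern (this parser is a cut-down stand-in for Ansible's real option setup, not its actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog="ansible-playbook-sketch")
group = parser.add_mutually_exclusive_group()
# One argument, two spellings: the new name plus the legacy alias.
group.add_argument("--ask-vault-password", "--ask-vault-pass",
                   dest="ask_vault_pass", action="store_true",
                   help="ask for vault password")
group.add_argument("--vault-password-file", "--vault-pass-file",
                   dest="vault_password_files", default=[], action="append",
                   help="vault password file")

# Old and new spellings populate the same destination.
print(parser.parse_args(["--ask-vault-pass"]).ask_vault_pass)      # True
print(parser.parse_args(["--ask-vault-password"]).ask_vault_pass)  # True
print(parser.parse_args(["--vault-pass-file", "pw.txt"]).vault_password_files)  # ['pw.txt']
```

The mutually exclusive group still rejects combining the two vault options, regardless of which spelling is used.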
Fix qq video error; Fix #1778
diff --git a/src/you_get/extractors/qq.py b/src/you_get/extractors/qq.py index f2c3d9ece1..c92b730193 100644 --- a/src/you_get/extractors/qq.py +++ b/src/you_get/extractors/qq.py @@ -14,6 +14,8 @@ def qq_download_by_vid(vid, title, output_dir='.', merge=True, info_only=False): parts_ti = video_json['vl']['vi'][0]['ti'] parts_prefix = video_json['vl']['vi'][0]['ul']['ui'][0]['url'] parts_formats = video_json['fl']['fi'] + if parts_prefix.endswith('/'): + parts_prefix = parts_prefix[:-1] # find best quality # only looking for fhd(1080p) and shd(720p) here. # 480p usually come with a single file, will be downloaded as fallback. @@ -38,7 +40,7 @@ def qq_download_by_vid(vid, title, output_dir='.', merge=True, info_only=False): # For fhd(1080p), every part is about 100M and 6 minutes # try 100 parts here limited download longest single video of 10 hours. for part in range(1,100): - filename = vid + '.p' + str(part_format_id % 1000) + '.' + str(part) + '.mp4' + filename = vid + '.p' + str(part_format_id % 10000) + '.' + str(part) + '.mp4' key_api = "http://vv.video.qq.com/getkey?otype=json&platform=11&format=%s&vid=%s&filename=%s" % (part_format_id, parts_vid, filename) #print(filename) #print(key_api) @@ -59,7 +61,9 @@ def qq_download_by_vid(vid, title, output_dir='.', merge=True, info_only=False): fvkey = video_json['vl']['vi'][0]['fvkey'] mp4 = video_json['vl']['vi'][0]['cl'].get('ci', None) if mp4: - mp4 = mp4[0]['keyid'].replace('.10', '.p') + '.mp4' + old_id = mp4[0]['keyid'].split('.')[1] + new_id = 'p' + str(int(old_id) % 10000) + mp4 = mp4[0]['keyid'].replace(old_id, new_id) + '.mp4' else: mp4 = video_json['vl']['vi'][0]['fn'] url = '%s/%s?vkey=%s' % ( parts_prefix, mp4, fvkey )
Enable you-get to deal with videos with a keyid greater than 11000 <!-- Reviewable:start --> --- This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/soimort/you-get/1785) <!-- Reviewable:end -->
https://api.github.com/repos/soimort/you-get/pulls/1785
2017-03-22T11:20:28Z
2017-04-01T19:47:04Z
2017-04-01T19:47:04Z
2017-04-01T19:47:30Z
608
soimort/you-get
21,498
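The qq fix widens the modulus from 1000 to 10000 when deriving the `p<n>` suffix in the segment filename from the part format id. A sketch of why that matters (the ids below are illustrative, not real API values):

```python
# Part format ids map to a "p<id % 10000>" filename suffix. The old `% 1000`
# collapsed ids >= 11000, e.g. turning 11209 into "p209" instead of "p1209",
# which produced filenames the key API rejected.

def part_filename(vid, part_format_id, part, modulus=10000):
    return "{}.p{}.{}.mp4".format(vid, part_format_id % modulus, part)

print(part_filename("x0123abc", 10209, 1))                # x0123abc.p209.1.mp4
print(part_filename("x0123abc", 11209, 1))                # x0123abc.p1209.1.mp4
print(part_filename("x0123abc", 11209, 1, modulus=1000))  # old bug: x0123abc.p209.1.mp4
```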
cli: allow --dry-run to be combined with --server
diff --git a/CHANGELOG.md b/CHANGELOG.md index 102eaf4bb53..fa8ca23793c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,6 +11,8 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). ### Changed * Removed `--fast` flag from the test farm tests +* `--server` may now be combined with `--dry-run`. Certbot will, as before, use the + staging server instead of the live server when `--dry-run` is used. ### Fixed diff --git a/certbot/cli.py b/certbot/cli.py index d22a9a52466..6715dfd9c36 100644 --- a/certbot/cli.py +++ b/certbot/cli.py @@ -649,13 +649,20 @@ def parse_args(self): def set_test_server(self, parsed_args): """We have --staging/--dry-run; perform sanity check and set config.server""" - if parsed_args.server not in (flag_default("server"), constants.STAGING_URI): - conflicts = ["--staging"] if parsed_args.staging else [] - conflicts += ["--dry-run"] if parsed_args.dry_run else [] - raise errors.Error("--server value conflicts with {0}".format( - " and ".join(conflicts))) + # Flag combinations should produce these results: + # | --staging | --dry-run | + # ------------------------------------------------------------ + # | --server acme-v02 | Use staging | Use staging | + # | --server acme-staging-v02 | Use staging | Use staging | + # | --server <other> | Conflict error | Use <other> | - parsed_args.server = constants.STAGING_URI + default_servers = (flag_default("server"), constants.STAGING_URI) + + if parsed_args.staging and parsed_args.server not in default_servers: + raise errors.Error("--server value conflicts with --staging") + + if parsed_args.server in default_servers: + parsed_args.server = constants.STAGING_URI if parsed_args.dry_run: if self.verb not in ["certonly", "renew"]: diff --git a/certbot/tests/cli_test.py b/certbot/tests/cli_test.py index 87b074a819f..166559040b0 100644 --- a/certbot/tests/cli_test.py +++ b/certbot/tests/cli_test.py @@ -333,16 +333,26 @@ def test_dry_run_flag(self): 
self._assert_dry_run_flag_worked(self.parse(short_args + ['auth']), True) self._assert_dry_run_flag_worked(self.parse(short_args + ['renew']), True) + self._assert_dry_run_flag_worked(self.parse(short_args + ['certonly']), True) + short_args += ['certonly'] - self._assert_dry_run_flag_worked(self.parse(short_args), True) - short_args += '--server example.com'.split() - conflicts = ['--dry-run'] - self._check_server_conflict_message(short_args, '--dry-run') + # `--dry-run --server example.com` should emit example.com + self.assertEqual(self.parse(short_args + ['--server', 'example.com']).server, + 'example.com') + + # `--dry-run --server STAGING_URI` should emit STAGING_URI + self.assertEqual(self.parse(short_args + ['--server', constants.STAGING_URI]).server, + constants.STAGING_URI) + + # `--dry-run --server LIVE` should emit STAGING_URI + self.assertEqual(self.parse(short_args + ['--server', cli.flag_default("server")]).server, + constants.STAGING_URI) - short_args += ['--staging'] - conflicts += ['--staging'] - self._check_server_conflict_message(short_args, conflicts) + # `--dry-run --server example.com --staging` should emit an error + conflicts = ['--staging'] + self._check_server_conflict_message(short_args + ['--server', 'example.com', '--staging'], + conflicts) def test_option_was_set(self): key_size_option = 'rsa_key_size'
The value of --server will now be respected, except when it is the default value, in which case it will be changed to the staging server, preserving Certbot's existing behavior. This change is a prerequisite to dry-run authz deactivation as suggested by @bmw in https://github.com/certbot/certbot/pull/7266#issuecomment-526403040 ## Pull Request Checklist - [x] Edit the `master` section of `CHANGELOG.md` to include a description of the change being made. - [ ] Add [mypy type annotations](https://certbot.eff.org/docs/contributing.html#mypy-type-annotations) for any functions that were added or modified. - [x] Include your name in `AUTHORS.md` if you like. NB: I did not add a mypy annotation to `set_test_server` because `parsed_args` is an `OrderedDict`, but I could not work out a way to make mypy happy given how the attributes are being dereferenced (e.g. `parsed_args.server` vs `parsed_args['server']`).
https://api.github.com/repos/certbot/certbot/pulls/7436
2019-10-09T02:43:19Z
2019-10-09T22:09:26Z
2019-10-09T22:09:26Z
2019-10-09T22:09:27Z
954
certbot/certbot
1,682
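The flag table in the certbot patch can be modelled as a pure function. In this sketch, `LIVE` and `STAGING` stand in for `flag_default("server")` and `constants.STAGING_URI`; the function mirrors the logic of `set_test_server`, not certbot's exact code, and (like the original) assumes `--staging` or `--dry-run` was passed:

```python
LIVE = "https://acme-v02.api.letsencrypt.org/directory"
STAGING = "https://acme-staging-v02.api.letsencrypt.org/directory"

def resolve_server(server, staging=False, dry_run=False):
    """Return the ACME server to use, or raise on a conflicting combination."""
    default_servers = (LIVE, STAGING)
    if staging and server not in default_servers:
        raise ValueError("--server value conflicts with --staging")
    if server in default_servers:
        return STAGING  # default or staging server: use staging either way
    return server       # custom --server combined with --dry-run is respected

print(resolve_server(LIVE, dry_run=True))                   # staging URI
print(resolve_server("https://example.com", dry_run=True))  # custom server kept
```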
[toggo] Fix _VALID_URL to support toggolino
diff --git a/yt_dlp/extractor/toggo.py b/yt_dlp/extractor/toggo.py index 4c03d1dc0b6..9f98cfaf0c2 100644 --- a/yt_dlp/extractor/toggo.py +++ b/yt_dlp/extractor/toggo.py @@ -4,7 +4,7 @@ class ToggoIE(InfoExtractor): IE_NAME = 'toggo' - _VALID_URL = r'https?://(?:www\.)?toggo\.de/[^/?#]+/folge/(?P<id>[^/?#]+)' + _VALID_URL = r'https?://(?:www\.)?toggo\.de/(?:toggolino/)?[^/?#]+/folge/(?P<id>[^/?#]+)' _TESTS = [{ 'url': 'https://www.toggo.de/weihnachtsmann--co-kg/folge/ein-geschenk-fuer-zwei', 'info_dict': { @@ -30,6 +30,9 @@ class ToggoIE(InfoExtractor): }, { 'url': 'https://www.toggo.de/grizzy--die-lemminge/folge/ab-durch-die-wand-vogelfrei-rock\'n\'lemming', 'only_matching': True, + }, { + 'url': 'https://www.toggo.de/toggolino/paw-patrol/folge/der-wetter-zeppelin-der-chili-kochwettbewerb', + 'only_matching': True, }] def _real_extract(self, url):
adding support for "toggolino" videos <!-- # Please follow the guide below - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x]) - Use *Preview* tab to see how your *pull request* will actually look like --> ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? 
- [x] Fix or improvement to an extractor (Make sure to add/update tests) - [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy)) - [ ] Core bug fix/improvement - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes)) --- ### Description of your *pull request* and other information Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible.
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/3689
2022-05-09T08:55:34Z
2022-05-09T11:42:22Z
2022-05-09T11:42:22Z
2022-05-09T11:42:23Z
374
yt-dlp/yt-dlp
8,134
[MRG] Tackle remaining comments in #3411
diff --git a/sklearn/linear_model/omp.py b/sklearn/linear_model/omp.py index e80e2e47c6ce9..63e15e5d56368 100644 --- a/sklearn/linear_model/omp.py +++ b/sklearn/linear_model/omp.py @@ -569,15 +569,9 @@ def fit(self, X, y): y = np.asarray(y) n_features = X.shape[1] - precompute = self.precompute - - copy_Gram = True - copy_X = True - Xy = None - X, y, X_mean, y_mean, X_std, Gram, Xy = \ - _pre_fit(X, y, Xy, precompute, self.normalize, self.fit_intercept, - copy=copy_X) + _pre_fit(X, y, None, self.precompute, self.normalize, + self.fit_intercept, copy=True) if y.ndim == 1: y = y[:, np.newaxis] @@ -591,12 +585,14 @@ def fit(self, X, y): if Gram is False: self.coef_ = orthogonal_mp(X, y, self.n_nonzero_coefs_, self.tol, - precompute=False, copy_X=copy_X).T + precompute=False, copy_X=True).T else: norms_sq = np.sum(y ** 2, axis=0) if self.tol is not None else None - self.coef_ = orthogonal_mp_gram(Gram, Xy, self.n_nonzero_coefs_, - self.tol, norms_sq, - copy_Gram, True).T + + self.coef_ = orthogonal_mp_gram( + Gram, Xy=Xy, n_nonzero_coefs=self.n_nonzero_coefs_, + tol=self.tol, norms_squared=norms_sq, + copy_Gram=True, copy_Xy=True).T self._set_intercept(X_mean, y_mean, X_std) return self
A few comments were not tackled before #3411 was merged. This pull request takes care of them.
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/3414
2014-07-17T15:41:06Z
2014-07-17T15:57:37Z
2014-07-17T15:57:37Z
2014-07-19T09:10:48Z
462
scikit-learn/scikit-learn
46,847
#4071 Mixin to prevent setting return_value after initializing certain Mock objects
diff --git a/certbot-apache/certbot_apache/tests/configurator_test.py b/certbot-apache/certbot_apache/tests/configurator_test.py index db04bfcd168..23242d091dc 100644 --- a/certbot-apache/certbot_apache/tests/configurator_test.py +++ b/certbot-apache/certbot_apache/tests/configurator_test.py @@ -121,7 +121,8 @@ def test_add_parser_arguments(self): # pylint: disable=no-self-use @certbot_util.patch_get_utility() def test_get_all_names(self, mock_getutility): - mock_getutility.notification = mock.MagicMock(return_value=True) + mock_utility = mock_getutility() + mock_utility.notification = mock.MagicMock(return_value=True) names = self.config.get_all_names() self.assertEqual(names, set( ["certbot.demo", "ocspvhost.com", "encryption-example.demo"] @@ -131,9 +132,8 @@ def test_get_all_names(self, mock_getutility): @mock.patch("certbot_apache.configurator.socket.gethostbyaddr") def test_get_all_names_addrs(self, mock_gethost, mock_getutility): mock_gethost.side_effect = [("google.com", "", ""), socket.error] - notification = mock.Mock() - notification.notification = mock.Mock(return_value=True) - mock_getutility.return_value = notification + mock_utility = mock_getutility() + mock_utility.notification.return_value = True vhost = obj.VirtualHost( "fp", "ap", set([obj.Addr(("8.8.8.8", "443")), diff --git a/certbot/plugins/standalone_test.py b/certbot/plugins/standalone_test.py index 2a55c516fd5..1ae731e429e 100644 --- a/certbot/plugins/standalone_test.py +++ b/certbot/plugins/standalone_test.py @@ -158,10 +158,11 @@ def test_perform(self): @test_util.patch_get_utility() def test_perform_eaddrinuse_retry(self, mock_get_utility): + mock_utility = mock_get_utility() errno = socket.errno.EADDRINUSE error = errors.StandaloneBindError(mock.MagicMock(errno=errno), -1) self.auth.servers.run.side_effect = [error] + 2 * [mock.MagicMock()] - mock_yesno = mock_get_utility.return_value.yesno + mock_yesno = mock_utility.yesno mock_yesno.return_value = True self.test_perform() @@ -169,7 
+170,8 @@ def test_perform_eaddrinuse_retry(self, mock_get_utility): @test_util.patch_get_utility() def test_perform_eaddrinuse_no_retry(self, mock_get_utility): - mock_yesno = mock_get_utility.return_value.yesno + mock_utility = mock_get_utility() + mock_yesno = mock_utility.yesno mock_yesno.return_value = False errno = socket.errno.EADDRINUSE diff --git a/certbot/tests/cert_manager_test.py b/certbot/tests/cert_manager_test.py index 6c43eb16804..6585644cf0a 100644 --- a/certbot/tests/cert_manager_test.py +++ b/certbot/tests/cert_manager_test.py @@ -346,9 +346,8 @@ def test_no_certname(self, mock_get_utility, mock_renewal_conf_files): self.assertRaises(errors.Error, self._call, self.config) mock_renewal_conf_files.return_value = ["one.conf"] - util_mock = mock.Mock() + util_mock = mock_get_utility() util_mock.menu.return_value = (display_util.CANCEL, 0) - mock_get_utility.return_value = util_mock self.assertRaises(errors.Error, self._call, self.config) util_mock.menu.return_value = (display_util.OK, -1) @@ -359,14 +358,11 @@ def test_no_new_certname(self, mock_get_utility): self.config.certname = "one" self.config.new_certname = None - util_mock = mock.Mock() + util_mock = mock_get_utility() util_mock.input.return_value = (display_util.CANCEL, "name") - mock_get_utility.return_value = util_mock self.assertRaises(errors.Error, self._call, self.config) - util_mock = mock.Mock() util_mock.input.return_value = (display_util.OK, None) - mock_get_utility.return_value = util_mock self.assertRaises(errors.Error, self._call, self.config) @test_util.patch_get_utility() @@ -393,9 +389,8 @@ def test_rename_cert(self, mock_check, unused_get_utility): def test_rename_cert_interactive_certname(self, mock_check, mock_get_utility): mock_check.return_value = True self.config.certname = None - util_mock = mock.Mock() + util_mock = mock_get_utility() util_mock.menu.return_value = (display_util.OK, 0) - mock_get_utility.return_value = util_mock self._call(self.config) from certbot 
import cert_manager updated_lineage = cert_manager.lineage_for_certname(self.config, self.config.new_certname) diff --git a/certbot/tests/display/ops_test.py b/certbot/tests/display/ops_test.py index 306b958415e..cb0fb32e35f 100644 --- a/certbot/tests/display/ops_test.py +++ b/certbot/tests/display/ops_test.py @@ -310,10 +310,11 @@ def test_get_valid_domains(self): @test_util.patch_get_utility("certbot.display.ops.z_util") def test_choose_manually(self, mock_util): from certbot.display.ops import _choose_names_manually + utility_mock = mock_util() # No retry - mock_util().yesno.return_value = False + utility_mock.yesno.return_value = False # IDN and no retry - mock_util().input.return_value = (display_util.OK, + utility_mock.input.return_value = (display_util.OK, "uniçodé.com") self.assertEqual(_choose_names_manually(), []) # IDN exception with previous mocks @@ -324,7 +325,7 @@ def test_choose_manually(self, mock_util): mock_sli.side_effect = unicode_error self.assertEqual(_choose_names_manually(), []) # Valid domains - mock_util().input.return_value = (display_util.OK, + utility_mock.input.return_value = (display_util.OK, ("example.com," "under_score.example.com," "justtld," @@ -332,14 +333,17 @@ def test_choose_manually(self, mock_util): self.assertEqual(_choose_names_manually(), ["example.com", "under_score.example.com", "justtld", "valid.example.com"]) + + @test_util.patch_get_utility("certbot.display.ops.z_util") + def test_choose_manually_retry(self, mock_util): + from certbot.display.ops import _choose_names_manually + utility_mock = mock_util() # Three iterations - mock_util().input.return_value = (display_util.OK, + utility_mock.input.return_value = (display_util.OK, "uniçodé.com") - yn = mock.MagicMock() - yn.side_effect = [True, True, False] - mock_util().yesno = yn + utility_mock.yesno.side_effect = [True, True, False] _choose_names_manually() - self.assertEqual(mock_util().yesno.call_count, 3) + self.assertEqual(utility_mock.yesno.call_count, 3) class 
SuccessInstallationTest(unittest.TestCase): diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py index 4abd344f247..51f0697f4ab 100644 --- a/certbot/tests/main_test.py +++ b/certbot/tests/main_test.py @@ -164,9 +164,7 @@ def test_find_lineage_for_domains_and_certname(self, mock_report_cert, self.assertTrue(mock_report_cert.call_count == 2) # error in _ask_user_to_confirm_new_names - util_mock = mock.Mock() - util_mock.yesno.return_value = False - self.mock_get_utility.return_value = util_mock + self.mock_get_utility().yesno.return_value = False self.assertRaises(errors.ConfigurationError, self._call, ('certonly --webroot -d example.com -d test.com --cert-name example.com').split()) @@ -1115,7 +1113,7 @@ def tearDown(self): def test_abort_unregister(self): self.mocks['account'].AccountFileStorage.return_value = mock.Mock() - util_mock = self.mocks['get_utility'].return_value + util_mock = self.mocks['get_utility']() util_mock.yesno.return_value = False config = mock.Mock() diff --git a/certbot/tests/util.py b/certbot/tests/util.py index 4ecddc34f01..698962516e1 100644 --- a/certbot/tests/util.py +++ b/certbot/tests/util.py @@ -173,8 +173,8 @@ class FreezableMock(object): """Mock object with the ability to freeze attributes. This class works like a regular mock.MagicMock object, except - attributes and behavior can be set and frozen so they cannot be - changed during tests. + attributes and behavior set before the object is frozen cannot + be changed during tests. If a func argument is provided to the constructor, this function is called first when an instance of FreezableMock is called, @@ -182,10 +182,12 @@ class FreezableMock(object): value of func is ignored. 
""" - def __init__(self, frozen=False, func=None): + def __init__(self, frozen=False, func=None, return_value=mock.sentinel.DEFAULT): self._frozen_set = set() if frozen else set(('freeze',)) self._func = func self._mock = mock.MagicMock() + if return_value != mock.sentinel.DEFAULT: + self.return_value = return_value self._frozen = frozen def freeze(self): @@ -203,17 +205,38 @@ def __getattribute__(self, name): return object.__getattribute__(self, name) except AttributeError: return False + elif name in ('return_value', 'side_effect',): + return getattr(object.__getattribute__(self, '_mock'), name) elif name == '_frozen_set' or name in self._frozen_set: return object.__getattribute__(self, name) else: return getattr(object.__getattribute__(self, '_mock'), name) def __setattr__(self, name, value): + """ Before it is frozen, attributes are set on the FreezableMock + instance and added to the _frozen_set. Attributes in the _frozen_set + cannot be changed after the FreezableMock is frozen. In this case, + they are set on the underlying _mock. + + In cases of return_value and side_effect, these attributes are always + passed through to the instance's _mock and added to the _frozen_set + before the object is frozen. 
+ + """ if self._frozen: - return setattr(self._mock, name, value) - elif name != '_frozen_set': + if name in self._frozen_set: + raise AttributeError('Cannot change frozen attribute ' + name) + else: + return setattr(self._mock, name, value) + + if name != '_frozen_set': self._frozen_set.add(name) - return object.__setattr__(self, name, value) + + if name in ('return_value', 'side_effect'): + return setattr(self._mock, name, value) + + else: + return object.__setattr__(self, name, value) def _create_get_utility_mock(): @@ -223,7 +246,7 @@ def _create_get_utility_mock(): frozen_mock = FreezableMock(frozen=True, func=_assert_valid_call) setattr(display, name, frozen_mock) display.freeze() - return mock.MagicMock(return_value=display) + return FreezableMock(frozen=True, return_value=display) def _assert_valid_call(*args, **kwargs):
Second of two candidate solutions to #4071. This involves an `ImmutableReturnMixin` to short-circuit attempts to set `return_value` on certain Mock objects, which would otherwise bypass validity checks.

This covers more than the first candidate because the mixin can be inherited alongside `mock.MagicMock` to create a true Mock object which could be used as the patched `getUtility`. This would cover the original case outlined in #4071 that PR #4962 would not. However, inheriting from Mock or overriding its `__setattr__` gets weird fast, and I've been trying to grok the unittest.mock source to extend FreezableMock with limited success.

One case of weird behavior is that calling a `FreezableMock`'s `_mock` attribute twice changes the returned value's `__str__` to an `ImmutableReturnMock`. I don't yet understand why.
https://api.github.com/repos/certbot/certbot/pulls/4963
2017-07-26T07:17:48Z
2017-08-30T16:52:46Z
2017-08-30T16:52:46Z
2017-08-30T16:53:05Z
2,802
certbot/certbot
1,479
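The `FreezableMock` pattern in the diff above — attributes set before `freeze()` become read-only, while `return_value` and `side_effect` always pass through to an inner `MagicMock` — can be sketched in miniature. This is an illustrative standalone sketch, not the certbot implementation; the class name `MiniFreezableMock` and its simplified semantics are invented for the example.

```python
from unittest import mock


class MiniFreezableMock:
    """Minimal sketch: a mock whose pre-freeze attributes become read-only.

    `return_value` and `side_effect` are always delegated to the wrapped
    MagicMock so normal Mock call semantics keep working.
    """

    _PASSTHROUGH = ("return_value", "side_effect")

    def __init__(self, return_value=mock.sentinel.DEFAULT):
        object.__setattr__(self, "_frozen", False)
        object.__setattr__(self, "_frozen_set", set())
        object.__setattr__(self, "_mock", mock.MagicMock())
        if return_value is not mock.sentinel.DEFAULT:
            self.return_value = return_value  # routed through __setattr__

    def freeze(self):
        object.__setattr__(self, "_frozen", True)

    def __call__(self, *args, **kwargs):
        return self._mock(*args, **kwargs)

    def __getattr__(self, name):
        # Only reached for names not set on the instance itself,
        # so lookups fall through to the wrapped MagicMock.
        return getattr(object.__getattribute__(self, "_mock"), name)

    def __setattr__(self, name, value):
        if self._frozen:
            if name in self._frozen_set:
                raise AttributeError("Cannot change frozen attribute " + name)
            return setattr(self._mock, name, value)
        # Pre-freeze: remember the name so it becomes immutable later.
        self._frozen_set.add(name)
        if name in self._PASSTHROUGH:
            setattr(self._mock, name, value)
        else:
            object.__setattr__(self, name, value)
```

Once frozen, reassigning `return_value` raises instead of silently replacing the validated display object, while child attributes (e.g. `m.notification`) remain ordinary, configurable MagicMocks.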
Fix sglang worker
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py index acdab09bd7..e519e66fbd 100644 --- a/fastchat/model/model_adapter.py +++ b/fastchat/model/model_adapter.py @@ -2198,7 +2198,7 @@ def match(self, model_path: str): def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("vicuna_v1.1") - + class YuanAdapter(BaseModelAdapter): """The model adapter for Yuan""" diff --git a/fastchat/serve/sglang_worker.py b/fastchat/serve/sglang_worker.py index 18c4be3612..1938210d9a 100644 --- a/fastchat/serve/sglang_worker.py +++ b/fastchat/serve/sglang_worker.py @@ -1,5 +1,8 @@ """ A model worker that executes the model based on SGLANG. + +Usage: +python3 -m fastchat.serve.sglang_worker --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --port 30000 --worker-address http://localhost:30000 """ import argparse @@ -10,16 +13,7 @@ from fastapi import FastAPI, Request, BackgroundTasks from fastapi.responses import StreamingResponse, JSONResponse import uvicorn -from sglang import ( - function, - image, - system, - user, - assistant, - gen, - set_default_backend, - Runtime, -) +import sglang as sgl from sglang.srt.hf_transformers_utils import get_tokenizer, get_config from sglang.srt.utils import load_image @@ -33,14 +27,14 @@ app = FastAPI() -@function +@sgl.function def pipeline(s, prompt, max_tokens): for p in prompt: if isinstance(p, str): s += p else: - s += image(p) - s += gen("response", max_tokens=max_tokens) + s += sgl.image(p) + s += sgl.gen("response", max_tokens=max_tokens) class SGLWorker(BaseModelWorker): @@ -55,7 +49,7 @@ def __init__( limit_worker_concurrency: int, no_register: bool, conv_template: str, - runtime: Runtime, + runtime: sgl.Runtime, trust_remote_code: bool, ): super().__init__( @@ -270,14 +264,15 @@ async def api_model_details(request: Request): args.model_path if args.tokenizer_path == "" else args.tokenizer_path ) - runtime = Runtime( + runtime = sgl.Runtime( 
model_path=args.model_path, tokenizer_path=args.tokenizer_path, trust_remote_code=args.trust_remote_code, mem_fraction_static=args.mem_fraction_static, tp_size=args.tp_size, + log_level="info", ) - set_default_backend(runtime) + sgl.set_default_backend(runtime) worker = SGLWorker( args.controller_address, diff --git a/tests/test_openai_vision_api.py b/tests/test_openai_vision_api.py index b1eb0ac8b5..a54d7d5756 100644 --- a/tests/test_openai_vision_api.py +++ b/tests/test_openai_vision_api.py @@ -1,5 +1,6 @@ """ Test the OpenAI compatible server + Launch: python3 launch_openai_api_test_server.py --multimodal """
https://api.github.com/repos/lm-sys/FastChat/pulls/2953
2024-01-24T09:10:08Z
2024-01-24T10:24:16Z
2024-01-24T10:24:16Z
2024-01-24T10:24:19Z
807
lm-sys/FastChat
41,020
remove cocalc; all non-authenticated access has been disabled
diff --git a/gpt4free/README.md b/gpt4free/README.md index f3ba27ab70..73e7fa09f1 100644 --- a/gpt4free/README.md +++ b/gpt4free/README.md @@ -42,9 +42,6 @@ print(f'END') response = gpt4free.Completion.create(Provider.Theb, prompt='Write a poem on Lionel Messi') print(response) -# usage cocalc -response = gpt4free.Completion.create(Provider.CoCalc, prompt='Write a poem on Lionel Messi', cookie_input='') -print(response) ``` @@ -73,8 +70,6 @@ Some of the keyword arguments are optional, while others are required. - Theb: (no keyword arguments required) -- CoCalc: - - `cookie_input`: str - this needs to be provided by user #### Token generation of quora ```python diff --git a/gpt4free/__init__.py b/gpt4free/__init__.py index 6df778e34d..1e65289772 100644 --- a/gpt4free/__init__.py +++ b/gpt4free/__init__.py @@ -1,6 +1,5 @@ from enum import Enum -from gpt4free import cocalc from gpt4free import forefront from gpt4free import quora from gpt4free import theb @@ -15,7 +14,6 @@ class Provider(Enum): Poe = 'poe' ForeFront = 'fore_front' Theb = 'theb' - CoCalc = 'cocalc' UseLess = 'useless' @@ -40,8 +38,6 @@ def create(provider: Provider, prompt: str, **kwargs) -> str: return Completion.__fore_front_service(prompt, **kwargs) elif provider == Provider.Theb: return Completion.__theb_service(prompt, **kwargs) - elif provider == Provider.CoCalc: - return Completion.__cocalc_service(prompt, **kwargs) elif provider == Provider.UseLess: return Completion.__useless_service(prompt, **kwargs) else: @@ -67,6 +63,3 @@ def __fore_front_service(prompt: str, **kwargs) -> str: def __theb_service(prompt: str, **kwargs): return ''.join(theb.Completion.create(prompt=prompt)) - @staticmethod - def __cocalc_service(prompt: str, **kwargs): - return cocalc.Completion.create(prompt, cookie_input=kwargs.get('cookie_input', '')).text diff --git a/gpt4free/cocalc/__init__.py b/gpt4free/cocalc/__init__.py deleted file mode 100644 index e122051aca..0000000000 --- a/gpt4free/cocalc/__init__.py +++ 
/dev/null @@ -1,67 +0,0 @@ -import requests -from fake_useragent import UserAgent -from pydantic import BaseModel - - -class CoCalcResponse(BaseModel): - text: str - status: bool - - -class Completion: - """A class for generating text completions using CoCalc's GPT-based chatbot.""" - - API_ENDPOINT = "https://cocalc.com/api/v2/openai/chatgpt" - DEFAULT_SYSTEM_PROMPT = "ASSUME I HAVE FULL ACCESS TO COCALC. " - - @staticmethod - def create(prompt: str, cookie_input: str) -> CoCalcResponse: - """ - Generate a text completion for the given prompt using CoCalc's GPT-based chatbot. - - Args: - prompt: The text prompt to complete. - cookie_input: The cookie required to authenticate the chatbot API request. - - Returns: - A CoCalcResponse object containing the text completion and a boolean indicating - whether the request was successful. - """ - - # Initialize a session with custom headers - session = Completion._initialize_session(cookie_input) - - # Set the data that will be submitted - payload = Completion._create_payload(prompt, Completion.DEFAULT_SYSTEM_PROMPT) - - try: - # Submit the request and return the results - response = session.post(Completion.API_ENDPOINT, json=payload).json() - return CoCalcResponse(text=response['output'], status=response['success']) - except requests.exceptions.RequestException as e: - # Handle exceptions that may occur during the request - print(f"Error: {e}") - return CoCalcResponse(text="", status=False) - - @classmethod - def _initialize_session(cls, conversation_cookie: str) -> requests.Session: - """Initialize a session with custom headers for the request.""" - - session = requests.Session() - headers = { - "Accept": "*/*", - "Accept-Language": "en-US,en;q=0.5", - "Origin": "https://cocalc.com", - "Referer": "https://cocalc.com/api/v2/openai/chatgpt", - "Cookie": conversation_cookie, - "User-Agent": UserAgent().random, - } - session.headers.update(headers) - - return session - - @staticmethod - def _create_payload(prompt: str, 
system_prompt: str) -> dict: - """Create the payload for the API request.""" - - return {"input": prompt, "system": system_prompt, "tag": "next:index"} diff --git a/gpt4free/cocalc/readme.md b/gpt4free/cocalc/readme.md deleted file mode 100644 index f091115554..0000000000 --- a/gpt4free/cocalc/readme.md +++ /dev/null @@ -1,19 +0,0 @@ -### Example: `cocalc` <a name="example-cocalc"></a> - -```python -# import library -from gpt4free import cocalc - -cocalc.Completion.create(prompt="How are you!", cookie_input="cookieinput") ## Tutorial -``` - -### How to grab cookie input -```js -// input this into ur developer tools console and the exact response u get from this u put into ur cookieInput! -var cookies = document.cookie.split("; "); -var cookieString = ""; -for (var i = 0; i < cookies.length; i++) { - cookieString += cookies[i] + "; "; -} -console.log(cookieString); -```
Hi Gpt4free devs,

- cocalc is my website
- I have disabled *all* non-authenticated access, so gpt4free will no longer work with cocalc.
- It's better to remove it so you don't confuse your users, who might expect cocalc to work with gpt4free.

Thanks! It wasn't my intention to have random use of GPT through our API, though I realize that is not spelled out explicitly in our terms of service.
https://api.github.com/repos/xtekky/gpt4free/pulls/461
2023-05-05T05:12:12Z
2023-05-07T20:00:00Z
2023-05-07T20:00:00Z
2023-05-07T20:24:56Z
1,438
xtekky/gpt4free
38,154
Update stored spec object with kwargs
diff --git a/gym/envs/registration.py b/gym/envs/registration.py index 197d47d9edc..11466a43cc7 100644 --- a/gym/envs/registration.py +++ b/gym/envs/registration.py @@ -1,4 +1,5 @@ import re +import copy import importlib import warnings @@ -59,7 +60,9 @@ def make(self, **kwargs): env = cls(**_kwargs) # Make the environment aware of which spec it came from. - env.unwrapped.spec = self + spec = copy.deepcopy(self) + spec._kwargs = _kwargs + env.unwrapped.spec = spec return env diff --git a/gym/envs/tests/test_registration.py b/gym/envs/tests/test_registration.py index e8e60331f37..913719da25b 100644 --- a/gym/envs/tests/test_registration.py +++ b/gym/envs/tests/test_registration.py @@ -44,6 +44,11 @@ def test_spec(): spec = envs.spec('CartPole-v0') assert spec.id == 'CartPole-v0' +def test_spec_with_kwargs(): + map_name_value = '8x8' + env = gym.make('FrozenLake-v0', map_name=map_name_value) + assert env.spec._kwargs['map_name'] == map_name_value + def test_missing_lookup(): registry = registration.EnvRegistry() registry.register(id='Test-v0', entry_point=None)
Let's consider a registered environment with kwargs: https://github.com/openai/gym/blob/54f22cf4db2e43063093a1b15d968a57a32b6e90/gym/envs/__init__.py#L150-L156 When users create environment with `gym.make`, they can pass extra options that override the values of the default kwargs: ```python >>> import gym >>> env = gym.make('FrozenLake-v0', map_name='8x8') ``` The environment is constructed with the updated kwargs, however the current logic [1] does not update the `env.spec` attribute of the resulting `env` object. In fact: ```python >>> env.spec._kwargs {'map_name': '4x4'} ``` This PR updates the logic of the `EnvSpec.make` method to store the `spec` object that was actually used for constructing the environment. --- [1] Current logic: the `_kwargs` variable modified with the `**kwargs` option is not part of the `self` instance that is stored in the `spec` attribute of the environment. https://github.com/openai/gym/blob/54f22cf4db2e43063093a1b15d968a57a32b6e90/gym/envs/registration.py#L49-L64
https://api.github.com/repos/openai/gym/pulls/1866
2020-04-06T19:57:37Z
2020-04-24T21:49:42Z
2020-04-24T21:49:42Z
2020-04-24T22:38:44Z
344
openai/gym
5,331
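The fix in the diff above boils down to three steps: merge the registered default kwargs with the caller's overrides, deep-copy the spec, and attach the copy — carrying the merged kwargs — to the environment. A stripped-down sketch of that logic (the class `EnvSpecSketch` and its stand-in env object are invented for illustration, not the real gym API):

```python
import copy


class EnvSpecSketch:
    """Stand-in for gym's EnvSpec, reduced to the kwargs-handling logic."""

    def __init__(self, id, default_kwargs=None):
        self.id = id
        self._kwargs = dict(default_kwargs or {})

    def make(self, **kwargs):
        # Merge registered defaults with caller overrides, as EnvSpec.make does.
        _kwargs = dict(self._kwargs)
        _kwargs.update(kwargs)
        env = type("Env", (), {})()  # stand-in for the constructed environment

        # The fix: attach a *copy* of the spec carrying the merged kwargs, so
        # env.spec reflects how the env was actually constructed while the
        # registered spec keeps its original defaults.
        spec = copy.deepcopy(self)
        spec._kwargs = _kwargs
        env.spec = spec
        return env
```

With this, `env.spec._kwargs` reports the overridden value while the registry's spec is left untouched for future `make` calls.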
Update MSSQL Injection.md
diff --git a/SQL Injection/MSSQL Injection.md b/SQL Injection/MSSQL Injection.md index 35836c8675..28b870d43c 100644 --- a/SQL Injection/MSSQL Injection.md +++ b/SQL Injection/MSSQL Injection.md @@ -236,8 +236,7 @@ EXECUTE('EXECUTE(''sp_addsrvrolemember ''''hacker'''' , ''''sysadmin'''' '') AT ## References * [Pentest Monkey - mssql-sql-injection-cheat-sheet](http://pentestmonkey.net/cheat-sheet/sql-injection/mssql-sql-injection-cheat-sheet) -* [Sqlinjectionwiki - MSSQL](http://www.sqlinjectionwiki.com/categories/1/mssql-sql-injection-cheat-sheet/) * [Error Based - SQL Injection ](https://github.com/incredibleindishell/exploit-code-by-me/blob/master/MSSQL%20Error-Based%20SQL%20Injection%20Order%20by%20clause/Error%20based%20SQL%20Injection%20in%20“Order%20By”%20clause%20(MSSQL).pdf) * [MSSQL Trusted Links - HackTricks.xyz](https://book.hacktricks.xyz/windows/active-directory-methodology/mssql-trusted-links) * [SQL Server – Link… Link… Link… and Shell: How to Hack Database Links in SQL Server! - Antti Rantasaari - June 6th, 2013](https://blog.netspi.com/how-to-hack-database-links-in-sql-server/) -* [DAFT: Database Audit Framework & Toolkit - NetSPI](https://github.com/NetSPI/DAFT) \ No newline at end of file +* [DAFT: Database Audit Framework & Toolkit - NetSPI](https://github.com/NetSPI/DAFT)
Removes the broken link [Sqlinjectionwiki - MSSQL](http://www.sqlinjectionwiki.com/categories/1/mssql-sql-injection-cheat-sheet/).
https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/260
2020-10-09T07:23:43Z
2020-10-09T08:31:48Z
2020-10-09T08:31:48Z
2020-10-09T08:31:48Z
404
swisskyrepo/PayloadsAllTheThings
8,315
process replay: add Ford Bronco Sport
diff --git a/selfdrive/test/process_replay/ref_commit b/selfdrive/test/process_replay/ref_commit index 63db584c872a72..dcd2f7b224c881 100644 --- a/selfdrive/test/process_replay/ref_commit +++ b/selfdrive/test/process_replay/ref_commit @@ -1 +1 @@ -634d4ff195345a4a2508e497744aa08addec9237 +ec92fe65806256cb5180cfd5ec60895efcc08da2 \ No newline at end of file diff --git a/selfdrive/test/process_replay/test_processes.py b/selfdrive/test/process_replay/test_processes.py index c58909bf7fa6fc..569090f606c2cf 100755 --- a/selfdrive/test/process_replay/test_processes.py +++ b/selfdrive/test/process_replay/test_processes.py @@ -29,6 +29,7 @@ ("SUBARU", "341dccd5359e3c97|2022-09-12--10-35-33--3"), # SUBARU.OUTBACK ("GM", "0c58b6a25109da2b|2021-02-23--16-35-50--11"), # GM.VOLT ("GM2", "376bf99325883932|2022-10-27--13-41-22--1"), # GM.BOLT_EUV + ("FORD", "54827bf84c38b14f|2023-01-26--21-59-07--4"), # FORD.BRONCO_SPORT_MK1 ("NISSAN", "35336926920f3571|2021-02-12--18-38-48--46"), # NISSAN.XTRAIL ("VOLKSWAGEN", "de9592456ad7d144|2021-06-29--11-00-15--6"), # VOLKSWAGEN.GOLF ("MAZDA", "bd6a637565e91581|2021-10-30--15-14-53--4"), # MAZDA.CX9_2021 @@ -52,6 +53,7 @@ ("SUBARU", "regen1E72BBDCED5|2022-09-27--15-55-31--0"), ("GM", "regen45B05A80EF6|2022-09-27--15-57-22--0"), ("GM2", "376bf99325883932|2022-10-27--13-41-22--1"), + ("FORD", "54827bf84c38b14f|2023-01-26--21-59-07--4"), ("NISSAN", "regenC19D899B46D|2022-09-27--15-59-13--0"), ("VOLKSWAGEN", "regenD8F7AC4BD0D|2022-09-27--16-41-45--0"), ("MAZDA", "regenFC3F9ECBB64|2022-09-27--16-03-09--0"),
https://api.github.com/repos/commaai/openpilot/pulls/27112
2023-01-27T06:33:16Z
2023-01-27T22:14:04Z
2023-01-27T22:14:04Z
2023-01-27T22:14:05Z
723
commaai/openpilot
8,888
[3.9] Use the zero argument form of super() in examples for Python3 docs. (GH-22314)
diff --git a/Doc/howto/logging-cookbook.rst b/Doc/howto/logging-cookbook.rst index de0f834551f5dd..5777a4c5031f85 100644 --- a/Doc/howto/logging-cookbook.rst +++ b/Doc/howto/logging-cookbook.rst @@ -1188,7 +1188,7 @@ to the above, as in the following example:: class StyleAdapter(logging.LoggerAdapter): def __init__(self, logger, extra=None): - super(StyleAdapter, self).__init__(logger, extra or {}) + super().__init__(logger, extra or {}) def log(self, level, msg, /, *args, **kwargs): if self.isEnabledFor(level): @@ -1783,7 +1783,7 @@ as in the following complete example:: return tuple(o) elif isinstance(o, unicode): return o.encode('unicode_escape').decode('ascii') - return super(Encoder, self).default(o) + return super().default(o) class StructuredMessage: def __init__(self, message, /, **kwargs): @@ -2175,11 +2175,11 @@ class, as shown in the following example:: """ Format an exception so that it prints on a single line. """ - result = super(OneLineExceptionFormatter, self).formatException(exc_info) + result = super().formatException(exc_info) return repr(result) # or format into one line however you want to def format(self, record): - s = super(OneLineExceptionFormatter, self).format(record) + s = super().format(record) if record.exc_text: s = s.replace('\n', '') + '|' return s @@ -2813,7 +2813,7 @@ refer to the comments in the code snippet for more detailed information. # class QtHandler(logging.Handler): def __init__(self, slotfunc, *args, **kwargs): - super(QtHandler, self).__init__(*args, **kwargs) + super().__init__(*args, **kwargs) self.signaller = Signaller() self.signaller.signal.connect(slotfunc) @@ -2883,7 +2883,7 @@ refer to the comments in the code snippet for more detailed information. 
} def __init__(self, app): - super(Window, self).__init__() + super().__init__() self.app = app self.textedit = te = QtWidgets.QPlainTextEdit(self) # Set whatever the default monospace font is for the platform diff --git a/Doc/library/argparse.rst b/Doc/library/argparse.rst index 75e083a2d90724..aa4713e75cd471 100644 --- a/Doc/library/argparse.rst +++ b/Doc/library/argparse.rst @@ -863,7 +863,7 @@ An example of a custom action:: ... def __init__(self, option_strings, dest, nargs=None, **kwargs): ... if nargs is not None: ... raise ValueError("nargs not allowed") - ... super(FooAction, self).__init__(option_strings, dest, **kwargs) + ... super().__init__(option_strings, dest, **kwargs) ... def __call__(self, parser, namespace, values, option_string=None): ... print('%r %r %r' % (namespace, values, option_string)) ... setattr(namespace, self.dest, values) diff --git a/Doc/library/contextlib.rst b/Doc/library/contextlib.rst index 0aa4ad76523480..4c6c520713178c 100644 --- a/Doc/library/contextlib.rst +++ b/Doc/library/contextlib.rst @@ -638,7 +638,7 @@ even further by means of a small helper class:: class Callback(ExitStack): def __init__(self, callback, /, *args, **kwds): - super(Callback, self).__init__() + super().__init__() self.callback(callback, *args, **kwds) def cancel(self): diff --git a/Doc/library/multiprocessing.rst b/Doc/library/multiprocessing.rst index 352f48f513df99..def27bf07a03e4 100644 --- a/Doc/library/multiprocessing.rst +++ b/Doc/library/multiprocessing.rst @@ -1926,7 +1926,7 @@ client to access it remotely:: >>> class Worker(Process): ... def __init__(self, q): ... self.q = q - ... super(Worker, self).__init__() + ... super().__init__() ... def run(self): ... self.q.put('local hello') ... 
diff --git a/Doc/library/unittest.mock-examples.rst b/Doc/library/unittest.mock-examples.rst index e650bb1e23e03e..24a18c68484686 100644 --- a/Doc/library/unittest.mock-examples.rst +++ b/Doc/library/unittest.mock-examples.rst @@ -893,7 +893,7 @@ Here's an example implementation: ... def __call__(self, /, *args, **kwargs): ... args = deepcopy(args) ... kwargs = deepcopy(kwargs) - ... return super(CopyingMock, self).__call__(*args, **kwargs) + ... return super().__call__(*args, **kwargs) ... >>> c = CopyingMock(return_value=None) >>> arg = set() diff --git a/Doc/library/weakref.rst b/Doc/library/weakref.rst index d3c3a070f38af0..b88543e4453723 100644 --- a/Doc/library/weakref.rst +++ b/Doc/library/weakref.rst @@ -382,7 +382,7 @@ the referent is accessed:: class ExtendedRef(weakref.ref): def __init__(self, ob, callback=None, /, **annotations): - super(ExtendedRef, self).__init__(ob, callback) + super().__init__(ob, callback) self.__counter = 0 for k, v in annotations.items(): setattr(self, k, v) @@ -391,7 +391,7 @@ the referent is accessed:: """Return a pair containing the referent and the number of times the reference has been called. """ - ob = super(ExtendedRef, self).__call__() + ob = super().__call__() if ob is not None: self.__counter += 1 ob = (ob, self.__counter)
(cherry picked from commit 52cd6d5e1b2bece0d8efb58b1af41071c914ebe6) Co-authored-by: Andre Delfino <adelfino@gmail.com>
https://api.github.com/repos/python/cpython/pulls/25638
2021-04-26T22:14:11Z
2021-04-26T22:16:20Z
2021-04-26T22:16:20Z
2021-04-26T22:16:24Z
1,494
python/cpython
4,461
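The doc change above is purely cosmetic in Python 3: inside a method, `super(Cls, self)` and the zero-argument `super()` resolve to the same bound proxy, because the compiler supplies `__class__` and the first positional argument implicitly. A quick self-contained demonstration:

```python
class Base:
    def greet(self):
        return "hello"


class TwoArg(Base):
    def greet(self):
        # Explicit pre-Python-3 spelling: still valid, just verbose.
        return super(TwoArg, self).greet() + " via two-arg super"


class ZeroArg(Base):
    def greet(self):
        # Zero-argument form: the compiler injects __class__ and self.
        return super().greet() + " via zero-arg super"
```

Both subclasses reach `Base.greet` the same way; only the spelling differs, which is why the docs can switch wholesale without changing behavior.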
FRENCH FIXES
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md index 397fb8f09..fe88b7893 100644 --- a/CONTRIBUTORS.md +++ b/CONTRIBUTORS.md @@ -68,4 +68,4 @@ The following people have contributed to the development of Rich: - [Ke Sun](https://github.com/ksun212) - [Qiming Xu](https://github.com/xqm32) - [James Addison](https://github.com/jayaddison) - +- [Pierro](https://github.com/xpierroz) \ No newline at end of file diff --git a/README.fr.md b/README.fr.md index db205a85a..52db2893e 100644 --- a/README.fr.md +++ b/README.fr.md @@ -26,7 +26,7 @@ Rich est une bibliothèque Python pour le _rich_ texte et la mise en forme dans le terminal. -L'[API Rich](https://rich.readthedocs.io/en/latest/) permet d'ajouter facilement de la couleur et du style sur la sortie du terminal. Rich peut également rendre de jolis tableaux, des barres de progression, du markdown, du code source avec de la coloration syntaxique, des traçeurs d'erreurs et bien d'autres choses encore, et ce dès le départ. +L'[API Rich](https://rich.readthedocs.io/en/latest/) permet d'ajouter facilement de la couleur et du style sur le texte du terminal. Rich peut également rendre de jolis tableaux, des barres de progression, du markdown, du code source avec de la coloration syntaxique, des messages d'erreurs et bien d'autres choses encore, et ce dès le départ. ![Features](https://github.com/textualize/rich/raw/master/imgs/features.png) @@ -68,7 +68,7 @@ print("Hello, [bold magenta]World[/bold magenta]!", ":vampire:", locals()) ## Rich REPL -Rich peut être installé dans le REPL de Python, de sorte que toutes les structures de données seront joliment affichées et mises en évidence. +Rich peut être installé dans le REPL de Python, de sorte que toutes les structures de données soient joliment affichées et mises en évidence. 
```python >>> from rich import pretty @@ -79,7 +79,7 @@ Rich peut être installé dans le REPL de Python, de sorte que toutes les struct ## Utilisation de Console -Pour mieux contrôler le contenu rich du terminal, importez et construisez un objet [Console](https://rich.readthedocs.io/en/latest/reference/console.html#rich.console.Console) +Pour mieux contrôler le contenu rich du terminal, importez et construisez une classe [Console](https://rich.readthedocs.io/en/latest/reference/console.html#rich.console.Console) ```python from rich.console import Console @@ -87,7 +87,7 @@ from rich.console import Console console = Console() ``` -L'objet Console possède une méthode `print` dont l'interface est intentionnellement similaire à celle de la fonction `print` native. Voici un exemple d'utilisation : +La classe Console possède une méthode `print` dont l'interface est intentionnellement similaire à celle de la fonction `print` native. Voici un exemple d'utilisation : ```python console.print("Hello", "World!") @@ -95,13 +95,13 @@ console.print("Hello", "World!") Comme vous pouvez vous y attendre, cela va afficher "Hello World !" sur le terminal. Notez que, contrairement à la fonction d'affichage intégrée, Rich mettra votre texte en forme pour qu'il tienne dans la largeur du terminal. -Il y a plusieurs façons d'ajouter de la couleur et du style à votre sortie. Vous pouvez définir un style pour l'ensemble de la sortie en ajoutant un argument de mot-clé style. Voici un exemple : +Il y a plusieurs façons d'ajouter de la couleur et du style à votre sortie de texte. Vous pouvez définir un style pour l'ensemble de la sortie de texte en ajoutant un argument de mot-clé style. 
Voici un exemple : ```python console.print("Hello", "World!", style="bold red") ``` -La sortie sera quelque chose comme ce qui suit : +La sortie de texte sera quelque chose comme ce qui suit : ![Hello World](https://github.com/textualize/rich/raw/master/imgs/hello_world.png) @@ -174,7 +174,7 @@ La méthode log peut être utilisée pour la journalisation vers le terminal pou <details> <summary>Journalisation</summary> -Vous pouvez également utiliser la classe intégrée [Handler](https://rich.readthedocs.io/en/latest/logging.html) pour formater et coloriser les sorties du module de journalisation de Python. Voici un exemple de sortie : +Vous pouvez également utiliser la classe intégrée [Handler](https://rich.readthedocs.io/en/latest/logging.html) pour formater et coloriser les textes de sortie du module de journalisation de Python. Voici un exemple de texte de sortie : ![Logging](https://github.com/textualize/rich/raw/master/imgs/logging.png) </details> @@ -195,7 +195,7 @@ Veuillez utiliser cette fonction à bon escient. <details> <summary>Tableaux</summary> -Rich peut rendre des [tableaux](https://rich.readthedocs.io/en/latest/tables.html) flexibles avec des caractères de boîte unicode. Il existe une grande variété d'options de formatage pour les bordures, les styles, l'alignement des cellules, etc. +Rich peut rendre des [tableaux](https://rich.readthedocs.io/en/latest/tables.html) flexibles avec des caractères unicodes. Il existe une grande variété d'options de formatage pour les bordures, les styles, l'alignement des cellules, etc. ![table movie](https://github.com/textualize/rich/raw/master/imgs/table_movie.gif) @@ -237,9 +237,9 @@ Cela produit le résultat suivant : ![table](https://github.com/textualize/rich/raw/master/imgs/table.png) -Notez que les balises de la console sont rendues de la même manière que `print()` et `log()`. En fait, tout ce qui peut être rendu par Rich peut être inclus dans les en-têtes / lignes (même d'autres tables). 
+Notez que les balises de la console sont rendues de la même manière que `print()` et `log()`. De fait, tout ce qui peut être rendu par Rich peut être inclus dans les en-têtes / lignes (même d'autres tables). -La classe `Table` est suffisamment intelligente pour redimensionner les colonnes en fonction de la largeur disponible du terminal, en enveloppant le texte si nécessaire. Voici le même exemple, avec un terminal plus petit que le tableau ci-dessus : +La classe `Table` est suffisamment intelligente pour redimensionner les colonnes en fonction de la largeur disponible du terminal, en enveloppant et en réduisant le texte si nécessaire. Voici le même exemple, avec un terminal plus petit que le tableau ci-dessus : ![table2](https://github.com/textualize/rich/raw/master/imgs/table2.png) </details> @@ -247,9 +247,9 @@ La classe `Table` est suffisamment intelligente pour redimensionner les colonnes <details> <summary>Barres de progression</summary> -Rich peut afficher plusieurs [barres de progression](https://rich.readthedocs.io/en/latest/progress.html) sans scintillement pour suivre les tâches de longue haleine. +Rich peut afficher plusieurs [barres de progression](https://rich.readthedocs.io/en/latest/progress.html) sans scintillement pour suivre les tâches de longue périodes. -Pour une utilisation basique, bouclez sur n'importe quelle séquence dans la fonction `track` et itérez sur le résultat. Voici un exemple : +Pour une utilisation basique, créez une boucle sur n'importe quelle séquence dans la fonction `track` et itérez sur le résultat. 
Voici un exemple : ```python from rich.progress import track @@ -266,7 +266,7 @@ Les colonnes peuvent être configurées pour afficher tous les détails que vous ![progress](https://github.com/textualize/rich/raw/master/imgs/downloader.gif) -Pour l'essayer vous-même, voyez [examples/downloader.py](https://github.com/textualize/rich/blob/master/examples/downloader.py) qui peut télécharger plusieurs URL simultanément tout en affichant la progression. +Pour l'essayer vous-même, testez [examples/downloader.py](https://github.com/textualize/rich/blob/master/examples/downloader.py) qui peut télécharger plusieurs URL simultanément tout en affichant la progression au fil du temps. </details> @@ -293,7 +293,7 @@ Cela génère la sortie suivante dans le terminal. ![status](https://github.com/textualize/rich/raw/master/imgs/status.gif) -Les animations des toupies ont été empruntées à [cli-spinners](https://www.npmjs.com/package/cli-spinners). Vous pouvez sélectionner un spinner en spécifiant le paramètre `spinner`. Exécutez la commande suivante pour voir les valeurs disponibles : +Les animations des characteres d'animations ont été empruntées à [cli-spinners](https://www.npmjs.com/package/cli-spinners). Vous pouvez en sélectionner un en spécifiant le paramètre `spinner`. Exécutez la commande suivante pour voir les valeurs disponibles : ``` python -m rich.spinner @@ -326,7 +326,7 @@ Voir l'exemple [tree.py](https://github.com/textualize/rich/blob/master/examples <details> <summary>Colonnes</summary> -Rich peut rendre le contenu en [colonnes](https://rich.readthedocs.io/en/latest/columns.html) avec une largeur égale ou optimale. Voici un clone très basique de la commande `ls` (MacOS / Linux) qui affiche une liste de répertoires en colonnes : +Rich peut rendre du contenu en [colonnes](https://rich.readthedocs.io/en/latest/columns.html) avec une largeur égale ou optimale. 
Voici un clone très basique de la commande `ls` (MacOS / Linux) qui affiche une liste de répertoires en colonnes : ```python import os @@ -365,7 +365,7 @@ console.print(markdown) Cela produira un résultat semblable à ce qui suit : ![markdown](https://github.com/textualize/rich/raw/master/imgs/markdown.png) - + </details> <details> @@ -405,7 +405,7 @@ Cela produira le résultat suivant : <details> <summary>Tracebacks</summary> -Rich peut rendre des [traçages d'erreurs](https://rich.readthedocs.io/en/latest/traceback.html) plus faciles à lire et montrent plus de code que les traçages d'erreurs standard de Python. Vous pouvez définir Rich comme le gestionnaire d'erreurs par défaut afin que toutes les exceptions non capturées soient rendues par Rich. +Rich peut rendre des [traçages d'erreurs](https://rich.readthedocs.io/en/latest/traceback.html) plus faciles à lire et qui montrent plus de code que les traçages d'erreurs standard de Python. Vous pouvez définir Rich comme le gestionnaire d'erreurs par défaut afin que toutes les exceptions/erreurs non capturées soient rendues par Rich. Voici à quoi cela ressemble sous OSX (similaire sous Linux) : @@ -419,7 +419,7 @@ Tous les éléments de rendu utilisent le [Console Protocol](https://rich.readth Disponible dans le cadre de l'abonnement Tidelift. -Les mainteneurs de Rich et de milliers d'autres paquets collaborent avec Tidelift pour fournir un support et une maintenance commerciale pour les paquets open source que vous utilisez pour construire vos applications. Gagnez du temps, réduisez les risques et améliorez la qualité du code, tout en payant les mainteneurs des paquets que vous utilisez. 
[En savoir plus](https://tidelift.com/subscription/pkg/pypi-rich?utm_source=pypi-rich&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) +Les mainteneurs de Rich et de milliers d'autres paquets collaborent avec Tidelift pour fournir un support et une maintenance commerciale pour les paquets open source que vous utilisez pour construire vos applications. Gagnez du temps, réduisez les risques et améliorez votre qualité de code, tout en payant les mainteneurs des paquets que vous utilisez. [En savoir plus](https://tidelift.com/subscription/pkg/pypi-rich?utm_source=pypi-rich&utm_medium=referral&utm_campaign=enterprise&utm_term=repo) # Projets utilisant Rich
## Type of changes - [ ] Bug fix - [ ] New feature - [X] Documentation / docstrings - [ ] Tests - [ ] Other ## Checklist - [X ] I've run the latest [black](https://github.com/psf/black) with default args on new code. - [X ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate. - [X ] I've added tests for new code. - [X ] I accept that @willmcgugan may be pedantic in the code review. ## Description I've edited the French translations which had a lot of errors. Some of them were important to understand properly (such as line 71, a bad usage of the verb être/be).
https://api.github.com/repos/Textualize/rich/pulls/2887
2023-03-21T09:26:40Z
2023-03-21T09:31:07Z
2023-03-21T09:31:07Z
2023-03-21T09:31:08Z
2,973
Textualize/rich
48,042
Error caused by dict types being unordered before Python 3.6
diff --git a/ppocr/modeling/architectures/rec_model.py b/ppocr/modeling/architectures/rec_model.py index d0441c05da..9f8c779cf2 100755 --- a/ppocr/modeling/architectures/rec_model.py +++ b/ppocr/modeling/architectures/rec_model.py @@ -16,6 +16,8 @@ from __future__ import division from __future__ import print_function +from collections import OrderedDict + from paddle import fluid from ppocr.utils.utility import create_module @@ -215,16 +217,15 @@ def __call__(self, mode): label = labels['label'] if self.loss_type == 'srn': total_loss, img_loss, word_loss = self.loss(predicts, labels) - outputs = { - 'total_loss': total_loss, - 'img_loss': img_loss, - 'word_loss': word_loss, - 'decoded_out': decoded_out, - 'label': label - } + outputs = OrderedDict([('total_loss', total_loss), + ('img_loss', img_loss), + ('word_loss', word_loss), + ('decoded_out', decoded_out), + ('label', label)]) else: - outputs = {'total_loss':loss, 'decoded_out':\ - decoded_out, 'label':label} + outputs = OrderedDict([('total_loss', loss), + ('decoded_out', decoded_out), + ('label', label)]) return loader, outputs # export_model elif mode == "export": @@ -233,16 +234,15 @@ def __call__(self, mode): predict = fluid.layers.softmax(predict) if self.loss_type == "srn": return [ - image, labels, { - 'decoded_out': decoded_out, - 'predicts': predict - } - ] + image, labels, OrderedDict([('decoded_out', decoded_out), + ('predicts', predict)])] - return [image, {'decoded_out': decoded_out, 'predicts': predict}] + return [image, OrderedDict([('decoded_out', decoded_out), + ('predicts', predict)])] # eval or test else: predict = predicts['predict'] if self.loss_type == "ctc": predict = fluid.layers.softmax(predict) - return loader, {'decoded_out': decoded_out, 'predicts': predict} + return loader, OrderedDict([('decoded_out', decoded_out), + ('predicts', predict)]) \ No newline at end of file
Environment: the paddlepaddle/paddle:latest-gpu-cuda10.1-cudnn7 image from Docker Hub (note: this image uses Python 3.5). Problem: the dict class before Python 3.6 is unordered, so the fetch_name_list built in the build function of tools/program.py is unordered. Symptom: during evaluation, the preds fetched may be the softmax distribution rather than the decoded result, so the evaluation accuracy is computed incorrectly (sometimes normal, sometimes empty or 0), which affects the training, evaluation, and inference scripts. I only used the text recognition code, so other parts may have related problems on Python versions below 3.6 as well; the developers should review this. Suggestion: use OrderedDict instead of dict.
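The ordering issue described above can be sketched as follows (the key names mirror the diff; the values are illustrative placeholders, not real tensors):

```python
from collections import OrderedDict

# On Python < 3.6 a plain dict does not preserve insertion order, so a
# fetch list built from outputs.keys() may not line up with the values
# the executor returns. OrderedDict pins the insertion order explicitly.
outputs = OrderedDict([
    ("total_loss", 0.5),       # illustrative value
    ("decoded_out", [1, 2]),   # illustrative value
    ("label", [1, 2]),         # illustrative value
])

fetch_name_list = list(outputs.keys())

# With OrderedDict this order is guaranteed on every Python version,
# so name i always corresponds to fetched value i.
assert fetch_name_list == ["total_loss", "decoded_out", "label"]
```

From Python 3.7 onward plain dicts preserve insertion order by language guarantee, which is why the bug only appears on older interpreters such as the Python 3.5 image mentioned above.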
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/982
2020-10-21T09:15:31Z
2020-10-21T11:52:22Z
2020-10-21T11:52:22Z
2020-11-13T06:46:17Z
563
PaddlePaddle/PaddleOCR
42,760
Added wecantrack.com API
diff --git a/README.md b/README.md index 0c0dfd2b2b..e43762e83e 100644 --- a/README.md +++ b/README.md @@ -806,6 +806,7 @@ API | Description | Auth | HTTPS | CORS | | [Postmon](http://postmon.com.br) | An API to query Brazilian ZIP codes and orders easily, quickly and free | No | No | Unknown | | [Sweden](https://developer.postnord.com/docs2) | Provides information about parcels in transport | `apiKey` | No | Unknown | | [UPS](https://www.ups.com/upsdeveloperkit) | Shipment and Address information | `apiKey` | Yes | Unknown | +| [WeCanTrack](https://docs.wecantrack.com) | Automatically place subids in affiliate links to attribute affiliate conversions to click data | `apiKey` | Yes | Yes | | [WhatPulse](https://whatpulse.org/pages/webapi/) | Small application that measures your keyboard/mouse usage | No | Yes | Unknown | **[⬆ Back to Index](#index)**
Thank you for taking the time to work on a Pull Request for this project! To ensure your PR is dealt with swiftly please check the following: - [X] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md) - [X] Your additions are ordered alphabetically - [X] Your submission has a useful description - [X] The description does not end with punctuation - [X] Each table column should be padded with one space on either side - [X] You have searched the repository for any relevant issues or pull requests - [X] Any category you are creating has the minimum requirement of 3 items - [x] All changes have been [squashed][squash-link] into a single commit [squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
https://api.github.com/repos/public-apis/public-apis/pulls/1300
2020-06-25T07:25:09Z
2021-04-26T23:59:28Z
2021-04-26T23:59:27Z
2021-04-26T23:59:28Z
246
public-apis/public-apis
35,872
[deflakey] Deflakey test_actor_advanced.py
diff --git a/python/ray/tests/test_actor_advanced.py b/python/ray/tests/test_actor_advanced.py index 5a7e30aae196a..765e2fdeda434 100644 --- a/python/ray/tests/test_actor_advanced.py +++ b/python/ray/tests/test_actor_advanced.py @@ -1060,13 +1060,9 @@ def graceful_exit(): state_after_ending = ray._private.state.actors()[actor_id] assert state_after_starting["StartTime"] == state_after_ending["StartTime"] - start_time = state_after_ending["StartTime"] end_time = state_after_ending["EndTime"] - lapsed = end_time - start_time - assert end_time > start_time > 0, f"Start: {start_time}, End: {end_time}" - assert 500 < lapsed < 1500, f"Start: {start_time}, End: {end_time}" def not_graceful_exit(): actor = Foo.remote() @@ -1082,10 +1078,7 @@ def not_graceful_exit(): start_time = state_after_ending["StartTime"] end_time = state_after_ending["EndTime"] - lapsed = end_time - start_time - assert end_time > start_time > 0, f"Start: {start_time}, End: {end_time}" - assert 500 < lapsed < 1500, f"Start: {start_time}, End: {end_time}" def restarted(): actor = Foo.options(max_restarts=1, max_task_retries=-1).remote() @@ -1103,10 +1096,7 @@ def restarted(): start_time = state_after_ending["StartTime"] end_time = state_after_ending["EndTime"] - lapsed = end_time - start_time - assert end_time > start_time > 0, f"Start: {start_time}, End: {end_time}" - assert 1500 < lapsed < 2500, f"Start: {start_time}, End: {end_time}" graceful_exit() not_graceful_exit()
Signed-off-by: Yi Cheng <74173148+iycheng@users.noreply.github.com> <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> ## Why are these changes needed? The test failure is caused by the timing issues. This PR removed the timing check. The regression should be caught by other dashboard. <!-- Please give a short summary of the change and the problem this solves. --> ## Related issue number <!-- For example: "Closes #1234" --> ## Checks - [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR. - [ ] I've run `scripts/format.sh` to lint the changes in this PR. - [ ] I've included any doc changes needed for https://docs.ray.io/en/master/. - [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file. - [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/ - Testing Strategy - [ ] Unit tests - [ ] Release tests - [ ] This PR is not tested :(
https://api.github.com/repos/ray-project/ray/pulls/40636
2023-10-24T18:08:01Z
2023-10-24T20:30:33Z
2023-10-24T20:30:33Z
2023-10-24T20:30:33Z
465
ray-project/ray
19,322
fix ACM DomainValidationOptions to support waiters
diff --git a/localstack/services/acm/provider.py b/localstack/services/acm/provider.py index ecc5d75694bd1..2f3c45c637f18 100644 --- a/localstack/services/acm/provider.py +++ b/localstack/services/acm/provider.py @@ -20,9 +20,7 @@ def describe(describe_orig, self): "ExtendedKeyUsages": [], "Options": {"CertificateTransparencyLoggingPreference": "ENABLED"}, } - addenda["DomainValidationOptions"] = options = ( - getattr(self, "domain_validation_options", None) or [] - ) + addenda["DomainValidationOptions"] = options = cert.get("DomainValidationOptions") if not options: options = addenda["DomainValidationOptions"] = [ {"ValidationMethod": cert.get("ValidationMethod")} diff --git a/tests/integration/fixtures.py b/tests/integration/fixtures.py index 726cd3e0f2f3b..e79b4416bbdda 100644 --- a/tests/integration/fixtures.py +++ b/tests/integration/fixtures.py @@ -710,6 +710,29 @@ def _create_parameter(**kwargs): secretsmanager_client.delete_secret(SecretId=item) +@pytest.fixture +def acm_request_certificate(acm_client): + certificate_arns = [] + + def factory(**kwargs) -> str: + if "DomainName" not in kwargs: + kwargs["DomainName"] = f"test-domain-{short_uid()}.localhost.localstack.cloud" + + response = acm_client.request_certificate(**kwargs) + created_certificate_arn = response["CertificateArn"] + certificate_arns.append(created_certificate_arn) + return created_certificate_arn + + yield factory + + # cleanup + for certificate_arn in certificate_arns: + try: + acm_client.delete_certificate(CertificateArn=certificate_arn) + except Exception as e: + LOG.debug("error cleaning up certificate %s: %s", certificate_arn, e) + + only_localstack = pytest.mark.skipif( os.environ.get("TEST_TARGET") == "AWS_CLOUD", reason="test only applicable if run against localstack", diff --git a/tests/integration/test_acm.py b/tests/integration/test_acm.py index 7d66f1fea6ee4..3e4f86df170ba 100644 --- a/tests/integration/test_acm.py +++ b/tests/integration/test_acm.py @@ -1,10 +1,8 @@ -import 
unittest - +import pytest from moto.ec2 import utils as ec2_utils from localstack.constants import TEST_AWS_ACCOUNT_ID from localstack.utils.aws import aws_stack -from localstack.utils.common import short_uid DIGICERT_ROOT_CERT = """ -----BEGIN CERTIFICATE----- @@ -25,37 +23,41 @@ """ -class TestACM(unittest.TestCase): - def test_import_certificate(self): - acm = aws_stack.create_external_boto_client("acm") - - certs_before = acm.list_certificates().get("CertificateSummaryList", []) +class TestACM: + def test_import_certificate(self, acm_client): + certs_before = acm_client.list_certificates().get("CertificateSummaryList", []) - with self.assertRaises(Exception) as ctx: - acm.import_certificate(Certificate=b"CERT123", PrivateKey=b"KEY123") - self.assertIn("PEM", str(ctx.exception)) + with pytest.raises(Exception) as exec_info: + acm_client.import_certificate(Certificate=b"CERT123", PrivateKey=b"KEY123") + assert "PEM" in str(exec_info) private_key = ec2_utils.random_key_pair()["material"] - result = acm.import_certificate(Certificate=DIGICERT_ROOT_CERT, PrivateKey=private_key) - self.assertIn("CertificateArn", result) - - expected_arn = "arn:aws:acm:{0}:{1}:certificate".format( - aws_stack.get_region(), TEST_AWS_ACCOUNT_ID - ) - acm_cert_arn = result["CertificateArn"].split("/")[0] - self.assertEqual(expected_arn, acm_cert_arn) - - certs_after = acm.list_certificates().get("CertificateSummaryList", []) - self.assertEqual(len(certs_before) + 1, len(certs_after)) - - def test_domain_validation(self): - acm = aws_stack.create_external_boto_client("acm") - - domain_name = "example-%s.com" % short_uid() - options = [{"DomainName": domain_name, "ValidationDomain": domain_name}] - result = acm.request_certificate(DomainName=domain_name, DomainValidationOptions=options) - self.assertIn("CertificateArn", result) - - result = acm.describe_certificate(CertificateArn=result["CertificateArn"]) + result = None + try: + result = acm_client.import_certificate( + 
Certificate=DIGICERT_ROOT_CERT, PrivateKey=private_key + ) + assert "CertificateArn" in result + + expected_arn = "arn:aws:acm:{0}:{1}:certificate".format( + aws_stack.get_region(), TEST_AWS_ACCOUNT_ID + ) + acm_cert_arn = result["CertificateArn"].split("/")[0] + assert expected_arn == acm_cert_arn + + certs_after = acm_client.list_certificates().get("CertificateSummaryList", []) + assert len(certs_before) + 1 == len(certs_after) + finally: + if result is not None: + acm_client.delete_certificate(CertificateArn=result["CertificateArn"]) + + def test_domain_validation(self, acm_client, acm_request_certificate): + certificate_arn = acm_request_certificate() + result = acm_client.describe_certificate(CertificateArn=certificate_arn) options = result["Certificate"]["DomainValidationOptions"] - self.assertEqual(1, len(options)) + assert len(options) == 1 + + def test_boto_wait_for_certificate_validation(self, acm_client, acm_request_certificate): + certificate_arn = acm_request_certificate() + waiter = acm_client.get_waiter("certificate_validated") + waiter.wait(CertificateArn=certificate_arn, WaiterConfig={"Delay": 0, "MaxAttempts": 1})
Fixes the `DomainValidationOptions` for ACM's `DescribeCertificate` operation. The previous implementation did not use the existing array of the cert in the CertBundle, but tried to resolve a non-existing attribute, which in the end wasn't set (since the `addenda` would only be used for non-existing keys in the result). This fix is necessary to fix the ACM waiter on `certificate-validation`, which can be executed with awscli / awscli-local: ``` $ awslocal acm request-certificate --domain-name test-domain.localhost.localstack.cloud { "CertificateArn": "arn:aws:acm:us-east-1:000000000000:certificate/1547a44e-8ff9-4ee3-92f3-7e0f2bcf9b74" } $ awslocal acm wait certificate-validated --certificate-arn "arn:aws:acm:us-east-1:000000000000:certificate/1547a44e-8ff9-4ee3-92f3-7e0f2bcf9b74" ``` Without this fix, the latter call would not be successful (because the `DomainValidationOptions` of `DescribeCertificate` would never state that the `ValidationStatus` is `SUCCESSFUL`). The same mechanism is used when creating a `DnsValidatedCertificate` with CDK.
https://api.github.com/repos/localstack/localstack/pulls/5713
2022-03-21T15:56:35Z
2022-03-22T07:59:36Z
2022-03-22T07:59:36Z
2022-03-22T07:59:40Z
1,405
localstack/localstack
29,365
API/CLN: consolidate truncate into NDFrame (panel had a separate method)
diff --git a/doc/source/release.rst b/doc/source/release.rst index af9611bb98fae..d331dc7e164fc 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -311,6 +311,7 @@ API Changes - Provide __dir__ method (and local context) for tab completion / remove ipython completers code (:issue:`4501`) - Support non-unique axes in a Panel via indexing operations (:issue:`4960`) + - ``.truncate`` will raise a ``ValueError`` if invalid before and afters dates are given (:issue:`5242`) Internal Refactoring ~~~~~~~~~~~~~~~~~~~~ diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 16f4118d5d1df..266253e05ed61 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2813,7 +2813,7 @@ def tshift(self, periods=1, freq=None, axis=0, **kwds): return self._constructor(new_data).__finalize__(self) - def truncate(self, before=None, after=None, copy=True): + def truncate(self, before=None, after=None, axis=None, copy=True): """Truncates a sorted NDFrame before and/or after some particular dates. 
@@ -2823,28 +2823,38 @@ def truncate(self, before=None, after=None, copy=True): Truncate before date after : date Truncate after date + axis : the truncation axis, defaults to the stat axis + copy : boolean, default is True, + return a copy of the truncated section Returns ------- truncated : type of caller """ + if axis is None: + axis = self._stat_axis_number + axis = self._get_axis_number(axis) + ax = self._get_axis(axis) + # if we have a date index, convert to dates, otherwise # treat like a slice - if self.index.is_all_dates: + if ax.is_all_dates: from pandas.tseries.tools import to_datetime before = to_datetime(before) after = to_datetime(after) if before is not None and after is not None: if before > after: - raise AssertionError('Truncate: %s must be after %s' % - (after, before)) + raise ValueError('Truncate: %s must be after %s' % + (after, before)) - result = self.ix[before:after] + slicer = [ slice(None, None) ] * self._AXIS_LEN + slicer[axis] = slice(before,after) + result = self.ix[tuple(slicer)] - if isinstance(self.index, MultiIndex): - result.index = self.index.truncate(before, after) + if isinstance(ax, MultiIndex): + setattr(result,self._get_axis_name(axis),ax.truncate(before, after)) if copy: result = result.copy() diff --git a/pandas/core/panel.py b/pandas/core/panel.py index 87e9121b2dffc..a86c186e26b53 100644 --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -998,30 +998,6 @@ def shift(self, lags, freq=None, axis='major'): def tshift(self, periods=1, freq=None, axis='major', **kwds): return super(Panel, self).tshift(periods, freq, axis, **kwds) - def truncate(self, before=None, after=None, axis='major'): - """Function truncates a sorted Panel before and/or after some - particular values on the requested axis - - Parameters - ---------- - before : date - Left boundary - after : date - Right boundary - axis : {'major', 'minor', 'items'} - - Returns - ------- - Panel - """ - axis = self._get_axis_name(axis) - index = self._get_axis(axis) 
- - beg_slice, end_slice = index.slice_locs(before, after) - new_index = index[beg_slice:end_slice] - - return self.reindex(**{axis: new_index}) - def join(self, other, how='left', lsuffix='', rsuffix=''): """ Join items with other Panel either on major and minor axes column diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py index 74bca7de89bcc..86dcf97c8bd3d 100644 --- a/pandas/sparse/panel.py +++ b/pandas/sparse/panel.py @@ -187,6 +187,15 @@ def _ixs(self, i, axis=0): return self.xs(key, axis=axis) + def _slice(self, slobj, axis=0, raise_on_error=False, typ=None): + """ + for compat as we don't support Block Manager here + """ + axis = self._get_axis_name(axis) + index = self._get_axis(axis) + + return self.reindex(**{axis: index[slobj]}) + def _get_item_cache(self, key): return self._frames[key] diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py index fe0f9244c31a3..d74ea8a5d2ffc 100644 --- a/pandas/tests/test_frame.py +++ b/pandas/tests/test_frame.py @@ -7731,6 +7731,10 @@ def test_truncate(self): truncated = ts.truncate(after=end_missing) assert_frame_equal(truncated, expected) + self.assertRaises(ValueError, ts.truncate, + before=ts.index[-1] - 1, + after=ts.index[0] +1) + def test_truncate_copy(self): index = self.tsframe.index truncated = self.tsframe.truncate(index[5], index[10]) diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py index 3715de6dffeb9..645533d5629d2 100644 --- a/pandas/tests/test_series.py +++ b/pandas/tests/test_series.py @@ -3899,7 +3899,7 @@ def test_truncate(self): truncated = ts.truncate(before=self.ts.index[-1] + offset) assert(len(truncated) == 0) - self.assertRaises(Exception, ts.truncate, + self.assertRaises(ValueError, ts.truncate, before=self.ts.index[-1] + offset, after=self.ts.index[0] - offset)
API: truncate error if before > after is now a ValueError (rather than AssertionError) (related #5242)
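The behavioral change above can be illustrated with a standalone sketch of the bounds check (this mirrors the validation in the diff, not the actual pandas implementation):

```python
def truncate_bounds(before, after):
    # After this change, invalid bounds raise ValueError rather than
    # AssertionError, matching the message format used in the diff.
    if before is not None and after is not None and before > after:
        raise ValueError('Truncate: %s must be after %s' % (after, before))
    return slice(before, after)

# Invalid bounds (before > after) now raise ValueError:
try:
    truncate_bounds(10, 5)
    caught = None
except ValueError as e:
    caught = str(e)

assert caught == "Truncate: 5 must be after 10"

# Valid bounds produce a slice over the requested axis:
assert truncate_bounds(2, 8) == slice(2, 8)
```

AssertionError is conventionally reserved for internal invariants, so surfacing bad user input as ValueError is the more idiomatic choice here.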
https://api.github.com/repos/pandas-dev/pandas/pulls/5244
2013-10-16T21:36:53Z
2013-10-17T11:29:15Z
2013-10-17T11:29:15Z
2014-07-16T08:35:37Z
1,515
pandas-dev/pandas
45,691
Switch from pycodestyle to flake8 for code style checks
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 4e0dabc8bc..ff937094df 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -11,7 +11,7 @@ jobs: python-version: 3.9 - run: python -m pip install --upgrade pip setuptools wheel - run: make install - - run: make pycodestyle + - run: make codestyle - run: make test-cover - run: make codecov-upload env: diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 700ad69fa5..7a46c19941 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -123,7 +123,7 @@ Making Changes -------------- Please make sure your changes conform to `Style Guide for Python Code`_ (PEP8) -and that ``make pycodestyle`` passes. +and that ``make codestyle`` passes. Testing & CI @@ -152,7 +152,7 @@ HTTPie uses the `pytest`_ runner. make test-cover # Test PEP8 compliance - make pycodestyle + make codestyle # Run extended tests — for code as well as .rst files syntax, packaging, etc. make test-all diff --git a/Makefile b/Makefile index 07542e4990..6bc1816b3f 100644 --- a/Makefile +++ b/Makefile @@ -88,7 +88,7 @@ test-cover: test # test-all is meant to test everything — even this Makefile -test-all: clean install test test-dist pycodestyle +test-all: clean install test test-dist codestyle @echo @@ -116,10 +116,15 @@ test-bdist-wheel: clean venv twine-check: twine check dist/* -pycodestyle: - @echo $(H1)Running pycodestyle$(H1END) - @[ -f $(VENV_BIN)/pycodestyle ] || $(VENV_PIP) install pycodestyle - $(VENV_BIN)/pycodestyle httpie/ tests/ extras/ *.py + +# Kept for convenience, "make codestyle" is preferred though +pycodestyle: codestyle + + +codestyle: + @echo $(H1)Running flake8$(H1END) + @[ -f $(VENV_BIN)/flake8 ] || $(VENV_PIP) install --upgrade -r $(REQUIREMENTS) + $(VENV_BIN)/flake8 httpie/ tests/ extras/ *.py @echo diff --git a/requirements-dev.txt b/requirements-dev.txt index cf5f228a31..d5cff049e3 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -1,7 +1,11 @@ +flake8 
+flake8-comprehensions +flake8-deprecated +flake8-mutable +flake8-tuple pytest pytest-cov pytest-httpbin>=0.0.6 docutils wheel -pycodestyle twine diff --git a/setup.cfg b/setup.cfg index 8301c1dbdb..43e4702961 100644 --- a/setup.cfg +++ b/setup.cfg @@ -11,17 +11,6 @@ norecursedirs = tests/fixtures .* addopts = --tb=native --doctest-modules -[pycodestyle] -# <http://pycodestyle.pycqa.org/en/latest/intro.html#configuration> - -exclude = .git,.idea,__pycache__,build,dist,.pytest_cache,*.egg-info - -# <http://pycodestyle.pycqa.org/en/latest/intro.html#error-codes> -# E501 - line too long -# W503 - line break before binary operator -ignore = E501,W503 - - [flake8] # <https://flake8.pycqa.org/en/latest/user/error-codes.html> # E501 - line too long diff --git a/tests/utils/__init__.py b/tests/utils/__init__.py index 6e89d274a0..3713067c90 100644 --- a/tests/utils/__init__.py +++ b/tests/utils/__init__.py @@ -52,7 +52,7 @@ class StdinBytesIO(BytesIO): class MockEnvironment(Environment): """Environment subclass with reasonable defaults for testing.""" colors = 0 # For easier debugging - stdin_isatty = True, + stdin_isatty = True stdout_isatty = True is_windows = False
`flake8` does a little more than just checking PEP8 violations, and I think it is a good balance between `pycodestyle` and other tools like `pylint`. It will catch coding issues as well, but only what matters and without being too noisy for contributors. WDYT @jakubroztocil? I am open to keeping `pycodestyle` or to trying something else. Same for the new `make codestyle`, it can be renamed or just kept as-is.
https://api.github.com/repos/httpie/cli/pulls/1083
2021-06-02T08:12:06Z
2021-06-02T09:06:46Z
2021-06-02T09:06:46Z
2021-06-02T09:25:19Z
1,078
httpie/cli
33,915
GitHub Workflows security hardening
diff --git a/.github/workflows/format.yml b/.github/workflows/format.yml index 02ee95871cb..f5aab7b537b 100644 --- a/.github/workflows/format.yml +++ b/.github/workflows/format.yml @@ -3,8 +3,13 @@ name: Format the code on: workflow_dispatch: +permissions: {} jobs: createPullRequest: + permissions: + contents: write # to create branch (peter-evans/create-pull-request) + pull-requests: write # to create a PR (peter-evans/create-pull-request) + runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index 0195a2a10da..924eb73e2c4 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -4,6 +4,9 @@ on: push: pull_request: +permissions: + contents: read # to fetch code (actions/checkout) + jobs: lint: name: Check the code format
This PR adds explicit [permissions section](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions) to workflows. This is a security best practice because by default workflows run with [extended set of permissions](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token) (except from `on: pull_request` [from external forks](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/)). By specifying any permission explicitly all others are set to none. By using the principle of least privilege the damage a compromised workflow can do (because of an [injection](https://securitylab.github.com/research/github-actions-untrusted-input/) or compromised third party tool or action) is restricted. It is recommended to have [most strict permissions on the top level](https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions) and grant write permissions on [job level](https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs) case by case.
https://api.github.com/repos/keras-team/keras/pulls/17050
2022-09-19T21:31:07Z
2022-09-30T18:58:06Z
2022-09-30T18:58:06Z
2022-09-30T18:58:06Z
270
keras-team/keras
47,701
Allow Google DNS plugin to write multiple TXT record values
diff --git a/certbot-dns-google/certbot_dns_google/dns_google.py b/certbot-dns-google/certbot_dns_google/dns_google.py index cea754c0624..ab8bf20de02 100644 --- a/certbot-dns-google/certbot_dns_google/dns_google.py +++ b/certbot-dns-google/certbot_dns_google/dns_google.py @@ -107,6 +107,15 @@ def add_txt_record(self, domain, record_name, record_content, record_ttl): zone_id = self._find_managed_zone_id(domain) + record_contents = self.get_existing_txt_rrset(zone_id, record_name) + add_records = record_contents[:] + + if "\""+record_content+"\"" in record_contents: + # The process was interrupted previously and validation token exists + return + + add_records.append(record_content) + data = { "kind": "dns#change", "additions": [ @@ -114,12 +123,24 @@ def add_txt_record(self, domain, record_name, record_content, record_ttl): "kind": "dns#resourceRecordSet", "type": "TXT", "name": record_name + ".", - "rrdatas": [record_content, ], + "rrdatas": add_records, "ttl": record_ttl, }, ], } + if record_contents: + # We need to remove old records in the same request + data["deletions"] = [ + { + "kind": "dns#resourceRecordSet", + "type": "TXT", + "name": record_name + ".", + "rrdatas": record_contents, + "ttl": record_ttl, + }, + ] + changes = self.dns.changes() # changes | pylint: disable=no-member try: @@ -154,6 +175,8 @@ def del_txt_record(self, domain, record_name, record_content, record_ttl): logger.warn('Error finding zone. 
Skipping cleanup.') return + record_contents = self.get_existing_txt_rrset(zone_id, record_name) + data = { "kind": "dns#change", "deletions": [ @@ -161,12 +184,26 @@ def del_txt_record(self, domain, record_name, record_content, record_ttl): "kind": "dns#resourceRecordSet", "type": "TXT", "name": record_name + ".", - "rrdatas": [record_content, ], + "rrdatas": record_contents, "ttl": record_ttl, }, ], } + # Remove the record being deleted from the list + readd_contents = [r for r in record_contents if r != "\"" + record_content + "\""] + if readd_contents: + # We need to remove old records in the same request + data["additions"] = [ + { + "kind": "dns#resourceRecordSet", + "type": "TXT", + "name": record_name + ".", + "rrdatas": readd_contents, + "ttl": record_ttl, + }, + ] + changes = self.dns.changes() # changes | pylint: disable=no-member try: @@ -175,6 +212,28 @@ def del_txt_record(self, domain, record_name, record_content, record_ttl): except googleapiclient_errors.Error as e: logger.warn('Encountered error deleting TXT record: %s', e) + def get_existing_txt_rrset(self, zone_id, record_name): + """ + Get existing TXT records from the RRset for the record name. + + :param str zone_id: The ID of the managed zone. + :param str record_name: The record name (typically beginning with '_acme-challenge.'). + + :returns: List of TXT record values + :rtype: `list` of `string` + + """ + rrs_request = self.dns.resourceRecordSets() # pylint: disable=no-member + request = rrs_request.list(managedZone=zone_id, project=self.project_id) + response = request.execute() + # Add dot as the API returns absolute domains + record_name += "." + if response: + for rr in response["rrsets"]: + if rr["name"] == record_name and rr["type"] == "TXT": + return rr["rrdatas"] + return [] + def _find_managed_zone_id(self, domain): """ Find the managed zone for a given domain. 
diff --git a/certbot-dns-google/certbot_dns_google/dns_google_test.py b/certbot-dns-google/certbot_dns_google/dns_google_test.py index 53f84dd6ec1..3291b2c3a6f 100644 --- a/certbot-dns-google/certbot_dns_google/dns_google_test.py +++ b/certbot-dns-google/certbot_dns_google/dns_google_test.py @@ -74,10 +74,15 @@ def _setUp_client_with_mock(self, zone_request_side_effect): mock_mz = mock.MagicMock() mock_mz.list.return_value.execute.side_effect = zone_request_side_effect + mock_rrs = mock.MagicMock() + rrsets = {"rrsets": [{"name": "_acme-challenge.example.org.", "type": "TXT", + "rrdatas": ["\"example-txt-contents\""]}]} + mock_rrs.list.return_value.execute.return_value = rrsets mock_changes = mock.MagicMock() client.dns.managedZones = mock.MagicMock(return_value=mock_mz) client.dns.changes = mock.MagicMock(return_value=mock_changes) + client.dns.resourceRecordSets = mock.MagicMock(return_value=mock_rrs) return client, mock_changes @@ -137,6 +142,30 @@ def test_add_txt_record_and_poll(self, unused_credential_mock): managedZone=self.zone, project=PROJECT_ID) + @mock.patch('oauth2client.service_account.ServiceAccountCredentials.from_json_keyfile_name') + @mock.patch('certbot_dns_google.dns_google.open', + mock.mock_open(read_data='{"project_id": "' + PROJECT_ID + '"}'), create=True) + def test_add_txt_record_delete_old(self, unused_credential_mock): + client, changes = self._setUp_client_with_mock( + [{'managedZones': [{'id': self.zone}]}]) + mock_get_rrs = "certbot_dns_google.dns_google._GoogleClient.get_existing_txt_rrset" + with mock.patch(mock_get_rrs) as mock_rrs: + mock_rrs.return_value = ["sample-txt-contents"] + client.add_txt_record(DOMAIN, self.record_name, self.record_content, self.record_ttl) + self.assertTrue(changes.create.called) + self.assertTrue("sample-txt-contents" in + changes.create.call_args_list[0][1]["body"]["deletions"][0]["rrdatas"]) + + @mock.patch('oauth2client.service_account.ServiceAccountCredentials.from_json_keyfile_name') + 
@mock.patch('certbot_dns_google.dns_google.open', + mock.mock_open(read_data='{"project_id": "' + PROJECT_ID + '"}'), create=True) + def test_add_txt_record_noop(self, unused_credential_mock): + client, changes = self._setUp_client_with_mock( + [{'managedZones': [{'id': self.zone}]}]) + client.add_txt_record(DOMAIN, "_acme-challenge.example.org", + "example-txt-contents", self.record_ttl) + self.assertFalse(changes.create.called) + @mock.patch('oauth2client.service_account.ServiceAccountCredentials.from_json_keyfile_name') @mock.patch('certbot_dns_google.dns_google.open', mock.mock_open(read_data='{"project_id": "' + PROJECT_ID + '"}'), create=True) @@ -172,7 +201,12 @@ def test_add_txt_record_error_during_add(self, unused_credential_mock): def test_del_txt_record(self, unused_credential_mock): client, changes = self._setUp_client_with_mock([{'managedZones': [{'id': self.zone}]}]) - client.del_txt_record(DOMAIN, self.record_name, self.record_content, self.record_ttl) + mock_get_rrs = "certbot_dns_google.dns_google._GoogleClient.get_existing_txt_rrset" + with mock.patch(mock_get_rrs) as mock_rrs: + mock_rrs.return_value = ["\"sample-txt-contents\"", + "\"example-txt-contents\""] + client.del_txt_record(DOMAIN, "_acme-challenge.example.org", + "example-txt-contents", self.record_ttl) expected_body = { "kind": "dns#change", @@ -180,8 +214,17 @@ def test_del_txt_record(self, unused_credential_mock): { "kind": "dns#resourceRecordSet", "type": "TXT", - "name": self.record_name + ".", - "rrdatas": [self.record_content, ], + "name": "_acme-challenge.example.org.", + "rrdatas": ["\"sample-txt-contents\"", "\"example-txt-contents\""], + "ttl": self.record_ttl, + }, + ], + "additions": [ + { + "kind": "dns#resourceRecordSet", + "type": "TXT", + "name": "_acme-challenge.example.org.", + "rrdatas": ["\"sample-txt-contents\"", ], "ttl": self.record_ttl, }, ], @@ -217,6 +260,18 @@ def test_del_txt_record_error_during_delete(self, unused_credential_mock): 
client.del_txt_record(DOMAIN, self.record_name, self.record_content, self.record_ttl) + @mock.patch('oauth2client.service_account.ServiceAccountCredentials.from_json_keyfile_name') + @mock.patch('certbot_dns_google.dns_google.open', + mock.mock_open(read_data='{"project_id": "' + PROJECT_ID + '"}'), create=True) + def test_get_existing(self, unused_credential_mock): + client, unused_changes = self._setUp_client_with_mock( + [{'managedZones': [{'id': self.zone}]}]) + # Record name mocked in setUp + found = client.get_existing_txt_rrset(self.zone, "_acme-challenge.example.org") + self.assertEquals(found, ["\"example-txt-contents\""]) + not_found = client.get_existing_txt_rrset(self.zone, "nonexistent.tld") + self.assertEquals(not_found, []) + def test_get_project_id(self): from certbot_dns_google.dns_google import _GoogleClient
The Google API requires all the records in an RRset to be set as a list in a single call. To achieve this, this PR adds an automatic check for pre-existing records, deletes the old ones, and adds them back to the list together with the newly created one before making the API call. Fixes an issue mentioned in the comments of #5472
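The core idea of the diff — Cloud DNS replaces whole RRsets, so existing TXT values must be deleted and re-added alongside the new one in a single change — can be sketched with a hypothetical helper (`build_change_body` is not part of the plugin; the field names mirror the diff):

```python
def build_change_body(record_name, existing, new_value, ttl=60):
    """Build a dns#change body that appends new_value to a TXT RRset.

    The Google Cloud DNS API takes the full ``rrdatas`` list per RRset,
    so any existing values are deleted and re-added together with the
    new one in a single change request.
    """
    additions = existing + ['"%s"' % new_value]
    body = {
        "kind": "dns#change",
        "additions": [{
            "kind": "dns#resourceRecordSet",
            "type": "TXT",
            "name": record_name + ".",
            "rrdatas": additions,
            "ttl": ttl,
        }],
    }
    if existing:
        # The old RRset must be removed in the same request.
        body["deletions"] = [{
            "kind": "dns#resourceRecordSet",
            "type": "TXT",
            "name": record_name + ".",
            "rrdatas": existing,
            "ttl": ttl,
        }]
    return body


body = build_change_body("_acme-challenge.example.org", ['"old-token"'], "new-token")
```

The real plugin then passes such a body to the googleapiclient `changes().create()` call, as in the diff.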
https://api.github.com/repos/certbot/certbot/pulls/5652
2018-03-02T19:21:37Z
2018-03-05T20:49:02Z
2018-03-05T20:49:02Z
2019-06-05T21:17:16Z
2,377
certbot/certbot
1,256
Fix protocol separator typo.
diff --git a/docs/src/cors/tutorial001.py b/docs/src/cors/tutorial001.py index 4ec8d01881fd6..b0a581d2e953d 100644 --- a/docs/src/cors/tutorial001.py +++ b/docs/src/cors/tutorial001.py @@ -6,8 +6,8 @@ origins = [ "http://localhost.tiangolo.com", "https://localhost.tiangolo.com", - "http:localhost", - "http:localhost:8080", + "http://localhost", + "http://localhost:8080", ] app.add_middleware(
https://api.github.com/repos/tiangolo/fastapi/pulls/647
2019-10-24T02:52:47Z
2019-11-27T19:37:20Z
2019-11-27T19:37:20Z
2019-11-27T19:38:02Z
139
tiangolo/fastapi
22,681
Pin numpy version
diff --git a/requirements.txt b/requirements.txt index 516ce4078..842219b0a 100644 --- a/requirements.txt +++ b/requirements.txt @@ -3,7 +3,8 @@ umap-learn visdom librosa>=0.8.0 matplotlib>=3.3.0 -numpy>=1.14.0 +numpy==1.19.3; platform_system == "Windows" +numpy==1.19.4; platform_system != "Windows" scipy>=1.0.0 tqdm sounddevice
Windows and Linux require different versions of numpy due to an unresolved Windows runtime bug. Reported in #596
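The `; platform_system == ...` suffixes in the diff are PEP 508 environment markers, which pip evaluates per platform at install time. Conceptually they amount to (a sketch, not how pip actually resolves requirements):

```python
import platform

# pip evaluates environment markers such as
#   numpy==1.19.3; platform_system == "Windows"
# at install time; conceptually it is this runtime check:
if platform.system() == "Windows":
    pinned = "numpy==1.19.3"
else:
    pinned = "numpy==1.19.4"
```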
https://api.github.com/repos/CorentinJ/Real-Time-Voice-Cloning/pulls/597
2020-11-18T05:26:02Z
2020-11-18T15:55:15Z
2020-11-18T15:55:15Z
2020-11-18T15:55:20Z
128
CorentinJ/Real-Time-Voice-Cloning
27,375
[CBC Gem] Support 1080p shows
diff --git a/yt_dlp/extractor/cbc.py b/yt_dlp/extractor/cbc.py index 413053499bf..392c7788488 100644 --- a/yt_dlp/extractor/cbc.py +++ b/yt_dlp/extractor/cbc.py @@ -11,11 +11,13 @@ compat_str, ) from ..utils import ( + int_or_none, + join_nonempty, js_to_json, - smuggle_url, - try_get, orderedSet, + smuggle_url, strip_or_none, + try_get, ExtractorError, ) @@ -313,6 +315,37 @@ def _real_initialize(self): return self._claims_token = self._downloader.cache.load(self._NETRC_MACHINE, 'claims_token') + def _find_secret_formats(self, formats, video_id): + """ Find a valid video url and convert it to the secret variant """ + base_format = next((f for f in formats if f.get('vcodec') != 'none'), None) + if not base_format: + return + + base_url = re.sub(r'(Manifest\(.*?),filter=[\w-]+(.*?\))', r'\1\2', base_format['url']) + url = re.sub(r'(Manifest\(.*?),format=[\w-]+(.*?\))', r'\1\2', base_url) + + secret_xml = self._download_xml(url, video_id, note='Downloading secret XML', fatal=False) + if not secret_xml: + return + + for child in secret_xml: + if child.attrib.get('Type') != 'video': + continue + for video_quality in child: + bitrate = int_or_none(video_quality.attrib.get('Bitrate')) + if not bitrate or 'Index' not in video_quality.attrib: + continue + height = int_or_none(video_quality.attrib.get('MaxHeight')) + + yield { + **base_format, + 'format_id': join_nonempty('sec', height), + 'url': re.sub(r'(QualityLevels\()\d+(\))', fr'\<1>{bitrate}\2', base_url), + 'width': int_or_none(video_quality.attrib.get('MaxWidth')), + 'tbr': bitrate / 1000.0, + 'height': height, + } + def _real_extract(self, url): video_id = self._match_id(url) video_info = self._download_json('https://services.radio-canada.ca/ott/cbc-api/v2/assets/' + video_id, video_id) @@ -335,6 +368,7 @@ def _real_extract(self, url): formats = self._extract_m3u8_formats(m3u8_url, video_id, m3u8_id='hls') self._remove_duplicate_formats(formats) + formats.extend(self._find_secret_formats(formats, video_id)) for 
format in formats: if format.get('vcodec') == 'none':
## Please follow the guide below - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x]) - Use *Preview* tab to see how your *pull request* will actually look like --- ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [ ] Bug fix - [x] Improvement - [ ] New extractor - [ ] New feature --- ### Description of your *pull request* and other information This adds a new check to pull out "secret" video formats such as 6Mb 1080p. The original video formats begin with `hls-` while the new ones begin with `sec-`.
Here is sample output with the changes (requires a free CBC account and geolocation within Canada): ``` ~ % yt-dlp "https://gem.cbc.ca/media/murdoch-mysteries/s01e01" --username '***' --password '***' -F [gem.cbc.ca] murdoch-mysteries/s01e01: Downloading JSON metadata [gem.cbc.ca] murdoch-mysteries/s01e01: Downloading JSON metadata [gem.cbc.ca] murdoch-mysteries/s01e01: Downloading m3u8 information [gem.cbc.ca] murdoch-mysteries/s01e01: Downloading secret XML [info] Available formats for murdoch-mysteries/s01e01: ID EXT RESOLUTION │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC MORE INFO ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── hls-audio-English__Descriptive_ m4a │ m3u8_n │ audio only mp4a.40.2 [eng] English (Descriptive) hls-audio-English m4a │ m3u8_n │ audio only mp4a.40.2 [eng] English sec-234-0 mp4 416x234 │ ~ 32.34MiB 99k m3u8_n │ avc1.4d401f 99k video only sec-234-1 mp4 416x234 │ ~ 64.70MiB 199k m3u8_n │ avc1.4d401f 199k video only sec-234-2 mp4 416x234 │ ~129.39MiB 399k m3u8_n │ avc1.4d401f 399k video only hls-621 mp4 416x234 │ ~200.99MiB 621k m3u8_n │ avc1.42c00d 621k video only sec-360-0 mp4 640x360 │ ~194.09MiB 599k m3u8_n │ avc1.4d401f 599k video only hls-825 mp4 640x360 │ ~267.11MiB 825k m3u8_n │ avc1.42c01e 825k video only sec-360-1 mp4 640x360 │ ~320.33MiB 990k m3u8_n │ avc1.4d401f 990k video only hls-1224 mp4 640x360 │ ~396.13MiB 1224k m3u8_n │ avc1.42c01e 1224k video only sec-540 mp4 960x540 │ ~571.11MiB 1765k m3u8_n │ avc1.4d401f 1765k video only hls-2016 mp4 960x540 │ ~652.43MiB 2016k m3u8_n │ avc1.4d401f 2016k video only sec-720-0 mp4 1280x720 │ ~796.94MiB 2463k m3u8_n │ avc1.4d401f 2463k video only hls-2730 mp4 1280x720 │ ~883.22MiB 2730k m3u8_n │ avc1.4d401f 2730k video only sec-720-1 mp4 1280x720 │ ~ 1.07GiB 3380k m3u8_n │ avc1.4d401f 3380k video only hls-3667 mp4 1280x720 │ ~ 1.16GiB 3667k m3u8_n │ avc1.4d401f 3667k video only sec-1080 mp4 1920x1080 │ ~ 
1.88GiB 5939k m3u8_n │ avc1.4d401f 5939k video only ``` Feel free to make or suggest any changes - I just want to make my findings available to others
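The URL rewriting at the heart of the change can be sketched against a made-up manifest URL (the regexes are the ones from the diff; the bitrate is just an example value):

```python
import re

# made-up Smooth Streaming manifest URL in the shape the extractor manipulates
url = ("https://example.com/show.ism/QualityLevels(2730000)/"
       "Manifest(video,format=m3u8-aapl,filter=hls)")

# drop the ",filter=..." and ",format=..." arguments, as in the diff
base = re.sub(r'(Manifest\(.*?),filter=[\w-]+(.*?\))', r'\1\2', url)
base = re.sub(r'(Manifest\(.*?),format=[\w-]+(.*?\))', r'\1\2', base)

# point the same manifest at another bitrate taken from the "secret" XML
secret = re.sub(r'(QualityLevels\()\d+(\))', r'\g<1>5939000\2', base)
```

Each bitrate found in the XML then yields one `sec-` format dict reusing the base format's fields.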
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/1913
2021-12-07T05:29:34Z
2021-12-09T11:47:57Z
2021-12-09T11:47:57Z
2021-12-25T02:01:49Z
676
yt-dlp/yt-dlp
7,653
[Serve] fix flaky `test_autoscaling_policy`
diff --git a/python/ray/serve/_private/client.py b/python/ray/serve/_private/client.py index 7c75f4da74c29..470dc6fd363c3 100644 --- a/python/ray/serve/_private/client.py +++ b/python/ray/serve/_private/client.py @@ -94,16 +94,22 @@ def __del__(self): def __reduce__(self): raise RayServeException(("Ray Serve client cannot be serialized.")) + def shutdown_cached_handles(self): + """Shuts down all cached handles. + + Remove the reference to the cached handles so that they can be + garbage collected. + """ + for cache_key in list(self.handle_cache): + del self.handle_cache[cache_key] + def shutdown(self, timeout_s: float = 30.0) -> None: """Completely shut down the connected Serve instance. Shuts down all processes and deletes all state associated with the instance. """ - - # Shut down handles - for k in list(self.handle_cache): - del self.handle_cache[k] + self.shutdown_cached_handles() if ray.is_initialized() and not self._shutdown: try: diff --git a/python/ray/serve/tests/conftest.py b/python/ray/serve/tests/conftest.py index 142f7606bb846..69f53fcacf209 100644 --- a/python/ray/serve/tests/conftest.py +++ b/python/ray/serve/tests/conftest.py @@ -83,7 +83,7 @@ def serve_instance(_shared_serve_instance): # Clear all state between tests to avoid naming collisions. _shared_serve_instance.delete_deployments(serve.list_deployments().keys()) # Clear the ServeHandle cache between tests to avoid them piling up. - _shared_serve_instance.handle_cache.clear() + _shared_serve_instance.shutdown_cached_handles() def check_ray_stop():
<!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> ## Why are these changes needed? Calling `.clear()` will free up the memory and can cause an issue in raylet. Refactored the `shutdown_cached_handles()` method to iterate through all cached handles and remove their references so the handles can be safely garbage collected (which is what's done when calling `serve.shutdown()`). ## Related issue number Fix https://buildkite.com/ray-project/oss-ci-build-branch/builds/5018#01896734-ab26-4231-aa48-f2d9e11c436b/2955-4683 ## Checks - [ ] I've signed off every commit(by using the -s flag, i.e., `git commit -s`) in this PR. - [ ] I've run `scripts/format.sh` to lint the changes in this PR. - [ ] I've included any doc changes needed for https://docs.ray.io/en/master/. - [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file. - [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/ - Testing Strategy - [ ] Unit tests - [ ] Release tests - [ ] This PR is not tested :(
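The refactor itself is small enough to sketch with a stand-in class (`ServeClientSketch` is hypothetical; the real client also tears down the Ray connection in `shutdown()`):

```python
class ServeClientSketch:
    """Minimal stand-in for the Serve client's handle-cache teardown."""

    def __init__(self):
        self.handle_cache = {}

    def shutdown_cached_handles(self):
        # Drop each cached reference explicitly so the handles can be
        # garbage collected; list(...) snapshots the keys so entries can
        # be deleted while walking them.
        for cache_key in list(self.handle_cache):
            del self.handle_cache[cache_key]

    def shutdown(self):
        # shutdown() now reuses the same teardown path as the test fixture.
        self.shutdown_cached_handles()


client = ServeClientSketch()
client.handle_cache = {"deployment-a": object(), "deployment-b": object()}
client.shutdown_cached_handles()
```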
https://api.github.com/repos/ray-project/ray/pulls/37527
2023-07-18T20:32:51Z
2023-07-19T19:38:26Z
2023-07-19T19:38:26Z
2023-07-19T19:43:49Z
435
ray-project/ray
19,648
removed Mixer, fixes #680
diff --git a/data_bad_site.json b/data_bad_site.json index 5e6b78c67..a97d716a0 100644 --- a/data_bad_site.json +++ b/data_bad_site.json @@ -407,6 +407,15 @@ "urlMain": "https://www.zomato.com/", "username_claimed": "deepigoyal", "username_unclaimed": "noonewouldeverusethis7" + }, + "mixer.com": { + "errorType": "status_code", + "rank": 1544, + "url": "https://mixer.com/{}", + "urlMain": "https://mixer.com/", + "urlProbe": "https://mixer.com/api/v1/channels/{}", + "username_claimed": "blue", + "username_unclaimed": "noonewouldeverusethis7" } } diff --git a/removed_sites.md b/removed_sites.md index ae64b38f9..c2cfe5d3b 100644 --- a/removed_sites.md +++ b/removed_sites.md @@ -812,3 +812,17 @@ As of 2020-07-24, Zomato seems to be unstable. Majority of the time, Zomato take "username_unclaimed": "noonewouldeverusethis7" }, ``` + +## Mixer +As of 2020-07-22, the Mixer service has closed down. +``` + "mixer.com": { + "errorType": "status_code", + "rank": 1544, + "url": "https://mixer.com/{}", + "urlMain": "https://mixer.com/", + "urlProbe": "https://mixer.com/api/v1/channels/{}", + "username_claimed": "blue", + "username_unclaimed": "noonewouldeverusethis7" + }, +``` diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json index dcb2c284b..b0d5b88c6 100644 --- a/sherlock/resources/data.json +++ b/sherlock/resources/data.json @@ -2251,15 +2251,6 @@ "username_claimed": "blue", "username_unclaimed": "noonewould" }, - "mixer.com": { - "errorType": "status_code", - "rank": 1544, - "url": "https://mixer.com/{}", - "urlMain": "https://mixer.com/", - "urlProbe": "https://mixer.com/api/v1/channels/{}", - "username_claimed": "blue", - "username_unclaimed": "noonewouldeverusethis7" - }, "moikrug": { "errorType": "status_code", "rank": 174869,
because it has shut down
https://api.github.com/repos/sherlock-project/sherlock/pulls/682
2020-07-27T10:25:58Z
2020-07-27T10:28:29Z
2020-07-27T10:28:29Z
2020-07-27T10:30:31Z
686
sherlock-project/sherlock
36,279
Update developing_api.rst
diff --git a/docs/docsite/rst/dev_guide/developing_api.rst b/docs/docsite/rst/dev_guide/developing_api.rst index bb2cbc43426a48..302f388e8c7f39 100644 --- a/docs/docsite/rst/dev_guide/developing_api.rst +++ b/docs/docsite/rst/dev_guide/developing_api.rst @@ -1,6 +1,8 @@ Python API ========== +.. note:: This document is out of date: 'ansible.parsing.dataloader' and 'ansible.runner' are not available in the current version of Ansible. + .. contents:: Topics Please note that while we make this API available it is not intended for direct consumption, it is here
##### SUMMARY <!--- Describe the change, including rationale and design decisions --> <!--- If you are fixing an existing issue, please include "Fixes #nnn" in your commit message and your description; but you should still explain what the change does. --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Docs Pull Request ##### COMPONENT NAME <!--- Name of the module/plugin/module/task --> ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ``` ##### ADDITIONAL INFORMATION <!--- Include additional information to help people understand the change here. For bugs that don't have a linked bug report, a step-by-step reproduction of the problem is helpful. --> <!--- Paste verbatim command output below, e.g. before and after your change --> ``` ```
https://api.github.com/repos/ansible/ansible/pulls/25922
2017-06-20T18:39:57Z
2017-08-07T18:40:29Z
2017-08-07T18:40:29Z
2019-04-26T21:33:37Z
164
ansible/ansible
49,339
Fix a few more typos
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index 2e26d74dd..c0170b4fe 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -2821,7 +2821,7 @@ In traditional C and C++ code, plain `T*` is used for many weakly-related purpos * Identify an array with a length specified separately * Identify a location in an array -The makes it hard to understand what code does and is supposed to do. +This makes it hard to understand what the code does and is supposed to do. It complicates checking and tool support. ##### Example @@ -5894,7 +5894,7 @@ Designing rules for classes in a hierarchy summary: * [C.126: An abstract class typically doesn't need a constructor](#Rh-abstract-ctor) * [C.127: A class with a virtual function should have a virtual or protected destructor](#Rh-dtor) -* [C.128: Use `override` to make overriding explicit in large class hierarchies](#Rh-override) +* [C.128: Virtual functions should specify exactly one of `virtual`, `override`, or `final`](#Rh-override) * [C.129: When designing a class hierarchy, distinguish between implementation inheritance and interface inheritance](#Rh-kind) * [C.130: Redefine or prohibit copying for a base class; prefer a virtual `clone` function instead](#Rh-copy) * [C.131: Avoid trivial getters and setters](#Rh-get) @@ -13713,7 +13713,7 @@ Constant rule summary: ##### Reason -Immutable objects are easier to reason about, so make object non-`const` only when there is a need to change their value. +Immutable objects are easier to reason about, so make objects non-`const` only when there is a need to change their value. Prevents accidental or hard-to-notice change of value. ##### Example @@ -15547,7 +15547,7 @@ It could be a base class: List<string> ls; Now there is only one copy of the operations linking and unlinking elements of a `List`. -The `Link` and `List` classes does nothing but type manipulation. +The `Link` and `List` classes do nothing but type manipulation. 
Instead of using a separate "base" type, another common technique is to specialize for `void` or `void*` and have the general template for `T` be just the safely-encapsulated casts to and from the core `void` implementation. @@ -16620,7 +16620,7 @@ The positive arguments for alternatives to these non-rules are listed in the rul Non-rule summary: * [NR.1: Don't: All declarations should be at the top of a function](#Rnr-top) -* [NR.2: Don't: Have only a single single `return`-statement in a function](#Rnr-single-return) +* [NR.2: Don't: Have only a single `return`-statement in a function](#Rnr-single-return) * [NR.3: Don't: Don't use exceptions](#Rnr-no-exceptions) * [NR.4: Don't: Place each class declaration in its own source file](#Rnr-lots-of-files) * [NR.5: Don't: Don't do substantive work in a constructor; instead use two-phase initialization](#Rnr-two-phase-init) @@ -16648,7 +16648,7 @@ Fortunately, compilers catch many "used before set" errors. * [Always initialize an object](#Res-always) * [ES.21: Don't introduce a variable (or constant) before you need to use it](#Res-introduce) -### <a name="Rnr-single-return"></a>NR.2: Don't: Have only a single single `return`-statement in a function +### <a name="Rnr-single-return"></a>NR.2: Don't: Have only a single `return`-statement in a function ##### Reason (not to follow this rule)
F.22: Fix typos C.128: Update the title in the ToC, to match the new title from commit f0e29f0 Con.1: Fix typo T.84: Fix typo NR.2: Remove duplicated word from title (single single)
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/803
2016-11-22T15:16:44Z
2016-11-22T15:51:14Z
2016-11-22T15:51:14Z
2016-11-22T16:44:37Z
908
isocpp/CppCoreGuidelines
15,586
Add display of context when input was generated
diff --git a/modules/text_generation.py b/modules/text_generation.py index fd017e2c1d..9b2c233dc4 100644 --- a/modules/text_generation.py +++ b/modules/text_generation.py @@ -270,5 +270,5 @@ def generate_with_streaming(**kwargs): traceback.print_exc() finally: t1 = time.time() - print(f"Output generated in {(t1-t0):.2f} seconds ({(len(output)-len(original_input_ids[0]))/(t1-t0):.2f} tokens/s, {len(output)-len(original_input_ids[0])} tokens)") + print(f"Output generated in {(t1-t0):.2f} seconds ({(len(output)-len(original_input_ids[0]))/(t1-t0):.2f} tokens/s, {len(output)-len(original_input_ids[0])} tokens, context {len(original_input_ids[0])})") return
Not sure if I did this right, but it does move with the conversation and seems to match the value. Output generated in 12.97 seconds (15.34 tokens/s, 199 tokens, context 4) Output generated in 4.07 seconds (2.21 tokens/s, 9 tokens, context 1848) Easier to see for OOM, benchmarks, etc.
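The patch only extends the timing log with the prompt length; the computation behind a sample line is just (token counts and timing made up to match the example output above):

```python
# made-up counts standing in for the real token tensors
context = 1848     # prompt tokens, i.e. len(original_input_ids[0])
new_tokens = 9     # tokens generated on top of the prompt
elapsed = 4.07     # seconds of generation time

line = (f"Output generated in {elapsed:.2f} seconds "
        f"({new_tokens / elapsed:.2f} tokens/s, {new_tokens} tokens, "
        f"context {context})")
```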
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/538
2023-03-24T13:58:00Z
2023-03-25T14:56:18Z
2023-03-25T14:56:18Z
2023-03-25T23:11:22Z
212
oobabooga/text-generation-webui
26,256
[2.9] pipe: update docs for Popen with shell=True usage
diff --git a/changelogs/fragments/70261_pipe_lookup.yml b/changelogs/fragments/70261_pipe_lookup.yml new file mode 100644 index 00000000000000..cc10e8c36bd18e --- /dev/null +++ b/changelogs/fragments/70261_pipe_lookup.yml @@ -0,0 +1,2 @@ +minor_changes: +- pipe lookup - update docs for Popen with shell=True usages (https://github.com/ansible/ansible/issues/70159). diff --git a/lib/ansible/plugins/lookup/pipe.py b/lib/ansible/plugins/lookup/pipe.py index 0f5c974c2fa3aa..81fd42bc67acf2 100644 --- a/lib/ansible/plugins/lookup/pipe.py +++ b/lib/ansible/plugins/lookup/pipe.py @@ -4,32 +4,39 @@ from __future__ import (absolute_import, division, print_function) __metaclass__ = type -DOCUMENTATION = """ +DOCUMENTATION = r""" lookup: pipe author: Daniel Hokka Zakrisson <daniel@hozac.com> version_added: "0.9" short_description: read output from a command description: - - Run a command and return the output + - Run a command and return the output. options: _terms: - description: command(s) to run + description: command(s) to run. required: True notes: - Like all lookups this runs on the Ansible controller and is unaffected by other keywords, such as become, so if you need to different permissions you must change the command or run Ansible as another user. - Alternatively you can use a shell/command task that runs against localhost and registers the result. + - Pipe lookup internally invokes Popen with shell=True (this is required and intentional). + This type of invocation is considered as security issue if appropriate care is not taken to sanitize any user provided or variable input. + It is strongly recommended to pass user input or variable input via quote filter before using with pipe lookup. + See example section for this. 
+ Read more about this L(Bandit B602 docs,https://bandit.readthedocs.io/en/latest/plugins/b602_subprocess_popen_with_shell_equals_true.html) """ -EXAMPLES = """ +EXAMPLES = r""" - name: raw result of running date command" - debug: msg="{{ lookup('pipe','date') }}" + debug: + msg: "{{ lookup('pipe', 'date') }}" - name: Always use quote filter to make sure your variables are safe to use with shell - debug: msg="{{ lookup('pipe','getent ' + myuser|quote ) }}" + debug: + msg: "{{ lookup('pipe', 'getent ' + myuser | quote ) }}" """ -RETURN = """ +RETURN = r""" _string: description: - stdout from command
##### SUMMARY pipe lookup plugin uses Popen with shell=True intentionally. This is considered a security issue if user input is not validated. Updated docs to reflect this information for the user. Also, added Bandit B602 documentation link for further reading. Fixes: #70159 Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com> (cherry picked from commit e5649ca3e807f17e7c034ee22791f107162973b0) ##### ISSUE TYPE - Docs Pull Request ##### COMPONENT NAME changelogs/fragments/70261_pipe_lookup.yml lib/ansible/plugins/lookup/pipe.py
https://api.github.com/repos/ansible/ansible/pulls/70603
2020-07-13T16:15:49Z
2020-07-17T21:41:32Z
2020-07-17T21:41:32Z
2021-02-08T09:01:37Z
650
ansible/ansible
48,936
📌 Pin SQLAlchemy range for tests, as it doesn't use SemVer
diff --git a/pyproject.toml b/pyproject.toml index 2f057f5aa844e..66cb00345b608 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -53,7 +53,7 @@ test = [ "requests >=2.24.0,<3.0.0", "httpx >=0.14.0,<0.15.0", "email_validator >=1.1.1,<2.0.0", - "sqlalchemy >=1.3.18,<2.0.0", + "sqlalchemy >=1.3.18,<1.4.0", "peewee >=3.13.3,<4.0.0", "databases[sqlite] >=0.3.2,<0.4.0", "orjson >=3.2.1,<4.0.0",
📌 Pin SQLAlchemy range for tests, as it doesn't use SemVer and I forgot (or didn't know?) that when I pinned it the first time. This only affects FastAPI internal tests... and by accident, [Pydantic tests](https://github.com/samuelcolvin/pydantic/pull/2584). :grimacing: :see_no_evil:
https://api.github.com/repos/tiangolo/fastapi/pulls/3001
2021-03-27T16:32:31Z
2021-03-27T16:36:08Z
2021-03-27T16:36:08Z
2021-03-27T17:47:51Z
204
tiangolo/fastapi
22,920
fix omp flag
diff --git a/deploy/cpp_infer/CMakeLists.txt b/deploy/cpp_infer/CMakeLists.txt index 466c2be8f7..0f751a22ce 100644 --- a/deploy/cpp_infer/CMakeLists.txt +++ b/deploy/cpp_infer/CMakeLists.txt @@ -52,20 +52,29 @@ include_directories(${OpenCV_INCLUDE_DIRS}) if (WIN32) add_definitions("/DGOOGLE_GLOG_DLL_DECL=") - set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /bigobj /MTd") - set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /bigobj /MT") - set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /bigobj /MTd") - set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /bigobj /MT") + if(WITH_MKL) + set(FLAG_OPENMP "/openmp") + endif() + set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /bigobj /MTd ${FLAG_OPENMP}") + set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /bigobj /MT ${FLAG_OPENMP}") + set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /bigobj /MTd ${FLAG_OPENMP}") + set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /bigobj /MT ${FLAG_OPENMP}") if (WITH_STATIC_LIB) safe_set_static_flag() add_definitions(-DSTATIC_LIB) endif() + message("cmake c debug flags " ${CMAKE_C_FLAGS_DEBUG}) + message("cmake c release flags " ${CMAKE_C_FLAGS_RELEASE}) + message("cmake cxx debug flags " ${CMAKE_CXX_FLAGS_DEBUG}) + message("cmake cxx release flags " ${CMAKE_CXX_FLAGS_RELEASE}) else() - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -o3 -std=c++11") + if(WITH_MKL) + set(FLAG_OPENMP "-fopenmp") + endif() + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -o3 ${FLAG_OPENMP} -std=c++11") set(CMAKE_STATIC_LIBRARY_PREFIX "") + message("cmake cxx flags" ${CMAKE_CXX_FLAGS}) endif() -message("flags" ${CMAKE_CXX_FLAGS}) - if (WITH_GPU) if (NOT DEFINED CUDA_LIB OR ${CUDA_LIB} STREQUAL "") @@ -198,4 +207,4 @@ if (WIN32 AND WITH_MKL) COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_LIB}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_LIB}/third_party/install/mkldnn/lib/mkldnn.dll ./release/mkldnn.dll ) 
-endif() \ No newline at end of file +endif() diff --git a/deploy/cpp_infer/src/main.cpp b/deploy/cpp_infer/src/main.cpp index e708a6e341..4b84dbd090 100644 --- a/deploy/cpp_infer/src/main.cpp +++ b/deploy/cpp_infer/src/main.cpp @@ -12,6 +12,8 @@ // See the License for the specific language governing permissions and // limitations under the License. +#include "glog/logging.h" +#include "omp.h" #include "opencv2/core.hpp" #include "opencv2/imgcodecs.hpp" #include "opencv2/imgproc.hpp" @@ -67,6 +69,19 @@ int main(int argc, char **argv) { config.use_mkldnn, config.use_zero_copy_run, config.char_list_file); +#ifdef USE_MKL +#pragma omp parallel + for (auto i = 0; i < 10; i++) { + LOG_IF(WARNING, + config.cpu_math_library_num_threads != omp_get_num_threads()) + << "WARNING! MKL is running on " << omp_get_num_threads() + << " threads while cpu_math_library_num_threads is set to " + << config.cpu_math_library_num_threads + << ". Possible reason could be 1. You have set omp_set_num_threads() " + "somewhere; 2. MKL is not linked properly"; + } +#endif + auto start = std::chrono::system_clock::now(); std::vector<std::vector<std::vector<int>>> boxes; det.Run(srcimg, boxes);
**machine: Linux i9**
**Compile option: WITH_MKL ON**

**mkldnn 0 cpu_math_library_num_threads 1**
```
./ocr_system /home/li/repo/PaddleOCR/deploy/cpp_infer/tools/config.txt /home/li/repo/PaddleOCR/doc/imgs/12.jpg
```
Took 17.8732 seconds

**mkldnn 0 cpu_math_library_num_threads 8**
```
./ocr_system /home/li/repo/PaddleOCR/deploy/cpp_infer/tools/config.txt /home/li/repo/PaddleOCR/doc/imgs/12.jpg
```
Took 14.1399 seconds

**Windows machine**
Please help verify performance on i7 @OliverLPH
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/871
2020-09-27T13:53:37Z
2020-09-29T10:55:50Z
2020-09-29T10:55:50Z
2020-09-29T10:55:50Z
1,032
PaddlePaddle/PaddleOCR
42,255
bpo-41180: Fixes documentation from earlier change
diff --git a/Doc/library/marshal.rst b/Doc/library/marshal.rst index 458c0d53a225ef..24f9dc1689da4a 100644 --- a/Doc/library/marshal.rst +++ b/Doc/library/marshal.rst @@ -76,13 +76,18 @@ The module defines these functions: format), raise :exc:`EOFError`, :exc:`ValueError` or :exc:`TypeError`. The file must be a readable :term:`binary file`. - .. audit-event:: marshal.loads bytes marshal.load + .. audit-event:: marshal.load "" marshal.load .. note:: If an object containing an unsupported type was marshalled with :func:`dump`, :func:`load` will substitute ``None`` for the unmarshallable type. + .. versionchanged:: 3.10 + + This call used to raise a ``code.__new__`` audit event for each code object. Now + it raises a single ``marshal.load`` event for the entire load operation. + .. function:: dumps(value[, version]) @@ -104,6 +109,11 @@ The module defines these functions: .. audit-event:: marshal.loads bytes marshal.load + .. versionchanged:: 3.10 + + This call used to raise a ``code.__new__`` audit event for each code object. Now + it raises a single ``marshal.loads`` event for the entire load operation. + In addition, the following constants are defined:
https://bugs.python.org/issue41180
https://api.github.com/repos/python/cpython/pulls/26972
2021-06-30T16:46:01Z
2021-06-30T17:53:13Z
2021-06-30T17:53:13Z
2021-06-30T17:53:17Z
339
python/cpython
4,221
Allow for easier subclassing
diff --git a/CHANGELOG.md b/CHANGELOG.md index 1c34e9e91..2fbe9aedc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## Unreleased + +### Changed + +- Udated `markdown.Heading.create()` to work with subclassing. + ## [1.2.3] - 2020-05-24 ### Added diff --git a/rich/markdown.py b/rich/markdown.py index d8f55c661..906f8be72 100644 --- a/rich/markdown.py +++ b/rich/markdown.py @@ -133,7 +133,7 @@ class Heading(TextElement): @classmethod def create(cls, markdown: "Markdown", node: Any) -> "Heading": - heading = Heading(node.level) + heading = cls(node.level) return heading def on_enter(self, context: "MarkdownContext") -> None:
## Type of changes

Allow for DRYer subclasses by letting us reuse `Heading.create`

- [ ] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [x] Other

## Checklist

- [x] I've run the latest [black](https://github.com/ambv/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
  - There's no CONTRIBUTORS.md file to update; I did update the CHANGELOG.md
- [ ] I've added tests for new code.
  - This is a tiny change, mainly bringing one function in line with the others. Since subclassing is currently not tested for the other classes, I didn't add any tests here.
- [x] I accept that @willmcgugan may be pedantic in the code review.

## Description

The `markdown.Heading` class, unlike all the other `MarkdownElement` subclasses, was using an explicit self reference in its factory/create method. This forced us to override the create method every time we subclassed Heading, just so we could reference the new subclass instead (or, as I used, a self reference). By using the implicit `cls` reference instead, we don't need to override the create method, making code that inherits from Heading DRYer and less error-prone if the create method's internals are changed in the future. All the other markdown element classes already use this approach.
**Example of how a Heading subclass could look today:**

```python
class SubtleHeading(Heading):
    @classmethod
    def create(cls, markdown: "Markdown", node: Any) -> "Heading":
        heading = cls(node.level)
        return heading

    def __init__(self, level):
        # Deferring all headings by one level to avoid the bordered box for H1, hence level 2 is max
        super().__init__(level + 1)

    def __rich_console__(
        self, console: Console, options: ConsoleOptions
    ) -> RenderResult:
        # Overriding this to avoid the center alignment
        # h1 is unused since we defer all levels by one level
        text = self.text  # Styled text for h2 and beyond
        if self.level == 2:
            yield Text("\n")
        yield text
```

**Same change after this PR:**

```python
class SubtleHeading(Heading):
    def __init__(self, level):
        # Deferring all headings by one level to avoid the bordered box for H1, hence level 2 is max
        super().__init__(level + 1)

    def __rich_console__(
        self, console: Console, options: ConsoleOptions
    ) -> RenderResult:
        # Overriding this to avoid the center alignment
        # h1 is unused since we defer all levels by one level
        text = self.text  # Styled text for h2 and beyond
        if self.level == 2:
            yield Text("\n")
        yield text
```

For additional background, I discovered this as a follow-up to this [twitter thread](https://twitter.com/ahultner/status/1265253708499124225).
https://api.github.com/repos/Textualize/rich/pulls/93
2020-05-26T13:35:52Z
2020-05-26T14:52:20Z
2020-05-26T14:52:20Z
2020-05-26T14:52:20Z
292
Textualize/rich
48,639
📝 Tweak default suggested configs for generating clients
diff --git a/docs/en/docs/advanced/generate-clients.md b/docs/en/docs/advanced/generate-clients.md index fb9aa643e261a..e8d771f7123f4 100644 --- a/docs/en/docs/advanced/generate-clients.md +++ b/docs/en/docs/advanced/generate-clients.md @@ -87,7 +87,7 @@ It could look like this: "description": "", "main": "index.js", "scripts": { - "generate-client": "openapi --input http://localhost:8000/openapi.json --output ./src/client --client axios" + "generate-client": "openapi --input http://localhost:8000/openapi.json --output ./src/client --client axios --useOptions --useUnionTypes" }, "author": "", "license": "", @@ -106,7 +106,7 @@ After having that NPM `generate-client` script there, you can run it with: $ npm run generate-client frontend-app@1.0.0 generate-client /home/user/code/frontend-app -> openapi --input http://localhost:8000/openapi.json --output ./src/client --client axios +> openapi --input http://localhost:8000/openapi.json --output ./src/client --client axios --useOptions --useUnionTypes ``` </div> @@ -246,7 +246,7 @@ Now as the end result is in a file `openapi.json`, you would modify the `package "description": "", "main": "index.js", "scripts": { - "generate-client": "openapi --input ./openapi.json --output ./src/client --client axios" + "generate-client": "openapi --input ./openapi.json --output ./src/client --client axios --useOptions --useUnionTypes" }, "author": "", "license": "",
📝 Tweak default suggested configs for generating clients
https://api.github.com/repos/tiangolo/fastapi/pulls/10736
2023-11-30T20:43:11Z
2023-11-30T20:48:01Z
2023-11-30T20:48:01Z
2023-11-30T20:48:02Z
412
tiangolo/fastapi
22,667
Fixed broken link to QUnit docs.
diff --git a/docs/internals/contributing/writing-code/javascript.txt b/docs/internals/contributing/writing-code/javascript.txt index be0e43a3b6875..54f05da20af51 100644 --- a/docs/internals/contributing/writing-code/javascript.txt +++ b/docs/internals/contributing/writing-code/javascript.txt @@ -106,7 +106,7 @@ Django's JavaScript tests use `QUnit`_. Here is an example test module: Please consult the QUnit documentation for information on the types of -`assertions supported by QUnit <https://api.qunitjs.com/category/assert/>`_. +`assertions supported by QUnit <https://api.qunitjs.com/assert/>`_. Running tests -------------
https://api.github.com/repos/django/django/pulls/8454
2017-05-03T06:47:42Z
2017-05-03T11:32:00Z
2017-05-03T11:32:00Z
2017-05-03T11:34:39Z
169
django/django
51,577
Improve warning reporting in LARS test
diff --git a/sklearn/linear_model/tests/test_least_angle.py b/sklearn/linear_model/tests/test_least_angle.py index 4dc69f63ee6ba..0644dcc864fed 100644 --- a/sklearn/linear_model/tests/test_least_angle.py +++ b/sklearn/linear_model/tests/test_least_angle.py @@ -451,16 +451,23 @@ def test_lars_cv(): assert not hasattr(lars_cv, 'n_nonzero_coefs') -@pytest.mark.filterwarnings('ignore::FutureWarning') -def test_lars_cv_max_iter(): - with warnings.catch_warnings(record=True) as w: +def test_lars_cv_max_iter(recwarn): + warnings.simplefilter('always') + with np.errstate(divide='raise', invalid='raise'): + X = diabetes.data + y = diabetes.target rng = np.random.RandomState(42) x = rng.randn(len(y)) X = diabetes.data X = np.c_[X, x, x] # add correlated features - lars_cv = linear_model.LassoLarsCV(max_iter=5) + lars_cv = linear_model.LassoLarsCV(max_iter=5, cv=5) lars_cv.fit(X, y) - assert len(w) == 0 + # Check that there is no warning in general and no ConvergenceWarning + # in particular. + # Materialize the string representation of the warning to get a more + # informative error message in case of AssertionError. + recorded_warnings = [str(w) for w in recwarn] + assert recorded_warnings == [] def test_lasso_lars_ic():
This should improve the error message for the failures reported on the Debian AMD64 continuous integration (see #12548).
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/12597
2018-11-15T15:37:02Z
2019-05-20T22:35:01Z
2019-05-20T22:35:01Z
2019-12-08T18:18:26Z
372
scikit-learn/scikit-learn
46,007
Add missing slash in selecting dynamically-loaded content documentation
diff --git a/docs/topics/dynamic-content.rst b/docs/topics/dynamic-content.rst index ea5d0621060..79f502593b3 100644 --- a/docs/topics/dynamic-content.rst +++ b/docs/topics/dynamic-content.rst @@ -263,7 +263,7 @@ The following is a simple snippet to illustrate its usage within a Scrapy spider async with async_playwright() as pw: browser = await pw.chromium.launch() page = await browser.new_page() - await page.goto("https:/example.org") + await page.goto("https://example.org") title = await page.title() return {"title": title}
I've been reading the Scrapy documentation and realized that a slash is missing in one of the examples. This PR adds the missing slash in the "selecting dynamically-loaded content" documentation.
https://api.github.com/repos/scrapy/scrapy/pulls/5890
2023-04-09T14:47:14Z
2023-04-10T12:14:50Z
2023-04-10T12:14:50Z
2023-05-08T22:28:39Z
149
scrapy/scrapy
34,972
Removes Python 3.8 support
diff --git a/oss_setup.py b/oss_setup.py index 07db3105ccb..096eb6ff91a 100644 --- a/oss_setup.py +++ b/oss_setup.py @@ -66,7 +66,7 @@ packages=setuptools.find_packages(), install_requires=[], # Supported Python versions - python_requires=">=3.8", + python_requires=">=3.9", # PyPI package information. classifiers=[ "Development Status :: 5 - Production/Stable", @@ -75,7 +75,6 @@ "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11",
https://api.github.com/repos/keras-team/keras/pulls/18332
2023-08-02T22:23:44Z
2023-08-02T22:27:23Z
2023-08-02T22:27:23Z
2023-08-02T22:27:51Z
207
keras-team/keras
47,292
DOC Adds changed model entry for _approximate_mode
diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst index 1418bcbdf37f0..6ece2f16b6e93 100644 --- a/doc/whats_new/v1.0.rst +++ b/doc/whats_new/v1.0.rst @@ -324,6 +324,10 @@ random sampling procedures. :class:`tree.DecisionTreeRegressor` can be impacted by a fix in the handling of rounding errors. Previously some extra spurious splits could occur. +- |Fix| :func:`model_selection.train_test_split` with a `stratify` parameter + and :class:`model_selection.StratifiedShuffleSplit` may lead to slightly + different results. + Details are listed in the changelog below. (While we are trying to better inform users by providing this information, we
#### Reference Issues/PRs

Fixes https://github.com/scikit-learn/scikit-learn/issues/22885

#### What does this implement/fix? Explain your changes.

This PR documents the change in behavior that came from https://github.com/scikit-learn/scikit-learn/pull/20904

#### Any other comments?

We likely need to backport this to 1.0.X.

CC @glemaitre
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/23291
2022-05-05T21:21:57Z
2022-05-06T12:09:34Z
2022-05-06T12:09:34Z
2022-05-06T12:09:43Z
204
scikit-learn/scikit-learn
46,212