in_source_id | issue | before_files | after_files | pr_diff |
|---|---|---|---|---|
ManimCommunity__manim-2770 | proportion_from_point is not working for some cases
## Description of bug / unexpected behavior
<!-- Add a clear and concise description of the problem you encountered. -->
Hello, I'm trying to get the proportions of the vertices of a triangle outline, but the result is not correct in some cases.
For easy testing, I tried a regular triangle that is shifted and scaled.
The result I got was `0, 0.33333333333333337, 16336461.942820664` for the three vertices instead of `0, 0.3333333333333333, 0.6666666666666667`.
Obviously, the proportion of the third vertex is wrong.
## Expected behavior
<!-- Add a clear and concise description of what you expected to happen. -->
In the case mentioned above, I should get `0, 0.3333333333333333, 0.6666666666666667` for the vertices of any regular triangle, regardless of whether it is shifted and/or scaled.
## How to reproduce the issue
<!-- Provide a piece of code illustrating the undesired behavior. -->
Here is the code I used for testing the regular triangle case:
<details><summary>Code for reproducing the problem</summary>
## 1. Without shift and scale, the result is ok.
```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))
```
output:
```
0.0
0.3333333333333333
0.6666666666666666
```
## 2. With shift only, the result is ok.
```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        abc.shift(LEFT)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))
```
output:
```
0.0
0.3333333333333333
0.6666666666666666
```
## 3. With scale only, the result is ok.
```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        abc.scale(0.8)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))
```
output:
```
0.0
0.3333333333333333
0.6666666666666666
```
## 4. With shift and scale, the result is NOT ok.
```py
class TestProportion(Scene):
    def construct(self):
        from math import sqrt
        A = sqrt(3) * UP
        B = LEFT
        C = RIGHT
        abc = Polygon(A, B, C)
        abc.shift(LEFT)
        abc.scale(0.8)
        for p in abc.get_vertices():
            print(abc.proportion_from_point(p))
```
output:
```
0.0
0.33333333333333337
16336461.942820664
```
</details>
## Additional media files
<!-- Paste in the files manim produced on rendering the code above. -->
<details><summary>Images/GIFs</summary>
<!-- PASTE MEDIA HERE -->
</details>
## Logs
<details><summary>Terminal output</summary>
<!-- Add "-v DEBUG" when calling manim to generate more detailed logs -->
```
PASTE HERE OR PROVIDE LINK TO https://pastebin.com/ OR SIMILAR
```
<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->
</details>
## System specifications
<details><summary>System Details</summary>
- OS (with version, e.g., Windows 10 v2004 or macOS 10.15 (Catalina)):
- RAM:
- Python version (`python/py/python3 --version`):
- Installed modules (provide output from `pip list`):
```
PASTE HERE
```
</details>
<details><summary>LaTeX details</summary>
+ LaTeX distribution (e.g. TeX Live 2020):
+ Installed LaTeX packages:
<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->
</details>
<details><summary>FFMPEG</summary>
Output of `ffmpeg -version`:
```
PASTE HERE
```
</details>
## Additional comments
<!-- Add further context that you think might be relevant for this issue here. -->
I found that the wrong value `16336461.942820664` comes from `proportions_along_bezier_curve_for_point`; it could be caused by limited accuracy when finding the roots of the Bézier polynomial.
As per the reference of [Numpy](https://numpy.org/doc/stable/reference/generated/numpy.polynomial.polynomial.Polynomial.roots.html?highlight=roots#numpy.polynomial.polynomial.Polynomial.roots),
>the accuracy of the roots decrease the further outside the domain they lie.
- Maybe we could apply some correction to the coefficients (similar to how we round the roots) before solving the polynomial?
- Or we could discard all roots that do not lie in [0, 1]. I'm not sure why this is not checked currently; is there a reason that I missed?
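A minimal sketch of the second approach (discarding roots outside [0, 1]); the function name and tolerance here are illustrative, not manim's actual code:

```python
def real_roots_in_unit_interval(roots, tol=1e-8):
    """Keep only roots that are numerically real and lie in [0, 1].

    Roots far outside the Bezier parameter domain (like the spurious
    16336461.94...) are artifacts of the root finder and can be dropped.
    """
    real = [r.real for r in roots if abs(r.imag) < tol]
    return [r for r in real if 0.0 <= r <= 1.0]
```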
P.S. The two approaches above were roughly tested on my case and both work, but maybe there are better fixes that I'm not aware of.
Lastly, thanks for the great library; it really makes creating math videos much easier.
| [
{
"content": "\"\"\"Utility functions related to Bézier curves.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"bezier\",\n \"partial_bezier_points\",\n \"partial_quadratic_bezier_points\",\n \"interpolate\",\n \"integer_interpolate\",\n \"mid\",\n \"inverse_interpolate\",\n ... | [
{
"content": "\"\"\"Utility functions related to Bézier curves.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"bezier\",\n \"partial_bezier_points\",\n \"partial_quadratic_bezier_points\",\n \"interpolate\",\n \"integer_interpolate\",\n \"mid\",\n \"inverse_interpolate\",\n ... | diff --git a/manim/utils/bezier.py b/manim/utils/bezier.py
index 6f8187eef5..4a489318bd 100644
--- a/manim/utils/bezier.py
+++ b/manim/utils/bezier.py
@@ -478,7 +478,7 @@ def proportions_along_bezier_curve_for_point(
roots = [[root for root in rootlist if root.imag == 0] for rootlist in roots]
roots = reduce(np.intersect1d, roots) # Get common roots.
- roots = np.array([r.real for r in roots])
+ roots = np.array([r.real for r in roots if 0 <= r.real <= 1])
return roots
diff --git a/tests/test_vectorized_mobject.py b/tests/test_vectorized_mobject.py
index 181efc0444..5e87c820ca 100644
--- a/tests/test_vectorized_mobject.py
+++ b/tests/test_vectorized_mobject.py
@@ -3,7 +3,17 @@
import numpy as np
import pytest
-from manim import Circle, Line, Mobject, RegularPolygon, Square, VDict, VGroup, VMobject
+from manim import (
+ Circle,
+ Line,
+ Mobject,
+ Polygon,
+ RegularPolygon,
+ Square,
+ VDict,
+ VGroup,
+ VMobject,
+)
from manim.constants import PI
@@ -295,3 +305,14 @@ def test_vmobject_point_at_angle():
a = Circle()
p = a.point_at_angle(4 * PI)
np.testing.assert_array_equal(a.points[0], p)
+
+
+def test_proportion_from_point():
+ A = np.sqrt(3) * np.array([0, 1, 0])
+ B = np.array([-1, 0, 0])
+ C = np.array([1, 0, 0])
+ abc = Polygon(A, B, C)
+ abc.shift(np.array([-1, 0, 0]))
+ abc.scale(0.8)
+ props = [abc.proportion_from_point(p) for p in abc.get_vertices()]
+ np.testing.assert_allclose(props, [0, 1 / 3, 2 / 3])
|
jupyterhub__jupyterhub-3646 | New user token returns `200` instead of `201`
### Bug description
The API docs for both stable and latest list the following response status:
```
201 Created
The newly created token
```
But the endpoint currently returns a status of `200`
#### Expected behaviour
Should return a response status of `201`
#### Actual behaviour
Currently returns a response status of `200`
### How to reproduce
Make a request to `POST /users/{name}/tokens`
See (https://jupyterhub.readthedocs.io/en/latest/_static/rest-api/index.html#path--users--name--tokens)
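A tiny stdlib sketch of the intended contract (the names are illustrative, not JupyterHub's actual handler code; the eventual fix simply calls `self.set_status(201)` after writing the body): a creation endpoint should pair its JSON payload with `201 Created` rather than the default `200 OK`.

```python
import json
from http import HTTPStatus

def new_token_response(token_model):
    """Build (status_code, body) for a newly created API token."""
    # Creation endpoints should answer 201 Created, not the default 200 OK.
    return HTTPStatus.CREATED.value, json.dumps(token_model)

status, body = new_token_response({"token": "abc123"})
```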
| [
{
"content": "\"\"\"User handlers\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport asyncio\nimport json\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom datetime import timezone\n\nfrom async_generator import aclosing\nfrom ... | [
{
"content": "\"\"\"User handlers\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport asyncio\nimport json\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom datetime import timezone\n\nfrom async_generator import aclosing\nfrom ... | diff --git a/jupyterhub/apihandlers/users.py b/jupyterhub/apihandlers/users.py
index e36faf9eba..97aa3a87f8 100644
--- a/jupyterhub/apihandlers/users.py
+++ b/jupyterhub/apihandlers/users.py
@@ -421,6 +421,7 @@ async def post(self, user_name):
token_model = self.token_model(orm.APIToken.find(self.db, api_token))
token_model['token'] = api_token
self.write(json.dumps(token_model))
+ self.set_status(201)
class UserTokenAPIHandler(APIHandler):
diff --git a/jupyterhub/tests/test_api.py b/jupyterhub/tests/test_api.py
index bbbb7f3f00..b20566259a 100644
--- a/jupyterhub/tests/test_api.py
+++ b/jupyterhub/tests/test_api.py
@@ -1366,8 +1366,8 @@ async def test_get_new_token_deprecated(app, headers, status):
@mark.parametrize(
"headers, status, note, expires_in",
[
- ({}, 200, 'test note', None),
- ({}, 200, '', 100),
+ ({}, 201, 'test note', None),
+ ({}, 201, '', 100),
({'Authorization': 'token bad'}, 403, '', None),
],
)
@@ -1386,7 +1386,7 @@ async def test_get_new_token(app, headers, status, note, expires_in):
app, 'users/admin/tokens', method='post', headers=headers, data=body
)
assert r.status_code == status
- if status != 200:
+ if status != 201:
return
# check the new-token reply
reply = r.json()
@@ -1424,10 +1424,10 @@ async def test_get_new_token(app, headers, status, note, expires_in):
@mark.parametrize(
"as_user, for_user, status",
[
- ('admin', 'other', 200),
+ ('admin', 'other', 201),
('admin', 'missing', 403),
('user', 'other', 403),
- ('user', 'user', 200),
+ ('user', 'user', 201),
],
)
async def test_token_for_user(app, as_user, for_user, status):
@@ -1448,7 +1448,7 @@ async def test_token_for_user(app, as_user, for_user, status):
)
assert r.status_code == status
reply = r.json()
- if status != 200:
+ if status != 201:
return
assert 'token' in reply
@@ -1486,7 +1486,7 @@ async def test_token_authenticator_noauth(app):
data=json.dumps(data) if data else None,
noauth=True,
)
- assert r.status_code == 200
+ assert r.status_code == 201
reply = r.json()
assert 'token' in reply
r = await api_request(app, 'authorizations', 'token', reply['token'])
@@ -1509,7 +1509,7 @@ async def test_token_authenticator_dict_noauth(app):
data=json.dumps(data) if data else None,
noauth=True,
)
- assert r.status_code == 200
+ assert r.status_code == 201
reply = r.json()
assert 'token' in reply
r = await api_request(app, 'authorizations', 'token', reply['token'])
diff --git a/jupyterhub/tests/test_roles.py b/jupyterhub/tests/test_roles.py
index 9c6e66fb65..ed9ede2e48 100644
--- a/jupyterhub/tests/test_roles.py
+++ b/jupyterhub/tests/test_roles.py
@@ -661,11 +661,11 @@ async def test_load_roles_user_tokens(tmpdir, request):
"headers, rolename, scopes, status",
[
# no role requested - gets default 'token' role
- ({}, None, None, 200),
+ ({}, None, None, 201),
# role scopes within the user's default 'user' role
- ({}, 'self-reader', ['read:users'], 200),
+ ({}, 'self-reader', ['read:users'], 201),
# role scopes outside of the user's role but within the group's role scopes of which the user is a member
- ({}, 'groups-reader', ['read:groups'], 200),
+ ({}, 'groups-reader', ['read:groups'], 201),
# non-existing role request
({}, 'non-existing', [], 404),
# role scopes outside of both user's role and group's role scopes
|
facebookresearch__hydra-2729 | CI failing: `./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused`
```
./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused
nox > [2023-07-24 22:16:52,631] Command flake8 --config .flake8 failed with exit code 1
nox > [2023-07-24 22:16:52,632] Session lint-3.10 failed.
```
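The failure is flake8's F401 (name imported but unused); the fix is just dropping `Tuple` from the `typing` import line. As a rough, stdlib-only illustration of how such unused imports can be detected (this is a simplification, not what flake8 actually does):

```python
import ast

def unused_typing_imports(source):
    """Return typing names imported at top level but never referenced
    elsewhere in the source (a rough, illustrative F401 check)."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module == "typing":
            imported |= {alias.asname or alias.name for alias in node.names}
    # Every Name node (loads, stores, annotations) counts as a use.
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return imported - used
```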
| [
{
"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom enum import Enum\nfrom typing import Any, Dict, Iterable, List, Optional, Set, Tuple\n\nfrom omegaconf._utils import (\n _resolve_optional,\n get_dict_key_value_types,\n get_list_element_type,\n is... | [
{
"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom enum import Enum\nfrom typing import Any, Dict, Iterable, List, Optional, Set\n\nfrom omegaconf._utils import (\n _resolve_optional,\n get_dict_key_value_types,\n get_list_element_type,\n is_dict_a... | diff --git a/tools/configen/configen/utils.py b/tools/configen/configen/utils.py
index 4faf42fb26..546ebec797 100644
--- a/tools/configen/configen/utils.py
+++ b/tools/configen/configen/utils.py
@@ -1,7 +1,7 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import sys
from enum import Enum
-from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
+from typing import Any, Dict, Iterable, List, Optional, Set
from omegaconf._utils import (
_resolve_optional,
|
docker__docker-py-683 | client.py - exec_create - broken API
Missing 'Id' extraction from a container's dictionary representation:
Line #296:
`url = self._url('/containers/{0}/exec'.format(container))`
The following should be added above it:
```
if isinstance(container, dict):
container = container.get('Id')
```
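The suggested normalization in isolation (a hedged sketch; as the diff shows, the actual fix applied docker-py's `@check_resource` decorator, which performs the same id extraction):

```python
def resolve_container_id(container):
    """Accept either a container dict (as returned by create_container)
    or a plain id string, and return the id."""
    if isinstance(container, dict):
        container = container.get('Id')
    return container
```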
| [
{
"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless requir... | [
{
"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless requir... | diff --git a/docker/client.py b/docker/client.py
index 908468959..af4b635bb 100644
--- a/docker/client.py
+++ b/docker/client.py
@@ -273,6 +273,7 @@ def events(self, since=None, until=None, filters=None, decode=None):
decode=decode
)
+ @check_resource
def exec_create(self, container, cmd, stdout=True, stderr=True, tty=False,
privileged=False):
if utils.compare_version('1.15', self._version) < 0:
|
django-oscar__django-oscar-4257 | There is no link from the dashboard to Attribute Option Group list page
I found the page `/dashboard/catalogue/attribute-option-group/` useful for managing option groups (inserting/deleting options from groups), but I see no link from the dashboard to this page. Is it missing?
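For reference, the kind of navigation entry that makes the page reachable, which is what the linked fix adds to `OSCAR_DASHBOARD_NAVIGATION` (plain strings stand in here for Oscar's translated labels):

```python
# Stand-in for the gettext-lazy label used in Oscar's settings.
nav_item = {
    "label": "Attribute Option Groups",
    "url_name": "dashboard:catalogue-attribute-option-group-list",
}
```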
| [
{
"content": "from django.urls import reverse_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nOSCAR_SHOP_NAME = \"Oscar\"\nOSCAR_SHOP_TAGLINE = \"\"\nOSCAR_HOMEPAGE = reverse_lazy(\"catalogue:index\")\n\n# Dynamic class loading\nOSCAR_DYNAMIC_CLASS_LOADER = \"oscar.core.loading.default_class_loa... | [
{
"content": "from django.urls import reverse_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nOSCAR_SHOP_NAME = \"Oscar\"\nOSCAR_SHOP_TAGLINE = \"\"\nOSCAR_HOMEPAGE = reverse_lazy(\"catalogue:index\")\n\n# Dynamic class loading\nOSCAR_DYNAMIC_CLASS_LOADER = \"oscar.core.loading.default_class_loa... | diff --git a/src/oscar/defaults.py b/src/oscar/defaults.py
index f231afbf449..53639d6e8f3 100644
--- a/src/oscar/defaults.py
+++ b/src/oscar/defaults.py
@@ -133,6 +133,10 @@
"label": _("Options"),
"url_name": "dashboard:catalogue-option-list",
},
+ {
+ "label": _("Attribute Option Groups"),
+ "url_name": "dashboard:catalogue-attribute-option-group-list",
+ },
],
},
{
|
PokemonGoF__PokemonGo-Bot-4547 | No usable pokeballs found
When "No usable pokeballs found" happens, the bot just hangs there and does not try to move on and spin the fort.
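A sketch of the behaviour the report implies is needed (names are modelled on the bot's `WorkerResult`, but simplified): the catch worker should surface an error result instead of silently stopping, so the task loop can carry on with the next task, such as spinning the fort.

```python
from enum import Enum

class WorkerResult(Enum):
    SUCCESS = 0
    ERROR = 1

def do_catch(usable_balls):
    """Attempt a catch; with no usable balls, report ERROR so the
    task runner can continue with other tasks instead of hanging."""
    if usable_balls <= 0:
        return WorkerResult.ERROR
    return WorkerResult.SUCCESS
```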
| [
{
"content": "# -*- coding: utf-8 -*-\n\nimport os\nimport time\nimport json\nimport logging\nimport time\nfrom random import random, randrange\nfrom pokemongo_bot import inventory\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.human_behaviour import sleep, action_delay\nfrom pokemongo_bot.in... | [
{
"content": "# -*- coding: utf-8 -*-\n\nimport os\nimport time\nimport json\nimport logging\nimport time\nfrom random import random, randrange\nfrom pokemongo_bot import inventory\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.human_behaviour import sleep, action_delay\nfrom pokemongo_bot.in... | diff --git a/pokemongo_bot/cell_workers/pokemon_catch_worker.py b/pokemongo_bot/cell_workers/pokemon_catch_worker.py
index aa060b46be..a494783382 100644
--- a/pokemongo_bot/cell_workers/pokemon_catch_worker.py
+++ b/pokemongo_bot/cell_workers/pokemon_catch_worker.py
@@ -355,7 +355,7 @@ def _do_catch(self, pokemon, encounter_id, catch_rate_by_ball, is_vip=False):
maximum_ball = ITEM_ULTRABALL
continue
else:
- break
+ return WorkerResult.ERROR
# check future ball count
num_next_balls = 0
|
learningequality__kolibri-4689 | Shows sorry! something went wrong.
### Observed behavior
When a coach goes to the Recent tab to see exercise and video progress, an error is shown.
### Expected behavior
It should show the progress instead of an error.
### Steps to reproduce
1. Log in as a coach.
2. Go to the Recent tab.
3. Open an exercise/video and observe.
### Context
* Kolibri version : kolibri 0.11.0
* Operating system : Ubuntu 14.04
* Browser : chrome
### Screenshot



| [
{
"content": "import datetime\n\nfrom dateutil.parser import parse\nfrom django.db import connection\nfrom django.db.models import Min\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom rest_framework import mixins\nfrom rest_framework import pagination\nfrom rest_framework import permissi... | [
{
"content": "import datetime\n\nfrom dateutil.parser import parse\nfrom django.db import connection\nfrom django.db.models import Min\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom rest_framework import mixins\nfrom rest_framework import pagination\nfrom rest_framework import permissi... | diff --git a/kolibri/plugins/coach/api.py b/kolibri/plugins/coach/api.py
index 8310fa13339..3fa7ae1568e 100644
--- a/kolibri/plugins/coach/api.py
+++ b/kolibri/plugins/coach/api.py
@@ -102,7 +102,7 @@ class ContentSummaryViewSet(viewsets.ReadOnlyModelViewSet):
def get_queryset(self):
channel_id = self.kwargs['channel_id']
- return ContentNode.objects.filter(Q(channel_id=channel_id) & Q(available=True)).order_by('lft')
+ return ContentNode.objects.filter(Q(channel_id=channel_id)).order_by('lft')
class RecentReportViewSet(ReportBaseViewSet):
|
ansible__ansible-modules-core-4285 | cron state is changed when using multiline job
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cron
##### ANSIBLE VERSION
2.0.2.0
##### CONFIGURATION
##### OS / ENVIRONMENT
Darwin craneworks 15.5.0 Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64 x86_64
##### SUMMARY
When the cron module is faced with a multiline job, it can't figure out the correct state of the task. It just reports `changed` every time.
##### STEPS TO REPRODUCE
```
- name: returns a changed every time
cron:
name: renewal cron
job: >
bash -l -c
"mkdir -p /tmp/certbot-auto &&
/opt/certbot/certbot-auto certonly
-d www.mydomain
--agree-tos --renew-by-default
-a webroot --webroot-path=\"/tmp/certbot-auto\" &&
service nginx reload"
special_time: "monthly"
- name: works as expected
cron:
name: debug cron
job: bash -l -c "mkdir -p /tmp/certbot-auto && /opt/certbot/certbot-auto certonly -d www.mydomain --agree-tos --renew-by-default -a webroot --webroot-path=\"/tmp/certbot-auto\" && service nginx reload"
special_time: "monthly"
```
##### EXPECTED RESULTS
A `changed` the first time it is run, and just an `ok` after that
##### ACTUAL RESULTS
`changed` is returned every time
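A likely culprit, and what the eventual fix normalizes: YAML's `>` folded block style keeps a trailing newline, so the rendered job never compares equal to the single line already in the crontab. A sketch of the normalization:

```python
def normalize_job(job):
    """Strip leading/trailing CR/LF so a job written with YAML's '>'
    folded style compares equal to the existing crontab entry."""
    return job.strip('\r\n')

# Folded block scalars end with a trailing newline:
folded = 'bash -l -c "service nginx reload"\n'
```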
| [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n#\n# (c) 2012, Dane Summers <dsummers@pinedesk.biz>\n# (c) 2013, Mike Grozak <mike.grozak@gmail.com>\n# (c) 2013, Patrick Callahan <pmc@patrickcallahan.com>\n# (c) 2015, Evan Kaufman <evan@digitalflophouse.com>\n# (c) 2015, Luca Berruti <nadirio@gmail.c... | [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n#\n# (c) 2012, Dane Summers <dsummers@pinedesk.biz>\n# (c) 2013, Mike Grozak <mike.grozak@gmail.com>\n# (c) 2013, Patrick Callahan <pmc@patrickcallahan.com>\n# (c) 2015, Evan Kaufman <evan@digitalflophouse.com>\n# (c) 2015, Luca Berruti <nadirio@gmail.c... | diff --git a/system/cron.py b/system/cron.py
index aed25d3feab..f7308fda9d4 100644
--- a/system/cron.py
+++ b/system/cron.py
@@ -383,6 +383,9 @@ def find_env(self, name):
return []
def get_cron_job(self,minute,hour,day,month,weekday,job,special,disabled):
+ # normalize any leading/trailing newlines (ansible/ansible-modules-core#3791)
+ job = job.strip('\r\n')
+
if disabled:
disable_prefix = '#'
else:
|
carpentries__amy-1622 | 500 Server error when 'manual attendance' field is not set
Having the 'Manual attendance' field unset causes a 500 Server Error. I tried leaving it unset for workshops where the attendance number is not known yet. It defaults to '0'; I'm not sure if having the attendance at 0 is the desired behaviour. If it is, disregard this issue.
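A minimal sketch of the form-level guard the fix adds (in Django it lives in a `clean_manual_attendance` method, following the usual `clean_<field>` convention):

```python
def clean_manual_attendance(value):
    """Coerce an empty submission (None or '') to 0 so saving the
    event does not fail with a 500."""
    return value or 0
```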
| [
{
"content": "from datetime import datetime, timezone\nimport re\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Div, HTML, Submit, Button, Field\nfrom django import forms\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.sites.models import Site\nfro... | [
{
"content": "from datetime import datetime, timezone\nimport re\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Div, HTML, Submit, Button, Field\nfrom django import forms\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.sites.models import Site\nfro... | diff --git a/amy/workshops/forms.py b/amy/workshops/forms.py
index a1da9863d..03a4051e2 100644
--- a/amy/workshops/forms.py
+++ b/amy/workshops/forms.py
@@ -575,6 +575,11 @@ def clean_curricula(self):
return curricula
+ def clean_manual_attendance(self):
+ """Regression: #1608 - fix 500 server error when field is cleared."""
+ manual_attendance = self.cleaned_data['manual_attendance'] or 0
+ return manual_attendance
+
def save(self, *args, **kwargs):
res = super().save(*args, **kwargs)
diff --git a/amy/workshops/tests/test_event.py b/amy/workshops/tests/test_event.py
index 41af7fdf2..25fb4eb4c 100644
--- a/amy/workshops/tests/test_event.py
+++ b/amy/workshops/tests/test_event.py
@@ -567,6 +567,24 @@ def test_negative_manual_attendance(self):
f = EventForm(data)
self.assertTrue(f.is_valid())
+ def test_empty_manual_attendance(self):
+ """Ensure we don't get 500 server error when field is left with empty
+ value.
+
+ This is a regression test for
+ https://github.com/swcarpentry/amy/issues/1608."""
+
+ data = {
+ 'slug': '2016-06-30-test-event',
+ 'host': self.test_host.id,
+ 'tags': [self.test_tag.id],
+ 'manual_attendance': '',
+ }
+ f = EventForm(data)
+ self.assertTrue(f.is_valid())
+ event = f.save()
+ self.assertEqual(event.manual_attendance, 0)
+
def test_number_of_attendees_increasing(self):
"""Ensure event.attendance gets bigger after adding new learners."""
event = Event.objects.get(slug='test_event_0')
|
mdn__kuma-5638 | Why does SSR rendering /en-US/docs/Web require 54 SQL queries?
<img width="1500" alt="Screen Shot 2019-08-13 at 3 05 10 PM" src="https://user-images.githubusercontent.com/26739/62969706-da54f880-bddb-11e9-8405-dd1ecd25b657.png">
| [
{
"content": "from django.conf import settings\nfrom django.http import HttpResponsePermanentRedirect, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import activate, ugettext as _\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.h... | [
{
"content": "from django.conf import settings\nfrom django.http import HttpResponsePermanentRedirect, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import activate, ugettext as _\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.h... | diff --git a/kuma/api/v1/views.py b/kuma/api/v1/views.py
index ddabd8ac64f..48a6c2f90c0 100644
--- a/kuma/api/v1/views.py
+++ b/kuma/api/v1/views.py
@@ -125,7 +125,7 @@ def document_api_data(doc=None, ensure_contributors=False, redirect_url=None):
en_slug = ''
other_translations = doc.get_other_translations(
- fields=('locale', 'slug', 'title'))
+ fields=('locale', 'slug', 'title', 'parent'))
available_locales = (
set([doc.locale]) | set(t.locale for t in other_translations))
|
ManimCommunity__manim-3200 | issue when using opengl as renderer
## Description of bug / unexpected behavior
`No module named 'moderngl.program_members'`
When I tried to run the command `manim --renderer=opengl interactive.py`, an error occurred saying that there is no such module. I suppose this is something in the source code; does this mean there's something going on with moderngl?
## Expected behavior
<!-- Add a clear and concise description of what you expected to happen. -->
I think the program should run, and it does when I leave out the renderer flag.
## How to reproduce the issue
<!-- Provide a piece of code illustrating the undesired behavior. -->
<details><summary>Code for reproducing the problem</summary>
```
from manim import *


class InteractiveRadius(Scene):
    def construct(self):
        plane = NumberPlane()
        cursor_dot = Dot().move_to(3 * RIGHT + 2 * UP)
        red_circle = Circle(
            radius=5,
            color=RED
        )
        red_circle.add_updater(lambda mob: mob.become(
            Circle(
                radius=3,
                color=RED
            )
        ))
        self.play(Create(plane), Create(red_circle), FadeIn(cursor_dot))
        self.cursor_dot = cursor_dot
        self.interactive_embed()

    def on_key_press(self, symbol, modifiers):
        from pyglet.window import key as pyglet_key
        if symbol == pyglet_key.G:
            self.play(
                self.cursor_dot.animate.move_to(self.mouse_point.get_center())
            )
        super().on_key_press(symbol, modifiers)
```
</details>
## Additional media files
<!-- Paste in the files manim produced on rendering the code above. -->
<details><summary>Images/GIFs</summary>
<!-- PASTE MEDIA HERE -->
</details>
## Logs
<details><summary>Terminal output</summary>
<!-- Add "-v DEBUG" when calling manim to generate more detailed logs -->
```
$ manim --renderer=opengl interactive.py
Manim Community v0.17.2
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\cli\render\commands.py:97 in │
│ render │
│ │
│ 94 │ │ │ │ for SceneClass in scene_classes_from_file(file): │
│ 95 │ │ │ │ │ with tempconfig({}): │
│ 96 │ │ │ │ │ │ scene = SceneClass(renderer) │
│ ❱ 97 │ │ │ │ │ │ rerun = scene.render() │
│ 98 │ │ │ │ │ if rerun or config["write_all"]: │
│ 99 │ │ │ │ │ │ renderer.num_plays = 0 │
│ 100 │ │ │ │ │ │ continue │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\scene\scene.py:223 in render │
│ │
│ 220 │ │ """ │
│ 221 │ │ self.setup() │
│ 222 │ │ try: │
│ ❱ 223 │ │ │ self.construct() │
│ 224 │ │ except EndSceneEarlyException: │
│ 225 │ │ │ pass │
│ 226 │ │ except RerunSceneException as e: │
│ │
│ C:\Users\baichuanzhou\Desktop\ManimDL\interactive.py:20 in construct │
│ │
│ 17 │ │ │ ) │
│ 18 │ │ )) │
│ 19 │ │ │
│ ❱ 20 │ │ self.play(Create(plane), Create(red_circle), FadeIn(cursor_dot)) │
│ 21 │ │ self.cursor_dot = cursor_dot │
│ 22 │ │ self.interactive_embed() │
│ 23 │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\scene\scene.py:1033 in play │
│ │
│ 1030 │ │ │ return │
│ 1031 │ │ │
│ 1032 │ │ start_time = self.renderer.time │
│ ❱ 1033 │ │ self.renderer.play(self, *args, **kwargs) │
│ 1034 │ │ run_time = self.renderer.time - start_time │
│ 1035 │ │ if subcaption: │
│ 1036 │ │ │ if subcaption_duration is None: │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\utils\caching.py:65 in │
│ wrapper │
│ │
│ 62 │ │ │ "List of the first few animation hashes of the scene: %(h)s", │
│ 63 │ │ │ {"h": str(self.animations_hashes[:5])}, │
│ 64 │ │ ) │
│ ❱ 65 │ │ func(self, scene, *args, **kwargs) │
│ 66 │ │
│ 67 │ return wrapper │
│ 68 │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:4 │
│ 39 in play │
│ │
│ 436 │ │ │ self.animation_elapsed_time = scene.duration │
│ 437 │ │ │
│ 438 │ │ else: │
│ ❱ 439 │ │ │ scene.play_internal() │
│ 440 │ │ │
│ 441 │ │ self.file_writer.end_animation(not self.skip_animations) │
│ 442 │ │ self.time += scene.duration │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\scene\scene.py:1200 in │
│ play_internal │
│ │
│ 1197 │ │ for t in self.time_progression: │
│ 1198 │ │ │ self.update_to_time(t) │
│ 1199 │ │ │ if not skip_rendering and not self.skip_animation_preview: │
│ ❱ 1200 │ │ │ │ self.renderer.render(self, t, self.moving_mobjects) │
│ 1201 │ │ │ if self.stop_condition is not None and self.stop_condition(): │
│ 1202 │ │ │ │ self.time_progression.close() │
│ 1203 │ │ │ │ break │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:4 │
│ 50 in render │
│ │
│ 447 │ │ self.window.swap_buffers() │
│ 448 │ │
│ 449 │ def render(self, scene, frame_offset, moving_mobjects): │
│ ❱ 450 │ │ self.update_frame(scene) │
│ 451 │ │ │
│ 452 │ │ if self.skip_animations: │
│ 453 │ │ │ return │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:4 │
│ 70 in update_frame │
│ │
│ 467 │ │ for mobject in scene.mobjects: │
│ 468 │ │ │ if not mobject.should_render: │
│ 469 │ │ │ │ continue │
│ ❱ 470 │ │ │ self.render_mobject(mobject) │
│ 471 │ │ │
│ 472 │ │ for obj in scene.meshes: │
│ 473 │ │ │ for mesh in obj.get_meshes(): │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\opengl_renderer.py:3 │
│ 75 in render_mobject │
│ │
│ 372 │ │ │ │ primitive=mobject.render_primitive, │
│ 373 │ │ │ ) │
│ 374 │ │ │ mesh.set_uniforms(self) │
│ ❱ 375 │ │ │ mesh.render() │
│ 376 │ │
│ 377 │ def get_texture_id(self, path): │
│ 378 │ │ if repr(path) not in self.path_to_texture_id: │
│ │
│ C:\Users\baichuanzhou\anaconda3\envs\manim\lib\site-packages\manim\renderer\shader.py:315 in │
│ render │
│ │
│ 312 │ │ else: │
│ 313 │ │ │ self.shader.context.disable(moderngl.DEPTH_TEST) │
│ 314 │ │ │
│ ❱ 315 │ │ from moderngl.program_members import Attribute │
│ 316 │ │ │
│ 317 │ │ shader_attributes = [] │
│ 318 │ │ for k, v in self.shader.shader_program._members.items(): │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'moderngl.program_members'
```
<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->
</details>
## System specifications
<details><summary>System Details</summary>
- OS (with version, e.g., Windows 10 v2004 or macOS 10.15 (Catalina)):
- RAM:
- Python version (`python/py/python3 --version`):
- Installed modules (provide output from `pip list`):
```
PASTE HERE
```
</details>
<details><summary>LaTeX details</summary>
+ LaTeX distribution (e.g. TeX Live 2020):
+ Installed LaTeX packages:
<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->
</details>
<details><summary>FFMPEG</summary>
Output of `ffmpeg -version`:
```
PASTE HERE
```
</details>
## Additional comments
<!-- Add further context that you think might be relevant for this issue here. -->
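Not manim's actual fix (which, per the diff, simply changed the import to `from moderngl import Attribute`), but a generic helper showing the compatibility pattern when a library moves a symbol between releases:

```python
import importlib

def first_importable(*paths):
    """Return the first 'module:attr' that resolves, for symbols that
    libraries move between releases (as moderngl did with Attribute)."""
    last_err = None
    for path in paths:
        mod_name, _, attr = path.partition(":")
        try:
            return getattr(importlib.import_module(mod_name), attr)
        except (ImportError, AttributeError) as err:
            last_err = err
    raise last_err

# e.g. first_importable("moderngl:Attribute", "moderngl.program_members:Attribute")
```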
| [
{
"content": "from __future__ import annotations\n\nimport re\nimport textwrap\nfrom pathlib import Path\n\nimport moderngl\nimport numpy as np\n\nfrom .. import config\nfrom ..utils import opengl\nfrom ..utils.simple_functions import get_parameters\n\nSHADER_FOLDER = Path(__file__).parent / \"shaders\"\nshader... | [
{
"content": "from __future__ import annotations\n\nimport re\nimport textwrap\nfrom pathlib import Path\n\nimport moderngl\nimport numpy as np\n\nfrom .. import config\nfrom ..utils import opengl\nfrom ..utils.simple_functions import get_parameters\n\nSHADER_FOLDER = Path(__file__).parent / \"shaders\"\nshader... | diff --git a/manim/renderer/shader.py b/manim/renderer/shader.py
index dc28222489..892ccb5892 100644
--- a/manim/renderer/shader.py
+++ b/manim/renderer/shader.py
@@ -312,7 +312,7 @@ def render(self):
else:
self.shader.context.disable(moderngl.DEPTH_TEST)
- from moderngl.program_members import Attribute
+ from moderngl import Attribute
shader_attributes = []
for k, v in self.shader.shader_program._members.items():
|
mlflow__mlflow-6024 | [BUG] sqlite for backend store
### Willingness to contribute
Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
### MLflow version
1.26
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: 20.04
- **Python version**: 3.8
- **yarn version, if running the dev UI**:
### Describe the problem
After upgrading from 1.23 and setting SQLite as the backend store, a `sqlalchemy.future` module error is produced.
This is similar to the issue described here:
https://stackoverflow.com/questions/72341647/mlflow-modulenotfounderror-no-module-named-sqlalchemy-future/72432684#72432684
### Tracking information
mlflow server --backend-store-uri sqlite:///mlflow.sqlite --default-artifact-root ./mlruns
### Code to reproduce issue
```
mlflow server --backend-store-uri sqlite:///mlflow.sqlite --default-artifact-root ./mlruns
```
### Other info / logs
```
2022/05/30 13:18:36 ERROR mlflow.cli: Error initializing backend store
2022/05/30 13:18:36 ERROR mlflow.cli: No module named 'sqlalchemy.future'
Traceback (most recent call last):
lib/python3.8/site-packages/mlflow/store/tracking/sqlalchemy_store.py", line 11, in <module>
from sqlalchemy.future import select
ModuleNotFoundError: No module named 'sqlalchemy.future'
```
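The `sqlalchemy.future` module only exists from SQLAlchemy 1.4 onward, so its absence doubles as a quick check that the installed SQLAlchemy is too old. A small diagnostic sketch (not part of MLflow itself):

```python
import importlib


def has_sqlalchemy_future():
    # sqlalchemy.future was introduced in SQLAlchemy 1.4, so a failed import
    # means the installed version predates 1.4 (or SQLAlchemy is missing).
    try:
        importlib.import_module("sqlalchemy.future")
        return True
    except ModuleNotFoundError:
        return False


if not has_sqlalchemy_future():
    print("SQLAlchemy < 1.4 (or not installed); run: pip install 'sqlalchemy>=1.4.0'")
```

This matches the fix below, which raises the minimum pinned version to `sqlalchemy>=1.4.0`.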
### What component(s) does this bug affect?
- [X] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [X] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [X] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [X] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [X] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
| [
{
"content": "import os\nimport logging\nimport distutils\n\nfrom importlib.machinery import SourceFileLoader\nfrom setuptools import setup, find_packages\n\n_MLFLOW_SKINNY_ENV_VAR = \"MLFLOW_SKINNY\"\n\nversion = (\n SourceFileLoader(\"mlflow.version\", os.path.join(\"mlflow\", \"version.py\")).load_module(... | [
{
"content": "import os\nimport logging\nimport distutils\n\nfrom importlib.machinery import SourceFileLoader\nfrom setuptools import setup, find_packages\n\n_MLFLOW_SKINNY_ENV_VAR = \"MLFLOW_SKINNY\"\n\nversion = (\n SourceFileLoader(\"mlflow.version\", os.path.join(\"mlflow\", \"version.py\")).load_module(... | diff --git a/setup.py b/setup.py
index adf20d1e6dcd5..8d73781b087ad 100644
--- a/setup.py
+++ b/setup.py
@@ -79,7 +79,7 @@ def package_files(directory):
# Pin sqlparse for: https://github.com/mlflow/mlflow/issues/3433
"sqlparse>=0.3.1",
# Required to run the MLflow server against SQL-backed storage
- "sqlalchemy",
+ "sqlalchemy>=1.4.0",
"waitress; platform_system == 'Windows'",
]
|
mitmproxy__mitmproxy-2142 | mitmdump -nr does not exit automatically
##### Steps to reproduce the problem:
1. Use `mitmdump -nr foo.mitm` to print some flows.
2. Mitmdump should exit automatically after printing, but it doesn't.
##### System information
Mitmproxy version: 3.0.0 (2.0.0dev0136-0x05e1154)
Python version: 3.5.2
Platform: Linux-3.4.0+-x86_64-with-Ubuntu-14.04-trusty
SSL version: OpenSSL 1.0.2g-fips 1 Mar 2016
Linux distro: Ubuntu 14.04 trusty
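The fix below flips the default of the `keepserving` option from `True` to `False`. A minimal sketch of the behavior difference (illustrative names, not mitmproxy's actual internals):

```python
def should_exit(keepserving, done_reading):
    # mitmdump should exit once the flow file has been fully read,
    # unless the user explicitly asked it to keep serving.
    return done_reading and not keepserving


print(should_exit(keepserving=True, done_reading=True))   # False (old default: hangs)
print(should_exit(keepserving=False, done_reading=True))  # True (new default: exits)
```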
| [
{
"content": "from typing import Optional, Sequence\n\nfrom mitmproxy import optmanager\nfrom mitmproxy.net import tcp\n\n# We redefine these here for now to avoid importing Urwid-related guff on\n# platforms that don't support it, and circular imports. We can do better using\n# a lazy checker down the track.\n... | [
{
"content": "from typing import Optional, Sequence\n\nfrom mitmproxy import optmanager\nfrom mitmproxy.net import tcp\n\n# We redefine these here for now to avoid importing Urwid-related guff on\n# platforms that don't support it, and circular imports. We can do better using\n# a lazy checker down the track.\n... | diff --git a/mitmproxy/options.py b/mitmproxy/options.py
index 6dd8616be4..798d5b9cee 100644
--- a/mitmproxy/options.py
+++ b/mitmproxy/options.py
@@ -78,7 +78,7 @@ def __init__(self, **kwargs) -> None:
"Kill extra requests during replay."
)
self.add_option(
- "keepserving", bool, True,
+ "keepserving", bool, False,
"Continue serving after client playback or file read."
)
self.add_option(
|
weecology__retriever-1104 | Incorrectly lower casing table_name for csv
It looks like we're lower casing manually set table/directory names, at least for csv but probably for all flat file engines.
```
$ mkdir TESTER
$ retriever install csv mammal-masses --table_name TESTER/test.csv
=> Installing mammal-masses
[Errno 2] No such file or directory: 'tester/test.csv'
Done!
$ mkdir tester
$ retriever install csv mammal-masses --table_name TESTER/test.csv
=> Installing mammal-masses
Progress: 5731/5731 rows inserted into tester/test.csv totaling 5731:
Done!
```
This is causing issues for the R package, see https://github.com/ropensci/rdataretriever/issues/131, but is also a general problem since directory names are case sensitive for 2/3 OSs.
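A minimal sketch of the suspected cause: blanket-lowercasing every CLI argument (as the patch below removes) also lowercases user-supplied paths, which are case-sensitive on most operating systems. The `argv` value is illustrative:

```python
# every argument, including the user-supplied path, gets lowercased:
argv = ["install", "csv", "mammal-masses", "--table_name", "TESTER/test.csv"]
lowered = [arg.lower() for arg in argv]
print(lowered[-1])  # tester/test.csv; the TESTER directory name is lost
```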
| [
{
"content": "\"\"\"Data Retriever Wizard\n\nRunning this module directly will launch the download wizard, allowing the user\nto choose from all scripts.\n\nThe main() function can be used for bootstrapping.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport os\ni... | [
{
"content": "\"\"\"Data Retriever Wizard\n\nRunning this module directly will launch the download wizard, allowing the user\nto choose from all scripts.\n\nThe main() function can be used for bootstrapping.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport os\ni... | diff --git a/retriever/__main__.py b/retriever/__main__.py
index a3ae1fa12..9971c7e9a 100644
--- a/retriever/__main__.py
+++ b/retriever/__main__.py
@@ -32,7 +32,6 @@
def main():
"""This function launches the Data Retriever."""
- sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]]
if len(sys.argv) == 1:
# if no command line args are passed, show the help options
parser.parse_args(['-h'])
|
ivy-llc__ivy-16518 | uniform
| [
{
"content": "# global\n",
"path": "ivy/functional/frontends/paddle/tensor/random.py"
}
] | [
{
"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\nde... | diff --git a/ivy/functional/frontends/paddle/tensor/random.py b/ivy/functional/frontends/paddle/tensor/random.py
index 28ffa370ad668..62a91cfa2a7ce 100644
--- a/ivy/functional/frontends/paddle/tensor/random.py
+++ b/ivy/functional/frontends/paddle/tensor/random.py
@@ -1 +1,15 @@
# global
+import ivy
+from ivy.func_wrapper import with_supported_dtypes
+from ivy.functional.frontends.paddle.func_wrapper import (
+ to_ivy_arrays_and_back,
+)
+
+
+@with_supported_dtypes(
+ {"2.4.2 and below": ("float32", "float64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):
+ return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)
diff --git a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py
index 0a9c8754f4fed..0026c14ddbe41 100644
--- a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py
+++ b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_random.py
@@ -1,3 +1,42 @@
# global
+from hypothesis import strategies as st
# local
+import ivy_tests.test_ivy.helpers as helpers
+from ivy_tests.test_ivy.helpers import handle_frontend_test
+
+
+@handle_frontend_test(
+ fn_tree="paddle.uniform",
+ input_dtypes=helpers.get_dtypes("float"),
+ shape=st.tuples(
+ st.integers(min_value=2, max_value=5), st.integers(min_value=2, max_value=5)
+ ),
+ dtype=helpers.get_dtypes("valid", full=False),
+ min=st.floats(allow_nan=False, allow_infinity=False, width=32),
+ max=st.floats(allow_nan=False, allow_infinity=False, width=32),
+ seed=st.integers(min_value=2, max_value=5),
+)
+def test_paddle_uniform(
+ input_dtypes,
+ shape,
+ dtype,
+ min,
+ max,
+ seed,
+ frontend,
+ test_flags,
+ fn_tree,
+):
+ helpers.test_frontend_function(
+ input_dtypes=input_dtypes,
+ frontend=frontend,
+ test_flags=test_flags,
+ fn_tree=fn_tree,
+ test_values=False,
+ shape=shape,
+ dtype=dtype[0],
+ min=min,
+ max=max,
+ seed=seed,
+ )
|
canonical__cloud-init-4422 | package-update-upgrade-install does not work on Gentoo
This bug was originally filed in Launchpad as [LP: #1799544](https://bugs.launchpad.net/cloud-init/+bug/1799544)
<details>
<summary>Launchpad details</summary>
<pre>
affected_projects = []
assignee = holmanb
assignee_name = Brett Holman
date_closed = 2022-07-21T15:16:56.010973+00:00
date_created = 2018-10-23T17:34:36.633424+00:00
date_fix_committed = 2022-07-21T15:16:56.010973+00:00
date_fix_released = 2022-07-21T15:16:56.010973+00:00
id = 1799544
importance = medium
is_complete = True
lp_url = https://bugs.launchpad.net/cloud-init/+bug/1799544
milestone = 22.2
owner = gilles-dartiguelongue
owner_name = Gilles Dartiguelongue
private = False
status = fix_released
submitter = gilles-dartiguelongue
submitter_name = Gilles Dartiguelongue
tags = ['gentoo']
duplicates = []
</pre>
</details>
_Launchpad user **Gilles Dartiguelongue(gilles-dartiguelongue)** wrote on 2018-10-23T17:34:36.633424+00:00_
I'm testing cloud-init in a NoCloud setup. I'm trying to install packages using the appropriate module and, after fixing some issues in Gentoo packaging, I hit an execution error because `cmd = list('emerge')` is interpreted as `['e', 'm', 'e', ...]` when `['emerge']` was intended.
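The root cause is easy to reproduce in isolation: `list()` iterates a string character by character, which is rarely what is wanted when building a command line.

```python
# list() splits a string into its characters:
print(list("emerge"))  # ['e', 'm', 'e', 'r', 'g', 'e']

# the intended single-element command list, as in the fix:
cmd = ["emerge"]
cmd.append("--quiet")
print(cmd)  # ['emerge', '--quiet']
```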
| [
{
"content": "# Copyright (C) 2014 Rackspace, US Inc.\n# Copyright (C) 2016 Matthew Thode.\n#\n# Author: Nate House <nathan.house@rackspace.com>\n# Author: Matthew Thode <prometheanfire@gentoo.org>\n#\n# This file is part of cloud-init. See LICENSE file for license information.\n\nfrom cloudinit import distros,... | [
{
"content": "# Copyright (C) 2014 Rackspace, US Inc.\n# Copyright (C) 2016 Matthew Thode.\n#\n# Author: Nate House <nathan.house@rackspace.com>\n# Author: Matthew Thode <prometheanfire@gentoo.org>\n#\n# This file is part of cloud-init. See LICENSE file for license information.\n\nfrom cloudinit import distros,... | diff --git a/cloudinit/distros/gentoo.py b/cloudinit/distros/gentoo.py
index 37217fe4332..d364eae4b0e 100644
--- a/cloudinit/distros/gentoo.py
+++ b/cloudinit/distros/gentoo.py
@@ -218,7 +218,7 @@ def set_timezone(self, tz):
distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
def package_command(self, command, args=None, pkgs=None):
- cmd = list("emerge")
+ cmd = ["emerge"]
# Redirect output
cmd.append("--quiet")
|
django-hijack__django-hijack-383 | TypeError when releasing when LOGOUT_REDIRECT_URL is None
According to the Django docs LOGOUT_REDIRECT_URL can be set to None
https://docs.djangoproject.com/en/dev/ref/settings/#logout-redirect-url
When this is the case a TypeError can be raised [here](https://github.com/django-hijack/django-hijack/blob/master/hijack/views.py#L48) in the release view
Because `self.success_url` == `LOGOUT_REDIRECT_URL` == `None`
Passing None to `resolve_url` causes a TypeError
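A minimal pure-Python sketch of the fixed fallback logic; Django's actual `resolve_url` is omitted and the function name is illustrative:

```python
def get_success_url(redirect_url, success_url):
    # fall back to "/" when LOGOUT_REDIRECT_URL (success_url) is None,
    # so nothing that would raise TypeError reaches resolve_url()
    return redirect_url or success_url or "/"


print(get_success_url(None, None))      # "/"
print(get_success_url(None, "/home/"))  # "/home/"
```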
| [
{
"content": "from contextlib import contextmanager\n\nimport django\nfrom django.contrib.auth import BACKEND_SESSION_KEY, get_user_model, load_backend, login\nfrom django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin\nfrom django.db import transaction\nfrom django.http import HttpResponseB... | [
{
"content": "from contextlib import contextmanager\n\nimport django\nfrom django.contrib.auth import BACKEND_SESSION_KEY, get_user_model, load_backend, login\nfrom django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin\nfrom django.db import transaction\nfrom django.http import HttpResponseB... | diff --git a/hijack/views.py b/hijack/views.py
index 79cee93e..b3ca060e 100755
--- a/hijack/views.py
+++ b/hijack/views.py
@@ -45,7 +45,7 @@ class SuccessUrlMixin:
def get_success_url(self):
url = self.get_redirect_url()
- return url or resolve_url(self.success_url)
+ return url or resolve_url(self.success_url or "/")
def get_redirect_url(self):
"""Return the user-originating redirect URL if it's safe."""
|
ibis-project__ibis-2426 | fix bigquery version
https://dev.azure.com/ibis-project/ibis/_build/results?buildId=3396&view=logs&j=8f09edc2-e3b7-52de-126a-0225c4f3efa1&t=78a72aec-b398-558e-7c0d-2d33604b9e53
I think we need to limit the upper bound of the bigquery library version here.
| [
{
"content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.vers... | [
{
"content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.vers... | diff --git a/ci/deps/bigquery.yml b/ci/deps/bigquery.yml
index e3824aeb63d4..05a1632ad96f 100644
--- a/ci/deps/bigquery.yml
+++ b/ci/deps/bigquery.yml
@@ -1,2 +1,2 @@
-google-cloud-bigquery>=1.12.0
+google-cloud-bigquery-core >=1.12.0,<1.24.0dev
pydata-google-auth
diff --git a/setup.py b/setup.py
index 148bdf05dc33..812119a4be55 100644
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,10 @@
'clickhouse-driver>=0.1.3',
'clickhouse-cityhash',
]
-bigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth']
+bigquery_requires = [
+ 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',
+ 'pydata-google-auth',
+]
hdf5_requires = ['tables>=3.0.0']
parquet_requires = ['pyarrow>=0.12.0']
|
svthalia__concrexit-2208 | Date format for events
### Describe the change
Change the date format for events from { MMM. DD, YYYY, H A } (see additional context)
to { EEE, DD - MMMM - YYYY, HH:MM }.
### Motivation
The current date format follows the US style. According to the spelling in the style guide we use British English ("colour"), and we are EU based,
so the date format should also follow the British English convention: a.m./p.m. is mostly used in the US, Australia and Canada, while throughout Europe the 24-hour notation is used.
### Additional context
Current: 
| [
{
"content": "\"\"\"Django settings for concrexit.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport logging\n\nimport base64\nimport json... | [
{
"content": "\"\"\"Django settings for concrexit.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport logging\n\nimport base64\nimport json... | diff --git a/website/thaliawebsite/settings.py b/website/thaliawebsite/settings.py
index da955e0e5..912b48815 100644
--- a/website/thaliawebsite/settings.py
+++ b/website/thaliawebsite/settings.py
@@ -627,13 +627,15 @@ def from_env(
# Internationalization
# https://docs.djangoproject.com/en/dev/topics/i18n/
+DATETIME_FORMAT = "j M, Y, H:i"
+
LANGUAGE_CODE = "en"
TIME_ZONE = "Europe/Amsterdam"
USE_I18N = True
-USE_L10N = True
+USE_L10N = False
USE_TZ = True
|
litestar-org__litestar-1231 | Bug: LoggingMiddleware is sending obfuscated session id to client
**Describe the bug**
When using the logging middleware and session middleware together, the logging middleware's cookie obfuscation overwrites the session cookie value with "*****", and that obfuscated value is pushed down to the client.
The initial set-cookie carries the correct session id, but subsequent requests do not.
**To Reproduce**
I created a test function in tests/middleware/test_logging_middleware.py which I believe confirms the bug:
```python
def test_logging_with_session_middleware() -> None:
@post("/")
async def set_session(request: Request) -> None:
request.set_session({"hello": "world"})
@get("/")
async def get_session() -> None:
pass
logging_middleware_config = LoggingMiddlewareConfig()
session_config = MemoryBackendConfig()
with create_test_client(
[set_session, get_session],
logging_config=LoggingConfig(),
middleware=[logging_middleware_config.middleware, session_config.middleware],
) as client:
response = client.post("/")
assert response.status_code == HTTP_201_CREATED
assert len(client.cookies.get("session", "")) == 64
response = client.get("/")
assert response.status_code == HTTP_200_OK
assert len(client.cookies.get("session", "")) == 64
```
The test results in the following exception:
```
> assert len(client.cookies.get("session", "")) == 64
E AssertionError: assert 5 == 64
E + where 5 = len('*****')
E + where '*****' = <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>>('session', '')
E + where <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>> = <Cookies[<Cookie session=***** for testserver.local />]>.get
E + where <Cookies[<Cookie session=***** for testserver.local />]> = <starlite.testing.client.sync_client.TestClient object at 0x7f4cbf7bea40>.cookies
```
**Additional Context**
Starlite version: 1.51.4
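A sketch of the suspected root cause, which matches the fix in the patch below: obfuscating in place mutates the very dict that is shared with the response, instead of working on a copy. The function names are illustrative:

```python
def obfuscate_inplace(values, fields_to_obfuscate):
    # mutates the caller's dict: any other holder of `values` sees "*****"
    for key in values:
        if key.lower() in fields_to_obfuscate:
            values[key] = "*****"
    return values


def obfuscate_copy(values, fields_to_obfuscate):
    # builds a fresh dict; the caller's data is left untouched
    return {k: "*****" if k.lower() in fields_to_obfuscate else v
            for k, v in values.items()}


cookies = {"session": "abc123"}
obfuscate_inplace(cookies, {"session"})
print(cookies["session"])  # *****; the shared dict was clobbered

cookies = {"session": "abc123"}
safe = obfuscate_copy(cookies, {"session"})
print(cookies["session"])  # abc123; original untouched
```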
| [
{
"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Coroutine, Literal, cast\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.upload_file import UploadFile\nfrom starlite.enums import Http... | [
{
"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, Coroutine, Literal, cast\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.upload_file import UploadFile\nfrom starlite.enums import Http... | diff --git a/starlite/utils/extractors.py b/starlite/utils/extractors.py
index ec4c7d514c..fcb7b77881 100644
--- a/starlite/utils/extractors.py
+++ b/starlite/utils/extractors.py
@@ -25,10 +25,7 @@ def obfuscate(values: dict[str, Any], fields_to_obfuscate: set[str]) -> dict[str
Returns:
A dictionary with obfuscated strings
"""
- for key in values:
- if key.lower() in fields_to_obfuscate:
- values[key] = "*****"
- return values
+ return {key: "*****" if key.lower() in fields_to_obfuscate else value for key, value in values.items()}
RequestExtractorField = Literal[
diff --git a/tests/middleware/test_logging_middleware.py b/tests/middleware/test_logging_middleware.py
index a23cdeecb1..21c2d67fb9 100644
--- a/tests/middleware/test_logging_middleware.py
+++ b/tests/middleware/test_logging_middleware.py
@@ -7,14 +7,16 @@
from starlite import Response, get, post
from starlite.config.compression import CompressionConfig
from starlite.config.logging import LoggingConfig, StructLoggingConfig
+from starlite.connection import Request
from starlite.datastructures import Cookie
from starlite.middleware.logging import LoggingMiddlewareConfig
-from starlite.status_codes import HTTP_200_OK
+from starlite.status_codes import HTTP_200_OK, HTTP_201_CREATED
from starlite.testing import create_test_client
if TYPE_CHECKING:
from _pytest.logging import LogCaptureFixture
+ from starlite.middleware.session.server_side import ServerSideSessionConfig
from starlite.types.callable_types import GetLogger
@@ -210,3 +212,33 @@ def test_logging_middleware_log_fields(get_logger: "GetLogger", caplog: "LogCapt
assert caplog.messages[0] == "HTTP Request: path=/"
assert caplog.messages[1] == "HTTP Response: status_code=200"
+
+
+def test_logging_middleware_with_session_middleware(session_backend_config_memory: "ServerSideSessionConfig") -> None:
+ # https://github.com/starlite-api/starlite/issues/1228
+
+ @post("/")
+ async def set_session(request: Request) -> None:
+ request.set_session({"hello": "world"})
+
+ @get("/")
+ async def get_session() -> None:
+ pass
+
+ logging_middleware_config = LoggingMiddlewareConfig()
+
+ with create_test_client(
+ [set_session, get_session],
+ logging_config=LoggingConfig(),
+ middleware=[logging_middleware_config.middleware, session_backend_config_memory.middleware],
+ ) as client:
+ response = client.post("/")
+ assert response.status_code == HTTP_201_CREATED
+ assert "session" in client.cookies
+ assert client.cookies["session"] != "*****"
+ session_id = client.cookies["session"]
+
+ response = client.get("/")
+ assert response.status_code == HTTP_200_OK
+ assert "session" in client.cookies
+ assert client.cookies["session"] == session_id
|
kovidgoyal__kitty-2824 | path expanding on hints
**Is your feature request related to a problem? Please describe.**
For hints, I would like the path to be expanded. This is my current hint config:
and my term text is
```
/home/becker on master [⇡»!?] via C base
✦ ❯ vi ~/.config/kitty/kitty.conf:5
/home/becker on master [⇡»!?] via C base took 16h49m54s
✦ ❯ /home/becker/.config/kitty/kitty.conf:5
```
The full path opens correctly.
The ~ path opens... I'm not sure what exactly. When I try to write the file, vim says "E212: Can't open file for writing: no such file or directory".
If I run just
`vi ~/.config/kitty/kitty.conf` manually, the activity shows nvim loading the full path.
**Describe the solution you'd like**
maybe an {expanded_path} ?
**Describe alternatives you've considered**
maybe nvim is just receiving the wrong thing from {path}?
**Additional context**
The running vim commands from the manual invocation vs. the hint:

| [
{
"content": "#!/usr/bin/env python3\n# vim:fileencoding=utf-8\n# License: GPL v3 Copyright: 2018, Kovid Goyal <kovid at kovidgoyal.net>\n\nimport os\nimport re\nimport string\nimport sys\nfrom functools import lru_cache\nfrom gettext import gettext as _\nfrom itertools import repeat\nfrom typing import (\n ... | [
{
"content": "#!/usr/bin/env python3\n# vim:fileencoding=utf-8\n# License: GPL v3 Copyright: 2018, Kovid Goyal <kovid at kovidgoyal.net>\n\nimport os\nimport re\nimport string\nimport sys\nfrom functools import lru_cache\nfrom gettext import gettext as _\nfrom itertools import repeat\nfrom typing import (\n ... | diff --git a/kittens/hints/main.py b/kittens/hints/main.py
index 27d8ecb36b5..3c391498359 100644
--- a/kittens/hints/main.py
+++ b/kittens/hints/main.py
@@ -559,7 +559,7 @@ def linenum_handle_result(args: List[str], data: Dict[str, Any], target_window_i
for m, g in zip(data['match'], data['groupdicts']):
if m:
path, line = g['path'], g['line']
- path = path.split(':')[-1]
+ path = os.path.expanduser(path.split(':')[-1])
line = int(line)
break
else:
|
django-oscar__django-oscar-2066 | UnicodeCSVWriter raises AttributeError: 'NoneType' object has no attribute 'writerows'
when it is used in the second variant, inside a `with` statement, with a filename passed to the constructor.
I've tried something like:
<pre>
from oscar.core.compat import UnicodeCSVWriter
data = [[1, 2, 3], [4, 5, 6]]
with UnicodeCSVWriter('test.csv') as writer:
writer.writerows(data)
</pre>
and have got AttributeError, while `test.csv` file was created but remains empty.
However, it works perfectly in the first variant: `writer = UnicodeCSVWriter(open_file=fhandler)`
It seems like `return self` should be in the end of the `__enter__` method (here: https://github.com/django-oscar/django-oscar/blob/master/src/oscar/core/compat.py#L154 )
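The failure mode is easy to reproduce with a stripped-down context manager (a stand-in, not Oscar's actual class): when `__enter__` does not return `self`, the `as` target binds `None`.

```python
class BrokenWriter:
    """Context manager whose __enter__ forgets to return self."""

    def __enter__(self):
        self.rows = []
        # BUG: missing `return self`, so `as writer` binds None

    def __exit__(self, exc_type, exc_value, traceback):
        return False


with BrokenWriter() as writer:
    print(writer)  # None; writer.writerows(...) would raise AttributeError
```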
| [
{
"content": "import csv\nimport sys\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.utils import six\n\nfrom oscar.core.loading import get_model\n\n# A setting that can be used in foreign key declarations\nAU... | [
{
"content": "import csv\nimport sys\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.utils import six\n\nfrom oscar.core.loading import get_model\n\n# A setting that can be used in foreign key declarations\nAU... | diff --git a/src/oscar/core/compat.py b/src/oscar/core/compat.py
index c1adc57a332..d6ec2c8408f 100644
--- a/src/oscar/core/compat.py
+++ b/src/oscar/core/compat.py
@@ -151,6 +151,7 @@ def __enter__(self):
encoding=self.encoding, newline='')
else:
self.f = open(self.filename, 'wb')
+ return self
def __exit__(self, type, value, traceback):
assert self.filename is not None
diff --git a/tests/unit/core/compat_tests.py b/tests/unit/core/compat_tests.py
index ad5e7862ec3..4086030efd6 100644
--- a/tests/unit/core/compat_tests.py
+++ b/tests/unit/core/compat_tests.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
import datetime
+from tempfile import NamedTemporaryFile
from django.utils import six
from django.utils.six.moves import cStringIO
@@ -7,6 +8,19 @@
from oscar.core.compat import UnicodeCSVWriter, existing_user_fields
+
+class unicodeobj(object):
+
+ def __init__(self, s):
+ self.s = s
+
+ def __str__(self):
+ return self.s
+
+ def __unicode__(self):
+ return self.s
+
+
class TestExistingUserFields(TestCase):
def test_order(self):
@@ -19,15 +33,17 @@ class TestUnicodeCSVWriter(TestCase):
def test_can_write_different_values(self):
writer = UnicodeCSVWriter(open_file=cStringIO())
s = u'ünįcodē'
- class unicodeobj(object):
- def __str__(self):
- return s
- def __unicode__(self):
- return s
- rows = [[s, unicodeobj(), 123, datetime.date.today()], ]
+ rows = [[s, unicodeobj(s), 123, datetime.date.today()], ]
writer.writerows(rows)
self.assertRaises(TypeError, writer.writerows, [object()])
+ def test_context_manager(self):
+ tmp_file = NamedTemporaryFile()
+ with UnicodeCSVWriter(filename=tmp_file.name) as writer:
+ s = u'ünįcodē'
+ rows = [[s, unicodeobj(s), 123, datetime.date.today()], ]
+ writer.writerows(rows)
+
class TestPython3Compatibility(TestCase):
|
encode__django-rest-framework-2456 | Checking for request.version raises AttributeError with BrowsableAPIRenderer
I've encountered the following exception when using the Browsable API in conjunction with the new namespace versioning and HyperlinkedModelSerializers:
``` python
AttributeError: 'WSGIRequest' object has no attribute 'version'
```
I've implemented `get_serializer_class()` as specified in the documentation:
``` python
def get_serializer_class(self):
if self.request.version == 'v1':
return AccountSerializerVersion1
return AccountSerializer
```
I'm also using drf-nested-routers, and this is occurring on endpoints like /api/v1/car/1/tires/ where the Tire model has a ForeignKey to Car.
This only happens on the Browsable API; I can perform a GET request to the same endpoint using the JSONRenderer without exceptions.
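A minimal sketch of the failure mode and the fix; `Request` and `clone_request` here are simplified stand-ins for DRF's actual classes, mirroring the patch below:

```python
class Request:
    pass


def clone_request(request):
    # cloning by copying a fixed set of attributes silently drops anything
    # set later in the request lifecycle, such as `version`
    ret = Request()
    if hasattr(request, "accepted_renderer"):
        ret.accepted_renderer = request.accepted_renderer
    # the fix: also propagate `version`, so get_serializer_class() can
    # check request.version on the clone used by the Browsable API
    if hasattr(request, "version"):
        ret.version = request.version
    return ret
```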
| [
{
"content": "\"\"\"\nThe Request class is used as a wrapper around the standard request object.\n\nThe wrapped request then offers a richer API, in particular :\n\n - content automatically parsed according to `Content-Type` header,\n and available as `request.data`\n - full support of PUT method, in... | [
{
"content": "\"\"\"\nThe Request class is used as a wrapper around the standard request object.\n\nThe wrapped request then offers a richer API, in particular :\n\n - content automatically parsed according to `Content-Type` header,\n and available as `request.data`\n - full support of PUT method, in... | diff --git a/rest_framework/request.py b/rest_framework/request.py
index cfbbdeccdc..ce2fcb4768 100644
--- a/rest_framework/request.py
+++ b/rest_framework/request.py
@@ -107,6 +107,8 @@ def clone_request(request, method):
ret.accepted_renderer = request.accepted_renderer
if hasattr(request, 'accepted_media_type'):
ret.accepted_media_type = request.accepted_media_type
+ if hasattr(request, 'version'):
+ ret.version = request.version
return ret
diff --git a/tests/browsable_api/auth_urls.py b/tests/browsable_api/auth_urls.py
index bce7dcf919..97bc103604 100644
--- a/tests/browsable_api/auth_urls.py
+++ b/tests/browsable_api/auth_urls.py
@@ -3,6 +3,7 @@
from .views import MockView
+
urlpatterns = patterns(
'',
(r'^$', MockView.as_view()),
diff --git a/tests/test_metadata.py b/tests/test_metadata.py
index 972a896a46..bdc84edf12 100644
--- a/tests/test_metadata.py
+++ b/tests/test_metadata.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
from rest_framework import exceptions, serializers, status, views
from rest_framework.request import Request
+from rest_framework.renderers import BrowsableAPIRenderer
from rest_framework.test import APIRequestFactory
request = Request(APIRequestFactory().options('/'))
@@ -168,3 +169,17 @@ def get_object(self):
response = view(request=request)
assert response.status_code == status.HTTP_200_OK
assert list(response.data['actions'].keys()) == ['POST']
+
+ def test_bug_2455_clone_request(self):
+ class ExampleView(views.APIView):
+ renderer_classes = (BrowsableAPIRenderer,)
+
+ def post(self, request):
+ pass
+
+ def get_serializer(self):
+ assert hasattr(self.request, 'version')
+ return serializers.Serializer()
+
+ view = ExampleView.as_view()
+ view(request=request)
|
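The record above fixes `clone_request` so that the negotiated API version survives cloning (e.g. when the browsable API probes other HTTP methods). A minimal sketch of the attribute-copying pattern, using stand-in namespace objects instead of real DRF requests (`NEGOTIATED_ATTRS` and `copy_negotiated` are hypothetical names, not DRF API):

```python
from types import SimpleNamespace

# attributes that content negotiation / versioning may have set on a request;
# copy each one only when present, mirroring the hasattr() guards in the patch
NEGOTIATED_ATTRS = ("accepted_renderer", "accepted_media_type", "version")

def copy_negotiated(src, dst):
    for name in NEGOTIATED_ATTRS:
        if hasattr(src, name):
            setattr(dst, name, getattr(src, name))
    return dst

original = SimpleNamespace(version="v2", accepted_media_type="application/json")
clone = copy_negotiated(original, SimpleNamespace())
```

Attributes absent on the source stay absent on the clone, which is why the patch guards each copy with `hasattr` rather than copying unconditionally.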
pyodide__pyodide-325 | ValueError: invalid __array_struct__ when using js arrays of arrays and numpy
When using a matrix (array of array of numbers) in javascript and trying to convert that to a numpy array, it fails with the error `ValueError: invalid __array_struct__`
To reproduce:
JavaScript:
```
window.A = [[1,2,3],[4,5,6]];
```
Python:
```
import numpy
from js import A
m = numpy.array(A)
```
| [
{
"content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n\nimport ast\nimport io\nfrom textwrap import dedent\n\n__version__ = '0.8.2'\n\n\ndef open_url(url):\n \"\"\"\n Fetches a given *url* and returns a io.StringIO to access its contents.\n \"\"\... | [
{
"content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n\nimport ast\nimport io\nfrom textwrap import dedent\n\n__version__ = '0.8.2'\n\n\ndef open_url(url):\n \"\"\"\n Fetches a given *url* and returns a io.StringIO to access its contents.\n \"\"\... | diff --git a/docs/api_reference.md b/docs/api_reference.md
index fdbb6453b09..7e96ab5d0fe 100644
--- a/docs/api_reference.md
+++ b/docs/api_reference.md
@@ -42,6 +42,21 @@ some preprocessing on the Python code first.
Either the resulting object or `None`.
+### pyodide.as_nested_list(obj)
+
+Converts Javascript nested arrays to Python nested lists. This conversion can not
+be performed automatically, because Javascript Arrays and Objects can be combined
+in ways that are ambiguous.
+
+*Parameters*
+
+| name | type | description |
+|--------|-------|-----------------------|
+| *obj* | JS Object | The object to convert |
+
+*Returns*
+
+The object as nested Python lists.
## Javascript API
diff --git a/src/pyodide.py b/src/pyodide.py
index 5e45e8ed5c8..395a231ce6a 100644
--- a/src/pyodide.py
+++ b/src/pyodide.py
@@ -67,4 +67,16 @@ def find_imports(code):
return list(imports)
-__all__ = ['open_url', 'eval_code', 'find_imports']
+def as_nested_list(obj):
+ """
+ Assumes a Javascript object is made of (possibly nested) arrays and
+ converts them to nested Python lists.
+ """
+ try:
+ it = iter(obj)
+ return [as_nested_list(x) for x in it]
+ except TypeError:
+ return obj
+
+
+__all__ = ['open_url', 'eval_code', 'find_imports', 'as_nested_list']
|
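The `as_nested_list` helper added by the patch recurses on anything iterable and returns everything else unchanged. A pure-Python sketch of the same idea (the string/bytes guard is an addition here — the real helper operates on JS proxies, where that case does not arise):

```python
def as_nested_list(obj):
    """Convert (possibly nested) array-like objects into nested Python lists."""
    if isinstance(obj, (str, bytes)):
        return obj  # guard: strings are iterable but should stay scalar here
    try:
        return [as_nested_list(x) for x in obj]
    except TypeError:
        return obj  # non-iterable leaf values pass through unchanged

matrix = as_nested_list([[1, 2, 3], [4, 5, 6]])
```

With the JS `A = [[1,2,3],[4,5,6]]` from the report, `numpy.array(pyodide.as_nested_list(A))` then builds the expected 2×3 array instead of raising `ValueError: invalid __array_struct__`.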
DataBiosphere__toil-1003 | cwltoil's writeFile uses root logger
https://github.com/BD2KGenomics/toil/pull/867#issuecomment-227446745
| [
{
"content": "# Implement support for Common Workflow Language (CWL) for Toil.\n#\n# Copyright (C) 2015 Curoverse, Inc\n# Copyright (C) 2016 UCSC Computational Genomics Lab\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.... | [
{
"content": "# Implement support for Common Workflow Language (CWL) for Toil.\n#\n# Copyright (C) 2015 Curoverse, Inc\n# Copyright (C) 2016 UCSC Computational Genomics Lab\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.... | diff --git a/src/toil/cwl/cwltoil.py b/src/toil/cwl/cwltoil.py
index afc7ffbe51..9b07411bf4 100755
--- a/src/toil/cwl/cwltoil.py
+++ b/src/toil/cwl/cwltoil.py
@@ -161,7 +161,7 @@ def writeFile(writeFunc, index, x):
try:
index[x] = (writeFunc(rp), os.path.basename(x))
except Exception as e:
- logging.error("Got exception '%s' while writing '%s'", e, x)
+ cwllogger.error("Got exception '%s' while copying '%s'", e, x)
raise
return index[x]
|
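The one-line fix swaps the root logger for the module's own logger. A simplified sketch of the pattern (`cwllogger` and `write_file` follow the record's names but the bodies are stand-ins):

```python
import logging

# a named logger keeps cwltoil messages attributable and configurable
# independently of whatever the embedding application did to the root logger
cwllogger = logging.getLogger("cwltoil")

def write_file(write_func, index, path):
    try:
        index[path] = (write_func(path), path)
    except Exception as exc:
        cwllogger.error("Got exception '%s' while copying '%s'", exc, path)
        raise
    return index[path]

index = {}
result = write_file(lambda p: "file-id-1", index, "/data/input.txt")
```

Calling `logging.error(...)` directly would log through the root logger, so the message could not be filtered or routed by the `cwltoil` logger hierarchy.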
akvo__akvo-rsr-2584 | Indicator update - Actual Value Comment (IATI content) (estimate: 8)
Created via Reamaze:
Link: https://rsrsupport.reamaze.com/admin/conversations/deleted-result-comment-still-showing-and-missing-iati-content
Assignee: Geert Soet
Message:
Hi product team,
This morning I had a call with SNV Kenya and we noticed some things in Akvo RSR that seemed to be off:
1) Once you’ve deleted an indicator update, it still shows the comment of that update under actual value comment (see below)
2) In the hortimpact project also documents and images are included in the indicator updates, but the links to these images/docs don’t show up in the IATI report. Can we include these fields (in the same way as the main project picture) in the IATI file?
3) Another issue I found is that indicator descriptions do not show up in the IATI report. This is about project 3992:
First screen shot is of an indicator incl description, second screen shot is the same indicator in the IATI report.
Cheers,
Annabelle Poelert
Project Officer
Akvo • 's-Gravenhekje 1A • 1011 TG • Amsterdam (NL)
T +31 20 8200 175 • S Annabelle.akvo • T @annabellepoel I www.akvo.org <http://www.akvo.org/>
| [
{
"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.... | [
{
"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.... | diff --git a/akvo/rsr/models/indicator.py b/akvo/rsr/models/indicator.py
index 0ebdb5eecd..a31f155800 100644
--- a/akvo/rsr/models/indicator.py
+++ b/akvo/rsr/models/indicator.py
@@ -760,6 +760,7 @@ def delete(self, *args, **kwargs):
# In case the status was approved, recalculate the period
if old_status == self.STATUS_APPROVED_CODE:
self.period.recalculate_period()
+ self.period.update_actual_comment()
def clean(self):
"""
diff --git a/akvo/rsr/tests/models/test_indicator.py b/akvo/rsr/tests/models/test_indicator.py
index 211ad3c82d..6110ff0331 100644
--- a/akvo/rsr/tests/models/test_indicator.py
+++ b/akvo/rsr/tests/models/test_indicator.py
@@ -106,3 +106,20 @@ def test_multiple_period_data_updates_actual_comment(self):
self.assertIn(data_1.text, period.actual_comment)
# newer update's text appears before older one's
self.assertLess(period.actual_comment.index(data_2.text), period.actual_comment.index(data_1.text))
+
+ def test_period_data_deletion_updates_actual_comment(self):
+
+ # Given
+ period = self.period
+ user = self.user
+ data = IndicatorPeriodData.objects.create(text='period data comment',
+ period=period,
+ user=user,
+ status=IndicatorPeriodData.STATUS_APPROVED_CODE)
+
+ # When
+ data.delete()
+
+ # Then
+ period = IndicatorPeriod.objects.get(id=period.id)
+ self.assertNotIn(data.text, period.actual_comment)
|
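The fix makes deletion of an approved update also refresh the period's aggregated comment. Stripped of Django, the shape of the pattern looks like this (all class and method bodies here are stand-ins, not Akvo RSR code):

```python
class Period:
    def __init__(self):
        self.updates = []
        self.actual_comment = ""

    def update_actual_comment(self):
        # newest update's text first, matching the ordering the tests assert
        self.actual_comment = "\n".join(u.text for u in reversed(self.updates))

class PeriodData:
    def __init__(self, period, text):
        self.period = period
        self.text = text
        period.updates.append(self)
        period.update_actual_comment()

    def delete(self):
        # the bug: without this recalculation the deleted comment lingered
        self.period.updates.remove(self)
        self.period.update_actual_comment()

period = Period()
first = PeriodData(period, "first comment")
second = PeriodData(period, "second comment")
second.delete()
```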
OpenNMT__OpenNMT-py-342 | #layers for encoder equals #layers for decoder
I just noticed that for the default RNN encoder, the `enc_layers` parameter is ignored and the number of layers of the encoder is equal to the number of layers of the decoder (see [this line](https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/ModelConstructor.py#L70)).
Is there some reasoning behind this or is it an error?
| [
{
"content": "\"\"\"\nThis file is for models creation, which consults options\nand creates each encoder and decoder accordingly.\n\"\"\"\nimport torch.nn as nn\n\nimport onmt\nimport onmt.Models\nimport onmt.modules\nfrom onmt.IO import ONMTDataset\nfrom onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \\... | [
{
"content": "\"\"\"\nThis file is for models creation, which consults options\nand creates each encoder and decoder accordingly.\n\"\"\"\nimport torch.nn as nn\n\nimport onmt\nimport onmt.Models\nimport onmt.modules\nfrom onmt.IO import ONMTDataset\nfrom onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \\... | diff --git a/onmt/ModelConstructor.py b/onmt/ModelConstructor.py
index ff2783f5d5..135ac679fa 100644
--- a/onmt/ModelConstructor.py
+++ b/onmt/ModelConstructor.py
@@ -67,7 +67,7 @@ def make_encoder(opt, embeddings):
return MeanEncoder(opt.enc_layers, embeddings)
else:
# "rnn" or "brnn"
- return RNNEncoder(opt.rnn_type, opt.brnn, opt.dec_layers,
+ return RNNEncoder(opt.rnn_type, opt.brnn, opt.enc_layers,
opt.rnn_size, opt.dropout, embeddings)
|
qtile__qtile-1644 | Can't use asyncio event loop in widgets
I am creating a widget that uses asyncio to run some external command (with `asyncio.create_subprocess_exec`). It doesn't work, and raises the `RuntimeError("Cannot add child handler, the child watcher does not have a loop attached")` exception instead.
If my understanding of the code is correct, calling `set_event_loop` after `new_event_loop` should fix this issue, but I'm not sure whether it will cause other problems.
| [
{
"content": "import asyncio\nimport os\n\nfrom libqtile import ipc\nfrom libqtile.backend import base\nfrom libqtile.core.manager import Qtile\n\n\nclass SessionManager:\n def __init__(\n self, kore: base.Core, config, *, fname: str = None, no_spawn=False, state=None\n ) -> None:\n \"\"\"Ma... | [
{
"content": "import asyncio\nimport os\n\nfrom libqtile import ipc\nfrom libqtile.backend import base\nfrom libqtile.core.manager import Qtile\n\n\nclass SessionManager:\n def __init__(\n self, kore: base.Core, config, *, fname: str = None, no_spawn=False, state=None\n ) -> None:\n \"\"\"Ma... | diff --git a/libqtile/core/session_manager.py b/libqtile/core/session_manager.py
index ce6eed19ca..0df811fe40 100644
--- a/libqtile/core/session_manager.py
+++ b/libqtile/core/session_manager.py
@@ -25,6 +25,7 @@ def __init__(
The state to restart the qtile instance with.
"""
eventloop = asyncio.new_event_loop()
+ asyncio.set_event_loop(eventloop)
self.qtile = Qtile(kore, config, eventloop, no_spawn=no_spawn, state=state)
|
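The fix is the added `set_event_loop` call: `new_event_loop()` only constructs a loop, while the child-watcher machinery behind `asyncio.create_subprocess_exec` looks the *current* loop up through the event-loop policy. A minimal sketch:

```python
import asyncio

loop = asyncio.new_event_loop()
# register the loop with the policy so code that asks for "the current loop"
# (e.g. subprocess child watchers) finds this one attached
asyncio.set_event_loop(loop)

async def run_widget_task():
    return "done"

result = loop.run_until_complete(run_widget_task())
loop.close()
```

Without the `set_event_loop` line, the loop still runs coroutines fine, but anything that fetches the current loop via the policy fails — which is exactly the "child watcher does not have a loop attached" symptom.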
litestar-org__litestar-1229 | Bug: LoggingMiddleware is sending obfuscated session id to client
**Describe the bug**
When using the logging middleware and session middleware together, the logging middleware's cookie obfuscation is overwriting the session name with "*****" and that name is being pushed down to the client.
The initial set-cookie has the correct session id but subsequent requests do not.
**To Reproduce**
I created a test function in tests/middleware/test_logging_middleware.py which I believe confirms the bug:
```python
def test_logging_with_session_middleware() -> None:
@post("/")
async def set_session(request: Request) -> None:
request.set_session({"hello": "world"})
@get("/")
async def get_session() -> None:
pass
logging_middleware_config = LoggingMiddlewareConfig()
session_config = MemoryBackendConfig()
with create_test_client(
[set_session, get_session],
logging_config=LoggingConfig(),
middleware=[logging_middleware_config.middleware, session_config.middleware],
) as client:
response = client.post("/")
assert response.status_code == HTTP_201_CREATED
assert len(client.cookies.get("session", "")) == 64
response = client.get("/")
assert response.status_code == HTTP_200_OK
assert len(client.cookies.get("session", "")) == 64
```
The test results in the following exception:
```
> assert len(client.cookies.get("session", "")) == 64
E AssertionError: assert 5 == 64
E + where 5 = len('*****')
E + where '*****' = <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>>('session', '')
E + where <bound method Cookies.get of <Cookies[<Cookie session=***** for testserver.local />]>> = <Cookies[<Cookie session=***** for testserver.local />]>.get
E + where <Cookies[<Cookie session=***** for testserver.local />]> = <starlite.testing.client.sync_client.TestClient object at 0x7f4cbf7bea40>.cookies
```
**Additional Context**
Starlite version: 1.51.4
| [
{
"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Coroutine,\n Dict,\n Literal,\n Optional,\n Set,\n Tuple,\n Union,\n cast,\n)\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.uplo... | [
{
"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Coroutine,\n Dict,\n Literal,\n Optional,\n Set,\n Tuple,\n Union,\n cast,\n)\n\nfrom typing_extensions import TypedDict\n\nfrom starlite.connection.request import Request\nfrom starlite.datastructures.uplo... | diff --git a/starlite/utils/extractors.py b/starlite/utils/extractors.py
index 6219352321..360130df8d 100644
--- a/starlite/utils/extractors.py
+++ b/starlite/utils/extractors.py
@@ -35,10 +35,7 @@ def obfuscate(values: Dict[str, Any], fields_to_obfuscate: Set[str]) -> Dict[str
Returns:
A dictionary with obfuscated strings
"""
- for key in values:
- if key.lower() in fields_to_obfuscate:
- values[key] = "*****"
- return values
+ return {key: "*****" if key.lower() in fields_to_obfuscate else value for key, value in values.items()}
RequestExtractorField = Literal[
diff --git a/tests/middleware/test_logging_middleware.py b/tests/middleware/test_logging_middleware.py
index 9d1fe88ebe..b83dc336ef 100644
--- a/tests/middleware/test_logging_middleware.py
+++ b/tests/middleware/test_logging_middleware.py
@@ -6,8 +6,10 @@
from starlite import Cookie, LoggingConfig, Response, StructLoggingConfig, get, post
from starlite.config.compression import CompressionConfig
+from starlite.connection import Request
from starlite.middleware import LoggingMiddlewareConfig
-from starlite.status_codes import HTTP_200_OK
+from starlite.middleware.session.memory_backend import MemoryBackendConfig
+from starlite.status_codes import HTTP_200_OK, HTTP_201_CREATED
from starlite.testing import create_test_client
if TYPE_CHECKING:
@@ -190,3 +192,34 @@ async def hello_world_handler() -> Dict[str, str]:
response = client.get("/")
assert response.status_code == HTTP_200_OK
assert len(caplog.messages) == 2
+
+
+def test_logging_middleware_with_session_middleware() -> None:
+ # https://github.com/starlite-api/starlite/issues/1228
+
+ @post("/")
+ async def set_session(request: Request) -> None:
+ request.set_session({"hello": "world"})
+
+ @get("/")
+ async def get_session() -> None:
+ pass
+
+ logging_middleware_config = LoggingMiddlewareConfig()
+ session_config = MemoryBackendConfig()
+
+ with create_test_client(
+ [set_session, get_session],
+ logging_config=LoggingConfig(),
+ middleware=[logging_middleware_config.middleware, session_config.middleware],
+ ) as client:
+ response = client.post("/")
+ assert response.status_code == HTTP_201_CREATED
+ assert "session" in client.cookies
+ assert client.cookies["session"] != "*****"
+ session_id = client.cookies["session"]
+
+ response = client.get("/")
+ assert response.status_code == HTTP_200_OK
+ assert "session" in client.cookies
+ assert client.cookies["session"] == session_id
|
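The fix rewrites `obfuscate` to build a fresh dict instead of mutating its argument — the mutated dict was the live cookie mapping, so the masked value leaked back to the client. A sketch of the non-mutating version:

```python
from typing import Any, Dict, Set

def obfuscate(values: Dict[str, Any], fields_to_obfuscate: Set[str]) -> Dict[str, Any]:
    # return a new mapping; the caller's dict (shared with the response
    # cookies) must keep its real values
    return {
        key: "*****" if key.lower() in fields_to_obfuscate else value
        for key, value in values.items()
    }

cookies = {"session": "abc123", "theme": "dark"}
logged = obfuscate(cookies, {"session"})
```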
frappe__frappe-20434 | Enable Scheduler from desk
Feature to enable scheduler from desk.
| [
{
"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\n# imports - standard imports\nimport os\nimport time\nfrom typing import NoReturn\n\n# imports - module imports\nimport frappe\nfrom... | [
{
"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\n# imports - standard imports\nimport os\nimport time\nfrom typing import NoReturn\n\n# imports - module imports\nimport frappe\nfrom... | diff --git a/frappe/core/doctype/rq_job/rq_job_list.js b/frappe/core/doctype/rq_job/rq_job_list.js
index 5f6646cd6561..fed56a16fe01 100644
--- a/frappe/core/doctype/rq_job/rq_job_list.js
+++ b/frappe/core/doctype/rq_job/rq_job_list.js
@@ -4,11 +4,15 @@ frappe.listview_settings["RQ Job"] = {
onload(listview) {
if (!has_common(frappe.user_roles, ["Administrator", "System Manager"])) return;
- listview.page.add_inner_button(__("Remove Failed Jobs"), () => {
- frappe.confirm(__("Are you sure you want to remove all failed jobs?"), () => {
- frappe.xcall("frappe.core.doctype.rq_job.rq_job.remove_failed_jobs");
- });
- });
+ listview.page.add_inner_button(
+ __("Remove Failed Jobs"),
+ () => {
+ frappe.confirm(__("Are you sure you want to remove all failed jobs?"), () => {
+ frappe.xcall("frappe.core.doctype.rq_job.rq_job.remove_failed_jobs");
+ });
+ },
+ __("Actions")
+ );
if (listview.list_view_settings) {
listview.list_view_settings.disable_count = 1;
@@ -20,6 +24,25 @@ frappe.listview_settings["RQ Job"] = {
listview.page.set_indicator(__("Scheduler: Active"), "green");
} else {
listview.page.set_indicator(__("Scheduler: Inactive"), "red");
+ listview.page.add_inner_button(
+ __("Enable Scheduler"),
+ () => {
+ frappe.confirm(__("Are you sure you want to re-enable scheduler?"), () => {
+ frappe
+ .xcall("frappe.utils.scheduler.activate_scheduler")
+ .then(() => {
+ frappe.show_alert(__("Enabled Scheduler"));
+ })
+ .catch((e) => {
+ frappe.show_alert({
+ message: __("Failed to enable scheduler: {0}", e),
+ indicator: "error",
+ });
+ });
+ });
+ },
+ __("Actions")
+ );
}
});
diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py
index 8cda71ee9a0e..529a3c7bf717 100755
--- a/frappe/utils/scheduler.py
+++ b/frappe/utils/scheduler.py
@@ -176,6 +176,11 @@ def _get_last_modified_timestamp(doctype):
@frappe.whitelist()
def activate_scheduler():
+ frappe.only_for("Administrator")
+
+ if frappe.local.conf.maintenance_mode:
+ frappe.throw(frappe._("Scheduler can not be re-enabled when maintenance mode is active."))
+
if is_scheduler_disabled():
enable_scheduler()
if frappe.conf.pause_scheduler:
|
pypi__warehouse-6426 | Invalid HTML for select element
This html is generated by the Python form code.
template:
https://github.com/pypa/warehouse/blob/master/warehouse/templates/manage/roles.html
field:
`{{ form.role_name }}`
ERROR: The first child “option” element of a “select” element with a “required” attribute, and without a “multiple” attribute, and without a “size” attribute whose value is greater than “1”, must have either an empty “value” attribute, or must have no text content. Consider either adding a placeholder option label, or adding a “size” attribute with a value equal to the number of “option” elements. (433)
Reference:
https://maxdesign.com.au/articles/select-required/
| [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softw... | [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softw... | diff --git a/warehouse/manage/forms.py b/warehouse/manage/forms.py
index 25227ca778c4..64dcf92ee79e 100644
--- a/warehouse/manage/forms.py
+++ b/warehouse/manage/forms.py
@@ -31,7 +31,7 @@ class RoleNameMixin:
role_name = wtforms.SelectField(
"Select role",
- choices=[("Maintainer", "Maintainer"), ("Owner", "Owner")],
+ choices=[("", "Select role"), ("Maintainer", "Maintainer"), ("Owner", "Owner")],
validators=[wtforms.validators.DataRequired(message="Select role")],
)
|
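The one-line fix prepends a `("", "Select role")` choice, giving the required single `<select>` an empty-value first option — the placeholder pattern the validator asks for. A plain-Python sketch of the rendered markup (the `render_select` helper is hypothetical, not WTForms):

```python
choices = [("", "Select role"), ("Maintainer", "Maintainer"), ("Owner", "Owner")]

def render_select(name, choices):
    # an empty-value first <option> acts as the placeholder, so an untouched
    # form still fails the "required" constraint instead of silently
    # submitting the first real choice
    options = "".join(
        '<option value="{0}">{1}</option>'.format(value, label)
        for value, label in choices
    )
    return '<select name="{0}" required>{1}</select>'.format(name, options)

html = render_select("role_name", choices)
```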
getmoto__moto-2114 | Lambda publish_version returns wrong status code
In boto3, when lambda publish_version succeeds, it returns HTTP status code 201.
But moto returns HTTP status code 200.
moto and boto version
```
boto3 1.9.71
botocore 1.12.71
moto 1.3.7
```
| [
{
"content": "from __future__ import unicode_literals\n\nimport json\n\ntry:\n from urllib import unquote\nexcept ImportError:\n from urllib.parse import unquote\n\nfrom moto.core.utils import amz_crc32, amzn_request_id, path_url\nfrom moto.core.responses import BaseResponse\nfrom .models import lambda_ba... | [
{
"content": "from __future__ import unicode_literals\n\nimport json\n\ntry:\n from urllib import unquote\nexcept ImportError:\n from urllib.parse import unquote\n\nfrom moto.core.utils import amz_crc32, amzn_request_id, path_url\nfrom moto.core.responses import BaseResponse\nfrom .models import lambda_ba... | diff --git a/moto/awslambda/responses.py b/moto/awslambda/responses.py
index d4eb73bc3137..1c43ef84bcf1 100644
--- a/moto/awslambda/responses.py
+++ b/moto/awslambda/responses.py
@@ -183,7 +183,7 @@ def _publish_function(self, request, full_url, headers):
fn = self.lambda_backend.publish_function(function_name)
if fn:
config = fn.get_configuration()
- return 200, {}, json.dumps(config)
+ return 201, {}, json.dumps(config)
else:
return 404, {}, "{}"
diff --git a/tests/test_awslambda/test_lambda.py b/tests/test_awslambda/test_lambda.py
index 7f3b44b79555..479aaaa8a17c 100644
--- a/tests/test_awslambda/test_lambda.py
+++ b/tests/test_awslambda/test_lambda.py
@@ -471,7 +471,8 @@ def test_publish():
function_list['Functions'].should.have.length_of(1)
latest_arn = function_list['Functions'][0]['FunctionArn']
- conn.publish_version(FunctionName='testFunction')
+ res = conn.publish_version(FunctionName='testFunction')
+ assert res['ResponseMetadata']['HTTPStatusCode'] == 201
function_list = conn.list_functions()
function_list['Functions'].should.have.length_of(2)
@@ -853,8 +854,8 @@ def test_list_versions_by_function():
Publish=True,
)
- conn.publish_version(FunctionName='testFunction')
-
+ res = conn.publish_version(FunctionName='testFunction')
+ assert res['ResponseMetadata']['HTTPStatusCode'] == 201
versions = conn.list_versions_by_function(FunctionName='testFunction')
assert versions['Versions'][0]['FunctionArn'] == 'arn:aws:lambda:us-west-2:123456789012:function:testFunction:$LATEST'
|
aws__aws-cli-573 | aws ec2 replace-network-acl-entry --protocol ?
How can I specify a protocol? When I specify --protocol tcp or --protocol udp, the command fails:
A client error (InvalidParameterValue) occurred when calling the ReplaceNetworkAclEntry operation: Invalid value 'tcp' for IP protocol. Unknown protocol.
A client error (InvalidParameterValue) occurred when calling the ReplaceNetworkAclEntry operation: Invalid value 'udp' for IP protocol. Unknown protocol.
The command create-network-acl-entry accepts --protocol tcp or --protocol udp.
| [
{
"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#... | [
{
"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#... | diff --git a/awscli/customizations/ec2protocolarg.py b/awscli/customizations/ec2protocolarg.py
index f1fb4d46d418..d598ad87a8a4 100644
--- a/awscli/customizations/ec2protocolarg.py
+++ b/awscli/customizations/ec2protocolarg.py
@@ -29,7 +29,8 @@ def _fix_args(operation, endpoint, params, **kwargs):
def register_protocol_args(cli):
- ('before-parameter-build.ec2.RunInstances', _fix_args),
cli.register('before-parameter-build.ec2.CreateNetworkAclEntry',
_fix_args)
+ cli.register('before-parameter-build.ec2.ReplaceNetworkAclEntry',
+ _fix_args)
diff --git a/tests/unit/ec2/test_replace_network_acl_entry.py b/tests/unit/ec2/test_replace_network_acl_entry.py
new file mode 100644
index 000000000000..6198011616d6
--- /dev/null
+++ b/tests/unit/ec2/test_replace_network_acl_entry.py
@@ -0,0 +1,120 @@
+#!/usr/bin/env python
+# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+from tests.unit import BaseAWSCommandParamsTest
+
+
+class TestReplaceNetworkACLEntry(BaseAWSCommandParamsTest):
+
+ prefix = 'ec2 replace-network-acl-entry'
+
+ def test_tcp(self):
+ cmdline = self.prefix
+ cmdline += ' --network-acl-id acl-12345678'
+ cmdline += ' --rule-number 100'
+ cmdline += ' --protocol tcp'
+ cmdline += ' --rule-action allow'
+ cmdline += ' --ingress'
+ cmdline += ' --port-range From=22,To=22'
+ cmdline += ' --cidr-block 0.0.0.0/0'
+ result = {'NetworkAclId': 'acl-12345678',
+ 'RuleNumber': '100',
+ 'Protocol': '6',
+ 'RuleAction': 'allow',
+ 'Egress': 'false',
+ 'CidrBlock': '0.0.0.0/0',
+ 'PortRange.From': '22',
+ 'PortRange.To': '22'
+ }
+ self.assert_params_for_cmd(cmdline, result)
+
+ def test_udp(self):
+ cmdline = self.prefix
+ cmdline += ' --network-acl-id acl-12345678'
+ cmdline += ' --rule-number 100'
+ cmdline += ' --protocol udp'
+ cmdline += ' --rule-action allow'
+ cmdline += ' --ingress'
+ cmdline += ' --port-range From=22,To=22'
+ cmdline += ' --cidr-block 0.0.0.0/0'
+ result = {'NetworkAclId': 'acl-12345678',
+ 'RuleNumber': '100',
+ 'Protocol': '17',
+ 'RuleAction': 'allow',
+ 'Egress': 'false',
+ 'CidrBlock': '0.0.0.0/0',
+ 'PortRange.From': '22',
+ 'PortRange.To': '22'
+ }
+ self.assert_params_for_cmd(cmdline, result)
+
+ def test_icmp(self):
+ cmdline = self.prefix
+ cmdline += ' --network-acl-id acl-12345678'
+ cmdline += ' --rule-number 100'
+ cmdline += ' --protocol icmp'
+ cmdline += ' --rule-action allow'
+ cmdline += ' --ingress'
+ cmdline += ' --port-range From=22,To=22'
+ cmdline += ' --cidr-block 0.0.0.0/0'
+ result = {'NetworkAclId': 'acl-12345678',
+ 'RuleNumber': '100',
+ 'Protocol': '1',
+ 'RuleAction': 'allow',
+ 'Egress': 'false',
+ 'CidrBlock': '0.0.0.0/0',
+ 'PortRange.From': '22',
+ 'PortRange.To': '22'
+ }
+ self.assert_params_for_cmd(cmdline, result)
+
+ def test_all(self):
+ cmdline = self.prefix
+ cmdline += ' --network-acl-id acl-12345678'
+ cmdline += ' --rule-number 100'
+ cmdline += ' --protocol all'
+ cmdline += ' --rule-action allow'
+ cmdline += ' --ingress'
+ cmdline += ' --port-range From=22,To=22'
+ cmdline += ' --cidr-block 0.0.0.0/0'
+ result = {'NetworkAclId': 'acl-12345678',
+ 'RuleNumber': '100',
+ 'Protocol': '-1',
+ 'RuleAction': 'allow',
+ 'Egress': 'false',
+ 'CidrBlock': '0.0.0.0/0',
+ 'PortRange.From': '22',
+ 'PortRange.To': '22'
+ }
+ self.assert_params_for_cmd(cmdline, result)
+
+ def test_number(self):
+ cmdline = self.prefix
+ cmdline += ' --network-acl-id acl-12345678'
+ cmdline += ' --rule-number 100'
+ cmdline += ' --protocol 99'
+ cmdline += ' --rule-action allow'
+ cmdline += ' --ingress'
+ cmdline += ' --port-range From=22,To=22'
+ cmdline += ' --cidr-block 0.0.0.0/0'
+ result = {'NetworkAclId': 'acl-12345678',
+ 'RuleNumber': '100',
+ 'Protocol': '99',
+ 'RuleAction': 'allow',
+ 'Egress': 'false',
+ 'CidrBlock': '0.0.0.0/0',
+ 'PortRange.From': '22',
+ 'PortRange.To': '22'
+ }
+ self.assert_params_for_cmd(cmdline, result)
+
|
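The customization registered above rewrites friendly protocol names into the numeric IP protocol values EC2 expects, as the new tests spell out. A sketch of that translation (the `fix_protocol` name is hypothetical):

```python
# numeric values taken from the expected parameters in the tests above
PROTOCOL_NUMBERS = {"tcp": "6", "udp": "17", "icmp": "1", "all": "-1"}

def fix_protocol(params):
    value = params.get("Protocol")
    if value in PROTOCOL_NUMBERS:
        params["Protocol"] = PROTOCOL_NUMBERS[value]
    return params  # already-numeric values like "99" pass through unchanged

tcp_params = fix_protocol({"Protocol": "tcp"})
raw_params = fix_protocol({"Protocol": "99"})
```

The bug was simply that this hook was registered for `CreateNetworkAclEntry` but not `ReplaceNetworkAclEntry`, so the latter sent the literal string "tcp" to the service.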
bentoml__BentoML-922 | Yatai does not handle the STS assumed role in operator.get_arn_role_from_current_aws_user()
**Describe the bug**
Yatai causes the error:
```
Error: sagemaker deploy failed: INTERNAL:Not supported role type assumed-role; sts arn is arn:aws:sts::103365315157:assumed-role/<rolename>/<username>
```
because [bentoml/yatai/deployment/sagemaker/operator.py](https://github.com/bentoml/BentoML/blob/master/bentoml/yatai/deployment/sagemaker/operator.py) only checks whether type_role[0] is "user", "root", or "role", and has no branch for an assumed role.
```
def get_arn_role_from_current_aws_user():
sts_client = boto3.client("sts")
identity = sts_client.get_caller_identity()
sts_arn = identity["Arn"]
sts_arn_list = sts_arn.split(":")
type_role = sts_arn_list[-1].split("/")
iam_client = boto3.client("iam")
if type_role[0] in ("user", "root"):
role_list = iam_client.list_roles()
arn = None
for role in role_list["Roles"]:
policy_document = role["AssumeRolePolicyDocument"]
statement = policy_document["Statement"][0]
if (
statement["Effect"] == "Allow"
and statement["Principal"].get("Service", None)
== "sagemaker.amazonaws.com"
):
arn = role["Arn"]
if arn is None:
raise YataiDeploymentException(
"Can't find proper Arn role for Sagemaker, please create one and try "
"again"
)
return arn
elif type_role[0] == "role":
role_response = iam_client.get_role(RoleName=type_role[1])
return role_response["Role"]["Arn"]
raise YataiDeploymentException(
"Not supported role type {}; sts arn is {}".format(type_role[0], sts_arn) # <-----
)
```
However, as shown in [Boto3 STS get_caller_identity](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.get_caller_identity), type_role[0] can also be "assumed-role", which is common in AWS when a user needs to switch roles depending on the environment.
```
response = client.get_caller_identity(
)
print(response)
-----
Expected Output: {
'Account': '123456789012',
'Arn': 'arn:aws:sts::123456789012:assumed-role/my-role-name/my-role-session-name', <----- "assumed-role"
'UserId': 'AKIAI44QH8DHBEXAMPLE:my-role-session-name',
'ResponseMetadata': {
'...': '...',
},
}
```
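Splitting such an ARN the way the operator does makes the gap concrete: the resource segment of an assumed-role ARN is `assumed-role/<role-name>/<session-name>`, so the role name a fix would look up with `iam_client.get_role` sits in the second path component. A small parsing sketch (`parse_sts_arn` is a hypothetical helper, not BentoML API):

```python
def parse_sts_arn(sts_arn):
    """Return (principal_type, name) from an STS caller-identity ARN."""
    resource = sts_arn.split(":")[-1]   # e.g. "assumed-role/my-role/session"
    parts = resource.split("/")
    name = parts[1] if len(parts) > 1 else None
    return parts[0], name

kind, role_name = parse_sts_arn(
    "arn:aws:sts::123456789012:assumed-role/my-role-name/my-role-session-name"
)
```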
**To Reproduce**
Steps to reproduce the behavior:
1. With AWS CLI, assume role (https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html)
2. Run Sagemaker deployment.
```
$ bentoml sagemaker deploy my-first-sagemaker-deployment -b IrisClassifier:20200713155751_57AC77 --api-name predict
```
3. See error
**Expected behavior**
Assumed role situation is handled and causes no error at deployment.
**Screenshots/Logs**
```
$ bentoml sagemaker deploy my-first-sagemaker-deployment -b IrisClassifier:20200713155751_57AC77 --api-name predict
...
==> WARNING: A newer version of conda exists. <==
current version: 4.8.2
latest version: 4.8.3
Please update conda by running
$ conda update -n base -c defaults conda
[2020-07-14 10:47:28,586] INFO - #
# To activate this environment, use
#
# $ conda activate base
#
# To deactivate an active environment, use
#
# $ conda deactivate
|[2020-07-14 10:47:28,921] INFO - + pip install -r ./requirements.txt
/[2020-07-14 10:47:29,645] INFO - Collecting scikit-learn
|[2020-07-14 10:47:29,694] INFO - Downloading scikit_learn-0.23.1-cp37-cp37m-manylinux1_x86_64.whl (6.8 MB)
\[2020-07-14 10:47:31,479] INFO - Collecting pandas
[2020-07-14 10:47:31,494] INFO - Downloading pandas-1.0.5-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
/[2020-07-14 10:47:33,745] INFO - Requirement already satisfied: bentoml==0.8.3 in /opt/conda/lib/python3.7/site-packages (from -r ./requirements.txt (line 3)) (0.8.3)
\[2020-07-14 10:47:33,969] INFO - Collecting threadpoolctl>=2.0.0
[2020-07-14 10:47:33,981] INFO - Downloading threadpoolctl-2.1.0-py3-none-any.whl (12 kB)
|[2020-07-14 10:47:34,294] INFO - Collecting scipy>=0.19.1
[2020-07-14 10:47:34,308] INFO - Downloading scipy-1.5.1-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB)
|[2020-07-14 10:47:40,018] INFO - Collecting joblib>=0.11
[2020-07-14 10:47:40,031] INFO - Downloading joblib-0.16.0-py3-none-any.whl (300 kB)
\[2020-07-14 10:47:40,134] INFO - Requirement already satisfied: numpy>=1.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn->-r ./requirements.txt (line 1)) (1.19.0)
[2020-07-14 10:47:40,136] INFO - Requirement already satisfied: python-dateutil>=2.6.1 in /opt/conda/lib/python3.7/site-packages (from pandas->-r ./requirements.txt (line 2)) (2.8.0)
/[2020-07-14 10:47:40,325] INFO - Collecting pytz>=2017.2
[2020-07-14 10:47:40,339] INFO - Downloading pytz-2020.1-py2.py3-none-any.whl (510 kB)
\[2020-07-14 10:47:40,544] INFO - Requirement already satisfied: gunicorn in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (20.0.4)
[2020-07-14 10:47:40,551] INFO - Requirement already satisfied: prometheus-client in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.8.0)
[2020-07-14 10:47:40,555] INFO - Requirement already satisfied: psutil in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (5.7.0)
[2020-07-14 10:47:40,559] INFO - Requirement already satisfied: alembic in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.4.2)
[2020-07-14 10:47:40,564] INFO - Requirement already satisfied: ruamel.yaml>=0.15.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.16.10)
[2020-07-14 10:47:40,575] INFO - Requirement already satisfied: tabulate in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.8.7)
[2020-07-14 10:47:40,579] INFO - Requirement already satisfied: aiohttp in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.6.2)
[2020-07-14 10:47:40,592] INFO - Requirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (2020.6.20)
[2020-07-14 10:47:40,594] INFO - Requirement already satisfied: python-json-logger in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.1.11)
[2020-07-14 10:47:40,595] INFO - Requirement already satisfied: requests in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.22.0)
[2020-07-14 10:47:40,611] INFO - Requirement already satisfied: sqlalchemy-utils in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.36.7)
-[2020-07-14 10:47:40,674] INFO - Requirement already satisfied: sqlalchemy>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.3.18)
[2020-07-14 10:47:40,688] INFO - Requirement already satisfied: grpcio<=1.27.2 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.27.2)
[2020-07-14 10:47:40,692] INFO - Requirement already satisfied: click>=7.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (7.1.2)
[2020-07-14 10:47:40,693] INFO - Requirement already satisfied: py-zipkin in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.20.0)
[2020-07-14 10:47:40,698] INFO - Requirement already satisfied: configparser in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (5.0.0)
[2020-07-14 10:47:40,707] INFO - Requirement already satisfied: flask in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.2)
[2020-07-14 10:47:40,724] INFO - Requirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (20.4)
[2020-07-14 10:47:40,728] INFO - Requirement already satisfied: humanfriendly in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (8.2)
[2020-07-14 10:47:40,733] INFO - Requirement already satisfied: docker in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (4.2.2)
/[2020-07-14 10:47:40,747] INFO - Requirement already satisfied: boto3 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.14.17)
[2020-07-14 10:47:40,751] INFO - Requirement already satisfied: cerberus in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.3.2)
[2020-07-14 10:47:40,754] INFO - Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.12.2)
[2020-07-14 10:47:40,757] INFO - Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas->-r ./requirements.txt (line 2)) (1.14.0)
[2020-07-14 10:47:40,759] INFO - Requirement already satisfied: setuptools>=3.0 in /opt/conda/lib/python3.7/site-packages (from gunicorn->bentoml==0.8.3->-r ./requirements.txt (line 3)) (45.2.0.post20200210)
[2020-07-14 10:47:40,770] INFO - Requirement already satisfied: python-editor>=0.3 in /opt/conda/lib/python3.7/site-packages (from alembic->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.0.4)
[2020-07-14 10:47:40,771] INFO - Requirement already satisfied: Mako in /opt/conda/lib/python3.7/site-packages (from alembic->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.3)
[2020-07-14 10:47:40,777] INFO - Requirement already satisfied: ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.9" in /opt/conda/lib/python3.7/site-packages (from ruamel.yaml>=0.15.0->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.2.0)
[2020-07-14 10:47:40,779] INFO - Requirement already satisfied: chardet<4.0,>=2.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.0.4)
[2020-07-14 10:47:40,781] INFO - Requirement already satisfied: multidict<5.0,>=4.5 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (4.7.6)
[2020-07-14 10:47:40,784] INFO - Requirement already satisfied: attrs>=17.3.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (19.3.0)
[2020-07-14 10:47:40,827] INFO - Requirement already satisfied: async-timeout<4.0,>=3.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.0.1)
[2020-07-14 10:47:40,829] INFO - Requirement already satisfied: yarl<2.0,>=1.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.4.2)
[2020-07-14 10:47:40,836] INFO - Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.25.8)
|[2020-07-14 10:47:40,850] INFO - Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.8)
[2020-07-14 10:47:40,853] INFO - Requirement already satisfied: thriftpy2>=0.4.0 in /opt/conda/lib/python3.7/site-packages (from py-zipkin->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.4.11)
[2020-07-14 10:47:40,866] INFO - Requirement already satisfied: Werkzeug>=0.15 in /opt/conda/lib/python3.7/site-packages (from flask->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.0.1)
[2020-07-14 10:47:40,878] INFO - Requirement already satisfied: itsdangerous>=0.24 in /opt/conda/lib/python3.7/site-packages (from flask->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.0)
[2020-07-14 10:47:40,880] INFO - Requirement already satisfied: Jinja2>=2.10.1 in /opt/conda/lib/python3.7/site-packages (from flask->bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.11.2)
[2020-07-14 10:47:40,885] INFO - Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->bentoml==0.8.3->-r ./requirements.txt (line 3)) (2.4.7)
[2020-07-14 10:47:40,887] INFO - Requirement already satisfied: websocket-client>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from docker->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.57.0)
[2020-07-14 10:47:40,891] INFO - Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python3.7/site-packages (from boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.10.0)
[2020-07-14 10:47:40,893] INFO - Requirement already satisfied: botocore<1.18.0,>=1.17.17 in /opt/conda/lib/python3.7/site-packages (from boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.17.17)
[2020-07-14 10:47:40,903] INFO - Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /opt/conda/lib/python3.7/site-packages (from boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.3.3)
[2020-07-14 10:47:40,907] INFO - Requirement already satisfied: MarkupSafe>=0.9.2 in /opt/conda/lib/python3.7/site-packages (from Mako->alembic->bentoml==0.8.3->-r ./requirements.txt (line 3)) (1.1.1)
[2020-07-14 10:47:40,910] INFO - Requirement already satisfied: ply<4.0,>=3.4 in /opt/conda/lib/python3.7/site-packages (from thriftpy2>=0.4.0->py-zipkin->bentoml==0.8.3->-r ./requirements.txt (line 3)) (3.11)
[2020-07-14 10:47:40,912] INFO - Requirement already satisfied: docutils<0.16,>=0.10 in /opt/conda/lib/python3.7/site-packages (from botocore<1.18.0,>=1.17.17->boto3->bentoml==0.8.3->-r ./requirements.txt (line 3)) (0.15.2)
\[2020-07-14 10:47:41,027] INFO - Installing collected packages: threadpoolctl, scipy, joblib, scikit-learn, pytz, pandas
-[2020-07-14 10:47:49,756] INFO - Successfully installed joblib-0.16.0 pandas-1.0.5 pytz-2020.1 scikit-learn-0.23.1 scipy-1.5.1 threadpoolctl-2.1.0
|[2020-07-14 10:47:49,994] INFO - + for filename in ./bundled_pip_dependencies/*.tar.gz
+ '[' -e './bundled_pip_dependencies/*.tar.gz' ']'
+ continue
|[2020-07-14 10:47:53,748] INFO - ---> 895e4a390376
[2020-07-14 10:47:53,749] INFO - Step 9/9 : ENV PATH="/bento:$PATH"
[2020-07-14 10:47:53,749] INFO -
\[2020-07-14 10:47:53,797] INFO - ---> Running in 2c5c72de1601
-[2020-07-14 10:47:53,890] INFO - ---> 05b6fd2ed048
[2020-07-14 10:47:53,892] INFO - Successfully built 05b6fd2ed048
[2020-07-14 10:47:53,898] INFO - Successfully tagged 103365315157.dkr.ecr.ap-southeast-2.amazonaws.com/irisclassifier-sagemaker:20200713155751_57AC77
Error: sagemaker deploy failed: INTERNAL:Not supported role type assumed-role; sts arn is arn:aws:sts::103365315157:assumed-role/<rolename>/<username>
```
To give us more information for diagnosing the issue, make sure to enable debug logging:
Add the following lines to your Python code before invoking BentoML:
```python
import bentoml
import logging
bentoml.config().set('core', 'debug', 'true')
bentoml.configure_logging(logging.DEBUG)
```
And use the `--verbose` option when running `bentoml` CLI command, e.g.:
```bash
bentoml get IrisClassifier --verbose
```
**Environment:**
- OS: [e.g. MacOS 10.15.5]
- Python/BentoML Version [e.g. Python 3.7.6, BentoML-0.8.3]
| [
{
"content": "import base64\nimport json\nimport logging\nimport os\nimport shutil\nfrom urllib.parse import urlparse\n\nimport boto3\nimport docker\nfrom botocore.exceptions import ClientError\n\nfrom bentoml.exceptions import (\n YataiDeploymentException,\n AWSServiceError,\n InvalidArgument,\n Be... | [
{
"content": "import base64\nimport json\nimport logging\nimport os\nimport shutil\nfrom urllib.parse import urlparse\n\nimport boto3\nimport docker\nfrom botocore.exceptions import ClientError\n\nfrom bentoml.exceptions import (\n YataiDeploymentException,\n AWSServiceError,\n InvalidArgument,\n Be... | diff --git a/bentoml/yatai/deployment/sagemaker/operator.py b/bentoml/yatai/deployment/sagemaker/operator.py
index 0529a8e83db..39cbfe268b9 100644
--- a/bentoml/yatai/deployment/sagemaker/operator.py
+++ b/bentoml/yatai/deployment/sagemaker/operator.py
@@ -97,7 +97,7 @@ def get_arn_role_from_current_aws_user():
"again"
)
return arn
- elif type_role[0] == "role":
+ elif type_role[0] in ["role", "assumed-role"]:
role_response = iam_client.get_role(RoleName=type_role[1])
return role_response["Role"]["Arn"]
|
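The one-line fix above hinges on how the role type is extracted from the caller's STS ARN. A minimal sketch of that parsing step (a hypothetical helper, not BentoML's actual code): the role type is the first `/`-separated component of the ARN's resource field, and for an assumed role it is the literal string `assumed-role` that the patch now accepts.

```python
def parse_arn_resource(arn):
    # the resource part is everything after the fifth colon, e.g.
    # "assumed-role/my-role-name/my-role-session-name"
    resource = arn.split(":", 5)[5]
    parts = resource.split("/")
    role_type = parts[0]
    role_name = parts[1] if len(parts) > 1 else None
    return role_type, role_name

role_type, role_name = parse_arn_resource(
    "arn:aws:sts::123456789012:assumed-role/my-role-name/my-role-session-name"
)
# before the patch this role type fell through to the
# "Not supported role type assumed-role" error seen in the logs
assert (role_type, role_name) == ("assumed-role", "my-role-name")
```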
ros__ros_comm-1835 | rosparam still uses unsafe yaml.load
https://github.com/ros/ros_comm/blob/5da095d06bccbea708394b399215d8a066797266/tools/rosparam/src/rosparam/__init__.py#L371
| [
{
"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of so... | [
{
"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of so... | diff --git a/tools/rosparam/src/rosparam/__init__.py b/tools/rosparam/src/rosparam/__init__.py
index 3279ab97d5..fd8b0569f3 100644
--- a/tools/rosparam/src/rosparam/__init__.py
+++ b/tools/rosparam/src/rosparam/__init__.py
@@ -368,7 +368,7 @@ def set_param(param, value, verbose=False):
:param param: parameter name, ``str``
:param value: yaml-encoded value, ``str``
"""
- set_param_raw(param, yaml.load(value), verbose=verbose)
+ set_param_raw(param, yaml.safe_load(value), verbose=verbose)
def upload_params(ns, values, verbose=False):
"""
|
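For context on why the one-line change matters: `yaml.safe_load` parses ordinary YAML identically to `yaml.load`, but refuses the `!!python/...` tags that would let a crafted parameter string construct arbitrary Python objects. A small illustration (assumes PyYAML is installed):

```python
import yaml  # PyYAML

# ordinary YAML parses the same either way
assert yaml.safe_load("{a: 1, b: [2, 3]}") == {"a": 1, "b": [2, 3]}

# but safe_load rejects python-object tags that full load() would
# happily turn into arbitrary object construction
blocked = False
try:
    yaml.safe_load("!!python/object/apply:os.system ['echo hi']")
except yaml.YAMLError:
    blocked = True
assert blocked
```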
mitmproxy__mitmproxy-2793 | mitmproxy crashes when editing a "form"
Stack trace:
```
Traceback (most recent call last):
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/master.py", line 202, in run
self.loop.run()
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 682, in run
self._loop()
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 719, in _loop
self._watch_files[fd]()
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/raw_display.py", line 393, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/raw_display.py", line 493, in parse_input
callback(processed, processed_codes)
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 403, in _update
self.process_input(keys)
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 274, in keypress
k = fs.keypress(size, k)
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/container.py", line 592, in keypress
*self.calculate_padding_filler(size, True)), key)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/overlay.py", line 117, in keypress
key = self.master.keymap.handle("chooser", key)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/keymap.py", line 123, in handle
return self.executor(b.command)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/commandeditor.py", line 24, in __call__
ret = self.master.commands.call(cmd)
File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 144, in call
return self.call_args(parts[0], parts[1:])
File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 135, in call_args
return self.commands[path].call(args)
File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 106, in call
ret = self.func(*pargs)
File "/Users/ograff/mitmproxy/mitmproxy/command.py", line 197, in wrapper
return function(*args, **kwargs)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/consoleaddons.py", line 125, in nav_select
self.master.inject_key("m_select")
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/master.py", line 178, in inject_key
self.loop.process_input([key])
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 274, in keypress
k = fs.keypress(size, k)
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/urwid/container.py", line 592, in keypress
*self.calculate_padding_filler(size, True)), key)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/overlay.py", line 120, in keypress
signals.pop_view_state.send(self)
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/blinker/base.py", line 267, in send
for receiver in self.receivers_for(sender)]
File "/Users/ograff/mitmproxy/venv/lib/python3.6/site-packages/blinker/base.py", line 267, in <listcomp>
for receiver in self.receivers_for(sender)]
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 207, in pop
if self.focus_stack().pop():
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 81, in pop
self.call("layout_popping")
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/window.py", line 93, in call
getattr(self.top_window(), name)(*args, **kwargs)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 463, in layout_popping
self.call(self._w, "layout_popping")
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 441, in call
f(*args, **kwargs)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 313, in layout_popping
self.callback(self.data_out(res), *self.cb_args, **self.cb_kwargs)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/base.py", line 456, in set_data_update
self.set_data(vals, flow)
File "/Users/ograff/mitmproxy/mitmproxy/tools/console/grideditor/editors.py", line 64, in set_data
flow.request.urlencoded_form = vals
File "/Users/ograff/mitmproxy/mitmproxy/net/http/request.py", line 462, in urlencoded_form
self._set_urlencoded_form(value)
File "/Users/ograff/mitmproxy/mitmproxy/net/http/request.py", line 444, in _set_urlencoded_form
self.content = mitmproxy.net.http.url.encode(form_data, self.content.decode()).encode()
File "/Users/ograff/mitmproxy/mitmproxy/net/http/url.py", line 81, in encode
if encoded[-1] == '=':
IndexError: string index out of range
```
mitmproxy has crashed!
Please lodge a bug report at:
https://github.com/mitmproxy/mitmproxy
Shutting down...
##### Steps to reproduce the problem:
1. Enter a flow
2. Press "e"
3. Select "form"
##### Any other comments? What have you tried so far?
`encoded` is an empty string but probably shouldn't be.
Changing `mitmproxy/net/http/url.py` to just check for an empty string and not index into it if its empty results in an empty request body after exiting the editor.
##### System information
```
Mitmproxy version: 3.0.0 (2.0.0dev0631-0x30927468)
Python version: 3.6.0
Platform: Darwin-16.5.0-x86_64-i386-64bit
SSL version: OpenSSL 1.1.0f 25 May 2017
Mac version: 10.12.4 ('', '', '') x86_64
```
| [
{
"content": "import urllib.parse\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom mitmproxy.net import check\n\n\ndef parse(url):\n \"\"\"\n URL-parsing function that checks that\n - port is an integer 0-65535\n - host is a valid IDNA-encoded hostname with no null-... | [
{
"content": "import urllib.parse\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom mitmproxy.net import check\n\n\ndef parse(url):\n \"\"\"\n URL-parsing function that checks that\n - port is an integer 0-65535\n - host is a valid IDNA-encoded hostname with no null-... | diff --git a/mitmproxy/net/http/url.py b/mitmproxy/net/http/url.py
index 86f65cfdc8..f938cb12d4 100644
--- a/mitmproxy/net/http/url.py
+++ b/mitmproxy/net/http/url.py
@@ -76,7 +76,7 @@ def encode(s: Sequence[Tuple[str, str]], similar_to: str=None) -> str:
encoded = urllib.parse.urlencode(s, False, errors="surrogateescape")
- if remove_trailing_equal:
+ if encoded and remove_trailing_equal:
encoded = encoded.replace("=&", "&")
if encoded[-1] == '=':
encoded = encoded[:-1]
diff --git a/test/mitmproxy/net/http/test_url.py b/test/mitmproxy/net/http/test_url.py
index 2064aab8d1..c9f61fafdf 100644
--- a/test/mitmproxy/net/http/test_url.py
+++ b/test/mitmproxy/net/http/test_url.py
@@ -108,6 +108,7 @@ def test_empty_key_trailing_equal_sign():
def test_encode():
assert url.encode([('foo', 'bar')])
assert url.encode([('foo', surrogates)])
+ assert not url.encode([], similar_to="justatext")
def test_decode():
|
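The guard can be seen in isolation with a plain-`urllib` sketch that mirrors the shape of `url.encode` after the fix (simplified: no `surrogateescape` handling): `urlencode([])` returns an empty string, so indexing `encoded[-1]` without the truthiness check is exactly the reported `IndexError`.

```python
import urllib.parse

def encode_trailing(s, remove_trailing_equal=True):
    encoded = urllib.parse.urlencode(s, False)
    # guard first: urlencode([]) returns "" and indexing encoded[-1]
    # on it raises IndexError, as in the traceback above
    if encoded and remove_trailing_equal:
        encoded = encoded.replace("=&", "&")
        if encoded[-1] == "=":
            encoded = encoded[:-1]
    return encoded

assert urllib.parse.urlencode([]) == ""   # the crashing input
assert encode_trailing([]) == ""          # no crash once guarded
assert encode_trailing([("foo", "")]) == "foo"
```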
StackStorm__st2-4234 | Missing [workflow_engine] in st2.conf.sample
##### SUMMARY
https://github.com/StackStorm/st2/blob/master/conf/st2.conf.sample is missing a new section for `[workflow_engine]`
Also, shouldn't this section be named `[workflowengine]` to go along with the "style" of the other sections like `[resultstracker]`, `[garbagecollector]`, etc.?
##### ISSUE TYPE
- Bug Report
- Feature Idea
##### STACKSTORM VERSION
2.8
##### EXPECTED RESULTS
https://github.com/StackStorm/st2/blob/master/conf/st2.conf.sample contains a section for `[workflow_engine]`
| [
{
"content": "#!/usr/bin/env python\n# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, V... | [
{
"content": "#!/usr/bin/env python\n# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, V... | diff --git a/conf/st2.conf.sample b/conf/st2.conf.sample
index 73fdf0d8bd..27645c77a4 100644
--- a/conf/st2.conf.sample
+++ b/conf/st2.conf.sample
@@ -324,3 +324,7 @@ local_timezone = America/Los_Angeles
# Base https URL to access st2 Web UI. This is used to construct history URLs that are sent out when chatops is used to kick off executions.
webui_base_url = https://localhost
+[workflow_engine]
+# Location of the logging configuration file.
+logging = conf/logging.workflowengine.conf
+
diff --git a/tools/config_gen.py b/tools/config_gen.py
index 7c54fec30b..89ad584ffd 100755
--- a/tools/config_gen.py
+++ b/tools/config_gen.py
@@ -27,6 +27,7 @@
CONFIGS = ['st2actions.config',
'st2actions.notifier.config',
'st2actions.resultstracker.config',
+ 'st2actions.workflows.config',
'st2api.config',
'st2stream.config',
'st2auth.config',
|
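The second hunk works because `config_gen.py` builds the sample file from a list of per-service config modules; registering `st2actions.workflows.config` is what makes the `[workflow_engine]` section appear. A toy sketch of that generation idea (a hypothetical structure, not the real oslo.config-based tool):

```python
# hypothetical miniature of the sample-config generator: one section per
# registered service, rendered in INI form
CONFIGS = {
    'resultstracker': {'logging': 'conf/logging.resultstracker.conf'},
    'workflow_engine': {'logging': 'conf/logging.workflowengine.conf'},
}

def render_sample(configs):
    lines = []
    for section in sorted(configs):
        lines.append('[%s]' % section)
        for key, value in sorted(configs[section].items()):
            lines.append('%s = %s' % (key, value))
        lines.append('')
    return '\n'.join(lines)

sample = render_sample(CONFIGS)
assert '[workflow_engine]' in sample  # the section the issue asked for
```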
coala__coala-3908 | Fail to install and py.test on docker environment.
When I try to install with `python setup.py install`, it fails with this message:
`UnicodeEncodeError: 'ascii' codec can't encode character '\xfc' in position 15224: ordinal not in range(128)`
The same happens when I try to run the unit tests locally.
This needs to be fixed.
| [
{
"content": "#!/usr/bin/env python3\n\nimport datetime\nimport locale\nimport platform\nimport sys\nfrom os import getenv\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom coalib impor... | [
{
"content": "#!/usr/bin/env python3\n\nimport datetime\nimport locale\nimport platform\nimport sys\nfrom os import getenv\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom coalib impor... | diff --git a/setup.py b/setup.py
index 7ff6d8d1c2..76cd0c43c7 100755
--- a/setup.py
+++ b/setup.py
@@ -15,7 +15,10 @@
from coalib.misc.BuildManPage import BuildManPage
try:
- locale.getlocale()
+ lc = locale.getlocale()
+ pf = platform.system()
+ if pf != 'Windows' and lc == (None, None):
+ locale.setlocale(locale.LC_ALL, 'C.UTF-8')
except (ValueError, UnicodeError):
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
diff --git a/tests/__init__.py b/tests/__init__.py
index e69de29bb2..2b212be3e6 100644
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -0,0 +1,11 @@
+import locale
+import platform
+
+
+try:
+ lc = locale.getlocale()
+ ps = platform.system()
+ if ps != 'Windows' and lc == (None, None):
+ locale.setlocale(locale.LC_ALL, 'C.UTF-8')
+except (ValueError, UnicodeError):
+ locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
|
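The patched logic can be packaged as a single helper; a hedged sketch follows (the exceptions and locale names mirror the diff, but whether `C.UTF-8` or `en_US.UTF-8` exists depends on the host system):

```python
import locale
import platform

def ensure_utf8_locale():
    # mirrors the patch: on non-Windows hosts where no locale is set
    # (common in minimal docker images), force C.UTF-8 so that reading
    # non-ASCII files during install/tests avoids the 'ascii' codec error
    try:
        lc = locale.getlocale()
        if platform.system() != 'Windows' and lc == (None, None):
            locale.setlocale(locale.LC_ALL, 'C.UTF-8')
    except (ValueError, UnicodeError):
        locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')

ensure_utf8_locale()
```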
django-oscar__django-oscar-3214 | Request not being passed to authenticate method
### Issue Summary
Django-oscar's [`EmailAuthenticationForm`](https://github.com/django-oscar/django-oscar/blob/master/src/oscar/apps/customer/forms.py#L76) is a subclass of Django's [`AuthenticationForm`](https://github.com/django/django/blob/master/django/contrib/auth/forms.py#L163).
When the `clean` method is called, Django's `authenticate` is invoked without the `request` object.
As [the code](https://github.com/django-oscar/django-oscar/blob/master/src/oscar/apps/customer/views.py#L136) shows, `AccountAuthView` does not pass the `request` object to the form.
Some custom backends, such as django-axes's, require it in order to work properly.
That is why [this issue](https://github.com/django-oscar/django-oscar/issues/2111) was happening.
### Steps to Reproduce
1. Create a simple authentication backend that requires the `request` object, and put it before django-oscar's backend:
```
# project/backends.py
from django.contrib.auth.backends import ModelBackend
class CustomBackend(ModelBackend):
def authenticate(self, request, username: str = None, password: str = None, **kwargs: dict):
if request is None:
raise Exception('CustomBackend requires a request as an argument to authenticate')
# some logic
# On settings
AUTHENTICATION_BACKENDS = [
'project.backends.CustomBackend',
'oscar.apps.customer.auth_backends.EmailBackend',
]
```
2. Try to login
3. Observe `Exception('CustomBackend requires a request as an argument to authenticate')` is raised
### Technical details
```
$ python --version
Python 3.7.4
$ pip freeze | grep Django
Django==2.2.6
$ pip freeze | grep django-oscar
django-oscar==2.0.3
```
| [
{
"content": "from django import http\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.contrib.auth import logout as auth_logout\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth.forms impor... | [
{
"content": "from django import http\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.contrib.auth import logout as auth_logout\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth.forms impor... | diff --git a/src/oscar/apps/customer/views.py b/src/oscar/apps/customer/views.py
index e04ccf81b58..56e6fb62adb 100644
--- a/src/oscar/apps/customer/views.py
+++ b/src/oscar/apps/customer/views.py
@@ -135,6 +135,7 @@ def get_login_form(self, bind_data=False):
def get_login_form_kwargs(self, bind_data=False):
kwargs = {}
+ kwargs['request'] = self.request
kwargs['host'] = self.request.get_host()
kwargs['prefix'] = self.login_prefix
kwargs['initial'] = {
diff --git a/tests/unit/customer/test_views.py b/tests/unit/customer/test_views.py
new file mode 100644
index 00000000000..41775878ede
--- /dev/null
+++ b/tests/unit/customer/test_views.py
@@ -0,0 +1,28 @@
+from unittest.mock import Mock, patch
+
+from django.test import Client, TestCase
+from django.urls import reverse
+
+from oscar.apps.customer.forms import EmailAuthenticationForm
+
+
+class TestAccountAuthView(TestCase):
+
+ def setUp(self):
+ self.client = Client()
+
+ def test_request_is_passed_to_form(self):
+ form_class = Mock(wraps=EmailAuthenticationForm)
+ data = {"login_submit": ["1"]}
+ initial = {'redirect_url': ''}
+ with patch("oscar.apps.customer.views.AccountAuthView.login_form_class", new=form_class):
+ response = self.client.post(reverse("customer:login"), data=data)
+ assert form_class.called
+ form_class.assert_called_with(
+ data=data,
+ files={},
+ host="testserver",
+ initial=initial,
+ prefix='login',
+ request=response.wsgi_request,
+ )
|
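Stripped of Django, the failure mode reduces to a kwargs-threading bug: the backend needs the request, the form forwards whatever it was given, and the view simply never supplied it. A dependency-free sketch (hypothetical names, not Oscar's API):

```python
def custom_backend_authenticate(request, username=None, password=None):
    # stand-in for a backend like django-axes that insists on the request
    if request is None:
        raise ValueError("backend requires a request to authenticate")
    return username == "alice" and password == "secret"

def form_clean(form_kwargs, username, password):
    # AuthenticationForm.clean() forwards self.request to authenticate();
    # before the fix, the view's get_login_form_kwargs never set "request"
    return custom_backend_authenticate(
        form_kwargs.get("request"), username=username, password=password
    )

# with the fix (kwargs['request'] = self.request) the backend is satisfied
assert form_clean({"request": object()}, "alice", "secret") is True

# without it, the backend blows up exactly as in the reported exception
raised = False
try:
    form_clean({}, "alice", "secret")
except ValueError:
    raised = True
assert raised
```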
ansible__ansible-43500 | Task name is overridden by include_role, and is not evaluated in output
##### SUMMARY
When using `include_role`, the `name` parameter given to it appears to override the `name` parameter given to the task itself. Additionally, if jinja2 was being used to determine which role to include, that is not evaluated and is printed raw, which is not useful to an observer.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.6.1
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Red Hat Enterprise Linux Server release 7.4 (Maipo)
##### STEPS TO REPRODUCE
Playbook
```yaml
- name: "test"
hosts: localhost
gather_facts: no
connection: local
vars:
role_type: a
tasks:
- name: Role inclusion test
include_role:
name: "role-{{ role_type }}"
```
roles/role-a/tasks/main.yml
```yaml
---
- debug: msg="This is Role A"
```
##### EXPECTED RESULTS
```
PLAY [test] ********************************************************************
TASK [Role inclusion test] *****************************************************
TASK [role-a : debug] **********************************************************
[...]
```
Or, less preferably:
```
PLAY [test] ********************************************************************
TASK [include_role : role-a] ***************************************
TASK [role-a : debug] **********************************************************
[...]
```
##### ACTUAL RESULTS
```
PLAY [test] ********************************************************************
TASK [include_role : role-{{ role_type }}] *************************************
TASK [role-a : debug] **********************************************************
[...]
```
| [
{
"content": "\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansibl... | [
{
"content": "\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansibl... | diff --git a/lib/ansible/playbook/role_include.py b/lib/ansible/playbook/role_include.py
index 37fc0ad68363a0..bef6ca65a363bd 100644
--- a/lib/ansible/playbook/role_include.py
+++ b/lib/ansible/playbook/role_include.py
@@ -68,7 +68,7 @@ def __init__(self, block=None, role=None, task_include=None):
def get_name(self):
''' return the name of the task '''
- return "%s : %s" % (self.action, self._role_name)
+ return self.name or "%s : %s" % (self.action, self._role_name)
def get_block_list(self, play=None, variable_manager=None, loader=None):
|
vacanza__python-holidays-1775 | Mississippi Holiday - Confederate Memorial Day - Calculation incorrect
Library version: holidays 0.47
The date for Confederate Memorial Day this year is 2024-04-29.
The python-holidays library calculated it to be today, 2024-04-22.
https://www.sos.ms.gov/communications-publications/state-holidays
https://law.justia.com/codes/mississippi/2020/title-3/chapter-3/section-3-3-7/
the last Monday of April (Confederate Memorial Day);
`import holidays`
`#days = holidays.USA(years=[2024], subdiv='MS')`
`days = holidays.country_holidays('USA', subdiv='MS', years=[2024])`
`print(days)`
Output:
{datetime.date(2024, 1, 1): "New Year's Day", datetime.date(2024, 5, 27): 'Memorial Day', datetime.date(2024, 6, 19): 'Juneteenth National Independence Day', datetime.date(2024, 7, 4): 'Independence Day', datetime.date(2024, 9, 2): 'Labor Day', datetime.date(2024, 11, 11): 'Veterans Day', datetime.date(2024, 11, 28): 'Thanksgiving', datetime.date(2024, 12, 25): 'Christmas Day', datetime.date(2024, 2, 19): "Washington's Birthday", datetime.date(2024, 1, 15): "Dr. Martin Luther King Jr. and Robert E. Lee's Birthdays", **datetime.date(2024, 4, 22): 'Confederate Memorial Day'**}
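The statutory rule ("the last Monday of April") differs from the library's previous "4th Monday" rule only in years where April has five Mondays. A minimal sketch of the correct computation, using only the standard library (the function name is illustrative, not the library's API):

```python
import calendar
from datetime import date

def last_monday_of_april(year: int) -> date:
    """Return the last Monday of April for the given year."""
    # April has 30 days; walk backwards from the 30th to the first Monday found.
    for day in range(30, 23, -1):
        d = date(year, 4, day)
        if d.weekday() == calendar.MONDAY:
            return d
    raise AssertionError("unreachable: any 7 consecutive days contain a Monday")

print(last_monday_of_april(2024))  # 2024-04-29, matching the statute (not 2024-04-22)
```

In 2024 April has five Mondays (1, 8, 15, 22, 29), so the 4th-Monday rule lands a week early; in years with four Mondays the two rules coincide, which is why the bug only surfaces intermittently in the snapshots.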
| [
{
"content": "# holidays\n# --------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: Vacanza Team and individual cont... | [
{
"content": "# holidays\n# --------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Authors: Vacanza Team and individual cont... | diff --git a/holidays/countries/united_states.py b/holidays/countries/united_states.py
index f38ca5fa5..4089308d9 100644
--- a/holidays/countries/united_states.py
+++ b/holidays/countries/united_states.py
@@ -609,7 +609,7 @@ def _populate_subdiv_ms_public_holidays(self):
# Confederate Memorial Day
if self._year >= 1866:
- self._add_holiday_4th_mon_of_apr("Confederate Memorial Day")
+ self._add_holiday_last_mon_of_apr("Confederate Memorial Day")
def _populate_subdiv_mt_public_holidays(self):
# Election Day
diff --git a/snapshots/countries/US_MS.json b/snapshots/countries/US_MS.json
index 8752f6455..bfcf64923 100644
--- a/snapshots/countries/US_MS.json
+++ b/snapshots/countries/US_MS.json
@@ -19,7 +19,7 @@
"1951-02-14": "Valentine's Day",
"1951-02-22": "Washington's Birthday",
"1951-03-17": "St. Patrick's Day",
- "1951-04-23": "Confederate Memorial Day",
+ "1951-04-30": "Confederate Memorial Day",
"1951-05-30": "Memorial Day",
"1951-07-04": "Independence Day",
"1951-09-03": "Labor Day",
@@ -92,7 +92,7 @@
"1956-02-14": "Valentine's Day",
"1956-02-22": "Washington's Birthday",
"1956-03-17": "St. Patrick's Day",
- "1956-04-23": "Confederate Memorial Day",
+ "1956-04-30": "Confederate Memorial Day",
"1956-05-30": "Memorial Day",
"1956-07-04": "Independence Day",
"1956-09-03": "Labor Day",
@@ -107,7 +107,7 @@
"1957-02-14": "Valentine's Day",
"1957-02-22": "Washington's Birthday",
"1957-03-17": "St. Patrick's Day",
- "1957-04-22": "Confederate Memorial Day",
+ "1957-04-29": "Confederate Memorial Day",
"1957-05-30": "Memorial Day",
"1957-07-04": "Independence Day",
"1957-09-02": "Labor Day",
@@ -177,7 +177,7 @@
"1962-02-14": "Valentine's Day",
"1962-02-22": "Washington's Birthday",
"1962-03-17": "St. Patrick's Day",
- "1962-04-23": "Confederate Memorial Day",
+ "1962-04-30": "Confederate Memorial Day",
"1962-05-30": "Memorial Day",
"1962-07-04": "Independence Day",
"1962-09-03": "Labor Day",
@@ -191,7 +191,7 @@
"1963-02-14": "Valentine's Day",
"1963-02-22": "Washington's Birthday",
"1963-03-17": "St. Patrick's Day",
- "1963-04-22": "Confederate Memorial Day",
+ "1963-04-29": "Confederate Memorial Day",
"1963-05-30": "Memorial Day",
"1963-07-04": "Independence Day",
"1963-09-02": "Labor Day",
@@ -264,7 +264,7 @@
"1968-02-14": "Valentine's Day",
"1968-02-22": "Washington's Birthday",
"1968-03-17": "St. Patrick's Day",
- "1968-04-22": "Confederate Memorial Day",
+ "1968-04-29": "Confederate Memorial Day",
"1968-05-30": "Memorial Day",
"1968-07-04": "Independence Day",
"1968-09-02": "Labor Day",
@@ -335,7 +335,7 @@
"1973-02-14": "Valentine's Day",
"1973-02-19": "Washington's Birthday",
"1973-03-17": "St. Patrick's Day",
- "1973-04-23": "Confederate Memorial Day",
+ "1973-04-30": "Confederate Memorial Day",
"1973-05-28": "Memorial Day",
"1973-07-04": "Independence Day",
"1973-09-03": "Labor Day",
@@ -348,7 +348,7 @@
"1974-02-14": "Valentine's Day",
"1974-02-18": "Washington's Birthday",
"1974-03-17": "St. Patrick's Day",
- "1974-04-22": "Confederate Memorial Day",
+ "1974-04-29": "Confederate Memorial Day",
"1974-05-27": "Memorial Day",
"1974-07-04": "Independence Day",
"1974-09-02": "Labor Day",
@@ -420,7 +420,7 @@
"1979-02-14": "Valentine's Day",
"1979-02-19": "Washington's Birthday",
"1979-03-17": "St. Patrick's Day",
- "1979-04-23": "Confederate Memorial Day",
+ "1979-04-30": "Confederate Memorial Day",
"1979-05-28": "Memorial Day",
"1979-07-04": "Independence Day",
"1979-09-03": "Labor Day",
@@ -493,7 +493,7 @@
"1984-02-14": "Valentine's Day",
"1984-02-20": "Washington's Birthday",
"1984-03-17": "St. Patrick's Day",
- "1984-04-23": "Confederate Memorial Day",
+ "1984-04-30": "Confederate Memorial Day",
"1984-05-28": "Memorial Day",
"1984-07-04": "Independence Day",
"1984-09-03": "Labor Day",
@@ -508,7 +508,7 @@
"1985-02-14": "Valentine's Day",
"1985-02-18": "Washington's Birthday",
"1985-03-17": "St. Patrick's Day",
- "1985-04-22": "Confederate Memorial Day",
+ "1985-04-29": "Confederate Memorial Day",
"1985-05-27": "Memorial Day",
"1985-07-04": "Independence Day",
"1985-09-02": "Labor Day",
@@ -583,7 +583,7 @@
"1990-02-14": "Valentine's Day",
"1990-02-19": "Washington's Birthday",
"1990-03-17": "St. Patrick's Day",
- "1990-04-23": "Confederate Memorial Day",
+ "1990-04-30": "Confederate Memorial Day",
"1990-05-28": "Memorial Day",
"1990-07-04": "Independence Day",
"1990-09-03": "Labor Day",
@@ -598,7 +598,7 @@
"1991-02-14": "Valentine's Day",
"1991-02-18": "Washington's Birthday",
"1991-03-17": "St. Patrick's Day",
- "1991-04-22": "Confederate Memorial Day",
+ "1991-04-29": "Confederate Memorial Day",
"1991-05-27": "Memorial Day",
"1991-07-04": "Independence Day",
"1991-09-02": "Labor Day",
@@ -676,7 +676,7 @@
"1996-02-14": "Valentine's Day",
"1996-02-19": "Washington's Birthday",
"1996-03-17": "St. Patrick's Day",
- "1996-04-22": "Confederate Memorial Day",
+ "1996-04-29": "Confederate Memorial Day",
"1996-05-27": "Memorial Day",
"1996-07-04": "Independence Day",
"1996-09-02": "Labor Day",
@@ -753,7 +753,7 @@
"2001-02-14": "Valentine's Day",
"2001-02-19": "Washington's Birthday",
"2001-03-17": "St. Patrick's Day",
- "2001-04-23": "Confederate Memorial Day",
+ "2001-04-30": "Confederate Memorial Day",
"2001-05-28": "Memorial Day",
"2001-07-04": "Independence Day",
"2001-09-03": "Labor Day",
@@ -768,7 +768,7 @@
"2002-02-14": "Valentine's Day",
"2002-02-18": "Washington's Birthday",
"2002-03-17": "St. Patrick's Day",
- "2002-04-22": "Confederate Memorial Day",
+ "2002-04-29": "Confederate Memorial Day",
"2002-05-27": "Memorial Day",
"2002-07-04": "Independence Day",
"2002-09-02": "Labor Day",
@@ -845,7 +845,7 @@
"2007-02-14": "Valentine's Day",
"2007-02-19": "Washington's Birthday",
"2007-03-17": "St. Patrick's Day",
- "2007-04-23": "Confederate Memorial Day",
+ "2007-04-30": "Confederate Memorial Day",
"2007-05-28": "Memorial Day",
"2007-07-04": "Independence Day",
"2007-09-03": "Labor Day",
@@ -923,7 +923,7 @@
"2012-02-14": "Valentine's Day",
"2012-02-20": "Washington's Birthday",
"2012-03-17": "St. Patrick's Day",
- "2012-04-23": "Confederate Memorial Day",
+ "2012-04-30": "Confederate Memorial Day",
"2012-05-28": "Memorial Day",
"2012-07-04": "Independence Day",
"2012-09-03": "Labor Day",
@@ -939,7 +939,7 @@
"2013-02-14": "Valentine's Day",
"2013-02-18": "Washington's Birthday",
"2013-03-17": "St. Patrick's Day",
- "2013-04-22": "Confederate Memorial Day",
+ "2013-04-29": "Confederate Memorial Day",
"2013-05-27": "Memorial Day",
"2013-07-04": "Independence Day",
"2013-09-02": "Labor Day",
@@ -1014,7 +1014,7 @@
"2018-02-14": "Valentine's Day",
"2018-02-19": "Washington's Birthday",
"2018-03-17": "St. Patrick's Day",
- "2018-04-23": "Confederate Memorial Day",
+ "2018-04-30": "Confederate Memorial Day",
"2018-05-28": "Memorial Day",
"2018-07-04": "Independence Day",
"2018-09-03": "Labor Day",
@@ -1029,7 +1029,7 @@
"2019-02-14": "Valentine's Day",
"2019-02-18": "Washington's Birthday",
"2019-03-17": "St. Patrick's Day",
- "2019-04-22": "Confederate Memorial Day",
+ "2019-04-29": "Confederate Memorial Day",
"2019-05-27": "Memorial Day",
"2019-07-04": "Independence Day",
"2019-09-02": "Labor Day",
@@ -1112,7 +1112,7 @@
"2024-02-14": "Valentine's Day",
"2024-02-19": "Washington's Birthday",
"2024-03-17": "St. Patrick's Day",
- "2024-04-22": "Confederate Memorial Day",
+ "2024-04-29": "Confederate Memorial Day",
"2024-05-27": "Memorial Day",
"2024-06-19": "Juneteenth National Independence Day",
"2024-07-04": "Independence Day",
@@ -1195,7 +1195,7 @@
"2029-02-14": "Valentine's Day",
"2029-02-19": "Washington's Birthday",
"2029-03-17": "St. Patrick's Day",
- "2029-04-23": "Confederate Memorial Day",
+ "2029-04-30": "Confederate Memorial Day",
"2029-05-28": "Memorial Day",
"2029-06-19": "Juneteenth National Independence Day",
"2029-07-04": "Independence Day",
@@ -1211,7 +1211,7 @@
"2030-02-14": "Valentine's Day",
"2030-02-18": "Washington's Birthday",
"2030-03-17": "St. Patrick's Day",
- "2030-04-22": "Confederate Memorial Day",
+ "2030-04-29": "Confederate Memorial Day",
"2030-05-27": "Memorial Day",
"2030-06-19": "Juneteenth National Independence Day",
"2030-07-04": "Independence Day",
@@ -1295,7 +1295,7 @@
"2035-02-14": "Valentine's Day",
"2035-02-19": "Washington's Birthday",
"2035-03-17": "St. Patrick's Day",
- "2035-04-23": "Confederate Memorial Day",
+ "2035-04-30": "Confederate Memorial Day",
"2035-05-28": "Memorial Day",
"2035-06-19": "Juneteenth National Independence Day",
"2035-07-04": "Independence Day",
@@ -1380,7 +1380,7 @@
"2040-02-14": "Valentine's Day",
"2040-02-20": "Washington's Birthday",
"2040-03-17": "St. Patrick's Day",
- "2040-04-23": "Confederate Memorial Day",
+ "2040-04-30": "Confederate Memorial Day",
"2040-05-28": "Memorial Day",
"2040-06-19": "Juneteenth National Independence Day",
"2040-07-04": "Independence Day",
@@ -1397,7 +1397,7 @@
"2041-02-14": "Valentine's Day",
"2041-02-18": "Washington's Birthday",
"2041-03-17": "St. Patrick's Day",
- "2041-04-22": "Confederate Memorial Day",
+ "2041-04-29": "Confederate Memorial Day",
"2041-05-27": "Memorial Day",
"2041-06-19": "Juneteenth National Independence Day",
"2041-07-04": "Independence Day",
@@ -1478,7 +1478,7 @@
"2046-02-14": "Valentine's Day",
"2046-02-19": "Washington's Birthday",
"2046-03-17": "St. Patrick's Day",
- "2046-04-23": "Confederate Memorial Day",
+ "2046-04-30": "Confederate Memorial Day",
"2046-05-28": "Memorial Day",
"2046-06-19": "Juneteenth National Independence Day",
"2046-07-04": "Independence Day",
@@ -1494,7 +1494,7 @@
"2047-02-14": "Valentine's Day",
"2047-02-18": "Washington's Birthday",
"2047-03-17": "St. Patrick's Day",
- "2047-04-22": "Confederate Memorial Day",
+ "2047-04-29": "Confederate Memorial Day",
"2047-05-27": "Memorial Day",
"2047-06-19": "Juneteenth National Independence Day",
"2047-07-04": "Independence Day",
diff --git a/tests/countries/test_united_states.py b/tests/countries/test_united_states.py
index b404b0595..a62eccae4 100644
--- a/tests/countries/test_united_states.py
+++ b/tests/countries/test_united_states.py
@@ -1119,11 +1119,33 @@ def test_confederate_memorial_day(self):
"2022-04-25",
"2023-04-24",
)
- for subdiv in ("AL", "MS", "SC"):
+ for subdiv in ("AL", "SC"):
self.assertHolidayName(name, self.state_hols[subdiv], range(1866, 2050))
self.assertNoHolidayName(name, self.state_hols[subdiv], range(1865, 1866))
self.assertHolidayName(name, self.state_hols[subdiv], dt)
+ self.assertHolidayName(
+ name,
+ self.state_hols["MS"],
+ "2010-04-26",
+ "2011-04-25",
+ "2012-04-30",
+ "2013-04-29",
+ "2014-04-28",
+ "2015-04-27",
+ "2016-04-25",
+ "2017-04-24",
+ "2018-04-30",
+ "2019-04-29",
+ "2020-04-27",
+ "2021-04-26",
+ "2022-04-25",
+ "2023-04-24",
+ "2024-04-29",
+ )
+ self.assertHolidayName(name, self.state_hols["MS"], range(1866, 2050))
+ self.assertNoHolidayName(name, self.state_hols["MS"], range(1865, 1866))
+
self.assertHolidayName(
name, self.state_hols["TX"], (f"{year}-01-19" for year in range(1931, 2050))
)
|
PaddlePaddle__PaddleNLP-6607 | [Question]: GPT-3预训练的eval阶段出错
### GPT-3预训练的eval阶段出错
环境:
(1)8卡A100
(2)python -m pip install paddlepaddle-gpu==0.0.0.post112 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html
报错内容:
采用8卡tensor_parallel预训练GPT-13B,在eval阶段报错:

| [
{
"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.... | [
{
"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.... | diff --git a/llm/gpt-3/modeling.py b/llm/gpt-3/modeling.py
index 968b48f33c75..3c45721c01d3 100644
--- a/llm/gpt-3/modeling.py
+++ b/llm/gpt-3/modeling.py
@@ -827,7 +827,6 @@ def forward(
loss = None
if labels is not None:
loss = self.criterion(logits, labels)
- return loss
# outputs = [output, all_hidden_states, new_caches, all_self_attentions]
if not return_dict:
|
aio-libs-abandoned__aioredis-py-535 | Add a BUSYGROUP reply error
The XGROUP CREATE command can return a BUSYGROUP error when a group already exists: https://redis.io/commands/xgroup
I think the `ReplyError` subclass for matching it would look like this:
```py
class BusyGroupError(ReplyError):
MATCH_REPLY = "BUSYGROUP Consumer Group name already exists"
```
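For context, a self-contained sketch of how a `MATCH_REPLY`-style dispatch can route a raw Redis error string to the most specific exception subclass. This is an illustration of the pattern, not aioredis's actual internals; the registry and `from_reply` helper are assumptions:

```python
class ReplyError(Exception):
    """Base error; subclasses claim replies via a MATCH_REPLY prefix."""
    MATCH_REPLY = None
    _registry = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.MATCH_REPLY is not None:
            cls._registry.append(cls)

    @classmethod
    def from_reply(cls, msg):
        # Dispatch to the first subclass whose prefix matches the reply.
        for sub in cls._registry:
            prefixes = sub.MATCH_REPLY
            if isinstance(prefixes, str):
                prefixes = (prefixes,)
            if msg.startswith(prefixes):
                return sub(msg)
        return cls(msg)

class BusyGroupError(ReplyError):
    MATCH_REPLY = "BUSYGROUP Consumer Group name already exists"

err = ReplyError.from_reply("BUSYGROUP Consumer Group name already exists")
print(type(err).__name__)  # BusyGroupError
```

Callers can then catch `BusyGroupError` specifically (e.g. to treat "group already exists" as idempotent success) while other `ReplyError`s still propagate.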
| [
{
"content": "__all__ = [\n 'RedisError',\n 'ProtocolError',\n 'ReplyError',\n 'MaxClientsError',\n 'AuthError',\n 'PipelineError',\n 'MultiExecError',\n 'WatchVariableError',\n 'ChannelClosedError',\n 'ConnectionClosedError',\n 'ConnectionForcedCloseError',\n 'PoolClosedErro... | [
{
"content": "__all__ = [\n 'RedisError',\n 'ProtocolError',\n 'ReplyError',\n 'MaxClientsError',\n 'AuthError',\n 'PipelineError',\n 'MultiExecError',\n 'WatchVariableError',\n 'ChannelClosedError',\n 'ConnectionClosedError',\n 'ConnectionForcedCloseError',\n 'PoolClosedErro... | diff --git a/aioredis/errors.py b/aioredis/errors.py
index b73e2e424..504c6ce06 100644
--- a/aioredis/errors.py
+++ b/aioredis/errors.py
@@ -50,6 +50,12 @@ class AuthError(ReplyError):
MATCH_REPLY = ("NOAUTH ", "ERR invalid password")
+class BusyGroupError(ReplyError):
+ """Raised if Consumer Group name already exists."""
+
+ MATCH_REPLY = "BUSYGROUP Consumer Group name already exists"
+
+
class PipelineError(RedisError):
"""Raised if command within pipeline raised error."""
diff --git a/tests/stream_commands_test.py b/tests/stream_commands_test.py
index 6a7adbd0f..d29178be3 100644
--- a/tests/stream_commands_test.py
+++ b/tests/stream_commands_test.py
@@ -4,7 +4,7 @@
from collections import OrderedDict
from unittest import mock
-from aioredis import ReplyError
+from aioredis.errors import BusyGroupError
from _testutils import redis_version
pytestmark = redis_version(
@@ -314,7 +314,7 @@ async def test_xgroup_create_mkstream(redis, server_bin):
async def test_xgroup_create_already_exists(redis, server_bin):
await redis.xadd('test_stream', {'a': 1})
await redis.xgroup_create('test_stream', 'test_group')
- with pytest.raises(ReplyError):
+ with pytest.raises(BusyGroupError):
await redis.xgroup_create('test_stream', 'test_group')
|
ansible__ansible-18194 | serial with % groups task per host
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
serial
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
devel
ansible 2.2.0.0 (detached HEAD eafb4043c9) last updated 2016/10/25 13:47:30 (GMT +200)
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
n/a
##### SUMMARY
When using serial with `%`, there is a suspicious grouping per host for every task, while I don't see such a grouping when using serial with a number:
##### STEPS TO REPRODUCE
See gist https://gist.github.com/resmo/c650dc1846c14cdccbc41d509c92f4c0
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
TASK [command] *****************************************************************
changed: [three] => (item=1)
changed: [one] => (item=1)
changed: [two] => (item=1)
changed: [three] => (item=1)
changed: [one] => (item=1)
changed: [two] => (item=1)
changed: [three] => (item=1)
changed: [one] => (item=1)
changed: [two] => (item=1)
changed: [three] => (item=1)
changed: [one] => (item=1)
changed: [two] => (item=1)
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [command] *****************************************************************
changed: [two] => (item=1)
changed: [two] => (item=1)
changed: [two] => (item=1)
changed: [two] => (item=1)
changed: [one] => (item=1)
changed: [one] => (item=1)
changed: [one] => (item=1)
changed: [one] => (item=1)
changed: [three] => (item=1)
changed: [three] => (item=1)
changed: [three] => (item=1)
changed: [three] => (item=1)
```
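The fix counts hosts with `ignore_restrictions=True` so that a percentage `serial` is resolved against the play's full host list rather than the already-restricted batch. A sketch of the percentage-to-batch-size resolution (a simplified stand-in for Ansible's own helper, not its exact code):

```python
def resolve_serial(serial, num_hosts):
    """Resolve a 'serial' value (an int or a string like '30%') to a batch size."""
    if isinstance(serial, str) and serial.endswith("%"):
        pct = float(serial[:-1])
        # Always run at least one host per batch.
        return max(1, int(num_hosts * pct / 100))
    return int(serial)

print(resolve_serial("30%", 10))  # 3
print(resolve_serial(2, 10))      # 2
```

If `num_hosts` is computed from a restricted inventory (e.g. 1 host mid-batch), `"100%"` resolves to a batch size of 1 and every task serializes per host, which is the grouping seen in the actual results above.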
| [
{
"content": "# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the Licen... | [
{
"content": "# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the Licen... | diff --git a/lib/ansible/executor/task_queue_manager.py b/lib/ansible/executor/task_queue_manager.py
index 2e6948f1e0c869..08b7dd0f6716ee 100644
--- a/lib/ansible/executor/task_queue_manager.py
+++ b/lib/ansible/executor/task_queue_manager.py
@@ -222,7 +222,7 @@ def run(self, play):
)
# Fork # of forks, # of hosts or serial, whichever is lowest
- num_hosts = len(self._inventory.get_hosts(new_play.hosts))
+ num_hosts = len(self._inventory.get_hosts(new_play.hosts, ignore_restrictions=True))
max_serial = 0
if new_play.serial:
|
freedomofpress__securedrop-6408 | Test securedrop-admin with Tails 5.0
## Description
https://tails.boum.org/news/test_5.0-beta1/
Tails 5.0 is based on Debian Bullseye, which means it's using a newer Python version (3.9) among plenty of other things.
It's probably worth walking through a full SD install + backup/restore to make sure it works as expected.
| [
{
"content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <loic@dachary.org>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as... | [
{
"content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <loic@dachary.org>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as... | diff --git a/admin/bootstrap.py b/admin/bootstrap.py
index 1506f9a943..598cf25fa1 100755
--- a/admin/bootstrap.py
+++ b/admin/bootstrap.py
@@ -138,7 +138,6 @@ def install_apt_dependencies(args: argparse.Namespace) -> None:
python3-virtualenv \
python3-yaml \
python3-pip \
- ccontrol \
virtualenv \
libffi-dev \
libssl-dev \
diff --git a/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml b/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml
index 870e5133cf..85031bcf66 100644
--- a/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml
+++ b/install_files/ansible-base/roles/tails-config/tasks/configure_network_hook.yml
@@ -22,4 +22,4 @@
- name: Run SecureDrop network hook
# Writes files to /etc, so elevated privileges are required.
become: yes
- command: python "{{ tails_config_securedrop_dotfiles }}/securedrop_init.py"
+ command: python3 "{{ tails_config_securedrop_dotfiles }}/securedrop_init.py"
|
napari__napari-1241 | Some keyboard combinations crash napari
## 🐛 Bug
On Linux, I can kill napari by pressing and releasing the Super key. I get the following error:
```pytb
WARNING: Traceback (most recent call last):
File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/app/backends/_qt.py", line 505, in keyReleaseEvent
self._keyEvent(self._vispy_canvas.events.key_release, ev)
File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/app/backends/_qt.py", line 551, in _keyEvent
func(native=ev, key=key, text=text_type(ev.text()), modifiers=mod)
File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/util/event.py", line 455, in __call__
self._invoke_callback(cb, event)
File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/util/event.py", line 473, in _invoke_callback
_handle_exception(self.ignore_callback_errors,
File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/vispy/util/event.py", line 471, in _invoke_callback
cb(event)
File "/home/jni/miniconda3/envs/all/lib/python3.8/site-packages/napari/_qt/qt_viewer.py", line 574, in on_key_release
combo = components_to_key_combo(event.key.name, event.modifiers)
AttributeError: 'NoneType' object has no attribute 'name'
```
This might be specific to i3.
| [
{
"content": "from pathlib import Path\n\nfrom qtpy.QtCore import QCoreApplication, Qt, QSize\nfrom qtpy.QtWidgets import (\n QWidget,\n QVBoxLayout,\n QFileDialog,\n QSplitter,\n QMessageBox,\n)\nfrom qtpy.QtGui import QCursor, QGuiApplication\nfrom qtpy.QtCore import QThreadPool\nfrom ..utils.i... | [
{
"content": "from pathlib import Path\n\nfrom qtpy.QtCore import QCoreApplication, Qt, QSize\nfrom qtpy.QtWidgets import (\n QWidget,\n QVBoxLayout,\n QFileDialog,\n QSplitter,\n QMessageBox,\n)\nfrom qtpy.QtGui import QCursor, QGuiApplication\nfrom qtpy.QtCore import QThreadPool\nfrom ..utils.i... | diff --git a/napari/_qt/qt_viewer.py b/napari/_qt/qt_viewer.py
index 66931fd3d07..ad4aad9ccb9 100644
--- a/napari/_qt/qt_viewer.py
+++ b/napari/_qt/qt_viewer.py
@@ -571,6 +571,8 @@ def on_key_release(self, event):
event : qtpy.QtCore.QEvent
Event from the Qt context.
"""
+ if event.key is None:
+ return
combo = components_to_key_combo(event.key.name, event.modifiers)
self.viewer.release_key(combo)
|
avocado-framework__avocado-4869 | TAP parser: warnings are being marked as error
.avocado.hint:
```
1 [kinds]
2 tap = ./scripts/*/*.t
3
4 [tap]
5 uri = $testpath
6 kwargs = PERL5LIB=./lib,LIBVIRT_TCK_CONFIG=./conf/default.yml,LIBVIRT_TCK_AUTOCLEAN=1
```
test.t:
```bash
#!/bin/bash
echo "1..4
warning: foo
ok 1 - started persistent domain object
ok 2 - dynamic domain label type is svirt_tcg_t
ok 3 - dynamic image label type is svirt_image_t
ok 4 - Domain MCS c40,c302 == Image MCS c40,c302"
```
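The TAP specification treats lines that are neither a plan, a test line, nor a diagnostic as ignorable, so `warning: foo` should not be reported as an error. A minimal line classifier illustrating that tolerance (a sketch, not Avocado's actual parser):

```python
import re

_PLAN_RE = re.compile(r"^1\.\.(\d+)")
_TEST_RE = re.compile(r"^(not )?ok\b")

def classify(line):
    """Classify a TAP line; unknown lines are passed through, not errors."""
    line = line.strip()
    if _PLAN_RE.match(line):
        return "plan"
    if _TEST_RE.match(line):
        return "test"
    if line.startswith("#"):
        return "diagnostic"
    return "other"  # e.g. 'warning: foo' — tolerated, not an error

print([classify(l) for l in ["1..4", "warning: foo", "ok 1 - started"]])
```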
| [
{
"content": "import enum\nimport re\nfrom collections import namedtuple\n\n\n@enum.unique\nclass TestResult(enum.Enum):\n PASS = 'PASS'\n SKIP = 'SKIP'\n FAIL = 'FAIL'\n XFAIL = 'XFAIL'\n XPASS = 'XPASS'\n\n\n# TapParser is based on Meson's TAP parser, which were licensed under the\n# MIT (X11) ... | [
{
"content": "import enum\nimport re\nfrom collections import namedtuple\n\n\n@enum.unique\nclass TestResult(enum.Enum):\n PASS = 'PASS'\n SKIP = 'SKIP'\n FAIL = 'FAIL'\n XFAIL = 'XFAIL'\n XPASS = 'XPASS'\n\n\n# TapParser is based on Meson's TAP parser, which were licensed under the\n# MIT (X11) ... | diff --git a/avocado/core/tapparser.py b/avocado/core/tapparser.py
index 7572492d5f..2389148307 100644
--- a/avocado/core/tapparser.py
+++ b/avocado/core/tapparser.py
@@ -153,8 +153,6 @@ def parse(self):
if line == '':
continue
- yield self.Error('unexpected input at line %d' % (lineno,))
-
if state == self._YAML:
yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,))
diff --git a/selftests/unit/test_tap.py b/selftests/unit/test_tap.py
index f703dd7fb4..40913c4440 100644
--- a/selftests/unit/test_tap.py
+++ b/selftests/unit/test_tap.py
@@ -259,7 +259,6 @@ def test_empty_line(self):
def test_unexpected(self):
events = self.parse_tap('1..1\ninvalid\nok 1')
self.assert_plan(events, count=1, late=False)
- self.assert_error(events)
self.assert_test(events, number=1, name='', result=TestResult.PASS)
self.assert_last(events)
|
Pyomo__pyomo-351 | GAMS stream output (tee) not working after logfile option addition
```
from pyomo.environ import *
m = ConcreteModel()
m.x = Var()
m.c = Constraint(expr=m.x >= 2)
m.o = Objective(expr=m.x)
SolverFactory('gams').solve(m, tee=True)
```
is failing to stream GAMS output after the merge of #302. Expected behavior is restored by checking out the preceding bcb316cca7f539ebe0f25f7321b5a28ccc964d51 commit.
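The fix is to forward `tee=tee` to `pyutilib.subprocess.run`. "Tee" here means the child's output is both captured and echoed live to the parent's stdout. A generic sketch of that behavior using only the standard library (not pyutilib's implementation):

```python
import subprocess
import sys

def run_with_tee(cmd, tee=False):
    """Run cmd, capturing stdout; if tee, also echo each line live."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    captured = []
    for line in proc.stdout:
        captured.append(line)
        if tee:
            sys.stdout.write(line)  # stream to the console as it arrives
    proc.wait()
    return proc.returncode, "".join(captured)

rc, out = run_with_tee([sys.executable, "-c", "print('solver log line')"], tee=True)
```

Without forwarding the flag, the runner only captures output, which is why the solver log stopped appearing on screen even though solves still succeeded.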
| [
{
"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineerin... | [
{
"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineerin... | diff --git a/pyomo/solvers/plugins/solvers/GAMS.py b/pyomo/solvers/plugins/solvers/GAMS.py
index 7e72710d2e0..3139d827934 100644
--- a/pyomo/solvers/plugins/solvers/GAMS.py
+++ b/pyomo/solvers/plugins/solvers/GAMS.py
@@ -753,7 +753,7 @@ def solve(self, *args, **kwds):
command.append("lf=" + str(logfile))
try:
- rc, _ = pyutilib.subprocess.run(command)
+ rc, _ = pyutilib.subprocess.run(command, tee=tee)
if keepfiles:
print("\nGAMS WORKING DIRECTORY: %s\n" % tmpdir)
diff --git a/pyomo/solvers/tests/checks/test_GAMS.py b/pyomo/solvers/tests/checks/test_GAMS.py
index 180658b5539..17661957c3e 100644
--- a/pyomo/solvers/tests/checks/test_GAMS.py
+++ b/pyomo/solvers/tests/checks/test_GAMS.py
@@ -11,7 +11,7 @@
import pyutilib.th as unittest
import pyutilib.subprocess
-from pyutilib.misc import setup_redirect, reset_redirect
+from pyutilib.misc import capture_output
from pyomo.environ import *
from six import StringIO
import contextlib, sys, os, shutil
@@ -223,28 +223,28 @@ class GAMSLogfileGmsTests(GAMSLogfileTestBase):
def test_no_tee(self):
with SolverFactory("gams", solver_io="gms") as opt:
- with redirected_subprocess_run() as output:
+ with capture_output() as output:
opt.solve(self.m, tee=False)
self._check_stdout(output.getvalue(), exists=False)
self._check_logfile(exists=False)
def test_tee(self):
with SolverFactory("gams", solver_io="gms") as opt:
- with redirected_subprocess_run() as output:
+ with capture_output() as output:
opt.solve(self.m, tee=True)
self._check_stdout(output.getvalue(), exists=True)
self._check_logfile(exists=False)
def test_logfile(self):
with SolverFactory("gams", solver_io="gms") as opt:
- with redirected_subprocess_run() as output:
+ with capture_output() as output:
opt.solve(self.m, logfile=self.logfile)
self._check_stdout(output.getvalue(), exists=False)
self._check_logfile(exists=True)
def test_tee_and_logfile(self):
with SolverFactory("gams", solver_io="gms") as opt:
- with redirected_subprocess_run() as output:
+ with capture_output() as output:
opt.solve(self.m, logfile=self.logfile, tee=True)
self._check_stdout(output.getvalue(), exists=True)
self._check_logfile(exists=True)
@@ -261,69 +261,33 @@ class GAMSLogfilePyTests(GAMSLogfileTestBase):
def test_no_tee(self):
with SolverFactory("gams", solver_io="python") as opt:
- with redirected_stdout() as output:
+ with capture_output() as output:
opt.solve(self.m, tee=False)
self._check_stdout(output.getvalue(), exists=False)
self._check_logfile(exists=False)
def test_tee(self):
with SolverFactory("gams", solver_io="python") as opt:
- with redirected_stdout() as output:
+ with capture_output() as output:
opt.solve(self.m, tee=True)
self._check_stdout(output.getvalue(), exists=True)
self._check_logfile(exists=False)
def test_logfile(self):
with SolverFactory("gams", solver_io="python") as opt:
- with redirected_stdout() as output:
+ with capture_output() as output:
opt.solve(self.m, logfile=self.logfile)
self._check_stdout(output.getvalue(), exists=False)
self._check_logfile(exists=True)
def test_tee_and_logfile(self):
with SolverFactory("gams", solver_io="python") as opt:
- with redirected_stdout() as output:
+ with capture_output() as output:
opt.solve(self.m, logfile=self.logfile, tee=True)
self._check_stdout(output.getvalue(), exists=True)
self._check_logfile(exists=True)
-@contextlib.contextmanager
-def redirected_stdout():
- """Temporarily redirect stdout into a string buffer."""
- output = StringIO()
- try:
- setup_redirect(output)
- yield output
- finally:
- reset_redirect()
-
-
-@contextlib.contextmanager
-def redirected_subprocess_run():
- """Temporarily redirect subprocess calls stdout into a string buffer."""
- output = StringIO()
- old_call = pyutilib.subprocess.run_command
-
- def run(*args, **kwargs):
- returncode, out = old_call(*args, **kwargs)
- output.write("\n".join(
- [
- s for s in out.splitlines()
- if not s.startswith("*** Could not write to console: /dev/tty")
- ]
- ))
- output
- return returncode, out
-
- try:
- pyutilib.subprocess.run_command = run
- pyutilib.subprocess.run = run
- yield output
- finally:
- pyutilib.subprocess.run_command = old_call
- pyutilib.subprocess.run = old_call
-
if __name__ == "__main__":
unittest.main()
|
zenml-io__zenml-1301 | [BUG]: unable to successfully deploy azure stack recipe
### Contact Details [Optional]
_No response_
### System Information
ZenML version: 0.33.0
Install path: /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/zenml
Python version: 3.10.8
Platform information: {'os': 'mac', 'mac_version': '13.2'}
Environment: native
Integrations: ['azure', 'github', 'kaniko', 'mlflow', 'pillow', 'scipy', 'sklearn']
[requirements.txt](https://github.com/zenml-io/zenml/files/10683614/requirements.txt)
### What happened?
`zenml stack recipe destroy azureml-minimal` doesn't recognize the "azureml-minimal" stack recipe name and throws the following error:
`TypeError: destroy() got multiple values for argument 'stack_recipe_name'`
Trying with quotes
`zenml stack recipe destroy "azureml-minimal"`
and variations of it, i.e.
`cd ./path/to/recipe && zenml stack recipe destroy .`
lead to the same error.
### Reproduction steps
1. zenml stack recipe pull azureml-minimal
2. update values
3. zenml stack recipe deploy azureml-minimal
4. zenml stack recipe destroy azureml-minimal
### Relevant log output
```shell
❯ zenml stack recipe destroy azureml-minimal
Using the default local database.
Running with active workspace: 'default' (repository)
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/fschulz/dev/learn-zenml/.venv/bin/zenml:8 in <module> │
│ │
│ 5 from zenml.cli.cli import cli │
│ 6 if __name__ == '__main__': │
│ 7 │ sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) │
│ ❱ 8 │ sys.exit(cli()) │
│ 9 │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1130 in __call__ │
│ │
│ 1127 │ │
│ 1128 │ def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any: │
│ 1129 │ │ """Alias for :meth:`main`.""" │
│ ❱ 1130 │ │ return self.main(*args, **kwargs) │
│ 1131 │
│ 1132 │
│ 1133 class Command(BaseCommand): │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1055 in main │
│ │
│ 1052 │ │ try: │
│ 1053 │ │ │ try: │
│ 1054 │ │ │ │ with self.make_context(prog_name, args, **extra) as ctx: │
│ ❱ 1055 │ │ │ │ │ rv = self.invoke(ctx) │
│ 1056 │ │ │ │ │ if not standalone_mode: │
│ 1057 │ │ │ │ │ │ return rv │
│ 1058 │ │ │ │ │ # it's not safe to `ctx.exit(rv)` here! │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1657 in invoke │
│ │
│ 1654 │ │ │ │ super().invoke(ctx) │
│ 1655 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │
│ 1656 │ │ │ │ with sub_ctx: │
│ ❱ 1657 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │
│ 1658 │ │ │
│ 1659 │ │ # In chain mode we create the contexts step by step, but after the │
│ 1660 │ │ # base command has been invoked. Because at that point we do not │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1657 in invoke │
│ │
│ 1654 │ │ │ │ super().invoke(ctx) │
│ 1655 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │
│ 1656 │ │ │ │ with sub_ctx: │
│ ❱ 1657 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │
│ 1658 │ │ │
│ 1659 │ │ # In chain mode we create the contexts step by step, but after the │
│ 1660 │ │ # base command has been invoked. Because at that point we do not │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1657 in invoke │
│ │
│ 1654 │ │ │ │ super().invoke(ctx) │
│ 1655 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │
│ 1656 │ │ │ │ with sub_ctx: │
│ ❱ 1657 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │
│ 1658 │ │ │
│ 1659 │ │ # In chain mode we create the contexts step by step, but after the │
│ 1660 │ │ # base command has been invoked. Because at that point we do not │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:1404 in invoke │
│ │
│ 1401 │ │ │ echo(style(message, fg="red"), err=True) │
│ 1402 │ │ │
│ 1403 │ │ if self.callback is not None: │
│ ❱ 1404 │ │ │ return ctx.invoke(self.callback, **ctx.params) │
│ 1405 │ │
│ 1406 │ def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: │
│ 1407 │ │ """Return a list of completions for the incomplete value. Looks │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:760 in invoke │
│ │
│ 757 │ │ │
│ 758 │ │ with augment_usage_errors(__self): │
│ 759 │ │ │ with ctx: │
│ ❱ 760 │ │ │ │ return __callback(*args, **kwargs) │
│ 761 │ │
│ 762 │ def forward( │
│ 763 │ │ __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/decorators.py:84 in │
│ new_func │
│ │
│ 81 │ │ │ │ │ " existing." │
│ 82 │ │ │ │ ) │
│ 83 │ │ │ │
│ ❱ 84 │ │ │ return ctx.invoke(f, obj, *args, **kwargs) │
│ 85 │ │ │
│ 86 │ │ return update_wrapper(t.cast(F, new_func), f) │
│ 87 │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/core.py:760 in invoke │
│ │
│ 757 │ │ │
│ 758 │ │ with augment_usage_errors(__self): │
│ 759 │ │ │ with ctx: │
│ ❱ 760 │ │ │ │ return __callback(*args, **kwargs) │
│ 761 │ │
│ 762 │ def forward( │
│ 763 │ │ __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 │
│ │
│ /Users/fschulz/dev/learn-zenml/.venv/lib/python3.10/site-packages/click/decorators.py:26 in │
│ new_func │
│ │
│ 23 │ """ │
│ 24 │ │
│ 25 │ def new_func(*args, **kwargs): # type: ignore │
│ ❱ 26 │ │ return f(get_current_context(), *args, **kwargs) │
│ 27 │ │
│ 28 │ return update_wrapper(t.cast(F, new_func), f) │
│ 29 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: destroy() got multiple values for argument 'stack_recipe_name'
```
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
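The `TypeError` above comes from two decorators each injecting a positional argument into `destroy` (the eventual fix removes the redundant `@click.pass_context`). A stdlib-only sketch, with hypothetical decorator names standing in for Click's, reproduces the same error shape:

```python
import functools

# Hypothetical stand-ins (not Click's real decorators): each one
# injects one positional argument into the wrapped function.
def pass_handler(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f("handler", *args, **kwargs)
    return wrapper

def pass_context(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f("ctx", *args, **kwargs)
    return wrapper

@pass_handler
@pass_context
def destroy(handler, stack_recipe_name):
    return (handler, stack_recipe_name)

# The CLI layer passes user arguments as keywords, so the two injected
# positionals over-fill the parameter slots and collide with the keyword:
try:
    destroy(stack_recipe_name="azureml-minimal")
except TypeError as e:
    print(e)  # destroy() got multiple values for argument 'stack_recipe_name'
```

Removing one of the two injecting decorators (as the diff below does for `@click.pass_context`) leaves exactly one extra positional, and the call resolves normally.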
| [
{
"content": "# Copyright (c) ZenML GmbH 2022. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at:\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\... | [
{
"content": "# Copyright (c) ZenML GmbH 2022. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at:\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\... | diff --git a/src/zenml/cli/stack_recipes.py b/src/zenml/cli/stack_recipes.py
index e028bffd570..63c54e6773f 100644
--- a/src/zenml/cli/stack_recipes.py
+++ b/src/zenml/cli/stack_recipes.py
@@ -1080,7 +1080,6 @@ def zen_server_exists() -> bool:
help="Relative path at which you want to install the stack_recipe(s)",
)
@pass_git_stack_recipes_handler
-@click.pass_context
def destroy(
git_stack_recipes_handler: GitStackRecipesHandler,
stack_recipe_name: str,
|
ivy-llc__ivy-20554 | rfftn
| [
{
"content": "# global\nimport ivy\nfrom ivy.functional.frontends.scipy.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n# fft\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.fft(x, axis, norm=norm, n=n)\n\n\n# ifft\n@to_ivy_arrays_and_back\... | [
{
"content": "# global\nimport ivy\nfrom ivy.functional.frontends.scipy.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n# fft\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.fft(x, axis, norm=norm, n=n)\n\n\n# ifft\n@to_ivy_arrays_and_back\... | diff --git a/ivy/functional/frontends/scipy/fft/fft.py b/ivy/functional/frontends/scipy/fft/fft.py
index f4e5866a82f4b..27480b2c42eac 100644
--- a/ivy/functional/frontends/scipy/fft/fft.py
+++ b/ivy/functional/frontends/scipy/fft/fft.py
@@ -40,3 +40,10 @@ def ifftn(
x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None
):
return ivy.ifftn(x, s=s, dim=axes, norm=norm)
+
+
+@to_ivy_arrays_and_back
+def rfftn(
+ x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None
+):
+ return ivy.rfftn(x, s=s, dim=axes, norm=norm)
diff --git a/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py b/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py
index bd664b4a6964a..d0f602d627cf8 100644
--- a/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py
+++ b/ivy_tests/test_ivy/test_frontends/test_scipy/test_fft/test_fft.py
@@ -305,3 +305,31 @@
# norm=norm,
# workers=workers,
# )
+
+
+# # rfftn
+# @handle_frontend_test(
+# fn_tree="scipy.fft.rfftn",
+# d_x_d_s_n_workers=x_and_ifftn(),
+# test_with_out=st.just(False),
+# )
+# def test_scipy_rfftn(
+# d_x_d_s_n_workers,
+# frontend,
+# test_flags,
+# fn_tree,
+# on_device,
+# ):
+# dtype, x, s, ax, norm, workers = d_x_d_s_n_workers
+# helpers.test_frontend_function(
+# input_dtypes=dtype,
+# frontend=frontend,
+# test_flags=test_flags,
+# fn_tree=fn_tree,
+# on_device=on_device,
+# x=x[0],
+# s=s,
+# axes=ax,
+# norm=norm,
+# workers=workers,
+# )
|
archlinux__archinstall-763 | in xfce, it need xarchiver for create archive & extract here-to
| [
{
"content": "# A desktop environment using \"Xfce4\"\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"xfce4\",\n\t\"xfce4-goodies\",\n\t\"pavucontrol\",\n\t\"lightdm\",\n\t\"lightdm-gtk-greeter\",\n\t\"gvfs\",\n\t\"network-manager-applet\",\n]\n\n\ndef _prep_function(*args, **kwar... | [
{
"content": "# A desktop environment using \"Xfce4\"\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"xfce4\",\n\t\"xfce4-goodies\",\n\t\"pavucontrol\",\n\t\"lightdm\",\n\t\"lightdm-gtk-greeter\",\n\t\"gvfs\",\n\t\"network-manager-applet\",\n\t\"xarchiver\"\n]\n\n\ndef _prep_funct... | diff --git a/profiles/kde.py b/profiles/kde.py
index c58f4f45dd..0679859372 100644
--- a/profiles/kde.py
+++ b/profiles/kde.py
@@ -9,6 +9,7 @@
"konsole",
"kate",
"dolphin",
+ "ark",
"sddm",
"plasma-wayland-session",
"egl-wayland",
diff --git a/profiles/xfce4.py b/profiles/xfce4.py
index 89c04f7cb5..2a4280864c 100644
--- a/profiles/xfce4.py
+++ b/profiles/xfce4.py
@@ -12,6 +12,7 @@
"lightdm-gtk-greeter",
"gvfs",
"network-manager-applet",
+ "xarchiver"
]
|
iterative__dvc-2457 | dvc remove CLI documentation inconsistency
`dvc remove` (without `targets`) prints help which states that `targets` are optional, and if not specified will remove all DVC-files. Clearly not the case.
```bash
$ dvc remove
[...]
targets DVC-files to remove. Optional. (Finds all DVC-files in the
workspace by default.)
```
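A minimal argparse sketch (not DVC's real parser) shows why the help text was misleading: `nargs="+"` makes a positional argument required, while a genuinely optional list would use `nargs="*"`:

```python
import argparse

parser = argparse.ArgumentParser(prog="dvc-remove-sketch")
# "+" means "one or more": omitting targets is a usage error, so the
# "Optional" wording in the help string never applied.
parser.add_argument("targets", nargs="+", help="DVC-files to remove.")

print(parser.parse_args(["a.dvc", "b.dvc"]).targets)  # ['a.dvc', 'b.dvc']

try:
    parser.parse_args([])
except SystemExit:
    print("usage error: targets is required")
```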
| [
{
"content": "from __future__ import unicode_literals\n\nimport argparse\nimport logging\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException\nfrom dvc.command.base import CmdBase, append_doc_link\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRemove(CmdBase):\n def _is_outs_only(... | [
{
"content": "from __future__ import unicode_literals\n\nimport argparse\nimport logging\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException\nfrom dvc.command.base import CmdBase, append_doc_link\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRemove(CmdBase):\n def _is_outs_only(... | diff --git a/dvc/command/remove.py b/dvc/command/remove.py
index b8e98200c9..814adcfae3 100644
--- a/dvc/command/remove.py
+++ b/dvc/command/remove.py
@@ -74,9 +74,6 @@ def add_parser(subparsers, parent_parser):
help="Force purge.",
)
remove_parser.add_argument(
- "targets",
- nargs="+",
- help="DVC-files to remove. Optional. "
- "(Finds all DVC-files in the workspace by default.)",
+ "targets", nargs="+", help="DVC-files to remove."
)
remove_parser.set_defaults(func=CmdRemove)
|
PrefectHQ__prefect-1168 | flow.update(flow) doesn't maintain mapped edges
```
from prefect import task, Flow, Parameter
@task
def add_one(x):
return x + 1
@task
def printit(p):
print(p)
with Flow("Test Flow") as test_flow:
test_list = Parameter("test_list")
add_one_list = add_one.map(test_list)
printit(add_one_list)
with Flow("Second Test") as second_test_flow:
second_test_flow.update(test_flow)
test_flow.run(test_list=[1, 2, 3])
second_test_flow.run(test_list=[1, 2, 3])
```
In this example, the `second_test_flow.run` will fail with the following error in the `add_one` task:
```
Unexpected error: TypeError('can only concatenate list (not "int") to list')
```
Is this the intended effect? If not, it should be fixable by updating `add_edge` call in `Flow.update` [here](https://github.com/PrefectHQ/prefect/blob/cfb186a4e6fb387e610d35cb530d10f4032f2da2/src/prefect/core/flow.py#L547-L552) to:
```
self.add_edge(
upstream_task=edge.upstream_task,
downstream_task=edge.downstream_task,
key=edge.key,
mapped=edge.mapped,
validate=validate,
)
```
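A dataclass stand-in (not Prefect's actual `Edge` class) makes the failure mode concrete: copying edges without forwarding `mapped` silently resets it to the default, which is why the mapped task later receives the whole list instead of one element at a time:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    upstream: str
    downstream: str
    mapped: bool = False

def update_losing_mapped(edges):
    # mirrors the buggy Flow.update: mapped is not forwarded, so it
    # silently falls back to the default False
    return [Edge(e.upstream, e.downstream) for e in edges]

def update_keeping_mapped(edges):
    # the proposed fix: forward mapped along with the other edge fields
    return [Edge(e.upstream, e.downstream, mapped=e.mapped) for e in edges]

src = [Edge("test_list", "add_one", mapped=True)]
print([e.mapped for e in update_losing_mapped(src)])   # [False]
print([e.mapped for e in update_keeping_mapped(src)])  # [True]
```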
| [
{
"content": "import collections\nimport copy\nimport functools\nimport inspect\nimport json\nimport os\nimport tempfile\nimport time\nimport uuid\nimport warnings\nfrom collections import Counter\nfrom typing import (\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n ... | [
{
"content": "import collections\nimport copy\nimport functools\nimport inspect\nimport json\nimport os\nimport tempfile\nimport time\nimport uuid\nimport warnings\nfrom collections import Counter\nfrom typing import (\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n ... | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6aff68b6a3d3..50eddd5ca746 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -28,6 +28,7 @@ These changes are available in the [master branch](https://github.com/PrefectHQ/
- Fix issue with Result Handlers deserializing incorrectly in Cloud - [#1112](https://github.com/PrefectHQ/prefect/issues/1112)
- Fix issue caused by breaking change in `marshmallow==3.0.0rc7` - [#1151](https://github.com/PrefectHQ/prefect/pull/1151)
- Fix issue with passing results to Prefect signals - [#1163](https://github.com/PrefectHQ/prefect/issues/1163)
+- Fix issue with `flow.update` not preserving mapped edges - [#1164](https://github.com/PrefectHQ/prefect/issues/1164)
### Breaking Changes
diff --git a/src/prefect/core/flow.py b/src/prefect/core/flow.py
index b86c97db35fe..5a97c97e47a6 100644
--- a/src/prefect/core/flow.py
+++ b/src/prefect/core/flow.py
@@ -548,6 +548,7 @@ def update(self, flow: "Flow", validate: bool = None) -> None:
upstream_task=edge.upstream_task,
downstream_task=edge.downstream_task,
key=edge.key,
+ mapped=edge.mapped,
validate=validate,
)
diff --git a/tests/core/test_flow.py b/tests/core/test_flow.py
index aca3d7e38526..e03fbfc8bf1c 100644
--- a/tests/core/test_flow.py
+++ b/tests/core/test_flow.py
@@ -697,7 +697,7 @@ def test_equality_based_on_reference_tasks(self):
assert f1 == f2
-def test_merge():
+def test_update():
f1 = Flow(name="test")
f2 = Flow(name="test")
@@ -709,10 +709,27 @@ def test_merge():
f2.add_edge(t2, t3)
f2.update(f1)
- assert f2.tasks == set([t1, t2, t3])
+ assert f2.tasks == {t1, t2, t3}
assert len(f2.edges) == 2
+def test_update_with_mapped_edges():
+ t1 = Task()
+ t2 = Task()
+ t3 = Task()
+
+ with Flow(name="test") as f1:
+ m = t2.map(upstream_tasks=[t1])
+
+ f2 = Flow(name="test")
+ f2.add_edge(t2, t3)
+
+ f2.update(f1)
+ assert f2.tasks == {m, t1, t2, t3}
+ assert len(f2.edges) == 2
+ assert len([e for e in f2.edges if e.mapped]) == 1
+
+
def test_upstream_and_downstream_error_msgs_when_task_is_not_in_flow():
f = Flow(name="test")
t = Task()
|
Bitmessage__PyBitmessage-726 | Trouble sending on multicor machines on 0.4.4
I've seen this on both an OSX box (8 cores) and a linux box (4 cores). I was only able to do the full repro on linux, as my `keys.dat` file prevented me from going back to 0.4.3 on the OSX box.
1. Check out v0.4.3.
2. Open top
3. Open bitmessage.
4. Send a message.
5. Processes will start up for each core in top to calculate the PoW more quickly. Message will send.
6. Close bitmessage.
7. Check out `ProtoV3`
8. Send a message.
9. Processes will fire up in top. They'll consume 100% cpu for a few minutes. One by one, the CPU usage on each process will drop to zero.
10. The bitmessage app will still say that we're doing work to calculate the PoW. The message never sends.
| [
{
"content": "#!/usr/bin/env python2.7\n# Copyright (c) 2012 Jonathan Warren\n# Copyright (c) 2012 The Bitmessage developers\n# Distributed under the MIT/X11 software license. See the accompanying\n# file COPYING or http://www.opensource.org/licenses/mit-license.php.\n\n# Right now, PyBitmessage only support co... | [
{
"content": "#!/usr/bin/env python2.7\n# Copyright (c) 2012 Jonathan Warren\n# Copyright (c) 2012 The Bitmessage developers\n# Distributed under the MIT/X11 software license. See the accompanying\n# file COPYING or http://www.opensource.org/licenses/mit-license.php.\n\n# Right now, PyBitmessage only support co... | diff --git a/src/bitmessagemain.py b/src/bitmessagemain.py
index fffe99e722..491b82f045 100755
--- a/src/bitmessagemain.py
+++ b/src/bitmessagemain.py
@@ -172,7 +172,6 @@ def start(self, daemon=False):
curses = True
signal.signal(signal.SIGINT, helper_generic.signal_handler)
- signal.signal(signal.SIGTERM, helper_generic.signal_handler)
# signal.signal(signal.SIGINT, signal.SIG_DFL)
helper_bootstrap.knownNodes()
|
python-discord__bot-935 | Modlog Events Should Check for DM
At least one of the event handlers in the modlog does not properly account for messages sent in DMs and attempts a guild ID comparison, even though DM messages have no guild. This should be guarded against across the cog.
```
AttributeError: 'NoneType' object has no attribute 'id'
File "discord/client.py", line 312, in _run_event
await coro(*args, **kwargs)
File "bot/cogs/moderation/modlog.py", line 555, in on_message_delete
if message.guild.id != GuildConstant.id or channel.id in GuildConstant.modlog_blacklist:
Unhandled exception in on_message_delete.
```
Sentry Issue: [BOT-44](https://sentry.io/organizations/python-discord/issues/1656406094/?referrer=github_integration)
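A minimal sketch of the needed guard (using `SimpleNamespace` stand-ins and a hypothetical guild ID, not the bot's real objects): DM messages have `guild` set to `None`, so the handler must bail out before touching `message.guild.id`:

```python
from types import SimpleNamespace

GUILD_ID = 1234  # hypothetical constant standing in for GuildConstant.id

def handle_delete(message):
    # Guard first: messages sent in DMs have no guild attached, so any
    # access to message.guild.id would raise AttributeError.
    if not message.guild:
        return "ignored (DM)"
    if message.guild.id != GUILD_ID:
        return "ignored (other guild)"
    return "logged"

dm = SimpleNamespace(guild=None)
guild_msg = SimpleNamespace(guild=SimpleNamespace(id=GUILD_ID))
print(handle_delete(dm))         # ignored (DM)
print(handle_delete(guild_msg))  # logged
```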
| [
{
"content": "import asyncio\nimport difflib\nimport itertools\nimport logging\nimport typing as t\nfrom datetime import datetime\nfrom itertools import zip_longest\n\nimport discord\nfrom dateutil.relativedelta import relativedelta\nfrom deepdiff import DeepDiff\nfrom discord import Colour\nfrom discord.abc im... | [
{
"content": "import asyncio\nimport difflib\nimport itertools\nimport logging\nimport typing as t\nfrom datetime import datetime\nfrom itertools import zip_longest\n\nimport discord\nfrom dateutil.relativedelta import relativedelta\nfrom deepdiff import DeepDiff\nfrom discord import Colour\nfrom discord.abc im... | diff --git a/bot/cogs/moderation/modlog.py b/bot/cogs/moderation/modlog.py
index 9d28030d90..41472c64cc 100644
--- a/bot/cogs/moderation/modlog.py
+++ b/bot/cogs/moderation/modlog.py
@@ -555,6 +555,10 @@ async def on_message_delete(self, message: discord.Message) -> None:
channel = message.channel
author = message.author
+ # Ignore DMs.
+ if not message.guild:
+ return
+
if message.guild.id != GuildConstant.id or channel.id in GuildConstant.modlog_blacklist:
return
|
lightly-ai__lightly-656 | Incorrect inputsize for BarlowTwins Lightning Example Code
Should the input_size in [1] be `32` instead of `224`?
In [2], we use `input_size=32`.
[1] https://github.com/lightly-ai/lightly/blob/master/examples/pytorch_lightning/barlowtwins.py#L44
[2] https://github.com/lightly-ai/lightly/blob/master/examples/pytorch/barlowtwins.py#L35
| [
{
"content": "import torch\nfrom torch import nn\nimport torchvision\nimport pytorch_lightning as pl\n\nfrom lightly.data import LightlyDataset\nfrom lightly.data import ImageCollateFunction\nfrom lightly.loss import BarlowTwinsLoss\nfrom lightly.models.modules import BarlowTwinsProjectionHead\n\n\nclass Barlow... | [
{
"content": "import torch\nfrom torch import nn\nimport torchvision\nimport pytorch_lightning as pl\n\nfrom lightly.data import LightlyDataset\nfrom lightly.data import ImageCollateFunction\nfrom lightly.loss import BarlowTwinsLoss\nfrom lightly.models.modules import BarlowTwinsProjectionHead\n\n\nclass Barlow... | diff --git a/examples/pytorch_lightning/barlowtwins.py b/examples/pytorch_lightning/barlowtwins.py
index fa896134f..697c77bc4 100644
--- a/examples/pytorch_lightning/barlowtwins.py
+++ b/examples/pytorch_lightning/barlowtwins.py
@@ -41,7 +41,7 @@ def configure_optimizers(self):
# or create a dataset from a folder containing images or videos:
# dataset = LightlyDataset("path/to/folder")
-collate_fn = ImageCollateFunction(input_size=224)
+collate_fn = ImageCollateFunction(input_size=32)
dataloader = torch.utils.data.DataLoader(
dataset,
|
qtile__qtile-1624 | widget.WindowTabs default selected task indicator produces invalid pango markup
# Issue description
The default _selected task indicator_ (``("<", ">")``) for ``widget.WindowTabs`` produces invalid pango markup and thus the call to ``pango_parse_markup`` fails.
It leads to invalid tag names for single word window names (e.g. ``<terminal>``) or invalid syntax for multiword names (e.g. ``<qtile - Mozilla Firefox>``).
Possible fixes:
- change default to e.g. ``('[', ']')`` or different foreground color
- default to no markup
- at least add a note in the documentation, but defaults should be working
If this is wanted, I'm happy to prepare a PR based on the outcome of the discussion here.
# Qtile version
Qtile version ``0.15.1``. Also [latest revision of libqtile/widget/windowtabs.py](https://github.com/qtile/qtile/blob/d47347ad0f37b4a5735faa8b7061f484e8cf81d9/libqtile/widget/windowtabs.py) (d47347a)
# Configuration
Use default ``widget.WindowTabs()``
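Pango markup is XML-like, so well-formedness can be sketched with an XML parse (a stand-in for `pango_parse_markup`, not qtile's real code path). The old default turns every window name into a bogus tag; a `<b>`/`</b>` pair is valid markup, and escaping the title additionally handles names that themselves contain `<` or `&`:

```python
from xml.etree import ElementTree
from xml.sax.saxutils import escape

def wrap(name, selected):
    before, after = selected
    return before + name + after

def is_valid_markup(s):
    # Pango markup documents are parsed as XML; wrap in a root element
    # the way pango does internally.
    try:
        ElementTree.fromstring("<markup>" + s + "</markup>")
        return True
    except ElementTree.ParseError:
        return False

print(is_valid_markup(wrap("terminal", ("<", ">"))))       # False
print(is_valid_markup(wrap("terminal", ("<b>", "</b>"))))  # True
# Escaping also protects titles that contain markup characters:
print(is_valid_markup(wrap(escape("<terminal>"), ("<b>", "</b>"))))  # True
```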
| [
{
"content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated docume... | [
{
"content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated docume... | diff --git a/libqtile/widget/windowtabs.py b/libqtile/widget/windowtabs.py
index d6ec7e4c8a..6261cb19d0 100644
--- a/libqtile/widget/windowtabs.py
+++ b/libqtile/widget/windowtabs.py
@@ -35,7 +35,7 @@ class WindowTabs(base._TextBox):
orientations = base.ORIENTATION_HORIZONTAL
defaults = [
("separator", " | ", "Task separator text."),
- ("selected", ("<", ">"), "Selected task indicator"),
+ ("selected", ("<b>", "</b>"), "Selected task indicator"),
]
def __init__(self, **config):
|
gratipay__gratipay.com-3792 | log spam during test
What's this `TypeError` about? Seems spurious ...
```
pid-13897 thread-4384100352 (Thread-1) Traceback (most recent call last):
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/cron.py", line 26, in f
pid-13897 thread-4384100352 (Thread-1) func()
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/main.py", line 82, in <lambda>
pid-13897 thread-4384100352 (Thread-1) cron(env.update_cta_every, lambda: utils.update_cta(website))
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/utils/__init__.py", line 145, in update_cta
pid-13897 thread-4384100352 (Thread-1) website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0
pid-13897 thread-4384100352 (Thread-1) TypeError: unsupported operand type(s) for /: 'int' and 'tuple'
```
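A few lines (a sketch of the `db.one(...)` fallback behavior, not the real postgres library) show why the default mattered: a tuple default is truthy, so the `if nusers` guard doesn't short-circuit and the division hits a tuple:

```python
def one(rows, default):
    # stand-in for db.one(...): return the single value, else the default
    return rows[0] if rows else default

def support_current(nreceiving_from, nusers):
    return int(round(nreceiving_from / nusers * 100)) if nusers else 0

nusers = one([], default=(0.0, 0))   # buggy default: a truthy tuple
try:
    support_current(5, nusers)
except TypeError as e:
    print(e)  # unsupported operand type(s) for /: 'int' and 'tuple'

nusers = one([], default=0)          # fixed default: a falsy scalar
print(support_current(5, nusers))    # 0
```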
| [
{
"content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cu... | [
{
"content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cu... | diff --git a/gratipay/utils/__init__.py b/gratipay/utils/__init__.py
index ed2a5adb80..8624a6b076 100644
--- a/gratipay/utils/__init__.py
+++ b/gratipay/utils/__init__.py
@@ -136,7 +136,7 @@ def update_cta(website):
nusers = website.db.one("""
SELECT nusers FROM paydays
ORDER BY ts_end DESC LIMIT 1
- """, default=(0.0, 0))
+ """, default=0)
nreceiving_from = website.db.one("""
SELECT nreceiving_from
FROM teams
|
jazzband__django-oauth-toolkit-1090 | Default value for CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL
@merito Sorry this is after the fact but wouldn't a default value of 0 be best, especially since the sleep is always executed even if the batch is tiny.
https://github.com/merito/django-oauth-toolkit/blob/725c3c9d8927379c9808abd1badb4fcd9ff1cbaa/oauth2_provider/models.py#L636
_Originally posted by @n2ygk in https://github.com/jazzband/django-oauth-toolkit/pull/969#discussion_r782459085_
| [
{
"content": "\"\"\"\nThis module is largely inspired by django-rest-framework settings.\n\nSettings for the OAuth2 Provider are all namespaced in the OAUTH2_PROVIDER setting.\nFor example your project's `settings.py` file might look like this:\n\nOAUTH2_PROVIDER = {\n \"CLIENT_ID_GENERATOR_CLASS\":\n ... | [
{
"content": "\"\"\"\nThis module is largely inspired by django-rest-framework settings.\n\nSettings for the OAuth2 Provider are all namespaced in the OAUTH2_PROVIDER setting.\nFor example your project's `settings.py` file might look like this:\n\nOAUTH2_PROVIDER = {\n \"CLIENT_ID_GENERATOR_CLASS\":\n ... | diff --git a/docs/settings.rst b/docs/settings.rst
index 49460bc0e..01baaaf4b 100644
--- a/docs/settings.rst
+++ b/docs/settings.rst
@@ -345,10 +345,13 @@ The size of delete batches used by ``cleartokens`` management command.
CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Default: ``0.1``
+Default: ``0``
Time of sleep in seconds used by ``cleartokens`` management command between batch deletions.
+Set this to a non-zero value (e.g. `0.1`) to add a pause between batch sizes to reduce system
+load when clearing large batches of expired tokens.
+
Settings imported from Django project
--------------------------
diff --git a/oauth2_provider/settings.py b/oauth2_provider/settings.py
index 22e067716..3b7dea3f8 100644
--- a/oauth2_provider/settings.py
+++ b/oauth2_provider/settings.py
@@ -102,7 +102,7 @@
# Should only be required in testing.
"ALWAYS_RELOAD_OAUTHLIB_CORE": False,
"CLEAR_EXPIRED_TOKENS_BATCH_SIZE": 10000,
- "CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL": 0.1,
+ "CLEAR_EXPIRED_TOKENS_BATCH_INTERVAL": 0,
}
# List of settings that cannot be empty
|
googleapis__google-api-python-client-1629 | Python 3.10 compatibility issue
#### Environment details
- OS type and version: Windows 10
- Python version: `python --version` 3.10.1
- pip version: `pip --version` 21.2.4
- `google-api-python-client` version: `pip show google-api-python-client` - 2.33.0
uritemplate package 3.0.0 is not compatible with python 3.10. Need to update the requirements.
Partial Stack Trace
service = build('gmail', 'v1', credentials=creds)
File "C:\JA\Envs\GIC\lib\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\JA\Envs\GIC\lib\site-packages\googleapiclient\discovery.py", line 219, in build
requested_url = uritemplate.expand(discovery_url, params)
File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\api.py", line 33, in expand
return URITemplate(uri).expand(var_dict, **kwargs)
File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\template.py", line 132, in expand
return self._expand(_merge(var_dict, kwargs), False)
File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\template.py", line 97, in _expand
expanded.update(v.expand(expansion))
File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\variable.py", line 338, in expand
expanded = expansion(name, value, opts['explode'], opts['prefix'])
File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\variable.py", line 278, in _string_expansion
if dict_test(value) or tuples:
File "C:\JA\Envs\GIC\lib\site-packages\uritemplate\variable.py", line 363, in dict_test
return isinstance(value, (dict, collections.MutableMapping))
AttributeError: module 'collections' has no attribute 'MutableMapping'
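The underlying incompatibility is easy to demonstrate: Python 3.10 removed the long-deprecated ABC aliases from `collections`, so the `collections.abc` spelling (which newer `uritemplate` releases use) is required. A minimal sketch of the affected `dict_test` check:

```python
import collections.abc

def dict_test(value):
    # collections.MutableMapping was removed in Python 3.10;
    # collections.abc.MutableMapping works on all supported versions.
    return isinstance(value, collections.abc.MutableMapping)

print(dict_test({"api": "gmail", "version": "v1"}))  # True
print(dict_test(["gmail", "v1"]))                    # False
```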
| [
{
"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unles... | [
{
"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unles... | diff --git a/setup.py b/setup.py
index a311f168514..344ba624a40 100644
--- a/setup.py
+++ b/setup.py
@@ -42,7 +42,7 @@
# Until this issue is closed
# https://github.com/googleapis/google-cloud-python/issues/10566
"google-api-core>=1.21.0,<3.0.0dev",
- "uritemplate>=3.0.0,<5",
+ "uritemplate>=3.0.1,<5",
]
package_root = os.path.abspath(os.path.dirname(__file__))
diff --git a/testing/constraints-3.6.txt b/testing/constraints-3.6.txt
index 0c0e7a2e53b..35fb5748093 100644
--- a/testing/constraints-3.6.txt
+++ b/testing/constraints-3.6.txt
@@ -9,4 +9,4 @@ httplib2==0.15.0
google-auth==1.16.0
google-auth-httplib2==0.0.3
google-api-core==1.21.0
-uritemplate==3.0.0
\ No newline at end of file
+uritemplate==3.0.1
\ No newline at end of file
|
internetarchive__openlibrary-5923 | Reversion: author searches with wildcards fail
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
Wildcard asterisk fails in author name search, but works in "All" search. This used to work.
### Evidence / Screenshot (if possible)


### Relevant url?
<!-- `https://openlibrary.org/...` -->
https://openlibrary.org/search?q=Jon+kabat*&mode=everything
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to search bar
2. Enter a partial name ending with *
3. Select "Author" as search type
4. Run query
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: nothing found
* Expected: list of matching authors
### Details
- **Logged in (Y/N)?** y
- **Browser type/version?** Safari or Chrome
- **Operating system?** iPadOS
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| [
{
"content": "from datetime import datetime\nimport copy\nimport json\nimport logging\nimport random\nimport re\nimport string\nfrom typing import List, Tuple, Any, Union, Optional, Iterable, Dict\nfrom unicodedata import normalize\nfrom json import JSONDecodeError\nimport requests\nimport web\nfrom lxml.etree ... | [
{
"content": "from datetime import datetime\nimport copy\nimport json\nimport logging\nimport random\nimport re\nimport string\nfrom typing import List, Tuple, Any, Union, Optional, Iterable, Dict\nfrom unicodedata import normalize\nfrom json import JSONDecodeError\nimport requests\nimport web\nfrom lxml.etree ... | diff --git a/openlibrary/plugins/worksearch/code.py b/openlibrary/plugins/worksearch/code.py
index 7f1991f46bf..50c876c70d2 100644
--- a/openlibrary/plugins/worksearch/code.py
+++ b/openlibrary/plugins/worksearch/code.py
@@ -1108,7 +1108,7 @@ def get_results(self, q, offset=0, limit=100):
'work_count',
]
q = escape_colon(escape_bracket(q), valid_fields)
- q_has_fields = ':' in q.replace(r'\:', '')
+ q_has_fields = ':' in q.replace(r'\:', '') or '*' in q
d = run_solr_search(
solr_select_url,
|
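The one-line fix in the diff above routes wildcard queries down the fielded-search path. A minimal sketch of the predicate (hypothetical helper name, not Open Library's actual code):

```python
def query_has_fields(q):
    """Treat the query as fielded if it has an unescaped ':' or a '*' wildcard."""
    return ':' in q.replace(r'\:', '') or '*' in q


assert query_has_fields('author:tolkien')    # explicit field
assert query_has_fields('Jon kabat*')        # wildcard now counts as fielded
assert not query_has_fields('jon kabat')     # plain author name
assert not query_has_fields(r'a\:b')         # escaped colon is ignored
```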
getpelican__pelican-1426 | DOCUTILS_SETTINGS not documented nor initialized
DOCUTILS_SETTINGS was introduced in #864, but it has not been documented nor is it initialized in settings.py
| [
{
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport copy\nimport inspect\nimport os\nimport locale\nimport logging\n\ntry:\n # SourceFileLoader is the recommended way in 3.3+\n from importlib.machinery import SourceFileLoader\n load_sourc... | [
{
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport copy\nimport inspect\nimport os\nimport locale\nimport logging\n\ntry:\n # SourceFileLoader is the recommended way in 3.3+\n from importlib.machinery import SourceFileLoader\n load_sourc... | diff --git a/docs/settings.rst b/docs/settings.rst
index df2fa722a..3f4f21471 100644
--- a/docs/settings.rst
+++ b/docs/settings.rst
@@ -58,6 +58,10 @@ Setting name (followed by default value, if any)
``datetime.datetime`` constructor.
``DEFAULT_METADATA = ()`` The default metadata you want to use for all articles
and pages.
+``DOCUTILS_SETTINGS = {}`` Extra configuration settings for the docutils publisher
+ (applicable only to reStructuredText). See `Docutils
+ Configuration`_ settings for more details.
+
``FILENAME_METADATA =`` ``'(?P<date>\d{4}-\d{2}-\d{2}).*'`` The regexp that will be used to extract any metadata
from the filename. All named groups that are matched
will be set in the metadata object.
@@ -819,3 +823,4 @@ Example settings
.. _Jinja custom filters documentation: http://jinja.pocoo.org/docs/api/#custom-filters
+.. _Docutils Configuration: http://docutils.sourceforge.net/docs/user/config.html
diff --git a/pelican/settings.py b/pelican/settings.py
index c04cc5d04..a283b2bed 100644
--- a/pelican/settings.py
+++ b/pelican/settings.py
@@ -49,6 +49,7 @@
'SITENAME': 'A Pelican Blog',
'DISPLAY_PAGES_ON_MENU': True,
'DISPLAY_CATEGORIES_ON_MENU': True,
+ 'DOCUTILS_SETTINGS': {},
'OUTPUT_SOURCES': False,
'OUTPUT_SOURCES_EXTENSION': '.text',
'USE_FOLDER_AS_CATEGORY': True,
|
ethereum__web3.py-475 | web3.auto raises unclear exception if no client is live
* Version: 4.0.0-beta.1
* OS: linux
### What was wrong?
If no client is live, I expect w3 to return as `None` in this case, but instead I get an exception.
```
from web3.auto import w3
```
cc @Sebohe
> ~/code/ethtoken/ethtoken/main.py in eip20_token(address, w3, **kwargs)
> 23 '''
> 24 if w3 is None:
> ---> 25 from web3.auto import w3
> 26 if w3 is None:
> 27 raise RuntimeError("Could not auto-detect web3 connection, please supply it as arg w3")
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/auto/__init__.py in <module>()
> 2
> 3 for connector in ('ipc', 'http'):
> ----> 4 connection = importlib.import_module('web3.auto.' + connector)
> 5 if connection.w3:
> 6 w3 = connection.w3
>
> /usr/lib/python3.5/importlib/__init__.py in import_module(name, package)
> 124 break
> 125 level += 1
> --> 126 return _bootstrap._gcd_import(name[level:], package, level)
> 127
> 128
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/auto/ipc.py in <module>()
> 14
> 15
> ---> 16 w3 = connect()
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/auto/ipc.py in connect()
> 8 def connect():
> 9 w3 = Web3(IPCProvider(get_default_ipc_path()))
> ---> 10 if w3.isConnected():
> 11 return w3
> 12
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/main.py in isConnected(self)
> 155 def isConnected(self):
> 156 for provider in self.providers:
> --> 157 if provider.isConnected():
> 158 return True
> 159 else:
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/base.py in isConnected(self)
> 73 def isConnected(self):
> 74 try:
> ---> 75 response = self.make_request('web3_clientVersion', [])
> 76 except IOError:
> 77 return False
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/ipc.py in make_request(self, method, params)
> 139 request = self.encode_rpc_request(method, params)
> 140
> --> 141 with self._lock, self._socket as sock:
> 142 sock.sendall(request)
> 143 raw_response = b""
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/ipc.py in __enter__(self)
> 37 def __enter__(self):
> 38 if not self.sock:
> ---> 39 self.sock = get_ipc_socket(self.ipc_path)
> 40 return self.sock
> 41
>
> ~/code/ethtoken/venv/lib/python3.5/site-packages/web3/providers/ipc.py in get_ipc_socket(ipc_path, timeout)
> 24 else:
> 25 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
> ---> 26 sock.connect(ipc_path)
> 27 sock.settimeout(timeout)
> 28 return sock
>
> TypeError: a bytes-like object is required, not 'NoneType'
### How can it be fixed?
* Add a new test to verify the situation, and prevent regressions
* `isConnected` should short-circuit with something like: `if self.ipc_path is None: return False`
| [
{
"content": "import os\nimport socket\nimport sys\nimport threading\n\ntry:\n from json import JSONDecodeError\nexcept ImportError:\n JSONDecodeError = ValueError\n\nfrom web3.utils.threads import (\n Timeout,\n)\n\nfrom .base import JSONBaseProvider\n\n\ndef get_ipc_socket(ipc_path, timeout=0.1):\n ... | [
{
"content": "import os\nimport socket\nimport sys\nimport threading\n\ntry:\n from json import JSONDecodeError\nexcept ImportError:\n JSONDecodeError = ValueError\n\nfrom web3.utils.threads import (\n Timeout,\n)\n\nfrom .base import JSONBaseProvider\n\n\ndef get_ipc_socket(ipc_path, timeout=0.1):\n ... | diff --git a/tests/core/providers/test_ipc_provider.py b/tests/core/providers/test_ipc_provider.py
new file mode 100644
index 0000000000..9a445a1031
--- /dev/null
+++ b/tests/core/providers/test_ipc_provider.py
@@ -0,0 +1,11 @@
+from web3.providers.ipc import (
+ IPCProvider,
+)
+
+
+def test_ipc_no_path():
+ """
+ IPCProvider.isConnected() returns False when no path is supplied
+ """
+ ipc = IPCProvider(None)
+ assert ipc.isConnected() is False
diff --git a/web3/providers/ipc.py b/web3/providers/ipc.py
index 60173b7c5f..5dcbb1406d 100644
--- a/web3/providers/ipc.py
+++ b/web3/providers/ipc.py
@@ -35,6 +35,9 @@ def __init__(self, ipc_path):
self.ipc_path = ipc_path
def __enter__(self):
+ if not self.ipc_path:
+ raise FileNotFoundError("cannot connect to IPC socket at path: %r" % self.ipc_path)
+
if not self.sock:
self.sock = get_ipc_socket(self.ipc_path)
return self.sock
|
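The short-circuit suggested in the issue can be sketched as follows (a simplified stand-in: the real `IPCProvider` goes through a locked socket context manager, and the merged fix ultimately raises in `__enter__` instead):

```python
import socket


class IPCProviderSketch:
    """Minimal stand-in for web3's IPCProvider connection check."""

    def __init__(self, ipc_path):
        self.ipc_path = ipc_path

    def isConnected(self):
        if self.ipc_path is None:   # short-circuit: no path, nothing to connect to
            return False
        try:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.ipc_path)
        except (IOError, OSError):
            return False
        sock.close()
        return True


assert IPCProviderSketch(None).isConnected() is False
assert IPCProviderSketch('/nonexistent/geth.ipc').isConnected() is False
```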
cocotb__cocotb-208 | Redhat 6.5 can no longer raise a TestError
Regressions report pass, but the number of tests has gone down on some simulators. Icarus, for instance, shows this.
```
0.00ns [34mINFO [39m cocotb.gpi gpi_embed.c:213 in embed_sim_init [34mRunning on Icarus Verilog version 0.10.0 (devel)[39m
0.00ns [34mINFO [39m cocotb.gpi gpi_embed.c:214 in embed_sim_init [34mPython interpreter initialised and cocotb loaded![39m
0.00ns [34mINFO [39m cocotb.gpi __init__.py:96 in _initialise_testbench [34mSeeding Python random module with 1421853826[39m
0.00ns [34mINFO [39m cocotb.gpi __init__.py:110 in _initialise_testbench [34mRunning tests with Cocotb v0.5a from /var/lib/jenkins/workspace/cocotb_icarus_x86_64[39m
0.00ns [31mERROR [39m cocotb.coroutine.fail decorators.py:99 in __init__ [31mtest_duplicate_yield isn't a value coroutine! Did you use the yield keyword?[39m
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/__init__.py", line 128, in _initialise_testbench
regression.initialise()
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/regression.py", line 123, in initialise
test = thing(self._dut)
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/decorators.py", line 356, in _wrapped_test
raise_error(self, str(e))
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/result.py", line 42, in raise_error
if sys.version_info.major >= 3:
AttributeError: 'tuple' object has no attribute 'major'
```
| [
{
"content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source ... | [
{
"content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source ... | diff --git a/cocotb/result.py b/cocotb/result.py
index 8ff8d5f6b0..fe5b935e36 100644
--- a/cocotb/result.py
+++ b/cocotb/result.py
@@ -39,7 +39,8 @@ def raise_error(obj, msg):
msg is a string
"""
exc_type, exc_value, exc_traceback = sys.exc_info()
- if sys.version_info.major >= 3:
+ # 2.6 cannot use named access
+ if sys.version_info[0] >= 3:
buff = StringIO()
traceback.print_tb(exc_traceback, file=buff)
else:
|
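The underlying incompatibility is easy to demonstrate: on Python 2.6 `sys.version_info` is a plain tuple, so only indexed access is portable, which is exactly what the one-line fix above switches to. On any modern interpreter both forms agree:

```python
import sys

# Indexed access works on every Python version, including 2.6 where
# sys.version_info is a plain tuple without named fields.
major = sys.version_info[0]

# Named access (sys.version_info.major) only exists on Python 2.7+.
assert major == sys.version_info.major
assert major >= 2
```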
readthedocs__readthedocs.org-10572 | Most recent available `mambaforge=4.10` is simply too old
Hello guys, just wanted to ask you if it's possible to have a more modern version available for `mambaforge`. The best and latest version that can be sourced on RTD via the configuration file is 4.10, which is simply too old (at most conda 4.10 and mamba 0.19), and updating to a modern mamba doesn't work, as you can see from my configuration-file change in https://github.com/ESMValGroup/ESMValTool/pull/3310/files with output in https://readthedocs.org/projects/esmvaltool/builds/21390633/; mamba is stuck at 0.19.0, which in turn slows environment creation down to around 10 minutes. (For more recent condas, updating mamba to something like >=1.4.8 works very well and updates conda to 23.3 or 23.4 too, but in this case the base versions are too old.) If you need any help whatsoever, I offer to help, and once more, many thanks for your great work on RTD :beer:
| [
{
"content": "\"\"\"\nDefine constants here to allow import them without any external dependency.\n\nThere are situations where we want to have access to these values without Django installed\n(e.g. common/dockerfiles/tasks.py)\n\nNote these constants where previously defined as Django settings in ``readthedocs... | [
{
"content": "\"\"\"\nDefine constants here to allow import them without any external dependency.\n\nThere are situations where we want to have access to these values without Django installed\n(e.g. common/dockerfiles/tasks.py)\n\nNote these constants where previously defined as Django settings in ``readthedocs... | diff --git a/docs/user/config-file/v2.rst b/docs/user/config-file/v2.rst
index 6984e9298da..2e0a96e580f 100644
--- a/docs/user/config-file/v2.rst
+++ b/docs/user/config-file/v2.rst
@@ -330,6 +330,7 @@ You can use several interpreters and versions, from CPython, Miniconda, and Mamb
- ``3.11``
- ``miniconda3-4.7``
- ``mambaforge-4.10``
+ - ``mambaforge-22.9``
build.tools.nodejs
``````````````````
diff --git a/docs/user/guides/conda.rst b/docs/user/guides/conda.rst
index c1201e401f9..7f5f82c0b71 100644
--- a/docs/user/guides/conda.rst
+++ b/docs/user/guides/conda.rst
@@ -126,7 +126,7 @@ with these contents:
build:
os: "ubuntu-20.04"
tools:
- python: "mambaforge-4.10"
+ python: "mambaforge-22.9"
conda:
environment: environment.yml
diff --git a/readthedocs/builds/constants_docker.py b/readthedocs/builds/constants_docker.py
index c49434e58fd..4612ca02822 100644
--- a/readthedocs/builds/constants_docker.py
+++ b/readthedocs/builds/constants_docker.py
@@ -40,6 +40,7 @@
"3": "3.11.4",
"miniconda3-4.7": "miniconda3-4.7.12",
"mambaforge-4.10": "mambaforge-4.10.3-10",
+ "mambaforge-22.9": "mambaforge-22.9.0-3",
},
"nodejs": {
"14": "14.20.1",
diff --git a/readthedocs/rtd_tests/fixtures/spec/v2/schema.json b/readthedocs/rtd_tests/fixtures/spec/v2/schema.json
index 438a050753d..d51e2c2b97f 100644
--- a/readthedocs/rtd_tests/fixtures/spec/v2/schema.json
+++ b/readthedocs/rtd_tests/fixtures/spec/v2/schema.json
@@ -177,7 +177,8 @@
"3.10",
"3.11",
"miniconda3-4.7",
- "mambaforge-4.10"
+ "mambaforge-4.10",
+ "mambaforge-22.9"
]
},
"nodejs": {
|
pantsbuild__pants-4714 | Overlaid Config Files are applied in reverse order
http://www.pantsbuild.org/options.html#overlaying-config-files documents that one can do:
$ ./pants --pants-config-files=a.ini --pants-config-files=b.ini options --options-name="level"
level = info (from CONFIG in a.ini)
$ cat a.ini
[GLOBAL]
level: info
$ cat b.ini
[GLOBAL]
level: debug
According to the docs, the second --pants-config-files should overlay the earlier values, but this is not happening :/
| [
{
"content": "# coding=utf-8\n# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_state... | [
{
"content": "# coding=utf-8\n# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import (absolute_import, division, generators, nested_scopes, print_function,\n unicode_literals, with_state... | diff --git a/src/python/pants/option/config.py b/src/python/pants/option/config.py
index 10e3c3561d0..554e38aa4f3 100644
--- a/src/python/pants/option/config.py
+++ b/src/python/pants/option/config.py
@@ -218,7 +218,8 @@ def configs(self):
return self._configs
def sources(self):
- return list(itertools.chain.from_iterable(cfg.sources() for cfg in self._configs))
+ # NB: Present the sources in the order we were given them.
+ return list(itertools.chain.from_iterable(cfg.sources() for cfg in reversed(self._configs)))
def sections(self):
ret = OrderedSet()
diff --git a/tests/python/pants_test/option/test_config.py b/tests/python/pants_test/option/test_config.py
index 0f4df3cb5f7..5050806c138 100644
--- a/tests/python/pants_test/option/test_config.py
+++ b/tests/python/pants_test/option/test_config.py
@@ -56,6 +56,7 @@ def setUp(self):
"""))
ini2.close()
self.config = Config.load(configpaths=[ini1.name, ini2.name])
+ self.assertEqual([ini1.name, ini2.name], self.config.sources())
def test_getstring(self):
self.assertEquals('/a/b/42', self.config.get('a', 'path'))
diff --git a/tests/python/pants_test/option/test_options_bootstrapper.py b/tests/python/pants_test/option/test_options_bootstrapper.py
index 7effa6e2843..49d003ede47 100644
--- a/tests/python/pants_test/option/test_options_bootstrapper.py
+++ b/tests/python/pants_test/option/test_options_bootstrapper.py
@@ -50,7 +50,7 @@ def test_bootstrap_option_values(self):
def br(path):
# Returns the full path of the given path under the buildroot.
- return '{}/{}'.format(buildroot, path)
+ return os.path.join(buildroot, path)
self._do_test([br('.pants.d'), br('build-support'), br('dist')],
config=None, env={}, args=[])
@@ -134,7 +134,7 @@ def test_create_bootstrapped_options(self):
self.assertEquals('/qux/baz', opts.for_scope('foo').bar)
self.assertEquals('/pear/banana', opts.for_scope('fruit').apple)
- def test_create_bootstrapped_multiple_config_override(self):
+ def do_test_create_bootstrapped_multiple_config(self, create_options_bootstrapper):
# check with multiple config files, the latest values always get taken
# in this case worker_count will be overwritten, while fruit stays the same
with temporary_file() as fp:
@@ -147,16 +147,15 @@ def test_create_bootstrapped_multiple_config_override(self):
"""))
fp.close()
- args = ['--config-override={}'.format(fp.name)] + self._config_path(fp.name)
- bootstrapper_single_config = OptionsBootstrapper(args=args)
+ bootstrapper_single_config = create_options_bootstrapper(fp.name)
- opts_single_config = bootstrapper_single_config.get_full_options(known_scope_infos=[
+ opts_single_config = bootstrapper_single_config.get_full_options(known_scope_infos=[
ScopeInfo('', ScopeInfo.GLOBAL),
ScopeInfo('compile.apt', ScopeInfo.TASK),
ScopeInfo('fruit', ScopeInfo.TASK),
])
# So we don't choke on these on the cmd line.
- opts_single_config.register('', '--pants-config-files')
+ opts_single_config.register('', '--pants-config-files', type=list)
opts_single_config.register('', '--config-override', type=list)
opts_single_config.register('compile.apt', '--worker-count')
@@ -172,10 +171,7 @@ def test_create_bootstrapped_multiple_config_override(self):
"""))
fp2.close()
- args = ['--config-override={}'.format(fp.name),
- '--config-override={}'.format(fp2.name)] + self._config_path(fp.name)
-
- bootstrapper_double_config = OptionsBootstrapper(args=args)
+ bootstrapper_double_config = create_options_bootstrapper(fp.name, fp2.name)
opts_double_config = bootstrapper_double_config.get_full_options(known_scope_infos=[
ScopeInfo('', ScopeInfo.GLOBAL),
@@ -183,7 +179,7 @@ def test_create_bootstrapped_multiple_config_override(self):
ScopeInfo('fruit', ScopeInfo.TASK),
])
# So we don't choke on these on the cmd line.
- opts_double_config.register('', '--pants-config-files')
+ opts_double_config.register('', '--pants-config-files', type=list)
opts_double_config.register('', '--config-override', type=list)
opts_double_config.register('compile.apt', '--worker-count')
opts_double_config.register('fruit', '--apple')
@@ -191,6 +187,18 @@ def test_create_bootstrapped_multiple_config_override(self):
self.assertEquals('2', opts_double_config.for_scope('compile.apt').worker_count)
self.assertEquals('red', opts_double_config.for_scope('fruit').apple)
+ def test_create_bootstrapped_multiple_config_override(self):
+ def create_options_bootstrapper(*config_paths):
+ return OptionsBootstrapper(args=['--config-override={}'.format(cp) for cp in config_paths])
+
+ self.do_test_create_bootstrapped_multiple_config(create_options_bootstrapper)
+
+ def test_create_bootstrapped_multiple_pants_config_files(self):
+ def create_options_bootstrapper(*config_paths):
+ return OptionsBootstrapper(args=['--pants-config-files={}'.format(cp) for cp in config_paths])
+
+ self.do_test_create_bootstrapped_multiple_config(create_options_bootstrapper)
+
def test_full_options_caching(self):
with temporary_file_path() as config:
args = self._config_path(config)
|
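The documented overlay semantics (later config files win) reduce to an order-preserving merge. A minimal sketch with dicts standing in for parsed ini files (hypothetical helper, not Pants' actual `Config` class):

```python
def merge_configs(configs):
    """Merge a list of {section: {key: value}} dicts; later entries win."""
    merged = {}
    # Iterate in the order given, so later files overlay earlier ones.
    for cfg in configs:
        for section, options in cfg.items():
            merged.setdefault(section, {}).update(options)
    return merged


a_ini = {"GLOBAL": {"level": "info"}}
b_ini = {"GLOBAL": {"level": "debug"}}

# b.ini is passed last, so its value should overlay a.ini's.
assert merge_configs([a_ini, b_ini])["GLOBAL"]["level"] == "debug"
assert merge_configs([b_ini, a_ini])["GLOBAL"]["level"] == "info"
```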
numba__numba-5455 | bump max llvmlite version to accept 0.32.0
Now that llvmlite 0.32.0rc1 is released, we need to bump the accepted version to `0.33.0.dev0`.
| [
{
"content": "from setuptools import setup, Extension, find_packages\nfrom distutils.command import build\nfrom distutils.spawn import spawn\nfrom distutils import sysconfig\nimport sys\nimport os\nimport platform\n\nimport versioneer\n\nmin_python_version = \"3.6\"\nmin_numpy_build_version = \"1.11\"\nmin_nump... | [
{
"content": "from setuptools import setup, Extension, find_packages\nfrom distutils.command import build\nfrom distutils.spawn import spawn\nfrom distutils import sysconfig\nimport sys\nimport os\nimport platform\n\nimport versioneer\n\nmin_python_version = \"3.6\"\nmin_numpy_build_version = \"1.11\"\nmin_nump... | diff --git a/README.rst b/README.rst
index e56331f9bda..9be382775d2 100644
--- a/README.rst
+++ b/README.rst
@@ -41,7 +41,7 @@ Dependencies
============
* Python versions: 3.6-3.8
-* llvmlite 0.31.*
+* llvmlite 0.32.*
* NumPy >=1.15 (can build with 1.11 for ABI compatibility)
Optionally:
diff --git a/buildscripts/condarecipe.local/meta.yaml b/buildscripts/condarecipe.local/meta.yaml
index 40a411e6341..cad87a5c872 100644
--- a/buildscripts/condarecipe.local/meta.yaml
+++ b/buildscripts/condarecipe.local/meta.yaml
@@ -32,7 +32,7 @@ requirements:
- numpy
- setuptools
# On channel https://anaconda.org/numba/
- - llvmlite 0.31.*
+ - llvmlite >=0.31,<0.33
# TBB devel version is to match TBB libs
- tbb-devel >=2019.5 # [not (armv6l or armv7l or aarch64 or linux32)]
run:
@@ -40,7 +40,7 @@ requirements:
- numpy >=1.15
- setuptools
# On channel https://anaconda.org/numba/
- - llvmlite 0.31.*
+ - llvmlite >=0.31,<0.33
run_constrained:
# If TBB is present it must be at least this version from Anaconda due to
# build flag issues triggering UB
diff --git a/requirements.txt b/requirements.txt
index 00c7dcbd001..6ae43a42963 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,4 +1,4 @@
setuptools
numpy>=1.10
-llvmlite==0.31.*
+llvmlite>=0.31,<0.33
argparse
diff --git a/setup.py b/setup.py
index 72ddf36e368..d9af1861140 100644
--- a/setup.py
+++ b/setup.py
@@ -12,7 +12,7 @@
min_numpy_build_version = "1.11"
min_numpy_run_version = "1.15"
min_llvmlite_version = "0.31.0.dev0"
-max_llvmlite_version = "0.32.0.dev0"
+max_llvmlite_version = "0.33.0.dev0"
if sys.platform.startswith('linux'):
# Patch for #2555 to make wheels without libpython
|
sopel-irc__sopel-1941 | settings: Bot assumes `+` for modes
The `core.modes` setting is assumed to be just a string of letters, and the leading `+` is assumed by `coretasks`: https://github.com/sopel-irc/sopel/blob/a33caf15090d61b90dc831f55cc195e56185dad3/sopel/coretasks.py#L155-L156
@cottongin rightly pointed out on IRC that sometimes it's desirable to _remove_ modes the IRC server sets by default. While some IRCds will happily accept `MODE nickname +-abc` (like freenode's), it's not a universal workaround.
I'm happy to add this to 7.1 because 1) it's a pretty trivial change to implement and 2) there's an obvious backward-compatible way to parse the setting.
Proposal: Add the leading `+` automatically only if the `core.modes` setting doesn't contain a prefix character (`+` or `-`).
| [
{
"content": "# coding=utf-8\n\"\"\"Tasks that allow the bot to run, but aren't user-facing functionality\n\nThis is written as a module to make it easier to extend to support more\nresponses to standard IRC codes without having to shove them all into the\ndispatch function in bot.py and making it easier to mai... | [
{
"content": "# coding=utf-8\n\"\"\"Tasks that allow the bot to run, but aren't user-facing functionality\n\nThis is written as a module to make it easier to extend to support more\nresponses to standard IRC codes without having to shove them all into the\ndispatch function in bot.py and making it easier to mai... | diff --git a/sopel/coretasks.py b/sopel/coretasks.py
index 38379c6c9e..047c6aef11 100644
--- a/sopel/coretasks.py
+++ b/sopel/coretasks.py
@@ -153,7 +153,11 @@ def startup(bot, trigger):
auth_after_register(bot)
modes = bot.config.core.modes
- bot.write(('MODE', '%s +%s' % (bot.nick, modes)))
+ if modes:
+ if not modes.startswith(('+', '-')):
+ # Assume "+" by default.
+ modes = '+' + modes
+ bot.write(('MODE', bot.nick, modes))
bot.memory['retry_join'] = dict()
|
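The proposal, prepending `+` only when the setting has no explicit prefix, is a one-line normalization, as the merged diff above shows. As a standalone sketch (hypothetical helper name):

```python
def normalize_modes(modes):
    """Prepend '+' unless the user already supplied a '+' or '-' prefix."""
    if modes and not modes.startswith(('+', '-')):
        modes = '+' + modes
    return modes


assert normalize_modes('abc') == '+abc'    # bare letters: assume '+'
assert normalize_modes('-abc') == '-abc'   # explicit removal kept as-is
assert normalize_modes('+a-b') == '+a-b'   # mixed prefix kept as-is
assert normalize_modes('') == ''           # empty setting left alone
```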
svthalia__concrexit-1103 | Manually entered usernames not lowercased in registrations
### Describe the bug
Manually entered usernames are not lowercased in registrations.
### How to reproduce
Steps to reproduce the behaviour:
1. Create a registration
2. Enter a manual username that is not completely lowercase
3. Complete the registration
4. Try to log-in with the new user
5. It is not possible
### Expected behaviour
The username should have been lowercased, since it is not possible to log in with a username that has capitalisation of any kind.
| [
{
"content": "\"\"\"The services defined by the registrations package\"\"\"\nimport string\nimport unicodedata\nfrom typing import Union\n\nfrom django.conf import settings\nfrom django.contrib.admin.models import LogEntry, CHANGE\nfrom django.contrib.admin.options import get_content_type_for_model\nfrom django... | [
{
"content": "\"\"\"The services defined by the registrations package\"\"\"\nimport string\nimport unicodedata\nfrom typing import Union\n\nfrom django.conf import settings\nfrom django.contrib.admin.models import LogEntry, CHANGE\nfrom django.contrib.admin.options import get_content_type_for_model\nfrom django... | diff --git a/website/registrations/services.py b/website/registrations/services.py
index b06b2e09b..8eff2ba46 100644
--- a/website/registrations/services.py
+++ b/website/registrations/services.py
@@ -288,7 +288,7 @@ def _create_member_from_registration(registration: Registration) -> Member:
# Create user
user = get_user_model().objects.create_user(
- username=registration.username,
+ username=registration.username.lower(),
email=registration.email,
password=password,
first_name=registration.first_name,
diff --git a/website/registrations/tests/test_services.py b/website/registrations/tests/test_services.py
index 1201bc683..ec3b77dee 100644
--- a/website/registrations/tests/test_services.py
+++ b/website/registrations/tests/test_services.py
@@ -338,7 +338,9 @@ def test_create_payment_for_entry(self):
@mock.patch("registrations.services.check_unique_user")
def test_create_member_from_registration(self, check_unique_user):
- self.e1.username = "jdoe"
+ # We use capitalisation here because we want
+ # to test if the username is lowercased
+ self.e1.username = "JDoe"
self.e1.save()
check_unique_user.return_value = False
|
Pycord-Development__pycord-607 | @commands.bot_has_permissions always fails for moderate_members
### Summary
The `@commands.bot_has_permissions` decorator (and possibly other similar decorators, such as `has_permissions`) always results in `CheckFailure` when evaluating `moderate_members`.
### Reproduction Steps
Use the example code below in a bot to see the issue happen.
### Minimal Reproducible Code
```python
@commands.command()
@commands.bot_has_permissions(moderate_members=True)
@commands.guild_only()
async def timeout_test(self, ctx):
await ctx.send("haha permissions work as intended!")
```
### Expected Results
If the bot has either `Administrator` or `Moderate Members`, the message is sent.
### Actual Results
The command will always result in `CheckFailure`, regardless of the bot's permissions.

### Intents
```
discord.Intents(guilds = True, members = True, bans = True, emojis = True, messages = True, invites = True, reactions = True)
```
### System Information
```
- Python v3.10.1-final
- py-cord v2.0.0-alpha
- py-cord pkg_resources: v2.0.0a4580+g1d65214e
- aiohttp v3.7.4.post0
- system info: Linux 5.15.10-zen1-1-zen #1 ZEN SMP PREEMPT Fri, 17 Dec 2021 11:17:39 +0000
```
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
The timeout functionality works correctly if the bot has the permission to time out members, it is only the check that fails.
| [
{
"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without r... | [
{
"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without r... | diff --git a/discord/permissions.py b/discord/permissions.py
index 2aa4de1f95..41825b8007 100644
--- a/discord/permissions.py
+++ b/discord/permissions.py
@@ -148,7 +148,7 @@ def all(cls: Type[P]) -> P:
"""A factory method that creates a :class:`Permissions` with all
permissions set to ``True``.
"""
- return cls(0b111111111111111111111111111111111111111)
+ return cls(-1)
@classmethod
def all_channel(cls: Type[P]) -> P:
|
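The root cause is visible in the diff: the old `Permissions.all()` literal set only the low 39 bits, while `moderate_members` lives higher up (bit 40 in Discord's permission flags), so the check's bitwise AND always came back zero. `cls(-1)` sets every bit instead:

```python
MODERATE_MEMBERS = 1 << 40   # Discord's "Timeout Members" permission bit

old_all = 0b111111111111111111111111111111111111111   # 39 bits: bits 0-38
new_all = -1                                          # all bits set (Python ints act
                                                      # as infinite two's complement)

assert old_all & MODERATE_MEMBERS == 0   # old mask misses the bit: check always fails
assert new_all & MODERATE_MEMBERS != 0   # new mask includes it
```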
secdev__scapy-2255 | tcpdump check error in centos
#### Brief description
> I have installed tcpdump in PATH, but it reports:
scapy.error.Scapy_Exception: tcpdump is not available. Cannot use filter !
I found the code which checks tcpdump in /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/scapy/arch/common.py:
```
def _check_tcpdump():
"""
Return True if the tcpdump command can be started
"""
with open(os.devnull, 'wb') as devnull:
try:
proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
stdout=devnull, stderr=subprocess.STDOUT)
except OSError:
return False
return proc.wait() == 0
```
the error is that `tcpdump --version` returns 1 instead of 0,
e.g.:
```
[root@localhost proxy]# tcpdump --version
tcpdump version 4.1-PRE-CVS_2017_03_21
libpcap version 1.4.0
Usage: tcpdump [-aAdDefhIJKlLnNOpqRStuUvxX] [ -B size ] [ -c count ]
[ -C file_size ] [ -E algo:secret ] [ -F file ] [ -G seconds ]
[ -i interface ] [ -j tstamptype ] [ -M secret ]
[ -Q|-P in|out|inout ]
[ -r file ] [ -s snaplen ] [ -T type ] [ -w file ]
[ -W filecount ] [ -y datalinktype ] [ -z command ]
[ -Z user ] [ expression ]
[root@localhost proxy]# echo $?
1
```
#### Environment
```
[root@localhost proxy]# python3.6 --version
Python 3.6.3
[root@localhost proxy]# pip3.6 freeze
certifi==2018.11.29
chardet==3.0.4
idna==2.8
protobuf==3.6.1
psutil==5.4.8
PyMySQL==0.9.3
redis==3.0.1
requests==2.21.0
s8-protocol==1.0
scapy==2.4.2
six==1.11.0
snakeMQ==1.6
urllib3==1.24.1
virtualenv==15.1.0
xlrd==1.2.0
You are using pip version 9.0.1, however version 19.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@localhost proxy]# uname -a
Linux localhost.localdomain 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost proxy]# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
```
| [
{
"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <phil@secdev.org>\n# This program is published under a GPLv2 license\n\n\"\"\"\nFunctions common to different architectures\n\"\"\"\n\nimport ctypes\nimport os\nimport soc... | [
{
"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <phil@secdev.org>\n# This program is published under a GPLv2 license\n\n\"\"\"\nFunctions common to different architectures\n\"\"\"\n\nimport ctypes\nimport os\nimport soc... | diff --git a/scapy/arch/common.py b/scapy/arch/common.py
index cf92cd9efa1..35276718062 100644
--- a/scapy/arch/common.py
+++ b/scapy/arch/common.py
@@ -42,7 +42,9 @@ def _check_tcpdump():
return False
# On some systems, --version does not exist on tcpdump
- return proc.returncode == 0 or output.startswith(b'Usage: tcpdump ')
+ return proc.returncode == 0 \
+ or output.startswith(b'Usage: tcpdump ') \
+ or output.startswith(b'tcpdump: unrecognized option')
# This won't be used on Windows
|
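The fix above accepts a non-zero exit status as long as tcpdump's output looks like a usage or unrecognized-option banner. A standalone sketch of that pattern (hypothetical helper name, not Scapy's actual `_check_tcpdump`):

```python
import subprocess

def check_tool_available(cmd_args, usage_prefixes):
    """Return True when the tool runs successfully, or when its output
    matches a known usage banner (some builds exit non-zero on --version)."""
    try:
        proc = subprocess.Popen(
            cmd_args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
        )
    except OSError:
        # Binary not found on PATH at all.
        return False
    output = proc.communicate()[0]
    return proc.returncode == 0 or any(
        output.startswith(prefix) for prefix in usage_prefixes
    )
```

Checking the captured output as well as the return code is what makes the probe robust across tcpdump builds that treat `--version` as an error.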
Mailu__Mailu-958 | Using external smtp relay server for outgoing emails
Hi,
I need to use mailchannels.com to relay all outgoing emails from my Mailu install. This doc describes what I need to change in Postfix:
https://mailchannels.zendesk.com/hc/en-us/articles/200262640-Setting-up-for-Postfix
Is there any way to do this in Mailu?
Thanks,
| [
{
"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop... | [
{
"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop... | diff --git a/core/postfix/conf/main.cf b/core/postfix/conf/main.cf
index 7fb32b678..d7e3dca8f 100644
--- a/core/postfix/conf/main.cf
+++ b/core/postfix/conf/main.cf
@@ -27,6 +27,11 @@ mydestination =
# Relayhost if any is configured
relayhost = {{ RELAYHOST }}
+{% if RELAYUSER %}
+smtp_sasl_auth_enable = yes
+smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
+smtp_sasl_security_options = noanonymous
+{% endif %}
# Recipient delimiter for extended addresses
recipient_delimiter = {{ RECIPIENT_DELIMITER }}
diff --git a/core/postfix/conf/sasl_passwd b/core/postfix/conf/sasl_passwd
new file mode 100644
index 000000000..e19d0657d
--- /dev/null
+++ b/core/postfix/conf/sasl_passwd
@@ -0,0 +1 @@
+{{ RELAYHOST }} {{ RELAYUSER }}:{{ RELAYPASSWORD }}
\ No newline at end of file
diff --git a/core/postfix/start.py b/core/postfix/start.py
index 95c97fded..81849c5b2 100755
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -48,6 +48,11 @@ def start_podop():
os.system("postmap {}".format(destination))
os.remove(destination)
+if "RELAYUSER" in os.environ:
+ path = "/etc/postfix/sasl_passwd"
+ convert("/conf/sasl_passwd", path)
+ os.system("postmap {}".format(path))
+
convert("/conf/rsyslog.conf", "/etc/rsyslog.conf")
# Run Podop and Postfix
diff --git a/docs/configuration.rst b/docs/configuration.rst
index e7dfa2af8..7b84d6fcf 100644
--- a/docs/configuration.rst
+++ b/docs/configuration.rst
@@ -57,7 +57,8 @@ Docker services' outbound mail to be relayed, you can set this to ``172.16.0.0/1
to include **all** Docker networks. The default is to leave this empty.
The ``RELAYHOST`` is an optional address of a mail server relaying all outgoing
-mail.
+mail in following format: ``[HOST]:PORT``.
+``RELAYUSER`` and ``RELAYPASSWORD`` can be used when authentication is needed.
The ``FETCHMAIL_DELAY`` is a delay (in seconds) for the fetchmail service to
go and fetch new email if available. Do not use too short delays if you do not
diff --git a/towncrier/newsfragments/958.feature b/towncrier/newsfragments/958.feature
new file mode 100644
index 000000000..ac02dec40
--- /dev/null
+++ b/towncrier/newsfragments/958.feature
@@ -0,0 +1 @@
+Relays with authentication
|
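The `sasl_passwd` template above is rendered from environment variables when the container starts. Roughly the same rendering step, sketched here with the stdlib `string.Template` instead of the Jinja2-based `convert` helper Mailu actually uses:

```python
from string import Template

# Same shape as the conf/sasl_passwd template in the diff.
SASL_TEMPLATE = Template("$RELAYHOST $RELAYUSER:$RELAYPASSWORD")

def render_sasl_passwd(env):
    # Postfix looks up the relayhost key in this map and authenticates
    # with user:password over SASL.
    return SASL_TEMPLATE.substitute(env)
```

After writing the rendered line to `/etc/postfix/sasl_passwd`, the real start script runs `postmap` on it so Postfix can read it as a hash map.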
rucio__rucio-4790 | Fix setup_webui script
Motivation
----------
Script has a wrong import, needs to be fixed.
| [
{
"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unles... | [
{
"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unles... | diff --git a/setup_webui.py b/setup_webui.py
index ef91603cb5..65afdd4102 100644
--- a/setup_webui.py
+++ b/setup_webui.py
@@ -35,7 +35,7 @@
from setuputil import get_rucio_version
name = 'rucio-webui'
-packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.flask.common']
+packages = ['rucio', 'rucio.web', 'rucio.web.ui', 'rucio.web.ui.flask', 'rucio.web.ui.flask.common']
data_files = []
description = "Rucio WebUI Package"
|
scikit-image__scikit-image-3901 | ransac selects duplicate data points in random sample
## Description
I don't know if this behavior is intentional, but to me it seems a bit odd.
When the `ransac` method selects the random sample to fit the model, it can select the same data point multiple times.
## Way to reproduce
```python
import numpy as np
from skimage.measure import ransac
np.random.seed(seed=0)
data = np.arange(10)
class Model(object):
"""Dummy model"""
def estimate(self, data):
if np.unique(data).size != data.size:
print("Duplicate points: ", data)
return True
def residuals(self, data):
return 1.0
ransac(data, Model, min_samples=3, residual_threshold=0.0, max_trials=10)
```
Which results in
```python
('Duplicate points: ', array([8, 8, 1]))
('Duplicate points: ', array([6, 7, 7]))
('Duplicate points: ', array([9, 8, 9]))
```
## Version information
```python
# Paste the output of the following python commands
2.7.13 (default, Sep 26 2018, 18:42:22)
[GCC 6.3.0 20170516]
Linux-4.9.0-8-amd64-x86_64-with-debian-9.4
scikit-image version: 0.14.2
numpy version: 1.16.0
```
| [
{
"content": "import math\nimport numpy as np\nfrom numpy.linalg import inv, pinv\nfrom scipy import optimize\nfrom .._shared.utils import check_random_state\n\n\ndef _check_data_dim(data, dim):\n if data.ndim != 2 or data.shape[1] != dim:\n raise ValueError('Input data must have shape (N, %d).' % dim... | [
{
"content": "import math\nimport numpy as np\nfrom numpy.linalg import inv, pinv\nfrom scipy import optimize\nfrom .._shared.utils import check_random_state\n\n\ndef _check_data_dim(data, dim):\n if data.ndim != 2 or data.shape[1] != dim:\n raise ValueError('Input data must have shape (N, %d).' % dim... | diff --git a/skimage/measure/fit.py b/skimage/measure/fit.py
index 2cb51426f3b..d88b79c4a0b 100644
--- a/skimage/measure/fit.py
+++ b/skimage/measure/fit.py
@@ -804,7 +804,8 @@ def ransac(data, model_class, min_samples, residual_threshold,
# choose random sample set
samples = []
- random_idxs = random_state.randint(0, num_samples, min_samples)
+ random_idxs = random_state.choice(num_samples, min_samples,
+ replace=False)
for d in data:
samples.append(d[random_idxs])
diff --git a/skimage/measure/tests/test_fit.py b/skimage/measure/tests/test_fit.py
index 0a2f069e5fe..c58934e155e 100644
--- a/skimage/measure/tests/test_fit.py
+++ b/skimage/measure/tests/test_fit.py
@@ -348,3 +348,23 @@ def test_ransac_invalid_input():
with testing.raises(ValueError):
ransac(np.zeros((10, 2)), None, min_samples=2,
residual_threshold=0, stop_probability=1.01)
+
+
+def test_ransac_sample_duplicates():
+ class DummyModel(object):
+
+ """Dummy model to check for duplicates."""
+
+ def estimate(self, data):
+ # Assert that all data points are unique.
+ assert_equal(np.unique(data).size, data.size)
+ return True
+
+ def residuals(self, data):
+ return 1.0
+
+ # Create dataset with four unique points. Force 10 iterations
+ # and check that there are no duplicated data points.
+ data = np.arange(4)
+ ransac(data, DummyModel, min_samples=3, residual_threshold=0.0,
+ max_trials=10)
|
mindsdb__mindsdb-1576 | Add new method to count number of rows for MySQL datasources :electric_plug: :1234:
When MindsDB creates a new MySQL datasource, we get row-count information by fetching the whole datasource. The problem is that this takes a lot of time if the datasource is big. We need a new `get_row_count` method that returns the number of rows per datasource. The PR should implement this method inside the MySQL class.
## Steps :male_detective: :female_detective:
- Implement in https://github.com/mindsdb/mindsdb/blob/stable/mindsdb/integrations/mysql/mysql.py#L51
- Example method:
```py
def get_row_count(self, query):
result = conn.execute(query)
    return len(result)
```
- Push to staging branch
## Additional rewards :1st_place_medal:
Each code PR brings :three: points for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear:. For more info check out https://mindsdb.com/hacktoberfest/
| [
{
"content": "import os\nimport shutil\nimport tempfile\n\nfrom contextlib import closing\nimport mysql.connector\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass MySQLConnectionChecker:\n def __init__(self, **kwargs):\n ... | [
{
"content": "import os\nimport shutil\nimport tempfile\n\nfrom contextlib import closing\nimport mysql.connector\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass MySQLConnectionChecker:\n def __init__(self, **kwargs):\n ... | diff --git a/mindsdb/integrations/mysql/mysql.py b/mindsdb/integrations/mysql/mysql.py
index 14c6c81d603..2237cf8fae1 100644
--- a/mindsdb/integrations/mysql/mysql.py
+++ b/mindsdb/integrations/mysql/mysql.py
@@ -190,3 +190,10 @@ def unregister_predictor(self, name):
drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};
"""
self._query(q)
+
+ def get_row_count(self, query):
+ q = f"""
+ SELECT COUNT(*) as count
+ FROM ({query}) as query;"""
+ result = self._query(q)
+ return result[0]['count']
|
conan-io__conan-2763 | [bug] Linter "Unable to import" warning when importing a shared Python Conan package in the build() step
- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
---
I followed the instructions on http://docs.conan.io/en/latest/howtos/python_code_reuse.html. When I get to the "Requiring a python conan package" step, the linter gives me a warning about importing the shared package:
$ git clone https://github.com/smokris/conan-test-library
$ cd conan-test-library
$ conan export . me/testing
$ cd ..
$ git clone https://github.com/smokris/conan-test-consumer
$ cd conan-test-consumer
$ conan create . me/testing
HelloPyReuse/1.0@me/testing: Exporting package recipe
Linter warnings
WARN: Linter. Line 9: Unable to import 'hello'
…
HelloPyReuse/1.0@me/testing: Calling build()
Hello World from Python!
…
(The imported package works fine; the problem is just that the linter is emitting a warning. I'd prefer that the linter not show this false-positive warning, to improve the linter's signal-to-noise ratio.)
I'm able to reproduce this using:
- Conan 1.1.1 on my local macOS 10.13.3 system
- Conan 1.1.1 on Travis CI's Mac OS 10.10.5 image
- Conan 1.1.1 on Travis CI's Ubuntu 14.04.5 image
- Conan 1.2.0 on CentOS 7.4
| [
{
"content": "import json\nimport os\nimport sys\n\nimport platform\n\nfrom conans.client.output import Color\nfrom conans.errors import ConanException\nfrom subprocess import PIPE, Popen\nfrom conans import __path__ as root_path\n\n\ndef conan_linter(conanfile_path, out):\n if getattr(sys, 'frozen', False):... | [
{
"content": "import json\nimport os\nimport sys\n\nimport platform\n\nfrom conans.client.output import Color\nfrom conans.errors import ConanException\nfrom subprocess import PIPE, Popen\nfrom conans import __path__ as root_path\n\n\ndef conan_linter(conanfile_path, out):\n if getattr(sys, 'frozen', False):... | diff --git a/conans/client/cmd/export_linter.py b/conans/client/cmd/export_linter.py
index 112b73013a7..4084cdb44a0 100644
--- a/conans/client/cmd/export_linter.py
+++ b/conans/client/cmd/export_linter.py
@@ -76,6 +76,8 @@ def _accept_message(msg):
return False
if symbol in ("bare-except", "broad-except"): # No exception type(s) specified
return False
+ if symbol == "import-error" and msg.get("column") > 3: # Import of a conan python package
+ return False
return True
diff --git a/conans/test/integration/python_build_test.py b/conans/test/integration/python_build_test.py
index c641ad0b5ba..0fc9dec5f3f 100644
--- a/conans/test/integration/python_build_test.py
+++ b/conans/test/integration/python_build_test.py
@@ -89,7 +89,7 @@ def reuse_build_test(self):
client = TestClient()
client.save({CONANFILE: conanfile, "__init__.py": "", "mytest.py": test})
client.run("export . lasote/stable")
- reuse = """from conans import ConanFile, tools
+ reuse = """from conans import ConanFile
class ToolsTest(ConanFile):
name = "Consumer"
version = "0.1"
@@ -102,13 +102,14 @@ def build(self):
client.save({CONANFILE: reuse}, clean_first=True)
client.run("create . conan/testing")
self.assertIn("Consumer/0.1@conan/testing: Hello Foo", client.out)
+ self.assertNotIn("WARN: Linter. Line 8: Unable to import 'mytest'", client.out)
def reuse_source_test(self):
# https://github.com/conan-io/conan/issues/2644
client = TestClient()
client.save({CONANFILE: conanfile, "__init__.py": "", "mytest.py": test})
client.run("export . lasote/stable")
- reuse = """from conans import ConanFile, tools
+ reuse = """from conans import ConanFile
class ToolsTest(ConanFile):
name = "Consumer"
version = "0.1"
@@ -121,6 +122,7 @@ def source(self):
client.save({CONANFILE: reuse}, clean_first=True)
client.run("create . conan/testing")
self.assertIn("Consumer/0.1@conan/testing: Hello Baz", client.out)
+ self.assertNotIn("WARN: Linter. Line 8: Unable to import 'mytest'", client.out)
def reuse_test(self):
client = TestClient()
@@ -179,6 +181,7 @@ def basic_install_test(self):
client.save({CONANFILE: reuse}, clean_first=True)
client.run("export . lasote/stable")
+ self.assertNotIn("Unable to import 'mytest'", client.out)
client.run("install Consumer/0.1@lasote/stable --build")
lines = [line.split(":")[1] for line in str(client.user_io.out).splitlines()
if line.startswith("Consumer/0.1@lasote/stable: Hello")]
|
buildbot__buildbot-5912 | AttributeError: module 'sqlalchemy.engine.strategies' has no attribute 'PlainEngineStrategy'
On buildbot version 3.0.1, since upgrading SQLAlchemy to version 1.4, `buildbot create-master` now throws an error:
```
Traceback (most recent call last):
File "/opt/buildbot/master/venv/bin/buildbot", line 8, in <module>
sys.exit(run())
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/scripts/runner.py", line 772, in run
subcommandFunction = reflect.namedObject(subconfig.subcommandFunction)
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/twisted/python/reflect.py", line 170, in namedObject
module = namedModule(".".join(classSplit[:-1]))
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/twisted/python/reflect.py", line 157, in namedModule
topLevel = __import__(name)
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/scripts/create_master.py", line 25, in <module>
from buildbot.master import BuildMaster
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/master.py", line 35, in <module>
from buildbot.db import connector as dbconnector
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/db/connector.py", line 30, in <module>
from buildbot.db import enginestrategy
File "/opt/buildbot/master/venv/lib/python3.8/site-packages/buildbot/db/enginestrategy.py", line 154, in <module>
class BuildbotEngineStrategy(strategies.PlainEngineStrategy):
AttributeError: module 'sqlalchemy.engine.strategies' has no attribute 'PlainEngineStrategy'
```
Restricting the sqlalchemy version to 1.3.23 seems to resolve this issue.
Seems related to the removal of this interface in SQLAlchemy 1.4: https://github.com/sqlalchemy/sqlalchemy/commit/dfb20f07d8796ec27732df84c40b4ce4857fd83b
| [
{
"content": "#!/usr/bin/env python\n#\n# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it wil... | [
{
"content": "#!/usr/bin/env python\n#\n# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it wil... | diff --git a/master/buildbot/newsfragments/sqlalchemy-1-4-compatibility.bugfix b/master/buildbot/newsfragments/sqlalchemy-1-4-compatibility.bugfix
new file mode 100644
index 000000000000..d5d482db09cb
--- /dev/null
+++ b/master/buildbot/newsfragments/sqlalchemy-1-4-compatibility.bugfix
@@ -0,0 +1 @@
+Updated Buildbot requirements to specify sqlalchemy 1.4 and newer as not supported yet.
diff --git a/master/setup.py b/master/setup.py
index 19a31aa844cd..f21f720459a4 100755
--- a/master/setup.py
+++ b/master/setup.py
@@ -491,7 +491,7 @@ def define_plugin_entries(groups):
'Jinja2 >= 2.1',
# required for tests, but Twisted requires this anyway
'zope.interface >= 4.1.1',
- 'sqlalchemy>=1.2.0',
+ 'sqlalchemy >= 1.2.0, < 1.4',
'sqlalchemy-migrate>=0.13',
'python-dateutil>=1.5',
'txaio ' + txaio_ver,
|
MongoEngine__mongoengine-1454 | Rename modifier missing from update
Not sure if this is intentional or not, but it would be useful to have the `$rename` operator (or "modifier") available in the `update` method for QuerySet and Document.
I'm currently working around it with `exec_js`, like so:
``` python
Document.objects.exec_js("""
function() {
db[collection].update({}, {$rename: {foo: 'bar'}});
}""")
```
| [
{
"content": "from mongoengine.errors import NotRegistered\n\n__all__ = ('UPDATE_OPERATORS', 'get_document', '_document_registry')\n\n\nUPDATE_OPERATORS = set(['set', 'unset', 'inc', 'dec', 'pop', 'push',\n 'push_all', 'pull', 'pull_all', 'add_to_set',\n 'set_on_ins... | [
{
"content": "from mongoengine.errors import NotRegistered\n\n__all__ = ('UPDATE_OPERATORS', 'get_document', '_document_registry')\n\n\nUPDATE_OPERATORS = set(['set', 'unset', 'inc', 'dec', 'pop', 'push',\n 'push_all', 'pull', 'pull_all', 'add_to_set',\n 'set_on_ins... | diff --git a/mongoengine/base/common.py b/mongoengine/base/common.py
index da2b8b68b..b9971ff71 100644
--- a/mongoengine/base/common.py
+++ b/mongoengine/base/common.py
@@ -5,7 +5,7 @@
UPDATE_OPERATORS = set(['set', 'unset', 'inc', 'dec', 'pop', 'push',
'push_all', 'pull', 'pull_all', 'add_to_set',
- 'set_on_insert', 'min', 'max'])
+ 'set_on_insert', 'min', 'max', 'rename'])
_document_registry = {}
diff --git a/tests/document/instance.py b/tests/document/instance.py
index b187f766b..9b52c809a 100644
--- a/tests/document/instance.py
+++ b/tests/document/instance.py
@@ -1232,6 +1232,19 @@ def test_update(self):
self.assertEqual(person.name, None)
self.assertEqual(person.age, None)
+ def test_update_rename_operator(self):
+ """Test the $rename operator."""
+ coll = self.Person._get_collection()
+ doc = self.Person(name='John').save()
+ raw_doc = coll.find_one({'_id': doc.pk})
+ self.assertEqual(set(raw_doc.keys()), set(['_id', '_cls', 'name']))
+
+ doc.update(rename__name='first_name')
+ raw_doc = coll.find_one({'_id': doc.pk})
+ self.assertEqual(set(raw_doc.keys()),
+ set(['_id', '_cls', 'first_name']))
+ self.assertEqual(raw_doc['first_name'], 'John')
+
def test_inserts_if_you_set_the_pk(self):
p1 = self.Person(name='p1', id=bson.ObjectId()).save()
p2 = self.Person(name='p2')
|
weni-ai__bothub-engine-230 | When updating settings and removing sentences, training stays enabled
Reported by @johncordeiro in https://github.com/Ilhasoft/bothub/issues/44
| [
{
"content": "import uuid\nimport base64\nimport requests\n\nfrom functools import reduce\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfr... | [
{
"content": "import uuid\nimport base64\nimport requests\n\nfrom functools import reduce\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfr... | diff --git a/bothub/common/models.py b/bothub/common/models.py
index 74711b74..44872001 100644
--- a/bothub/common/models.py
+++ b/bothub/common/models.py
@@ -460,6 +460,9 @@ def ready_for_train(self):
if self.training_started_at:
return False
+ if len(self.requirements_to_train) > 0:
+ return False
+
previous_update = self.repository.updates.filter(
language=self.language,
by__isnull=False,
diff --git a/bothub/common/tests.py b/bothub/common/tests.py
index 441e2aa1..d4d05f31 100644
--- a/bothub/common/tests.py
+++ b/bothub/common/tests.py
@@ -767,7 +767,8 @@ def setUp(self):
owner=self.owner,
name='Test',
slug='test',
- language=languages.LANGUAGE_EN)
+ language=languages.LANGUAGE_EN,
+ use_language_model_featurizer=False)
def test_be_true(self):
RepositoryExample.objects.create(
@@ -871,6 +872,19 @@ def test_entity_dont_have_min_examples(self):
entity='hi')
self.assertTrue(self.repository.current_update().ready_for_train)
+ def test_settings_change_exists_requirements(self):
+ self.repository.current_update().start_training(self.owner)
+ self.repository.use_language_model_featurizer = True
+ self.repository.save()
+ RepositoryExample.objects.create(
+ repository_update=self.repository.current_update(),
+ text='hello',
+ intent='greet')
+ self.assertEqual(
+ len(self.repository.current_update().requirements_to_train),
+ 1)
+ self.assertFalse(self.repository.current_update().ready_for_train)
+
def test_no_examples(self):
example = RepositoryExample.objects.create(
repository_update=self.repository.current_update(),
|
hydroshare__hydroshare-2690 | Hyperlink DOIs against preferred resolver
Hello :-)
The DOI foundation recommends [this new, secure resolver](https://www.doi.org/doi_handbook/3_Resolution.html#3.8). Would a PR that updates all static links and the code that generates new ones, plus the test cases, be welcome?
Cheers!
| [
{
"content": "import os\nimport zipfile\nimport shutil\nimport logging\nimport requests\n\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.files import File\nfrom django.core.files.uploadedfile import UploadedFile\nfrom django.core.exceptions import Vali... | [
{
"content": "import os\nimport zipfile\nimport shutil\nimport logging\nimport requests\n\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.files import File\nfrom django.core.files.uploadedfile import UploadedFile\nfrom django.core.exceptions import Vali... | diff --git a/docs/bagit/readme.txt b/docs/bagit/readme.txt
index 0195f76bd1..2251b08a91 100644
--- a/docs/bagit/readme.txt
+++ b/docs/bagit/readme.txt
@@ -10,7 +10,7 @@ You can find the full BagIt specification here: https://tools.ietf.org/html/draf
You can also find a much more detailed description of HydroShare's resource data model and packaging scheme in the following paper:
-Horsburgh, J. S., Morsy, M. M., Castronova, A., Goodall, J. L., Gan, T., Yi, H., Stealey, M. J., and D.G. Tarboton (2015). HydroShare: Sharing diverse hydrologic data types and models as social objects within a Hydrologic Information System, Journal Of the American Water Resources Association(JAWRA), http://dx.doi.org/10.1111/1752-1688.12363.
+Horsburgh, J. S., Morsy, M. M., Castronova, A., Goodall, J. L., Gan, T., Yi, H., Stealey, M. J., and D.G. Tarboton (2015). HydroShare: Sharing diverse hydrologic data types and models as social objects within a Hydrologic Information System, Journal Of the American Water Resources Association(JAWRA), https://doi.org/10.1111/1752-1688.12363.
We've summarized the important points below.
diff --git a/hs_core/hydroshare/resource.py b/hs_core/hydroshare/resource.py
index 35c397d976..a45edb86f7 100755
--- a/hs_core/hydroshare/resource.py
+++ b/hs_core/hydroshare/resource.py
@@ -909,7 +909,7 @@ def delete_resource_file(pk, filename_or_id, user, delete_logical_file=True):
def get_resource_doi(res_id, flag=''):
- doi_str = "http://dx.doi.org/10.4211/hs.{shortkey}".format(shortkey=res_id)
+ doi_str = "https://doi.org/10.4211/hs.{shortkey}".format(shortkey=res_id)
if flag:
return "{doi}{append_flag}".format(doi=doi_str, append_flag=flag)
else:
diff --git a/hs_core/tests/api/native/test_core_metadata.py b/hs_core/tests/api/native/test_core_metadata.py
index 91e1a5d847..25d2262bc3 100755
--- a/hs_core/tests/api/native/test_core_metadata.py
+++ b/hs_core/tests/api/native/test_core_metadata.py
@@ -959,7 +959,7 @@ def test_identifier(self):
# test adding an identifier with name 'DOI' when the resource does not have a DOI - should raise an exception
self.res.doi = None
self.res.save()
- url_doi = "http://dx.doi.org/10.4211/hs.{res_id}".format(res_id=self.res.short_id)
+ url_doi = "https://doi.org/10.4211/hs.{res_id}".format(res_id=self.res.short_id)
self.assertRaises(Exception, lambda: resource.create_metadata_element(self.res.short_id,'identifier',
name='DOI', url=url_doi))
@@ -974,7 +974,7 @@ def test_identifier(self):
doi_idf.id, name='DOI-1'))
# test that 'DOI' identifier url can be changed
- resource.update_metadata_element(self.res.short_id, 'identifier', doi_idf.id, url='http://doi.org/001')
+ resource.update_metadata_element(self.res.short_id, 'identifier', doi_idf.id, url='https://doi.org/001')
# test that hydroshareidentifier can't be deleted - raise exception
hs_idf = self.res.metadata.identifiers.all().filter(name='hydroShareIdentifier').first()
@@ -1495,7 +1495,7 @@ def test_get_xml(self):
# add 'DOI' identifier
self.res.doi='doi1000100010001'
self.res.save()
- self.res.metadata.create_element('identifier', name='DOI', url="http://dx.doi.org/001")
+ self.res.metadata.create_element('identifier', name='DOI', url="https://doi.org/001")
# no need to add a language element - language element is created at the time of resource creation
diff --git a/theme/templates/resource-landing-page/citation.html b/theme/templates/resource-landing-page/citation.html
index 20cceaa391..fbd4d1ded2 100644
--- a/theme/templates/resource-landing-page/citation.html
+++ b/theme/templates/resource-landing-page/citation.html
@@ -20,7 +20,7 @@ <h3>How to cite</h3>
<div>
<em>
When permanently published, this resource will have a formal Digital Object Identifier (DOI) and will be
- accessible at the following URL: http://doi.org/10.4211/hs.{{ cm.short_id }}. When you are
+ accessible at the following URL: https://doi.org/10.4211/hs.{{ cm.short_id }}. When you are
ready to permanently publish, click the Publish button at the top of the page to request your DOI.
Reminder: You may no longer edit your resource, once you have permanently published it.
</em>
|
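The diff rewrites each legacy `http://dx.doi.org/` link to the preferred `https://doi.org/` resolver by hand. For bulk migrations, a small sketch of the same rewrite (a hypothetical helper, not part of the PR):

```python
import re

# Matches both the legacy dx.doi.org resolver and plain-http doi.org links.
DOI_RESOLVER = re.compile(r"https?://(?:dx\.)?doi\.org/")

def prefer_secure_resolver(text):
    # Normalize every DOI link in `text` to the recommended resolver,
    # leaving the DOI suffix itself untouched.
    return DOI_RESOLVER.sub("https://doi.org/", text)
```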
nvaccess__nvda-8771 | UnicodeDecodeError when NVDA running on non-English outdated systems
When running the latest NVDA snapshots on Russian versions of outdated Windows (XP/Vista), the log shows the following exception:
```
Traceback (most recent call last):
  File "nvda.pyw", line 64, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 0: ordinal not in range(128)
```
We probably need to specify the system encoding explicitly, since on Russian Windows it is cp1251. For example, like so:
```py
import winVersion
import locale
if not winVersion.isSupportedOS():
    winUser.MessageBox(0, unicode(ctypes.FormatError(winUser.ERROR_OLD_WIN_VERSION), locale.getpreferredencoding()), None, winUser.MB_ICONERROR)
    sys.exit(1)
```
With this change the dialog is shown correctly.
| [
{
"content": "#winUser.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2017 NV Access Limited, Babbage B.V.\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"Functions that wrap Windows API functions from user32.dll\"\"\"\... | [
{
"content": "#winUser.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2017 NV Access Limited, Babbage B.V.\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"Functions that wrap Windows API functions from user32.dll\"\"\"\... | diff --git a/source/nvda.pyw b/source/nvda.pyw
index 80d158c3d24..a37f5f0ceda 100755
--- a/source/nvda.pyw
+++ b/source/nvda.pyw
@@ -74,7 +74,7 @@ globalVars.startTime=time.time()
# Check OS version requirements
import winVersion
if not winVersion.isSupportedOS():
- winUser.MessageBox(0, unicode(ctypes.FormatError(winUser.ERROR_OLD_WIN_VERSION)), None, winUser.MB_ICONERROR)
+ winUser.MessageBox(0, ctypes.FormatError(winUser.ERROR_OLD_WIN_VERSION), None, winUser.MB_ICONERROR)
sys.exit(1)
def decodeMbcs(string):
diff --git a/source/winUser.py b/source/winUser.py
index a8ad3091ce4..ecd81bc5185 100644
--- a/source/winUser.py
+++ b/source/winUser.py
@@ -514,6 +514,10 @@ def FindWindow(className, windowName):
IDCANCEL=3
def MessageBox(hwnd, text, caption, type):
+ if isinstance(text, bytes):
+ text = text.decode('mbcs')
+ if isinstance(caption, bytes):
+ caption = caption.decode('mbcs')
res = user32.MessageBoxW(hwnd, text, caption, type)
if res == 0:
raise WinError()
diff --git a/user_docs/en/changes.t2t b/user_docs/en/changes.t2t
index 152ab2b490c..d229d12c0dd 100644
--- a/user_docs/en/changes.t2t
+++ b/user_docs/en/changes.t2t
@@ -22,6 +22,7 @@ What's New in NVDA
- When NVDA is set to languages such as Kirgyz, Mongolian or Macedonian, it no longer shows a dialog on start-up warning that the language is not supported by the Operating System. (#8064)
- Moving the mouse to the navigator object will now much more accurately move the mouse to the browse mode position in Mozilla Firefox, Google Chrome and Acrobat Reader DC. (#6460)
- Interacting with combo boxes on the web in Firefox, Chrome and Internet Explorer has been improved. (#8664)
+- If running on the Japanese version of Windows XP or Vista, NVDA now displays the alert of OS version requirements as expected. (#8771)
== Changes for Developers ==
|
modal-labs__modal-examples-556 | apply #556 manually
I manually applied the patch from #556; not sure what's up with that PR.
| [
{
"content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imp... | [
{
"content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imp... | diff --git a/01_getting_started/hello_world.py b/01_getting_started/hello_world.py
index 1ef43d63e..a3fe60329 100644
--- a/01_getting_started/hello_world.py
+++ b/01_getting_started/hello_world.py
@@ -48,7 +48,7 @@ def f(i):
#
# Inside the `main()` function body, we are calling the function `f` in three ways:
#
-# 1 As a simple local call, `f(1000)`
+# 1 As a simple local call, `f.local(1000)`
# 2. As a simple *remote* call `f.remote(1000)`
# 3. By mapping over the integers `0..19`
diff --git a/11_notebooks/basic.ipynb b/11_notebooks/basic.ipynb
index 739b52181..0e77e9893 100644
--- a/11_notebooks/basic.ipynb
+++ b/11_notebooks/basic.ipynb
@@ -91,7 +91,7 @@
"\n",
"\n",
"with stub.run():\n",
- " print(quadruple(100))\n",
+ " print(quadruple.local(100))\n",
" print(quadruple.remote(100)) # run remotely\n",
" result = quadruple.remote(10_000_000)"
]
|
sunpy__sunpy-1818 | convert_data_to_pixel issue
This line in convert_data_to_pixel:
pixelx = (x - crval[0]) / cdelt[0] + (crpix[1] - 1)
should be:
pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1)
I found the problem using 0.6.3, but looking at the source, it persists in 0.6.4.
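A minimal, self-contained sketch of the corrected formula (variable names mirror the issue, not the full sunpy signature) makes the axis mix-up easy to test:

```python
def convert_data_to_pixel(x, y, scale, reference_pixel, reference_coordinate):
    """Convert data coordinates to pixel coordinates (sketch of the fix).

    crpix[] counts pixels starting at 1, and each axis must use its own
    reference pixel: crpix[0] for x, crpix[1] for y.
    """
    cdelt, crpix, crval = scale, reference_pixel, reference_coordinate
    pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1)  # was crpix[1]: the bug
    pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1)
    return pixelx, pixely
```

With an asymmetric reference pixel such as (10, 20), the buggy version returns 19 for the x coordinate of the reference point instead of 9, which is how the bug shows up in practice.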
| [
{
"content": "from __future__ import absolute_import\n\nimport numpy as np\nimport sunpy.sun as sun\n\nimport astropy.units as u\n\nrsun_meters = sun.constants.radius.si.value\n\n__all__ = ['_convert_angle_units', 'convert_pixel_to_data', 'convert_hpc_hg',\n 'convert_data_to_pixel', 'convert_hpc_hcc',... | [
{
"content": "from __future__ import absolute_import\n\nimport numpy as np\nimport sunpy.sun as sun\n\nimport astropy.units as u\n\nrsun_meters = sun.constants.radius.si.value\n\n__all__ = ['_convert_angle_units', 'convert_pixel_to_data', 'convert_hpc_hg',\n 'convert_data_to_pixel', 'convert_hpc_hcc',... | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 775bc5f5f53..770484b5b94 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,10 @@
-Latest
-------
+0.7.1
+-----
+
+* Fix bug in `wcs.convert_data_to_pixel` where crpix[1] was used for both axes.
+
+0.7.0
+-----
* Added `timeout` parameter in `sunpy.data.download_sample_data()`
* Fixed `aiaprep` to return properly sized map.
* Deprecation warnings fixed when using image coalignment.
diff --git a/sunpy/wcs/wcs.py b/sunpy/wcs/wcs.py
index f74f8120e9b..0d5f152c194 100644
--- a/sunpy/wcs/wcs.py
+++ b/sunpy/wcs/wcs.py
@@ -133,7 +133,7 @@ def convert_data_to_pixel(x, y, scale, reference_pixel, reference_coordinate):
# coord = inv_proj_tan(coord)
# note that crpix[] counts pixels starting at 1
- pixelx = (x - crval[0]) / cdelt[0] + (crpix[1] - 1)
+ pixelx = (x - crval[0]) / cdelt[0] + (crpix[0] - 1)
pixely = (y - crval[1]) / cdelt[1] + (crpix[1] - 1)
return pixelx, pixely
|
huggingface__transformers-7282 | weights partially missing for CamembertForMaskedLM
## Environment info
- `transformers` version: 3.1.0
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@louismartin
## Information
When loading "camembert-base" with `CamembertForMaskedLM`:
from transformers import CamembertForMaskedLM
model = CamembertForMaskedLM.from_pretrained("camembert-base")
the bias of the LM head decoder is not loaded:
Some weights of CamembertForMaskedLM were not initialized from the model checkpoint at camembert-base and are newly initialized: ['lm_head.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
As I understand it, `lm_head.decoder.bias` is therefore initialized randomly.
I checked the original `camembert-base` model as published by the author, and the lm_head decoder bias is missing too, which is not discussed in the camembert or roberta publication.
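The eventual fix whitelists the intentionally absent weights so the warning is suppressed. A hypothetical, stripped-down sketch of that mechanism (the real logic lives in `from_pretrained`'s missing-key handling):

```python
import re

# Patterns for weights that are allowed to be missing from a checkpoint,
# mirroring the authorized_missing_keys attribute added by the fix.
AUTHORIZED_MISSING_KEYS = [r"position_ids", r"lm_head\.decoder\.bias"]

def keys_to_warn_about(missing_keys):
    """Return only the missing keys that are NOT whitelisted."""
    return [key for key in missing_keys
            if not any(re.search(pat, key) for pat in AUTHORIZED_MISSING_KEYS)]
```

With the whitelist in place, `lm_head.decoder.bias` no longer triggers the "newly initialized" warning, while genuinely missing weights still do.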
| [
{
"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the Li... | [
{
"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the Li... | diff --git a/src/transformers/modeling_roberta.py b/src/transformers/modeling_roberta.py
index f0be480e4be0..76b7b430d3b4 100644
--- a/src/transformers/modeling_roberta.py
+++ b/src/transformers/modeling_roberta.py
@@ -303,6 +303,7 @@ def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_
class RobertaForMaskedLM(BertPreTrainedModel):
config_class = RobertaConfig
base_model_prefix = "roberta"
+ authorized_missing_keys = [r"position_ids", r"lm_head\.decoder\.bias"]
def __init__(self, config):
super().__init__(config)
|
avocado-framework__avocado-4576 | TestSuite() initialization config param missing error
In the definition of the TestSuite class, the parameter "config" has a default value of None, and when that default is used initialization fails, forcing us to pass a dummy config to get it working.
Error example:
```
'NoneType' object has no attribute 'get'
<class 'AttributeError'>
'NoneType' object has no attribute 'get'
File "/Library/Python/3.7/site-packages/bluegen/common/main.py", line 65, in main
    ret = command.get(opts.cmd)(opts)
File "/Library/Python/3.7/site-packages/bluegen/common/command.py", line 142, in __call__
    return self.func(opts, *args, **kwargs)
File "/Library/Python/3.7/site-packages/bluegen/commands/test.py", line 34, in test
    test_suites = avocado.suites_generator(opts.sequence_file, opts.tests_ref, opts.parallel,
                                           opts.cfg_dir)
File "/Library/Python/3.7/site-packages/bluegen/utils/avocado.py", line 125, in suites_generator
    job_config=JOB_CONFIG)
File "/Users/marioalvarado/Library/Python/3.7/lib/python/site-packages/avocado/core/suite.py", line 106, in __init__
    if (config.get('run.dry_run.enabled') and
```
Code example:
```
test_suite = TestSuite(name="BlueGen Sequential Execution",
tests=runnables_with_param,
job_config=JOB_CONFIG)
```
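A minimal sketch of the defensive pattern (a hypothetical class, not the real avocado code): fall back to an empty config when none is given, then consistently read `self.config` afterwards:

```python
class TestSuite:
    """Stripped-down sketch; the real class merges job_config differently."""

    def __init__(self, name, config=None, tests=None, job_config=None):
        self.name = name
        self.tests = tests or []
        # Guard against config=None instead of calling .get() on None.
        self.config = config or {}
        if job_config:
            self.config.update(job_config)
        if (self.config.get('run.dry_run.enabled') and
                self.config.get('run.test_runner') == 'runner'):
            self._dry_run = True
        else:
            self._dry_run = False
```

With this guard, constructing a suite with only `name`, `tests`, and `job_config` no longer raises `AttributeError`.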
| [
{
"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope t... | [
{
"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope t... | diff --git a/avocado/core/suite.py b/avocado/core/suite.py
index ace1b5bf40..4247957e1d 100644
--- a/avocado/core/suite.py
+++ b/avocado/core/suite.py
@@ -103,7 +103,7 @@ def __init__(self, name, config=None, tests=None, job_config=None,
self._runner = None
self._test_parameters = None
- if (config.get('run.dry_run.enabled') and
+ if (self.config.get('run.dry_run.enabled') and
self.config.get('run.test_runner') == 'runner'):
self._convert_to_dry_run()
|
edgedb__edgedb-6268 | EdgeDB server FIPS incompatibility
<!-- Please search existing issues to avoid creating duplicates. -->
<!--
For the EdgeDB Version: run `edgedb query 'select sys::get_version_as_str()'` from your project directory (or run `select sys::get_version_as_str();` in the EdgeDB interactive shell).
For the EdgeDB CLI Version: Run `edgedb --version` from anywhere
-->
- EdgeDB Version: `3.4+75c51ce`
- EdgeDB CLI Version: `3.4.0+97cad0e`
- OS Version: RHEL 8.8 with FIPS enabled
Steps to Reproduce:
1. Init a project with a local instance with the "Initial schema" below. Modify the schema to the "Changed schema" below, and `edgedb migration create` to create a migration. Then `edgedb project unlink` to unlink the project from the local instance.
2. Follow the EdgeDB bare metal deployment instructions on a RHEL machine with FIPS enabled. We're using an AWS EC2 instance for this.
3. Now back on your local machine, run `edgedb project init` and connect to the remote instance.
4. You should get an error `ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for FIPS`
There's probably an easier way to reproduce but I'm new to EdgeDB and this is the way I was able to do it. It seems to be related to constraints and long names.
We're evaluating EdgeDB for an enterprise use case where EdgeDB server will need to run on a FIPS-compliant host. The issue is that `_edgedb_name_to_pg_name` uses MD5 to hash the given name to ensure it's small enough to fit in a Postgres column name. MD5 is disabled on FIPS-compliant systems, even when you're not using it for something security related. Very annoying. On that note, the comment in the function mentions that Postgres doesn't have a sha1 implementation in all versions. SHA-1 is not currently disabled, but [will be](https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-1-cryptographic-algorithm) at some point in the near future. I understand the desire to maintain backwards compatibility with older Postgres versions, but would it be possible to detect the Postgres version and use non-MD5 hash if supported? Given that SHA-1 won't be supported soon, ideally there would be a way to opt in to e.g. SHA-224. I realize that, given the objective of this function, it's counterproductive to use something like SHA-224 where (after base 64 encoding) you'll use up 38 characters vs. 27 for SHA-1 and 22 for MD5, but it's the (annoying) reality of FIPS compliance.
Edit: My reading of [NIST's guidance](https://csrc.nist.gov/projects/hash-functions) is that SHAKE-128 and SHAKE-256 are also acceptable for non-security related applications (like this one). Perhaps they could be used with a chosen digest length to limit the size of the hash digests.
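For the immediate MD5 problem, Python 3.9+ exposes an escape hatch: passing `usedforsecurity=False` tells OpenSSL the digest is not used for security purposes, which FIPS mode permits. A simplified sketch of the name-shortening helper with that flag (modeled on `_edgedb_name_to_pg_name`, not a verbatim copy):

```python
import base64
import hashlib

def shorten_name(name: str) -> str:
    # usedforsecurity=False requires Python 3.9+; on older versions the
    # keyword does not exist and md5() raises under FIPS regardless.
    digest = hashlib.md5(name.encode(), usedforsecurity=False).digest()
    return base64.b64encode(digest).decode().rstrip('=')
```

The 16-byte MD5 digest base64-encodes to 22 characters after stripping padding, which is why MD5 is attractive for squeezing names under Postgres's identifier limit.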
<!-- If the issue is about a query error, please also provide your schema -->
Initial schema:
```
module default {
type Person {
required name: str;
}
type Movie {
title: str;
multi actors: Person;
}
};
```
Changed schema:
```
module default {
type Person {
required nameasdfgluhasdlfiuhsdafkjlndfkjlsadhflksdanfksdabfnljksdabfljkdsa: str {
constraint exclusive;
};
}
type Movie {
title: str;
multi actors: Person;
}
};
```
The full traceback:
```
Server traceback:
Traceback (most recent call last):
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler_pool/worker.py", line 186, in compile_in_tx
units, cstate = COMPILER.compile_in_tx(cstate, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 913, in compile_in_tx
return compile(ctx=ctx, source=source), ctx.state
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 2068, in compile
return _try_compile(ctx=ctx, source=source)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 2136, in _try_compile
comp, capabilities = _compile_dispatch_ql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/compiler.py", line 1975, in _compile_dispatch_ql
query = ddl.compile_dispatch_ql_migration(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/ddl.py", line 379, in compile_dispatch_ql_migration
return compile_and_apply_ddl_stmt(ctx, ql)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/ddl.py", line 209, in compile_and_apply_ddl_stmt
block, new_types, config_ops = _process_delta(ctx, delta)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/server/compiler/ddl.py", line 345, in _process_delta
pgdelta.generate(block)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 7210, in generate
op.generate(block)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 211, in generate
op.generate(block)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 211, in generate
op.generate(block)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/delta.py", line 211, in generate
op.generate(block)
[Previous line repeated 1 more time]
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 296, in generate
self_block = self.generate_self_block(block)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 335, in generate_self_block
cmd.generate(self_block)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 296, in generate
self_block = self.generate_self_block(block)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/dbops/base.py", line 335, in generate_self_block
cmd.generate(self_block)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 550, in generate
self.create_constraint(self._constraint)
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 489, in create_constraint
cr_trigger = self.create_constr_trigger(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 410, in create_constr_trigger
ins_trigger, upd_trigger = self._get_triggers(
^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/deltadbops.py", line 389, in _get_triggers
ins_trigger_name = common.edgedb_name_to_pg_name(cname + '_instrigger')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/common.py", line 195, in edgedb_name_to_pg_name
return _edgedb_name_to_pg_name(name, prefix_length)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/edgedb-server-3/lib/python3.11/site-packages/edb/pgsql/common.py", line 168, in _edgedb_name_to_pg_name
hashlib.md5(name.encode()).digest()
^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for FIPS
```
| [
{
"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy... | [
{
"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy... | diff --git a/edb/pgsql/common.py b/edb/pgsql/common.py
index 8304c0f067f..ca4a0f11445 100644
--- a/edb/pgsql/common.py
+++ b/edb/pgsql/common.py
@@ -165,7 +165,7 @@ def _edgedb_name_to_pg_name(name: str, prefix_length: int = 0) -> str:
# md5 (and it doesn't matter which function is better cryptographically
# in this case.)
hashed = base64.b64encode(
- hashlib.md5(name.encode()).digest()
+ hashlib.md5(name.encode(), usedforsecurity=False).digest()
).decode().rstrip('=')
return (
|
abey79__vpype-103 | `read` should discard invisible geometries
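A sketch of the check the fix adds: while traversing `svgelements` groups, skip any element whose computed `visibility` is `hidden` or `collapse` (the plain dict here stands in for `elem.values`):

```python
def is_visible(values: dict) -> bool:
    """Return False for elements whose computed visibility hides them."""
    return values.get("visibility", "") not in ("hidden", "collapse")

def filter_visible(elements):
    # elements: iterable of (name, values) pairs standing in for SVG nodes
    return [name for name, values in elements if is_visible(values)]
```

Because `svgelements` computes cascaded values, a child that explicitly sets `visibility: visible` inside a hidden group is still kept, matching the test cases in the patch.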
| [
{
"content": "\"\"\"File import/export functions.\n\"\"\"\nimport copy\nimport datetime\nimport math\nimport re\nfrom typing import Iterator, List, Optional, TextIO, Tuple, Union\nfrom xml.etree import ElementTree\n\nimport click\nimport numpy as np\nimport svgelements\nimport svgwrite\nfrom multiprocess import... | [
{
"content": "\"\"\"File import/export functions.\n\"\"\"\nimport copy\nimport datetime\nimport math\nimport re\nfrom typing import Iterator, List, Optional, TextIO, Tuple, Union\nfrom xml.etree import ElementTree\n\nimport click\nimport numpy as np\nimport svgelements\nimport svgwrite\nfrom multiprocess import... | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9bd3bb65..56da2e7b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,6 @@
#### 1.1.0 (UNRELEASED)
+* Invisible SVG elements are now discarded (#103)
* Fixed `write` to cap SVG width and height to a minimum of 1px (#102)
* Fixed grouping of `stat` command in `vpype --help`
* Bump svgelements from 1.3.2 to 1.3.4 (#101)
diff --git a/tests/test_files.py b/tests/test_files.py
index 5b2d734b..ec5e0bc0 100644
--- a/tests/test_files.py
+++ b/tests/test_files.py
@@ -58,3 +58,47 @@ def test_write_is_idempotent(runner, path, tmp_path):
for line in difflib.unified_diff(txt1.split("\n"), txt2.split("\n"), lineterm=""):
print(line)
assert False
+
+
+@pytest.mark.parametrize(
+ ("svg_content", "line_count"),
+ [
+ ('<circle cx="500" cy="500" r="40"/>', 1),
+ ('<circle cx="500" cy="500" r="40" style="visibility:collapse"/>', 0),
+ ('<circle cx="500" cy="500" r="40" style="visibility:hidden"/>', 0),
+ ('<circle cx="500" cy="500" r="40" style="display:none"/>', 0),
+ ('<g style="visibility: hidden"><circle cx="500" cy="500" r="40"/></g>', 0),
+ ('<g style="visibility: collapse"><circle cx="500" cy="500" r="40"/></g>', 0),
+ (
+ """<g style="visibility: collapse">
+ <circle cx="500" cy="500" r="40" style="visibility:visible" />
+ </g>""",
+ 1,
+ ),
+ (
+ """<g style="visibility: hidden">
+ <circle cx="500" cy="500" r="40" style="visibility:visible" />
+ </g>""",
+ 1,
+ ),
+ (
+ """<g style="display: none">
+ <circle cx="500" cy="500" r="40" style="visibility:visible" />
+ </g>""",
+ 0,
+ ),
+ ],
+)
+def test_read_svg_visibility(svg_content, line_count, tmp_path):
+ svg = f"""<?xml version="1.0"?>
+<svg xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg"
+ width="1000" height="1000">
+ {svg_content}
+</svg>
+"""
+ path = str(tmp_path / "file.svg")
+ with open(path, "w") as fp:
+ fp.write(svg)
+
+ lc, _, _ = vp.read_svg(path, 1.0)
+ assert len(lc) == line_count
diff --git a/vpype/io.py b/vpype/io.py
index b79c9819..8d87a7e9 100644
--- a/vpype/io.py
+++ b/vpype/io.py
@@ -174,6 +174,9 @@ def _extract_paths(group: svgelements.Group, recursive) -> _PathListType:
everything = group
paths = []
for elem in everything:
+ if elem.values.get("visibility", "") in ("hidden", "collapse"):
+ continue
+
if isinstance(elem, svgelements.Path):
if len(elem) != 0:
paths.append(elem)
|
comic__grand-challenge.org-2531 | Markdown Editor jumps around too much
When editing in Markdown the Grand Challenge editor jumps around a lot, so the writer loses their place.
https://user-images.githubusercontent.com/12661555/173570208-c2567b82-bb78-441c-9286-2f70a8f66745.mp4
It would be better if the size of the text area was kept constant, and the content only resized when switching to the preview tab, like on GitHub:
https://user-images.githubusercontent.com/12661555/173570346-97f51fa9-fc23-49e8-b587-f656ff3cb7db.mp4
| [
{
"content": "import os\nimport re\nfrom datetime import datetime, timedelta\nfrom itertools import product\n\nimport sentry_sdk\nfrom disposable_email_domains import blocklist\nfrom django.contrib.messages import constants as messages\nfrom django.urls import reverse\nfrom machina import MACHINA_MAIN_STATIC_DI... | [
{
"content": "import os\nimport re\nfrom datetime import datetime, timedelta\nfrom itertools import product\n\nimport sentry_sdk\nfrom disposable_email_domains import blocklist\nfrom django.contrib.messages import constants as messages\nfrom django.urls import reverse\nfrom machina import MACHINA_MAIN_STATIC_DI... | diff --git a/app/config/settings.py b/app/config/settings.py
index c357e8c199..37adb9f2b8 100644
--- a/app/config/settings.py
+++ b/app/config/settings.py
@@ -716,6 +716,7 @@
)
MARKDOWNX_MARKDOWN_EXTENSION_CONFIGS = {}
MARKDOWNX_IMAGE_MAX_SIZE = {"size": (2000, 0), "quality": 90}
+MARKDOWNX_EDITOR_RESIZABLE = "False"
HAYSTACK_CONNECTIONS = {
"default": {"ENGINE": "haystack.backends.simple_backend.SimpleEngine"}
diff --git a/app/grandchallenge/core/static/js/markdownx.js b/app/grandchallenge/core/static/js/markdownx.js
index ea407db2c8..b9a60b6ff2 100644
--- a/app/grandchallenge/core/static/js/markdownx.js
+++ b/app/grandchallenge/core/static/js/markdownx.js
@@ -260,7 +260,7 @@
}));
xhr.success = function(response) {
properties.preview.innerHTML = response;
- properties.editor = updateHeight(properties.editor);
+ properties.editor = properties._editorIsResizable ? (properties.editor) : properties.editor;
utils_1.triggerCustomEvent("markdownx.update", properties.parent, [ response ]);
};
xhr.error = function(response) {
|
weni-ai__bothub-engine-199 | Ghost Intent
Reported by @IlhasoftPeter in https://github.com/Ilhasoft/bothub/issues/26
| [
{
"content": "import uuid\nimport base64\nimport requests\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfrom django.core.mail import sen... | [
{
"content": "import uuid\nimport base64\nimport requests\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom django.core.validators import RegexValidator, _lazy_re_compile\nfrom django.core.mail import sen... | diff --git a/bothub/common/models.py b/bothub/common/models.py
index c23c9bdf..641b2ecb 100644
--- a/bothub/common/models.py
+++ b/bothub/common/models.py
@@ -190,7 +190,7 @@ def votes_sum(self):
@property
def intents(self):
return list(set(self.examples(
- exclude_deleted=False).exclude(
+ exclude_deleted=True).exclude(
intent='').values_list(
'intent',
flat=True)))
diff --git a/bothub/common/tests.py b/bothub/common/tests.py
index 63cb95d4..70d9348c 100644
--- a/bothub/common/tests.py
+++ b/bothub/common/tests.py
@@ -292,7 +292,7 @@ def test_intents(self):
'greet',
self.repository.intents)
- RepositoryExample.objects.create(
+ example = RepositoryExample.objects.create(
repository_update=self.repository.current_update(
languages.LANGUAGE_PT),
text='tchau',
@@ -305,6 +305,12 @@ def test_intents(self):
'bye',
self.repository.intents)
+ example.delete()
+
+ self.assertNotIn(
+ 'bye',
+ self.repository.intents)
+
def test_entities(self):
example = RepositoryExample.objects.create(
repository_update=self.repository.current_update(
|
archlinux__archinstall-184 | gnome-extra provides WAY too much bloatware
I can't imagine most people wanting all the packages this installs on a new installation. Most of these applications are things like games and advanced tools like dconf-editor that your average user should not be touching. Some of them are nice to have but can be installed later manually instead of during initial installation.
| [
{
"content": "import archinstall\n\ninstallation.add_additional_packages(\"gnome gnome-extra gdm\") # We'll create a gnome-minimal later, but for now, we'll avoid issues by giving more than we need.\n# Note: gdm should be part of the gnome group, but adding it here for clarity",
"path": "profiles/applicatio... | [
{
"content": "import archinstall\n\ninstallation.add_additional_packages(\"gnome gnome-tweaks gnome-todo gnome-sound-recorder evolution gdm\")\n# Note: gdm should be part of the gnome group, but adding it here for clarity\n",
"path": "profiles/applications/gnome.py"
}
] | diff --git a/profiles/applications/gnome.py b/profiles/applications/gnome.py
index 1f2a20a109..e9fd1d50dd 100644
--- a/profiles/applications/gnome.py
+++ b/profiles/applications/gnome.py
@@ -1,4 +1,4 @@
import archinstall
-installation.add_additional_packages("gnome gnome-extra gdm") # We'll create a gnome-minimal later, but for now, we'll avoid issues by giving more than we need.
-# Note: gdm should be part of the gnome group, but adding it here for clarity
\ No newline at end of file
+installation.add_additional_packages("gnome gnome-tweaks gnome-todo gnome-sound-recorder evolution gdm")
+# Note: gdm should be part of the gnome group, but adding it here for clarity
|
pyinstaller__pyinstaller-4360 | Windows: Cannot bundle with debug if pkg_resources is a dependency
This issue happens when I try to bundle my project, in the Analysis.assemble phase, and only when debug is enabled. PyInstaller tries to compile a module whose path points inside an executable (pyinstaller.exe in this case), which fails because the module file cannot be read.
This is with Windows 10, Python 3.6.6 (official from python.org) and PyInstaller 3.5.dev0+51429f8fc (which should be the latest develop version as of today).
Here is the traceback:
```
Traceback (most recent call last):
File "c:\python36-32\Lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\python36-32\Lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\RMYROY~1\VIRTUA~1\CDDA-G~3\Scripts\pyinstaller.exe\__main__.py", line 9, in <module>
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\__main__.py", line 111, in run
run_build(pyi_config, spec_file, **vars(args))
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\__main__.py", line 63, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 846, in main
build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 793, in build
exec(code, spec_namespace)
File "launcher.spec", line 17, in <module>
noarchive=True)
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 243, in __init__
self.__postinit__()
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\datastruct.py", line 158, in __postinit__
self.assemble()
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\building\build_main.py", line 599, in assemble
for name, path, typecode in compile_py_files(new_toc, CONF['workpath']):
File "c:\users\rmyroy~1\virtua~1\cdda-g~3\lib\site-packages\PyInstaller\utils\misc.py", line 150, in compile_py_files
with open(obj_fnm, 'rb') as fh:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\RMYROY~1\\VIRTUA~1\\CDDA-G~3\\Scripts\\pyinstaller.exe\\__main__.pyo'
```
For some reason, the following entry is added to Analysis.pure:
```python
('__main__.pyc', 'C:\\Users\\RMYROY~1\\VIRTUA~1\\CDDA-G~3\\Scripts\\pyinstaller.exe\\__main__.py', 'PYMODULE')
```
**That entry is incorrect: it shouldn't have been added to pure in the first place, or at least it shouldn't be compiled in assemble, which is the source of this issue.**
Here is my spec file:
```python
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['cddagl\\launcher.py'],
pathex=['C:\\Program Files (x86)\\Windows Kits\\10\\Redist\\ucrt\\DLLs\\x86\\', 'C:\\Users\\Rémy Roy\\Projects\\CDDA-Game-Launcher'],
binaries=[],
datas=[('alembic', 'alembic'), ('data', 'data'), ('cddagl/resources', 'cddagl/resources'), ('cddagl/VERSION', 'cddagl'), ('C:\\Users\\Rémy Roy\\VirtualEnvs\\CDDA-Game-Launcher\\Scripts\\UnRAR.exe', '.'), ('cddagl/locale/en/LC_MESSAGES/cddagl.mo', 'cddagl/locale/en/LC_MESSAGES'), ('cddagl/locale/fr/LC_MESSAGES/cddagl.mo', 'cddagl/locale/fr/LC_MESSAGES'), ('cddagl/locale/it/LC_MESSAGES/cddagl.mo', 'cddagl/locale/it/LC_MESSAGES'), ('cddagl/locale/ja/LC_MESSAGES/cddagl.mo', 'cddagl/locale/ja/LC_MESSAGES'), ('cddagl/locale/ru/LC_MESSAGES/cddagl.mo', 'cddagl/locale/ru/LC_MESSAGES')],
hiddenimports=['lxml.cssselect', 'babel.numbers'],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=True)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
[('v', None, 'OPTION')],
exclude_binaries=True,
name='launcher',
debug=True,
bootloader_ignore_signals=False,
strip=False,
upx=False,
console=True , icon='cddagl\\resources\\launcher.ico')
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=False,
upx_exclude=[],
name='launcher')
```
You can probably reproduce this issue easily by cloning [my project](https://github.com/remyroy/CDDA-Game-Launcher) and issuing the following command:
```
python setup.py freeze --debug=1
```
Here is the full pyinstaller log output: https://gist.github.com/remyroy/37f7f0a912d5d714a947cddfb78769d4
I'll investigate how that entry is added in Analysis to give more context to this issue.
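The eventual fix excludes `__main__` from the pkg_resources hook, so the `__main__` that resolves to the pyinstaller.exe entry-point script is never pulled into the analysis. A sketch of the hook file (a config fragment; it assumes a PyInstaller installation providing the real `collect_submodules` helper):

```python
# hook-pkg_resources.py (sketch of the fix; requires PyInstaller installed)
from PyInstaller.utils.hooks import collect_submodules

# pkg_resources keeps vendored modules in its _vendor subpackage and
# re-exports them via sys.meta_path magic, so collect them explicitly.
hiddenimports = collect_submodules('pkg_resources._vendor')

# Prevent pkg_resources' reference to __main__ from dragging in the
# bootstrap script located inside the pyinstaller.exe launcher.
excludedimports = ['__main__']
```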
| [
{
"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.... | [
{
"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.... | diff --git a/PyInstaller/hooks/hook-pkg_resources.py b/PyInstaller/hooks/hook-pkg_resources.py
index 04c588ab79..0f758bd42d 100644
--- a/PyInstaller/hooks/hook-pkg_resources.py
+++ b/PyInstaller/hooks/hook-pkg_resources.py
@@ -11,3 +11,5 @@
# pkg_resources keeps vendored modules in its _vendor subpackage, and does
# sys.meta_path based import magic to expose them as pkg_resources.extern.*
hiddenimports = collect_submodules('pkg_resources._vendor')
+
+excludedimports = ['__main__']
diff --git a/news/4263.hooks.rst b/news/4263.hooks.rst
new file mode 100644
index 0000000000..0d8d13c94c
--- /dev/null
+++ b/news/4263.hooks.rst
@@ -0,0 +1 @@
+Exclude imports for pkg_resources to fix bundling issue.
diff --git a/news/4360.hooks.rst b/news/4360.hooks.rst
new file mode 100644
index 0000000000..0d8d13c94c
--- /dev/null
+++ b/news/4360.hooks.rst
@@ -0,0 +1 @@
+Exclude imports for pkg_resources to fix bundling issue.
|
conan-io__conan-5334 | tools.patch cant create files
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Hello,
I am trying to apply some out-of-tree git patches; some of them create new files.
For patches that create new files, the tools.patch utility complains that absolute file paths are not allowed (`/dev/null`).
Lacking this support, it's cumbersome to apply bugfixes and git commits in general.
Example patch (created with `git format-patch`):
```patch
From d0807313143bb35da65c2b858a2d9e17fd3fbf9e Mon Sep 17 00:00:00 2001
From: Norbert Lange <nolange79@gmail.com>
Date: Fri, 7 Jun 2019 21:49:19 +0200
Subject: [PATCH] add and remove file
---
newfile | 1 +
oldfile | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
create mode 100644 newfile
delete mode 100644 oldfile
diff --git a/newfile b/newfile
new file mode 100644
index 0000000..fdedddf
--- /dev/null
+++ b/newfile
@@ -0,0 +1 @@
+Hello mean world
diff --git a/oldfile b/oldfile
deleted file mode 100644
index 32332e1..0000000
--- a/oldfile
+++ /dev/null
@@ -1 +0,0 @@
-Old litter
--
2.20.1
```
My environment is:
```
Debian Buster x64
Conan version 1.16.0
```
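The fix makes the path-stripping step leave single-component paths alone, so the `null` left over from `/dev/null` (or a bare new-file name) survives stripping. A self-contained sketch of the corrected helper (simplified from `strip_path` in `conans/client/tools/files.py`):

```python
import os

def strip_path(path, strip, base_path=""):
    """Drop the first `strip` components, but never strip a path that has
    only one component (e.g. what remains of /dev/null after parsing)."""
    tokens = path.split("/")
    if len(tokens) > 1:          # the fix: guard single-component paths
        tokens = tokens[strip:]
    path = "/".join(tokens)
    if base_path:
        path = os.path.join(base_path, path)
    return path
```

Without the length guard, `strip=1` applied to a one-component path produced an empty string, which is why creating or deleting files via `/dev/null` hunks failed.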
| [
{
"content": "import logging\nimport os\nimport platform\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\nfrom patch import fromfile, fromstring\n\nfrom conans.client.output import ConanOutput\nfrom conans.errors import ConanException\nfrom conans.unicode import get... | [
{
"content": "import logging\nimport os\nimport platform\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\nfrom patch import fromfile, fromstring\n\nfrom conans.client.output import ConanOutput\nfrom conans.errors import ConanException\nfrom conans.unicode import get... | diff --git a/conans/client/tools/files.py b/conans/client/tools/files.py
index 435afb3e168..fa000b65e20 100644
--- a/conans/client/tools/files.py
+++ b/conans/client/tools/files.py
@@ -208,7 +208,9 @@ def decode_clean(path, prefix):
return path
def strip_path(path):
- tokens = path.split("/")[strip:]
+ tokens = path.split("/")
+ if len(tokens) > 1:
+ tokens = tokens[strip:]
path = "/".join(tokens)
if base_path:
path = os.path.join(base_path, path)
diff --git a/conans/test/unittests/tools/files_patch_test.py b/conans/test/unittests/tools/files_patch_test.py
index 6ef2cfc68f5..7fba8c64d82 100644
--- a/conans/test/unittests/tools/files_patch_test.py
+++ b/conans/test/unittests/tools/files_patch_test.py
@@ -105,6 +105,26 @@ def source(self):
client.run("source .")
self.assertFalse(os.path.exists(path))
+ def test_patch_strip_delete_no_folder(self):
+ conanfile = dedent("""
+ from conans import ConanFile, tools
+ class PatchConan(ConanFile):
+ def source(self):
+ tools.patch(self.source_folder, "example.patch", strip=1)""")
+ patch = dedent("""
+ --- a/oldfile
+ +++ b/dev/null
+ @@ -0,1 +0,0 @@
+ -legacy code""")
+ client = TestClient()
+ client.save({"conanfile.py": conanfile,
+ "example.patch": patch,
+ "oldfile": "legacy code"})
+ path = os.path.join(client.current_folder, "oldfile")
+ self.assertTrue(os.path.exists(path))
+ client.run("source .")
+ self.assertFalse(os.path.exists(path))
+
def test_patch_new_delete(self):
conanfile = base_conanfile + '''
def build(self):
@@ -133,6 +153,26 @@ def build(self):
client.out)
self.assertIn("test/1.9.10@user/testing: OLD FILE=False", client.out)
+ def test_patch_new_strip(self):
+ conanfile = base_conanfile + '''
+ def build(self):
+ from conans.tools import load, save
+ patch_content = """--- /dev/null
++++ b/newfile
+@@ -0,0 +0,3 @@
++New file!
++New file!
++New file!
+"""
+ patch(patch_string=patch_content, strip=1)
+ self.output.info("NEW FILE=%s" % load("newfile"))
+'''
+ client = TestClient()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . user/testing")
+ self.assertIn("test/1.9.10@user/testing: NEW FILE=New file!\nNew file!\nNew file!\n",
+ client.out)
+
def test_error_patch(self):
file_content = base_conanfile + '''
def build(self):
|
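The essence of the fix in the diff above can be reproduced standalone. The sketch below is a reconstruction of the `strip_path` helper from the patch hunk alone (the surrounding `base_path` handling is assumed, not copied from the full upstream source): components are stripped only when the path actually has more than one token, so a bare filename is no longer reduced to an empty string.

```python
import os


def strip_path(path, strip=0, base_path=None):
    # Mirror of the patched strip_path helper: only apply the strip
    # when the path has directory components, so a single-token path
    # (e.g. a bare filename from a create/delete hunk) survives.
    tokens = path.split("/")
    if len(tokens) > 1:
        tokens = tokens[strip:]
    path = "/".join(tokens)
    if base_path:
        path = os.path.join(base_path, path)
    return path


# With strip=1 the leading "a/"/"b/" prefix from git diffs is removed,
# while a single-component path is left intact instead of becoming "".
print(strip_path("a/newfile", strip=1))  # newfile
print(strip_path("newfile", strip=1))    # newfile
```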
kivy__kivy-2523 | FileChooser Icon view is not scrolled to the top after opening a dir
When the FileChooser icon view is selected and the view is scrolled down before opening a directory with many files and folders, the ScrollView is not reset and scrolled back to the top, as one would expect.
| [
{
"content": "'''\nFileChooser\n===========\n\n.. versionadded:: 1.0.5\n\n\n.. versionchanged:: 1.2.0\n In the chooser template, the `controller` is not a direct reference anymore\n but a weak-reference.\n You must update all the notation `root.controller.xxx` to\n `root.controller().xxx`.\n\nSimple... | [
{
"content": "'''\nFileChooser\n===========\n\n.. versionadded:: 1.0.5\n\n\n.. versionchanged:: 1.2.0\n In the chooser template, the `controller` is not a direct reference anymore\n but a weak-reference.\n You must update all the notation `root.controller.xxx` to\n `root.controller().xxx`.\n\nSimple... | diff --git a/kivy/uix/filechooser.py b/kivy/uix/filechooser.py
index 7242f14533..1e86dd5414 100644
--- a/kivy/uix/filechooser.py
+++ b/kivy/uix/filechooser.py
@@ -717,6 +717,13 @@ class FileChooserIconView(FileChooserController):
'''
_ENTRY_TEMPLATE = 'FileIconEntry'
+ def __init__(self, **kwargs):
+ super(FileChooserIconView, self).__init__(**kwargs)
+ self.bind(on_entries_cleared=self.scroll_to_top)
+
+ def scroll_to_top(self, *args):
+ self.ids.scrollview.scroll_y = 1.0
+
if __name__ == '__main__':
from kivy.app import App
|
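The fix binds the widget's `on_entries_cleared` event to a handler that resets the scroll position. Kivy isn't needed to see the shape of that pattern; the sketch below uses invented stand-in classes with a minimal bind/dispatch scheme, not Kivy's actual `EventDispatcher` API.

```python
class ScrollView:
    # Stand-in for Kivy's ScrollView: scroll_y == 1.0 means "at the top".
    def __init__(self):
        self.scroll_y = 1.0


class IconView:
    # Stand-in for FileChooserIconView with a tiny bind/dispatch scheme.
    def __init__(self):
        self.scrollview = ScrollView()
        self._handlers = {"on_entries_cleared": []}

    def bind(self, **kwargs):
        for event, handler in kwargs.items():
            self._handlers[event].append(handler)

    def dispatch(self, event):
        for handler in self._handlers[event]:
            handler(self)

    def open_directory(self):
        # Opening a directory clears the old entries, firing the event.
        self.dispatch("on_entries_cleared")


view = IconView()
view.scrollview.scroll_y = 0.2  # the user scrolled down
view.bind(on_entries_cleared=lambda v: setattr(v.scrollview, "scroll_y", 1.0))
view.open_directory()
print(view.scrollview.scroll_y)  # 1.0
```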
mosaicml__composer-496 | Move `ComposerTrainer` to top-level imports
Our most heavily used objects should be easily importable from `composer` via:
```
from composer import Trainer, ComposerModel
```
rather than remember their submodule:
```
from composer.models import ComposerModel
```
Especially for the last one, it's tricky to remember whether it's `models` or `model`.
| [
{
"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as op... | [
{
"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as op... | diff --git a/composer/__init__.py b/composer/__init__.py
index ee6694915f..3e0c87c78c 100644
--- a/composer/__init__.py
+++ b/composer/__init__.py
@@ -20,6 +20,7 @@
from composer.core import Timer as Timer
from composer.core import TimeUnit as TimeUnit
from composer.core import types as types
+from composer.models import ComposerModel as ComposerModel
from composer.trainer import Trainer as Trainer
__version__ = "0.3.1"
|
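The diff uses the redundant-looking `import X as X` form, which type checkers treat as an explicit re-export (a plain `from m import X` in an `__init__.py` can be flagged as a private import under strict settings). A throwaway package built at runtime shows both spellings resolving to the same object; `minipkg` is a hypothetical name used only for illustration.

```python
import os
import sys
import tempfile

# Build a throwaway package mirroring the pattern: the subpackage
# defines ComposerModel, and the top-level __init__ re-exports it.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "minipkg")
os.makedirs(os.path.join(pkg, "models"))
with open(os.path.join(pkg, "models", "__init__.py"), "w") as f:
    f.write("class ComposerModel:\n    pass\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from minipkg.models import ComposerModel as ComposerModel\n")

sys.path.insert(0, root)
import minipkg

# Both import paths now resolve to the same class.
print(minipkg.ComposerModel is minipkg.models.ComposerModel)  # True
```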
ethereum__web3.py-1107 | Backport 1094 to v4 branch
### What was wrong?
https://github.com/ethereum/web3.py/issues/1094#issuecomment-428259232 needs to be backported to the v4 branch.
| [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.33\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'lint... | [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.33\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'lint... | diff --git a/setup.py b/setup.py
index e5ba56e5ff..e9c00c9986 100644
--- a/setup.py
+++ b/setup.py
@@ -80,7 +80,7 @@
"pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
- python_requires='>=3.5, <4',
+ python_requires='>=3.5.3,<4',
extras_require=extras_require,
py_modules=['web3', 'ens'],
license="MIT",
|
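The small-looking change (`>=3.5, <4` to `>=3.5.3,<4`) matters because the old floor admits 3.5.1 and 3.5.2, which predate typing-module features the library relies on. As a rough illustration (real tools parse PEP 440 specifiers; this toy comparison of dotted integer versions is an assumption, not pip's algorithm):

```python
def satisfies(version, lower, upper):
    # Minimal stand-in for a python_requires check of the form
    # ">=lower,<upper", comparing dotted versions as integer tuples.
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(lower) <= as_tuple(version) < as_tuple(upper)


# The old spec ">=3.5,<4" admits 3.5.2; ">=3.5.3,<4" rules it out.
print(satisfies("3.5.2", "3.5", "4"))    # True
print(satisfies("3.5.2", "3.5.3", "4"))  # False
print(satisfies("3.6.0", "3.5.3", "4"))  # True
```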
django-cms__django-filer-491 | CircularDependencyError when using custom Image model
093d07357ee13d4ea830db136ef037180824ddae added a migration dependency to the swappable `Image` model.
But a custom `Image` model inherits from `File`, so `filer.0001_initial` needs to be applied before the custom `Image` model. This, of course, leads to a `CircularDependencyError`.
The solution is to remove that dependency. No django-filer model depends on `Image`, so it can be removed safely.
| [
{
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport filer.fields.multistorage_file\nimport filer.models.mixins\nfrom filer.settings import FILER_IMAGE_MODEL\nfrom django.conf import settings\n\n\nclass Migration(migrations.Migration)... | [
{
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport filer.fields.multistorage_file\nimport filer.models.mixins\nfrom filer.settings import FILER_IMAGE_MODEL\nfrom django.conf import settings\n\n\nclass Migration(migrations.Migration)... | diff --git a/filer/migrations_django/0001_initial.py b/filer/migrations_django/0001_initial.py
index c35e9ff8e..b8d70adf5 100644
--- a/filer/migrations_django/0001_initial.py
+++ b/filer/migrations_django/0001_initial.py
@@ -13,7 +13,6 @@ class Migration(migrations.Migration):
dependencies = [
('auth', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
- migrations.swappable_dependency(FILER_IMAGE_MODEL or 'filer.models.imagemodels.Image'),
('contenttypes', '0001_initial'),
]
|
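Django detects this situation while topologically sorting the migration graph. A minimal sketch, assuming hypothetical migration names and a simplified Kahn's-algorithm ordering rather than Django's actual resolver: the broken graph (filer depends on the swappable image app, which in turn depends on filer) has no valid order, while dropping the dependency makes the sort succeed.

```python
from collections import deque


class CircularDependencyError(Exception):
    pass


def migration_order(dependencies):
    # dependencies: migration -> list of migrations that must run first.
    # Kahn's algorithm; leftover nodes mean a dependency cycle.
    dependents = {m: [] for m in dependencies}
    indegree = {m: len(deps) for m, deps in dependencies.items()}
    for m, deps in dependencies.items():
        for dep in deps:
            dependents[dep].append(m)
    ready = deque(m for m, d in indegree.items() if d == 0)
    order = []
    while ready:
        m = ready.popleft()
        order.append(m)
        for dependent in dependents[m]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(dependencies):
        raise CircularDependencyError(sorted(set(dependencies) - set(order)))
    return order


broken = {
    "filer.0001_initial": ["custom_image.0001_initial"],
    "custom_image.0001_initial": ["filer.0001_initial"],
}
fixed = {
    "filer.0001_initial": [],
    "custom_image.0001_initial": ["filer.0001_initial"],
}
print(migration_order(fixed))
try:
    migration_order(broken)
except CircularDependencyError as err:
    print("cycle:", err)
```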
optuna__optuna-4964 | Use `__future__.annotations` everywhere in the Optuna code base
### Motivation
Optuna drops Python 3.6 from v3.1, so we can use `__future__.annotations`, which simplifies the code base. See [PEP 563](https://peps.python.org/pep-0563/), [PEP 584](https://peps.python.org/pep-0584/), [PEP 585](https://peps.python.org/pep-0585/), and [PEP 604](https://peps.python.org/pep-0604/) for more details. This issue suggests using the module to simplify the code base.
### Suggestion
Use `__future__.annotations` in each file and simplify the type annotations. The list of classes whose type annotations can be simplified is [here](https://peps.python.org/pep-0585/#implementation). The list of files where `__future__.annotations` can be used is as follows. In order to reduce review costs and to encourage more contributors to work on it, please, as a rule, fix one file per PR.
- [x] optuna/_convert_positional_args.py
- [x] optuna/visualization/_optimization_history.py
- [x] optuna/visualization/_hypervolume_history.py
- [x] optuna/visualization/_edf.py
- [x] optuna/visualization/_pareto_front.py
- [x] optuna/visualization/matplotlib/_optimization_history.py
- [x] optuna/visualization/matplotlib/_hypervolume_history.py
- [x] optuna/visualization/matplotlib/_edf.py
- [x] optuna/visualization/matplotlib/_pareto_front.py
- [x] optuna/visualization/matplotlib/_contour.py
- [x] optuna/visualization/_utils.py
- [x] optuna/logging.py
- [ ] optuna/storages/_base.py
- [ ] optuna/storages/_cached_storage.py
- [ ] optuna/storages/__init__.py
- [ ] optuna/storages/_heartbeat.py
- [ ] optuna/storages/_in_memory.py
- [ ] optuna/storages/_rdb/models.py
- [ ] optuna/storages/_rdb/storage.py
- [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.c.py
- [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.d.py
- [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.a.py
- [ ] optuna/storages/_journal/file.py
- [ ] optuna/storages/_journal/redis.py
- [ ] optuna/storages/_journal/storage.py
- [ ] optuna/storages/_journal/base.py
- [ ] optuna/study/_dataframe.py
- [ ] optuna/study/_optimize.py
- [ ] optuna/study/_tell.py
- [ ] optuna/study/_multi_objective.py
- [ ] optuna/study/_frozen.py
- [ ] optuna/study/study.py
- [ ] optuna/study/_study_summary.py
- [ ] optuna/search_space/group_decomposed.py
- [ ] optuna/search_space/intersection.py
- [ ] optuna/_typing.py
- [ ] optuna/_deprecated.py
- [ ] optuna/pruners/_hyperband.py
- [ ] optuna/pruners/_patient.py
- [ ] optuna/pruners/_successive_halving.py
- [ ] optuna/pruners/_percentile.py
- [ ] optuna/pruners/_threshold.py
- [ ] optuna/trial/_base.py
- [ ] optuna/trial/_fixed.py
- [ ] optuna/trial/_trial.py
- [ ] optuna/trial/_frozen.py
- [ ] optuna/integration/cma.py
- [ ] optuna/integration/shap.py
- [ ] optuna/integration/lightgbm.py
- [ ] optuna/integration/pytorch_distributed.py
- [ ] optuna/integration/_lightgbm_tuner/optimize.py
- [ ] optuna/integration/_lightgbm_tuner/alias.py
- [ ] optuna/integration/mlflow.py
- [ ] optuna/integration/wandb.py
- [ ] optuna/integration/catboost.py
- [ ] optuna/integration/skopt.py
- [ ] optuna/integration/botorch.py
- [ ] optuna/integration/dask.py
- [x] optuna/integration/sklearn.py
- [ ] optuna/integration/tensorboard.py
- [ ] optuna/terminator/callback.py
- [ ] optuna/terminator/terminator.py
- [ ] optuna/terminator/improvement/_preprocessing.py
- [ ] optuna/terminator/improvement/gp/botorch.py
- [ ] optuna/terminator/improvement/gp/base.py
- [ ] optuna/terminator/improvement/evaluator.py
- [ ] optuna/importance/_base.py
- [ ] optuna/importance/_mean_decrease_impurity.py
- [ ] optuna/importance/__init__.py
- [ ] optuna/importance/_fanova/_fanova.py
- [ ] optuna/importance/_fanova/_evaluator.py
- [ ] optuna/importance/_fanova/_tree.py
- [ ] optuna/_imports.py
- [ ] optuna/testing/tempfile_pool.py
- [ ] optuna/testing/threading.py
- [ ] optuna/testing/distributions.py
- [ ] optuna/testing/samplers.py
- [ ] optuna/testing/storages.py
- [ ] optuna/distributions.py
- [ ] optuna/cli.py
- [ ] optuna/multi_objective/visualization/_pareto_front.py
- [ ] optuna/multi_objective/trial.py
- [ ] optuna/multi_objective/samplers/_base.py
- [ ] optuna/multi_objective/samplers/_nsga2.py
- [ ] optuna/multi_objective/samplers/_adapter.py
- [ ] optuna/multi_objective/samplers/_random.py
- [ ] optuna/multi_objective/samplers/_motpe.py
- [ ] optuna/multi_objective/study.py
- [ ] optuna/_experimental.py
- [ ] optuna/samplers/_base.py
- [ ] optuna/samplers/nsgaii/_crossovers/_undx.py
- [ ] optuna/samplers/nsgaii/_crossovers/_spx.py
- [ ] optuna/samplers/nsgaii/_crossovers/_sbx.py
- [ ] optuna/samplers/nsgaii/_crossovers/_vsbx.py
- [ ] optuna/samplers/nsgaii/_sampler.py
- [ ] optuna/samplers/nsgaii/_crossover.py
- [ ] optuna/samplers/_search_space/intersection.py
- [ ] optuna/samplers/_qmc.py
- [ ] optuna/samplers/_tpe/probability_distributions.py
- [ ] optuna/samplers/_tpe/_truncnorm.py
- [ ] optuna/samplers/_tpe/multi_objective_sampler.py
- [ ] optuna/samplers/_tpe/parzen_estimator.py
- [ ] optuna/samplers/_tpe/sampler.py
- [ ] optuna/samplers/_random.py
- [ ] optuna/samplers/_cmaes.py
- [ ] optuna/samplers/_partial_fixed.py
- [ ] optuna/samplers/_brute_force.py
- [ ] optuna/samplers/_nsgaiii.py
- [ ] optuna/samplers/_grid.py
- [ ] optuna/_hypervolume/wfg.py
- [ ] optuna/_hypervolume/hssp.py
- [ ] optuna/progress_bar.py
- [ ] optuna/_transform.py
- [ ] optuna/_callbacks.py
- [ ] tests/multi_objective_tests/test_study.py
- [ ] tests/multi_objective_tests/samplers_tests/test_motpe.py
- [ ] tests/multi_objective_tests/samplers_tests/test_nsga2.py
- [ ] tests/multi_objective_tests/test_trial.py
- [ ] tests/multi_objective_tests/visualization_tests/test_pareto_front.py
- [ ] tests/trial_tests/test_frozen.py
- [ ] tests/trial_tests/test_trials.py
- [ ] tests/trial_tests/test_trial.py
- [ ] tests/pruners_tests/test_percentile.py
- [ ] tests/pruners_tests/test_median.py
- [ ] tests/pruners_tests/test_patient.py
- [ ] tests/pruners_tests/test_successive_halving.py
- [ ] tests/study_tests/test_optimize.py
- [ ] tests/study_tests/test_study.py
- [ ] tests/hypervolume_tests/test_hssp.py
- [x] tests/integration_tests/test_skopt.py
- [x] tests/integration_tests/test_pytorch_lightning.py
- [ ] tests/integration_tests/test_shap.py
- [ ] tests/integration_tests/test_cma.py
- [ ] tests/integration_tests/test_pytorch_distributed.py
- [ ] tests/integration_tests/lightgbm_tuner_tests/test_optimize.py
- [ ] tests/integration_tests/lightgbm_tuner_tests/test_alias.py
- [ ] tests/integration_tests/test_botorch.py
- [ ] tests/integration_tests/test_mlflow.py
- [ ] tests/integration_tests/test_mxnet.py
- [ ] tests/integration_tests/test_wandb.py
- [ ] tests/importance_tests/fanova_tests/test_tree.py
- [ ] tests/importance_tests/test_mean_decrease_impurity.py
- [ ] tests/importance_tests/test_fanova.py
- [ ] tests/importance_tests/test_init.py
- [ ] tests/test_convert_positional_args.py
- [ ] tests/test_deprecated.py
- [ ] tests/storages_tests/test_journal.py
- [ ] tests/storages_tests/test_heartbeat.py
- [ ] tests/storages_tests/test_storages.py
- [ ] tests/storages_tests/rdb_tests/test_storage.py
- [ ] tests/storages_tests/rdb_tests/create_db.py
- [ ] tests/storages_tests/test_with_server.py
- [ ] tests/samplers_tests/test_grid.py
- [ ] tests/samplers_tests/tpe_tests/test_parzen_estimator.py
- [ ] tests/samplers_tests/tpe_tests/test_multi_objective_sampler.py
- [ ] tests/samplers_tests/tpe_tests/test_sampler.py
- [ ] tests/samplers_tests/test_cmaes.py
- [ ] tests/samplers_tests/test_samplers.py
- [x] tests/samplers_tests/test_nsgaii.py
- [x] tests/samplers_tests/test_nsgaiii.py
- [ ] tests/samplers_tests/test_qmc.py
- [ ] tests/test_distributions.py
- [ ] tests/test_multi_objective.py
- [ ] tests/test_cli.py
- [ ] tests/visualization_tests/test_hypervolume_history.py
- [ ] tests/visualization_tests/test_pareto_front.py
- [ ] tests/terminator_tests/improvement_tests/test_evaluator.py
- [ ] benchmarks/kurobako/problems/wfg/transformation_functions.py
- [ ] benchmarks/bayesmark/report_bayesmark.py
- [ ] benchmarks/bayesmark/optuna_optimizer.py
### Additional context (optional)
The above list is generated by the following script.
<details>
<summary>script</summary>
```python
import os
import pathlib
PATTERS = [
    "from typing import Union",
    "from typing import Optional",
    "from typing import Tuple",
    "from typing import List",
    "from typing import Dict",
    "from typing import Set",
    "from typing import FrozenSet",
    "from typing import Type",
    "from typing import FrozenSet",
    "from typing import Sequence",
]

def get_filenames_to_be_simplified(dir_path):
    ret = []
    for f in os.listdir(dir_path):
        file_path = os.path.join(dir_path, f)
        if not os.path.isfile(file_path):
            ret.extend(get_filenames_to_be_simplified(file_path))
        else:
            try:
                with open(file_path) as fd:
                    contents = fd.read()
                    if any([s in contents for s in PATTERS]):
                        ret.append(str(file_path))
            except UnicodeDecodeError as e:
                pass
    return ret

def main():
    dirs = ["optuna", "tests", "benchmarks"]
    for dir_name in dirs:
        filenames = get_filenames_to_be_simplified(pathlib.Path(dir_name))
        for filename in filenames:
            print(f"- [ ] (unknown)")

if __name__ == "__main__":
    main()
```
</details>
| [
{
"content": "from __future__ import annotations\n\nfrom enum import Enum\nimport math\nfrom typing import Callable\nfrom typing import cast\nfrom typing import NamedTuple\nfrom typing import Sequence\n\nimport numpy as np\n\nfrom optuna.logging import get_logger\nfrom optuna.samplers._base import _CONSTRAINTS_... | [
{
"content": "from __future__ import annotations\n\nfrom collections.abc import Callable\nfrom collections.abc import Sequence\nfrom enum import Enum\nimport math\nfrom typing import cast\nfrom typing import NamedTuple\n\nimport numpy as np\n\nfrom optuna.logging import get_logger\nfrom optuna.samplers._base im... | diff --git a/optuna/visualization/_optimization_history.py b/optuna/visualization/_optimization_history.py
index b489a86c9a..3561c38073 100644
--- a/optuna/visualization/_optimization_history.py
+++ b/optuna/visualization/_optimization_history.py
@@ -1,11 +1,11 @@
from __future__ import annotations
+from collections.abc import Callable
+from collections.abc import Sequence
from enum import Enum
import math
-from typing import Callable
from typing import cast
from typing import NamedTuple
-from typing import Sequence
import numpy as np
|
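What the checklist enables in each file is PEP 563's postponed evaluation: with the future import, annotations are stored as strings and never evaluated at definition time, so the PEP 585/604 spellings parse even where the runtime typing objects would not support them. A small sketch (the function and its data are invented for illustration):

```python
from __future__ import annotations

# With the future import active, "list[dict[str, float]]" and
# "str | None" below are kept as strings, not evaluated objects.
def best_params(trials: list[dict[str, float]], metric: str | None = None) -> dict[str, float]:
    key = metric or "value"
    return min(trials, key=lambda t: t[key])


print(best_params.__annotations__["trials"])
print(best_params([{"value": 0.3}, {"value": 0.1}]))  # {'value': 0.1}
```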
docker__docker-py-1250 | attach is causing an "Invalid Argument" exception from os.read
``` python
stream = client.attach(container, stream=True, stdout=True, stderr=True)
for chunk in stream:
    pass
```
Results in:
```
  File "/Users/michael/work/oss/marina/marina/build.py", line 695, in watcher
    for chunk in stream:
  File ".venv/lib/python3.5/site-packages/docker/utils/socket.py", line 67, in frames_iter
    yield read(socket, n)
  File ".venv/lib/python3.5/site-packages/docker/utils/socket.py", line 25, in read
    return os.read(socket.fileno(), n)
OSError: [Errno 22] Invalid argument
```
Using docker-py 1.10.2 on OS X 10.11.6 with Docker for Mac 1.12.0-rc3. Reverting to 1.9.0 fixes the issue.
| [
{
"content": "import errno\nimport os\nimport select\nimport struct\n\nimport six\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nclass SocketError(Exception):\n pass\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n ... | [
{
"content": "import errno\nimport os\nimport select\nimport struct\n\nimport six\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nclass SocketError(Exception):\n pass\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n ... | diff --git a/docker/utils/socket.py b/docker/utils/socket.py
index 164b845af..4080f253f 100644
--- a/docker/utils/socket.py
+++ b/docker/utils/socket.py
@@ -69,7 +69,11 @@ def frames_iter(socket):
"""
Returns a generator of frames read from socket
"""
- n = next_frame_size(socket)
- while n > 0:
- yield read(socket, n)
+ while True:
n = next_frame_size(socket)
+ if n == 0:
+ break
+ while n > 0:
+ result = read(socket, n)
+ n -= len(result)
+ yield result
|
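The underlying protocol is Docker's multiplexed attach stream: an 8-byte header (stream id, three padding bytes, big-endian payload length) followed by the payload, where a single read may return fewer bytes than requested. The sketch below is a simplified stand-in for the patched `frames_iter`, not docker-py's actual code, using `io.BytesIO` as a fake socket; the inner loop that keeps reading until the frame is drained is the essence of the fix.

```python
import io
import struct


def frames_iter(sock):
    # Read multiplexed frames until a zero-length frame or EOF.
    while True:
        header = sock.read(8)
        if len(header) < 8:
            break
        _, length = struct.unpack(">BxxxL", header)
        if length == 0:
            break
        remaining = length
        while remaining > 0:
            # A short read yields a partial chunk; keep going until the
            # whole frame has been consumed instead of assuming one read
            # returns everything.
            chunk = sock.read(remaining)
            if not chunk:
                raise IOError("socket closed mid-frame")
            remaining -= len(chunk)
            yield chunk


# Two frames on a fake socket: "hello" on stdout (1), "oops" on stderr (2).
payload = struct.pack(">BxxxL", 1, 5) + b"hello" + struct.pack(">BxxxL", 2, 4) + b"oops"
print(b"".join(frames_iter(io.BytesIO(payload))))  # b'hellooops'
```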
opendatacube__datacube-core-1374 | Incompatibilities with xarray > 2022.03
### Expected behaviour
ODC should work with the current version of `xarray`. In `setup.py` there's an exclusion of `2022.6.0`, but I don't think that's sufficient. It'd be worth digging up the commit/PR that made that change.
### Actual behaviour
Tests are failing.
```
FAILED tests/api/test_grid_workflow.py::test_gridworkflow_with_time_depth - AssertionError
FAILED tests/api/test_virtual.py::test_aggregate - ValueError: time already exists as coordinate or variable name.
```
### Steps to reproduce the behaviour
`pytest tests/`
### Environment information
* Which ``datacube --version`` are you using?
`develop` branch at `af59377327c363b9c52b55000b4024a0b3fbaa8b`
* What datacube deployment/environment are you running against?
- Mambaforge
- conda-forge
- Python 3.10
| [
{
"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'hypothesis',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\ndoc_require = [\n 'Sphinx',\n 'sphinx_rtd_theme'... | [
{
"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'hypothesis',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\ndoc_require = [\n 'Sphinx',\n 'sphinx_rtd_theme'... | diff --git a/setup.py b/setup.py
index 2721f9506a..019a9e2dbe 100755
--- a/setup.py
+++ b/setup.py
@@ -106,7 +106,7 @@
'sqlalchemy',
'GeoAlchemy2',
'toolz',
- 'xarray>=0.9,!=2022.6.0', # >0.9 fixes most problems with `crs` attributes being lost
+ 'xarray>=0.9,<2022.6', # >0.9 fixes most problems with `crs` attributes being lost
],
extras_require=extras_require,
tests_require=tests_require,
|
ethereum__web3.py-1095 | Dissallow python 3.5.1
### What was wrong?
It looks like `typing.NewType` may not be available in Python 3.5.1
https://github.com/ethereum/web3.py/issues/1091
### How can it be fixed?
Check which version `NewType` was added in and restrict our Python versions, as declared in `setup.py`, to be `>=` that version
| [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.32\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'lint... | [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.32\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'lint... | diff --git a/setup.py b/setup.py
index 87e5defc7d..8c4a8a4ede 100644
--- a/setup.py
+++ b/setup.py
@@ -81,7 +81,7 @@
"pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
- python_requires='>=3.5, <4',
+ python_requires='>=3.5.2, <4',
extras_require=extras_require,
py_modules=['web3', 'ens'],
license="MIT",
|
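`typing.NewType` is documented as added in Python 3.5.2, which matches the new floor in the diff. At runtime it is erased to the wrapped type, as a quick sketch shows (`Wei` and `to_ether` are invented here for illustration, not web3.py API):

```python
from typing import NewType

# NewType creates a distinct type for static checkers only; at runtime
# Wei values are plain ints, so the call below costs nothing.
Wei = NewType("Wei", int)


def to_ether(amount: Wei) -> float:
    return amount / 10**18


balance = Wei(1_500_000_000_000_000_000)
print(isinstance(balance, int))  # True
print(to_ether(balance))         # 1.5
```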