| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
iperov/DeepFaceLab | machine-learning | 5,253 | Training fails!! | I've tried all other modes, it always gives me this error. No idea what to do.
## Information:
DFL build_01_11_2020
Computer:
OS: Windows 7
CPU: Intel Core i5-3337U 1.8 GHz
RAM: 6 GB
GPU: Intel HD 4000
## Process
`Running trainer.
Loading model...
Model first run.
Enable autobackup? (y/n ?:help skip:n) : n
Write preview history? (y/n ?:help skip:n) : y
Choose image for the preview history? (y/n skip:n) : y
Target iteration (skip:unlimited/default) :
0
Batch_size (?:help skip:0) :
0
Flip faces randomly? (y/n ?:help skip:y) :
y
Use lightweight autoencoder? (y/n, ?:help skip:n) :
n
Use pixel loss? (y/n, ?:help skip: n/default ) :
n
Using plaidml.keras.backend backend.
INFO:plaidml:Opening device "opencl_intel_hd_graphics_4000.0"
Error:
Traceback (most recent call last):
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 50, in trainerThread
    device_args=device_args)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\ModelBase.py", line 145, in __init__
    self.onInitialize()
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\Model_H64\Model.py", line 33, in onInitialize
    bgr_shape, mask_shape, self.encoder, self.decoder_src, self.decoder_dst = self.Build(self.options['lighter_ae'])
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\Model_H64\Model.py", line 200, in Build
    return bgr_shape, mask_shape, Encoder(bgr_shape), Decoder(), Decoder()
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\Model_H64\Model.py", line 154, in Encoder
    x = downscale(128)(x)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\Model_H64\Model.py", line 142, in func
    return LeakyReLU(0.1)(Conv2D(dim, 5, strides=2, padding='same')(x))
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\keras\layers\convolutional.py", line 141, in build
    constraint=self.kernel_constraint)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
    weight = K.variable(initializer(shape),
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\keras\initializers.py", line 218, in __call__
    dtype=dtype, seed=self.seed)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\keras\backend.py", line 1095, in random_uniform
    rng_state = _make_rng_state(seed)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\keras\backend.py", line 192, in _make_rng_state
    rng_state = variable(rng_init, dtype='uint32')
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\keras\backend.py", line 1668, in variable
    with tensor.mmap_discard(_ctx) as view:
  File "contextlib.py", line 81, in __enter__
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 1241, in mmap_discard
    _lib().plaidml_get_shape_element_count(self.shape), self.shape, None)
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 1101, in __init__
    self._base = ctypes.cast(_lib().plaidml_get_mapping_base(ctx, self), ctypes.POINTER(ctype))
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 764, in _check_err
    self.raise_last_status()
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\library.py", line 131, in raise_last_status
    raise self.last_status()
Exception
Exception ignored in: <bound method _View.__del__ of <plaidml._View object at 0x000000002346F160>>
Traceback (most recent call last):
  File "C:\Deepface\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\__init__.py", line 1112, in __del__
    if self._buf:
AttributeError: '_View' object has no attribute '_buf'
` | open | 2021-01-20T13:52:51Z | 2023-06-08T22:25:47Z | https://github.com/iperov/DeepFaceLab/issues/5253 | [] | Tim791-20 | 3 |
jupyter-book/jupyter-book | jupyter | 2,294 | [Bug]: Unable to install Jupyter Book 2 with pixi | ### What happened, and what did you expect to happen?
I expect to be able to install the jupyter-book pre-release into a new environment alongside ipykernel and a recent Python version, but the following produces an error:
```bash
mkdir test
cd test
pixi init
pixi add ipykernel python=3.12
pixi add "jupyter-book==2.0.0a0" --pypi
```
```
Error:
× failed to solve the pypi requirements of 'default' 'osx-arm64'
├─▶ failed to resolve pypi dependencies
╰─▶ Because jupyter-book==2.0.0a0 depends on platformdirs>=4.2.2,<4.3.dev0 and platformdirs==4.3.6, we can conclude that jupyter-book==2.0.0a0 cannot
be used.
And because you require jupyter-book==2.0.0a0, we can conclude that your requirements are unsatisfiable.
```
I'm not very familiar with hatch, but is `platformdirs` only required for building? If platformdirs is a run-time dependency for jupyter-book, could it use a less restrictive pin in the project file?
https://github.com/jupyter-book/jupyter-book/blob/a4adf9f618180e92ab7b07d11ca08db6d5cfc04b/pyproject.toml#L48-L53
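The unsatisfiable range can be checked with a simplified version comparison (hypothetical helper; it ignores the PEP 440 pre-release/dev segments that real resolvers handle):

```python
# Simplified sketch of the conflict: jupyter-book 2.0.0a0 pins
# platformdirs to >=4.2.2,<4.3, a range that can never contain 4.3.6.
def in_range(version: str, low: str, high: str) -> bool:
    as_tuple = lambda v: tuple(int(p) for p in v.split('.'))
    return as_tuple(low) <= as_tuple(version) < as_tuple(high)

print(in_range('4.2.2', '4.2.2', '4.3.0'))  # True: inside the pin
print(in_range('4.3.6', '4.2.2', '4.3.0'))  # False: the reported conflict
```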
### What version of Jupyter Book are you running?
v2.0.0a0
### What Operating System are you using?
Mac OS
### Relevant log output
```shell
``` | closed | 2025-01-13T12:49:10Z | 2025-01-14T11:46:59Z | https://github.com/jupyter-book/jupyter-book/issues/2294 | [
"bug"
] | scottyhq | 3 |
strawberry-graphql/strawberry | django | 3,762 | context_getter signature does not offer passing Awaitable | The `context_getter` is further passed to https://github.com/strawberry-graphql/strawberry/blob/7ba5928a418ca790cf8b663f52b8174b6de12b2a/strawberry/fastapi/router.py#L70 so https://github.com/strawberry-graphql/strawberry/blob/7ba5928a418ca790cf8b663f52b8174b6de12b2a/strawberry/fastapi/router.py#L128 should likely have the same signature | closed | 2025-01-31T21:47:26Z | 2025-02-12T17:34:39Z | https://github.com/strawberry-graphql/strawberry/issues/3762 | [] | alexey-pelykh | 0 |
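A minimal stdlib sketch of the requested signature change (illustrative names, not Strawberry's actual internals): accept `Union[T, Awaitable[T]]` from the getter and await only when needed.

```python
import asyncio
import inspect
from typing import Any, Awaitable, Callable, Dict, Optional, Union

Context = Dict[str, Any]
# The getter may return the context directly, or an awaitable of it.
ContextGetter = Optional[Callable[[], Union[Context, Awaitable[Context]]]]

async def resolve_context(getter: ContextGetter) -> Context:
    if getter is None:
        return {}
    ctx = getter()
    if inspect.isawaitable(ctx):
        ctx = await ctx
    return ctx

async def async_getter() -> Context:
    return {"user": "alice"}

def sync_getter() -> Context:
    return {"user": "bob"}

print(asyncio.run(resolve_context(async_getter)))  # {'user': 'alice'}
print(asyncio.run(resolve_context(sync_getter)))   # {'user': 'bob'}
```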
graphistry/pygraphistry | pandas | 193 | [FEA] Node reductions | Node reductions are great!
# What
This is an important case of graph reductions. There are requests in both the UI and APIs to do reductions at the level of node, multiedge, and multiple nodes / subgraphs. Instead of simply filtering out the selected entities, the idea is they'd be partitioned into subgraphs, and different kinds of operations would replace them new topologies, and attributes representing them. They seem like some sort of continuous derivative operators, and may even be invertable.
# Example
One of the most common cases is dropping a node and propagating its attributes and edges/attributes through its neighborhood, so
In hypergraph:
```python
g = graphistry.hypergraph(
pd.DataFrame([
{'x': 'a', 'y': 'm'},
{'x': 'b', 'y': 'm'}
    ]),
    direct=True)
```
We'd get 3-node graph `(a)-[e1]->(m)<-[e2]-(b)`
However, we really just want `(a)-[m]-(b)` ... so need a way to drop node `(m)` and synthesize edge `-[m]->`. This gets weirder in the cases of multiedges, cliques, and different kinds of nodes:
```python
g2 = g.replace_node_with_edges(
select_nodes=g._nodes.query('type="m"'),
edge_reducer=g.reducers.pair_unique,
edge_attr_reducer={'weight': 'max'})
g2 = g.replace_node_with_edges(select_nodes='m')
g2 = g.replace_node_with_edges(select_nodes=[1])
g2 = g.replace_node_with_edges(select_nodes=[False, True, False])
g2 = g.replace_node_with_edges(select_nodes=g._nodes.query('type="m"'))
g2 = g.replace_node_with_edges(select_nodes='m', edge_reducer=g.reducers.pair_unique_undirected, edge_attr_reducer={'weight': 'max', ...})
```
# Sample reducers
This gets at patterns of graph reducers, where we want to take nodes/edges, remove them, and replace with other nodes edges. For example, we can replace all `type='m'` nodes with `(a: x)-[e]->(b: x)` via a node reduction driven by a selector predicate:
## 1. Node reducers: Drop a node by converting to edges
`g2.replace_node_with_edges(selected=g._nodes['type'] == m, reducer=lambda edges: strong_component(edges_to_nodes(edges)))`
The idea is pull out nodes matching some predicate
```python
def replace_node_with_edges(g):
    nodes = g._nodes[ selected ]  # the nodes being dropped
    new_edges = []
    for node in nodes[g._node]:
        in_edges = g._edges[ g._edges[g._dst] == node ]
        out_edges = g._edges[ g._edges[g._src] == node ]
        new_edges.append( reducer(pd.concat([in_edges, out_edges])) )
    new_nodes = g._nodes[ ~selected ]
    g2 = g.nodes(new_nodes).edges(new_edges)
    return g2
```
where
```python
def strong_component(edges, s='src', d='dst'):
    node_ids = pd.concat([edges[s], edges[d]]).unique()
    new_edges = []
    for x in range(len(node_ids)):
        for y in range(x+1, len(node_ids)):
            new_edges.append([node_ids[x], node_ids[y]])
    return new_edges
```
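Taken together, the two sketches above amount to a concrete algorithm. A dependency-free illustration (plain dicts stand in for the pandas frames; `collapse_node` and the `via` attribute are hypothetical names, not pygraphistry API):

```python
# Drop node 'm' from (a)->(m)<-(b) and replace it with a direct a--b edge
# that records the dropped node, i.e. the (a)-[m]-(b) result from the example.
nodes = [{'id': 'a', 'type': 'x'}, {'id': 'b', 'type': 'x'}, {'id': 'm', 'type': 'm'}]
edges = [{'src': 'a', 'dst': 'm'}, {'src': 'b', 'dst': 'm'}]

def collapse_node(nodes, edges, node_id):
    # neighbors of the dropped node, via either edge direction
    neighbors = sorted(
        {e['src'] for e in edges if e['dst'] == node_id}
        | {e['dst'] for e in edges if e['src'] == node_id}
    )
    # pairwise clique over the neighbors (a "pair_unique"-style reducer)
    new_edges = [
        {'src': neighbors[i], 'dst': neighbors[j], 'via': node_id}
        for i in range(len(neighbors))
        for j in range(i + 1, len(neighbors))
    ]
    kept_edges = [e for e in edges if node_id not in (e['src'], e['dst'])]
    kept_nodes = [n for n in nodes if n['id'] != node_id]
    return kept_nodes, kept_edges + new_edges

nodes2, edges2 = collapse_node(nodes, edges, 'm')
print(edges2)  # [{'src': 'a', 'dst': 'b', 'via': 'm'}]
```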
## 2. Sugar for common cases
```python
g2 = g.replace_node_with_edges(select_key='type', select_value='m', reducer=reducers.pair_unique)
```
## 3. Build into the hypergraph transform, such as during or post-process
```python
g2 = graphistry.hypergraph(
pd.DataFrame([
{'x': 'a', 'y': 'm'},
{'x': 'b', 'y': 'm'}
    ]),
    edges={},  # no direct edges
reducer_subgraphs=[ {'reducer_type': 'node_reducer', 'node_selector': ['y'], 'output_nodes': ['x'], 'reducer': 'pair_unique'} ]
)
```
Or as a post-process:
```python
g2 = graphistry.hypergraph(
pd.DataFrame([
{'x': 'a', 'y': 'm'},
{'x': 'b', 'y': 'm'}
    ]),
    direct=True,
reduce=[ {'reducer_type': 'node_reducer', 'node_selector': ['y'], 'reducer': 'pair_unique'} ]
)
```
# Plan
I think it's worth doing some principled experiments of different shaping use cases first (`replace_node_with_edges()`) as the API is not at all obvious:
* selecting nodes: by index, type, ... ?
* reducing a neighborhood to new edges: generic handler + typical cases
* handling attribute reductions as part of that: generic handler + typical cases
Ideally we come to something simple + extensible in PyGraphistry, and use that to drive subsequent incorporation in hypergraphs, UI, and new preprocessing APIs.
# Common scenarios
* Turn hypergraph `(event {...e} )->(attribute)` into `(attribute)-{...e}->(attribute)`
* Turn bipartite `(a)->(b)` graph into graph of `(a)-[e {...b}]->(a)` or `(b)-[e {...a}]->(b)`
* Can combine with edge reductions to collapse multiedge `(a1)-[e1]->(a2), (a1)-[e2]->(a2), ...` into singular `(a)-[e {...}]->(b)`
* If removed edges + nodes have props, should be available to generator of new edges
* Collapse some hierarchy into summary nodes:
  * Rigid: Ex - collapse chain of `country -> state -> city -> street` into `country -> state { street_count: 100 }`
  * Loose: Ex - collapse communities like `user { community: 1 } --[friend]--> user {community: 1}` into `community { id: 1, user_count: 2 }` | open | 2021-01-07T21:35:47Z | 2022-04-23T15:50:25Z | https://github.com/graphistry/pygraphistry/issues/193 | [
"enhancement",
"help wanted",
"good-first-issue"
] | lmeyerov | 10 |
tqdm/tqdm | jupyter | 1,621 | tqdm fail on system with comma decimal separator, on one of examples | - [x] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
```
4.66.5 3.12.6 (main, Sep 9 2024, 00:00:00) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)] linux
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
```
➜ LC_NUMERIC=pl_PL.UTF-8 seq 3 .1 5 | tqdm --total 5 --update-to
0%| | 0/5 [00:00<?, ?it/s]
3,0
Traceback (most recent call last):
File "/usr/bin/tqdm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/lib/python3.12/site-packages/tqdm/cli.py", line 308, in main
callback(i)
File "/usr/lib/python3.12/site-packages/tqdm/cli.py", line 305, in callback
t.update(numeric(i.decode()) - t.n)
~~~~~~~~~~~~~~~~~~~~^~~~~
TypeError: unsupported operand type(s) for -: 'tuple' and 'int'
tokariew in ~/NFS/Media/podcast4 v3.12.6
✗ tqdm --version
4.66.5
``` | open | 2024-10-07T10:18:33Z | 2024-10-07T21:10:03Z | https://github.com/tqdm/tqdm/issues/1621 | [
"question/docs ‽"
] | Tokariew | 1 |
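The locale failure in the tqdm issue above reduces to number parsing: plain `float()` rejects a comma decimal separator, so CLI input such as `3,0` from `seq` under `LC_NUMERIC=pl_PL.UTF-8` needs normalization first (illustrative sketch, not tqdm's actual `numeric` implementation):

```python
def parse_progress(s: str) -> float:
    """Parse a number that may use a comma decimal separator.

    Illustrative only: tqdm's real CLI parser differs, but the same
    normalization idea applies to locale-formatted input like '3,0'.
    """
    try:
        return float(s)
    except ValueError:
        # assume a single comma decimal separator, as printed by `seq`
        # under comma-decimal locales
        return float(s.replace(',', '.', 1))

print(parse_progress('3.1'))  # 3.1
print(parse_progress('3,1'))  # 3.1 (instead of a TypeError downstream)
```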
samuelcolvin/dirty-equals | pytest | 34 | Numerous test failures with pypy3.9 | The following tests fail on pypy3.9 (7.3.9):
```
FAILED tests/test_base.py::test_not_repr - Failed: DID NOT RAISE <class 'AssertionError'>
FAILED tests/test_boolean.py::test_dirty_not_equals - Failed: DID NOT RAISE <class 'AssertionError'>
FAILED tests/test_dict.py::test_is_dict[input_value16-expected16] - AssertionError: assert {'a': 1, 'b': None} == IsIgnoreDict(a=1)
FAILED tests/test_dict.py::test_is_dict[input_value17-expected17] - assert {1: 10, 2: None} == IsIgnoreDict(1=10)
FAILED tests/test_dict.py::test_is_dict[input_value20-expected20] - assert {1: 10, 2: False} == IsIgnoreDict[ignore={False}](1=10)
FAILED tests/test_dict.py::test_callable_ignore - AssertionError: assert {'a': 1, 'b': 42} == IsDict[ignore=ignore_42](a=1)
FAILED tests/test_dict.py::test_ignore - AssertionError: assert {'a': 1, 'b': 2, 'c': 3, 'd': 4} == IsDict[ignore=custom_ignore](a=1...
FAILED tests/test_dict.py::test_ignore_with_is_str - AssertionError: assert {'dob': None, 'id': 123, 'street_address': None, 'token'...
FAILED tests/test_dict.py::test_unhashable_value - AssertionError: assert {'b': {'a': 1}, 'c': None} == IsIgnoreDict(b={'a': 1})
FAILED tests/test_docs.py::test_docs_examples[dirty_equals/_inspection.py:172-189] - AssertionError: assert <_inspection_172_189.Foo...
FAILED tests/test_docs.py::test_docs_examples[dirty_equals/_dict.py:186-204] - AssertionError: assert {'a': 1, 'b': 2, 'c': None} ==...
FAILED tests/test_inspection.py::test_has_attributes[-HasAttributes(a=IsInt, b=IsStr)] - assert <tests.test_inspection.Foo object at...
```
Full output:
```pytb
========================================================= test session starts =========================================================
platform linux -- Python 3.9.12[pypy-7.3.9-final], pytest-7.1.2, pluggy-1.0.0
rootdir: /tmp/dirty-equals, configfile: pyproject.toml, testpaths: tests
plugins: forked-1.4.0, xdist-2.5.0, xprocess-0.18.1, anyio-3.5.0
collected 484 items
tests/test_base.py ......F.................... [ 5%]
tests/test_boolean.py ..........................F................ [ 14%]
tests/test_datetime.py ................................................. [ 24%]
tests/test_dict.py ................FF..F................F.................FFF [ 36%]
tests/test_docs.py ..........................F...F.................. [ 46%]
tests/test_inspection.py ................F........... [ 52%]
tests/test_list_tuple.py .............................................................................. [ 68%]
tests/test_numeric.py .......................................................... [ 80%]
tests/test_other.py ................................... [ 87%]
tests/test_strings.py ........................................................... [100%]
============================================================== FAILURES ===============================================================
____________________________________________________________ test_not_repr ____________________________________________________________
def test_not_repr():
v = ~IsInt
assert str(v) == '~IsInt'
with pytest.raises(AssertionError):
> assert 1 == v
E Failed: DID NOT RAISE <class 'AssertionError'>
tests/test_base.py:66: Failed
________________________________________________________ test_dirty_not_equals ________________________________________________________
def test_dirty_not_equals():
with pytest.raises(AssertionError):
> assert 0 != IsFalseLike
E Failed: DID NOT RAISE <class 'AssertionError'>
tests/test_boolean.py:48: Failed
_______________________________________________ test_is_dict[input_value16-expected16] ________________________________________________
input_value = {'a': 1, 'b': None}, expected = IsIgnoreDict(a=1)
@pytest.mark.parametrize(
'input_value,expected',
[
({}, IsDict),
({}, IsDict()),
({'a': 1}, IsDict(a=1)),
({1: 2}, IsDict({1: 2})),
({'a': 1, 'b': 2}, IsDict(a=1, b=2)),
({'b': 2, 'a': 1}, IsDict(a=1, b=2)),
({'a': 1, 'b': None}, IsDict(a=1, b=None)),
({'a': 1, 'b': 3}, ~IsDict(a=1, b=2)),
# partial dict
({1: 10, 2: 20}, IsPartialDict({1: 10})),
({1: 10}, IsPartialDict({1: 10})),
({1: 10, 2: 20}, IsPartialDict({1: 10})),
({1: 10, 2: 20}, IsDict({1: 10}).settings(partial=True)),
({1: 10}, ~IsPartialDict({1: 10, 2: 20})),
({1: 10, 2: None}, ~IsPartialDict({1: 10, 2: 20})),
# ignore dict
({}, IsIgnoreDict()),
({'a': 1, 'b': 2}, IsIgnoreDict(a=1, b=2)),
({'a': 1, 'b': None}, IsIgnoreDict(a=1)),
({1: 10, 2: None}, IsIgnoreDict({1: 10})),
({'a': 1, 'b': 2}, ~IsIgnoreDict(a=1)),
({1: 10, 2: False}, ~IsIgnoreDict({1: 10})),
({1: 10, 2: False}, IsIgnoreDict({1: 10}).settings(ignore={False})),
# strict dict
({}, IsStrictDict()),
({'a': 1, 'b': 2}, IsStrictDict(a=1, b=2)),
({'a': 1, 'b': 2}, ~IsStrictDict(b=2, a=1)),
({1: 10, 2: 20}, IsStrictDict({1: 10, 2: 20})),
({1: 10, 2: 20}, ~IsStrictDict({2: 20, 1: 10})),
({1: 10, 2: 20}, ~IsDict({2: 20, 1: 10}).settings(strict=True)),
# combining types
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(a=1, c=3).settings(partial=True)),
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(a=1, b=2).settings(partial=True)),
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(b=2, c=3).settings(partial=True)),
({'a': 1, 'c': 3, 'b': 2}, ~IsStrictDict(b=2, c=3).settings(partial=True)),
],
)
def test_is_dict(input_value, expected):
> assert input_value == expected
E AssertionError: assert {'a': 1, 'b': None} == IsIgnoreDict(a=1)
tests/test_dict.py:47: AssertionError
_______________________________________________ test_is_dict[input_value17-expected17] ________________________________________________
input_value = {1: 10, 2: None}, expected = IsIgnoreDict(1=10)
@pytest.mark.parametrize(
'input_value,expected',
[
({}, IsDict),
({}, IsDict()),
({'a': 1}, IsDict(a=1)),
({1: 2}, IsDict({1: 2})),
({'a': 1, 'b': 2}, IsDict(a=1, b=2)),
({'b': 2, 'a': 1}, IsDict(a=1, b=2)),
({'a': 1, 'b': None}, IsDict(a=1, b=None)),
({'a': 1, 'b': 3}, ~IsDict(a=1, b=2)),
# partial dict
({1: 10, 2: 20}, IsPartialDict({1: 10})),
({1: 10}, IsPartialDict({1: 10})),
({1: 10, 2: 20}, IsPartialDict({1: 10})),
({1: 10, 2: 20}, IsDict({1: 10}).settings(partial=True)),
({1: 10}, ~IsPartialDict({1: 10, 2: 20})),
({1: 10, 2: None}, ~IsPartialDict({1: 10, 2: 20})),
# ignore dict
({}, IsIgnoreDict()),
({'a': 1, 'b': 2}, IsIgnoreDict(a=1, b=2)),
({'a': 1, 'b': None}, IsIgnoreDict(a=1)),
({1: 10, 2: None}, IsIgnoreDict({1: 10})),
({'a': 1, 'b': 2}, ~IsIgnoreDict(a=1)),
({1: 10, 2: False}, ~IsIgnoreDict({1: 10})),
({1: 10, 2: False}, IsIgnoreDict({1: 10}).settings(ignore={False})),
# strict dict
({}, IsStrictDict()),
({'a': 1, 'b': 2}, IsStrictDict(a=1, b=2)),
({'a': 1, 'b': 2}, ~IsStrictDict(b=2, a=1)),
({1: 10, 2: 20}, IsStrictDict({1: 10, 2: 20})),
({1: 10, 2: 20}, ~IsStrictDict({2: 20, 1: 10})),
({1: 10, 2: 20}, ~IsDict({2: 20, 1: 10}).settings(strict=True)),
# combining types
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(a=1, c=3).settings(partial=True)),
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(a=1, b=2).settings(partial=True)),
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(b=2, c=3).settings(partial=True)),
({'a': 1, 'c': 3, 'b': 2}, ~IsStrictDict(b=2, c=3).settings(partial=True)),
],
)
def test_is_dict(input_value, expected):
> assert input_value == expected
E assert {1: 10, 2: None} == IsIgnoreDict(1=10)
tests/test_dict.py:47: AssertionError
_______________________________________________ test_is_dict[input_value20-expected20] ________________________________________________
input_value = {1: 10, 2: False}, expected = IsIgnoreDict[ignore={False}](1=10)
@pytest.mark.parametrize(
'input_value,expected',
[
({}, IsDict),
({}, IsDict()),
({'a': 1}, IsDict(a=1)),
({1: 2}, IsDict({1: 2})),
({'a': 1, 'b': 2}, IsDict(a=1, b=2)),
({'b': 2, 'a': 1}, IsDict(a=1, b=2)),
({'a': 1, 'b': None}, IsDict(a=1, b=None)),
({'a': 1, 'b': 3}, ~IsDict(a=1, b=2)),
# partial dict
({1: 10, 2: 20}, IsPartialDict({1: 10})),
({1: 10}, IsPartialDict({1: 10})),
({1: 10, 2: 20}, IsPartialDict({1: 10})),
({1: 10, 2: 20}, IsDict({1: 10}).settings(partial=True)),
({1: 10}, ~IsPartialDict({1: 10, 2: 20})),
({1: 10, 2: None}, ~IsPartialDict({1: 10, 2: 20})),
# ignore dict
({}, IsIgnoreDict()),
({'a': 1, 'b': 2}, IsIgnoreDict(a=1, b=2)),
({'a': 1, 'b': None}, IsIgnoreDict(a=1)),
({1: 10, 2: None}, IsIgnoreDict({1: 10})),
({'a': 1, 'b': 2}, ~IsIgnoreDict(a=1)),
({1: 10, 2: False}, ~IsIgnoreDict({1: 10})),
({1: 10, 2: False}, IsIgnoreDict({1: 10}).settings(ignore={False})),
# strict dict
({}, IsStrictDict()),
({'a': 1, 'b': 2}, IsStrictDict(a=1, b=2)),
({'a': 1, 'b': 2}, ~IsStrictDict(b=2, a=1)),
({1: 10, 2: 20}, IsStrictDict({1: 10, 2: 20})),
({1: 10, 2: 20}, ~IsStrictDict({2: 20, 1: 10})),
({1: 10, 2: 20}, ~IsDict({2: 20, 1: 10}).settings(strict=True)),
# combining types
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(a=1, c=3).settings(partial=True)),
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(a=1, b=2).settings(partial=True)),
({'a': 1, 'b': 2, 'c': 3}, IsStrictDict(b=2, c=3).settings(partial=True)),
({'a': 1, 'c': 3, 'b': 2}, ~IsStrictDict(b=2, c=3).settings(partial=True)),
],
)
def test_is_dict(input_value, expected):
> assert input_value == expected
E assert {1: 10, 2: False} == IsIgnoreDict[ignore={False}](1=10)
tests/test_dict.py:47: AssertionError
________________________________________________________ test_callable_ignore _________________________________________________________
def test_callable_ignore():
assert {'a': 1} == IsDict(a=1).settings(ignore=ignore_42)
> assert {'a': 1, 'b': 42} == IsDict(a=1).settings(ignore=ignore_42)
E AssertionError: assert {'a': 1, 'b': 42} == IsDict[ignore=ignore_42](a=1)
E + where IsDict[ignore=ignore_42](a=1) = <bound method IsDict.settings of IsDict(a=1)>(ignore=ignore_42)
E + where <bound method IsDict.settings of IsDict(a=1)> = IsDict(a=1).settings
E + where IsDict(a=1) = IsDict(a=1)
tests/test_dict.py:95: AssertionError
_____________________________________________________________ test_ignore _____________________________________________________________
def test_ignore():
def custom_ignore(v: int) -> bool:
return v % 2 == 0
> assert {'a': 1, 'b': 2, 'c': 3, 'd': 4} == IsDict(a=1, c=3).settings(ignore=custom_ignore)
E AssertionError: assert {'a': 1, 'b': 2, 'c': 3, 'd': 4} == IsDict[ignore=custom_ignore](a=1, c=3)
E + where IsDict[ignore=custom_ignore](a=1, c=3) = <bound method IsDict.settings of IsDict(a=1, c=3)>(ignore=<function test_ignore.<locals>.custom_ignore at 0x00007f313d03a020>)
E + where <bound method IsDict.settings of IsDict(a=1, c=3)> = IsDict(a=1, c=3).settings
E + where IsDict(a=1, c=3) = IsDict(a=1, c=3)
tests/test_dict.py:129: AssertionError
_______________________________________________________ test_ignore_with_is_str _______________________________________________________
def test_ignore_with_is_str():
api_data = {'id': 123, 'token': 't-abc123', 'dob': None, 'street_address': None}
token_is_str = IsStr(regex=r't\-.+')
> assert api_data == IsIgnoreDict(id=IsPositiveInt, token=token_is_str)
E AssertionError: assert {'dob': None, 'id': 123, 'street_address': None, 'token': 't-abc123'} == IsIgnoreDict(id=IsPositiveInt, token=IsStr(regex='t\\-.+'))
E + where IsIgnoreDict(id=IsPositiveInt, token=IsStr(regex='t\\-.+')) = IsIgnoreDict(id=IsPositiveInt, token=IsStr(regex='t\\-.+'))
tests/test_dict.py:136: AssertionError
________________________________________________________ test_unhashable_value ________________________________________________________
def test_unhashable_value():
a = {'a': 1}
api_data = {'b': a, 'c': None}
> assert api_data == IsIgnoreDict(b=a)
E AssertionError: assert {'b': {'a': 1}, 'c': None} == IsIgnoreDict(b={'a': 1})
E + where IsIgnoreDict(b={'a': 1}) = IsIgnoreDict(b={'a': 1})
tests/test_dict.py:143: AssertionError
_______________________________________ test_docs_examples[dirty_equals/_inspection.py:172-189] _______________________________________
module_name = '_inspection_172_189'
source_code = '\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\...IsStr)\nassert Foo(1, 2) != HasAttributes(a=1, b=2, c=3)\nassert Foo(1, 2) == HasAttributes(a=1, b=2, spam=AnyThing)\n'
import_execute = <function import_execute.<locals>._import_execute at 0x00007f313da302a0>
def test_docs_examples(module_name, source_code, import_execute):
> import_execute(module_name, source_code, True)
tests/test_docs.py:69:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_docs.py:25: in _import_execute
spec.loader.exec_module(module)
/usr/lib/pypy3.9/site-packages/_pytest/assertion/rewrite.py:168: in exec_module
exec(co, module.__dict__)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
from dirty_equals import HasAttributes, IsInt, IsStr, AnyThing
class Foo:
def __init__(self, a, b):
self.a = a
self.b = b
def spam(self):
pass
assert Foo(1, 2) == HasAttributes(a=1, b=2)
assert Foo(1, 2) == HasAttributes(a=1)
> assert Foo(1, 's') == HasAttributes(a=IsInt, b=IsStr)
E AssertionError: assert <_inspection_172_189.Foo object at 0x00007f313d70b750> == HasAttributes(a=IsInt, b=IsStr)
E + where <_inspection_172_189.Foo object at 0x00007f313d70b750> = <class '_inspection_172_189.Foo'>(1, 's')
E + and HasAttributes(a=IsInt, b=IsStr) = HasAttributes(a=IsInt, b=IsStr)
../pytest-of-mgorny/pytest-15/test_docs_examples_dirty_equal26/_inspection_172_189.py:185: AssertionError
__________________________________________ test_docs_examples[dirty_equals/_dict.py:186-204] __________________________________________
module_name = '_dict_186_204'
source_code = "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\...(a=1, c=3).settings(strict=True)\nassert {'b': None, 'c': 3, 'a': 1} != IsIgnoreDict(a=1, c=3).settings(strict=True)\n"
import_execute = <function import_execute.<locals>._import_execute at 0x00007f313f1be2a0>
def test_docs_examples(module_name, source_code, import_execute):
> import_execute(module_name, source_code, True)
tests/test_docs.py:69:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_docs.py:25: in _import_execute
spec.loader.exec_module(module)
/usr/lib/pypy3.9/site-packages/_pytest/assertion/rewrite.py:168: in exec_module
exec(co, module.__dict__)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
from dirty_equals import IsIgnoreDict
> assert {'a': 1, 'b': 2, 'c': None} == IsIgnoreDict(a=1, b=2)
E AssertionError: assert {'a': 1, 'b': 2, 'c': None} == IsIgnoreDict(a=1, b=2)
E + where IsIgnoreDict(a=1, b=2) = IsIgnoreDict(a=1, b=2)
../pytest-of-mgorny/pytest-15/test_docs_examples_dirty_equal30/_dict_186_204.py:189: AssertionError
________________________________________ test_has_attributes[-HasAttributes(a=IsInt, b=IsStr)] ________________________________________
value = <tests.test_inspection.Foo object at 0x00007f313eb0bc20>, dirty = HasAttributes(a=IsInt, b=IsStr)
@pytest.mark.parametrize(
'value,dirty',
[
(Foo(1, 2), HasAttributes(a=1, b=2)),
(Foo(1, 's'), HasAttributes(a=IsInt, b=IsStr)),
(Foo(1, 2), ~HasAttributes(a=IsInt, b=IsStr)),
(Foo(1, 2), ~HasAttributes(a=1, b=2, c=3)),
(Foo(1, 2), HasAttributes(a=1, b=2, spam=AnyThing)),
(Foo(1, 2), ~HasAttributes(a=1, b=2, missing=AnyThing)),
],
ids=dirty_repr,
)
def test_has_attributes(value, dirty):
> assert value == dirty
E assert <tests.test_inspection.Foo object at 0x00007f313eb0bc20> == HasAttributes(a=IsInt, b=IsStr)
tests/test_inspection.py:86: AssertionError
======================================================= short test summary info =======================================================
FAILED tests/test_base.py::test_not_repr - Failed: DID NOT RAISE <class 'AssertionError'>
FAILED tests/test_boolean.py::test_dirty_not_equals - Failed: DID NOT RAISE <class 'AssertionError'>
FAILED tests/test_dict.py::test_is_dict[input_value16-expected16] - AssertionError: assert {'a': 1, 'b': None} == IsIgnoreDict(a=1)
FAILED tests/test_dict.py::test_is_dict[input_value17-expected17] - assert {1: 10, 2: None} == IsIgnoreDict(1=10)
FAILED tests/test_dict.py::test_is_dict[input_value20-expected20] - assert {1: 10, 2: False} == IsIgnoreDict[ignore={False}](1=10)
FAILED tests/test_dict.py::test_callable_ignore - AssertionError: assert {'a': 1, 'b': 42} == IsDict[ignore=ignore_42](a=1)
FAILED tests/test_dict.py::test_ignore - AssertionError: assert {'a': 1, 'b': 2, 'c': 3, 'd': 4} == IsDict[ignore=custom_ignore](a=1...
FAILED tests/test_dict.py::test_ignore_with_is_str - AssertionError: assert {'dob': None, 'id': 123, 'street_address': None, 'token'...
FAILED tests/test_dict.py::test_unhashable_value - AssertionError: assert {'b': {'a': 1}, 'c': None} == IsIgnoreDict(b={'a': 1})
FAILED tests/test_docs.py::test_docs_examples[dirty_equals/_inspection.py:172-189] - AssertionError: assert <_inspection_172_189.Foo...
FAILED tests/test_docs.py::test_docs_examples[dirty_equals/_dict.py:186-204] - AssertionError: assert {'a': 1, 'b': 2, 'c': None} ==...
FAILED tests/test_inspection.py::test_has_attributes[-HasAttributes(a=IsInt, b=IsStr)] - assert <tests.test_inspection.Foo object at...
=================================================== 12 failed, 472 passed in 3.99s ====================================================
``` | closed | 2022-04-24T08:37:44Z | 2022-04-28T15:11:44Z | https://github.com/samuelcolvin/dirty-equals/issues/34 | [] | mgorny | 5 |
apachecn/ailearning | python | 420 | 测试 (Test) | closed | 2018-08-24T06:42:44Z | 2018-08-24T07:13:30Z | https://github.com/apachecn/ailearning/issues/420 | [] | jiangzhonglian | 0 |
strawberry-graphql/strawberry | asyncio | 3,087 | Strawberry doesn't call serialize on a custom scalar | ## Describe the Bug
I have the following code:
```python
import pydantic
import strawberry
from typing import Optional


class PydanticNullableType(pydantic.BaseModel):
    data: Optional[str] = None


@strawberry.scalar
class NullableString:
    @staticmethod
    def serialize(value: Optional[str]) -> str:
        return "" if value is None else value

    @staticmethod
    def parse_value(value: str) -> Optional[str]:
        return None if value == "" else value


@strawberry.experimental.pydantic.type(model=PydanticNullableType)
class StrawberryType:
    data: NullableString


def make_mock_query(pydantic_model, return_type):
    @strawberry.type
    class Query:
        @strawberry.field
        def serialized_data(self) -> return_type:
            return return_type.from_pydantic(pydantic_model)

    return Query


def nullable_field_serializes():
    pydantic_model = PydanticNullableType()
    assert (
        strawberry.Schema(
            query=make_mock_query(pydantic_model, StrawberryType),
            types=[StrawberryType],
        )
        .execute_sync("query { serializedData { data } }")
        .errors
        == []
    )


nullable_field_serializes()
```
This code throws an assertion error because running `execute_sync` results in an error like the following:
```
E AssertionError: assert [GraphQLError('Cannot return null for non-nullable field data.', locations=[SourceLocation(line=1, column=46)], path=['serializedData', 'data'])] == []
```
This happens because `NullableString.serialize` is not called and therefore doesn't handle the `None` value.
Why is the `NullableString.serialize` not called and how can this code be fixed to handle nullable fields?
## System Information
- Operating system:
ProductName: macOS
ProductVersion: 13.5.2
BuildVersion: 22G91
- Strawberry version (if applicable): 0.192.0 | closed | 2023-09-11T14:13:17Z | 2025-03-20T15:56:22Z | https://github.com/strawberry-graphql/strawberry/issues/3087 | [
"bug",
"info-needed"
] | nkartashov | 2 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 219 | still not fixed, redownloaded and tried with demo file - same error | still not fixed, redownloaded and tried with demo file - same error
_Originally posted by @Dmorok in https://github.com/feder-cr/linkedIn_auto_jobs_applier_with_AI/issues/89#issuecomment-2322233594_
same error not getting fixed | closed | 2024-09-01T16:05:27Z | 2024-09-08T22:37:39Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/219 | [] | ayushiimishra | 2 |
dot-agent/nextpy | streamlit | 152 | Integrate Abstractions for Jupyter | closed | 2024-03-22T01:55:56Z | 2024-03-22T01:56:01Z | https://github.com/dot-agent/nextpy/issues/152 | [] | anubrag | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,025 | how to make it use gpu instead of cpu? | ? | open | 2022-02-24T12:58:40Z | 2022-03-15T00:17:47Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1025 | [] | mountaincastle47 | 2 |
vimalloc/flask-jwt-extended | flask | 130 | How to get refresh token? | Hello
I can get `access_token` and `refresh_token` with my authentication method.
But how can I use that `refresh_token` to get a new access token?
What is the endpoint URL, and which parameters do I have to include in the HTTP request?
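For what it's worth, with a `/refresh` endpoint like the one in the code below, flask-jwt-extended expects the refresh token to be sent the same way as an access token: as a Bearer token in the `Authorization` header. A minimal client sketch (the base URL and the token value are hypothetical):

```python
import urllib.request

BASE_URL = "http://localhost:5000"  # hypothetical server address


def build_refresh_request(refresh_token: str) -> urllib.request.Request:
    """Build a POST /refresh request carrying the refresh token as a Bearer token."""
    return urllib.request.Request(
        f"{BASE_URL}/refresh",
        method="POST",
        headers={"Authorization": f"Bearer {refresh_token}"},
    )


req = build_refresh_request("eyJ0eXAi...")  # truncated example token
print(req.get_header("Authorization"))  # Bearer eyJ0eXAi...
```

Sending that request should return a JSON body with a fresh `access_token`.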
Here are the authorization and refresh methods I use:
```
@blueprint.route('/authenticate', methods=['POST'])
@csrf_protect.exempt
def authenticate():
    """Authenticate user and return token"""
    if not request.is_json:
        return jsonify({"msg": "Missing JSON in request"}), 400

    username = request.json.get('username', None)
    password = request.json.get('password', None)
    if not username or not password:
        return jsonify({"msg": "Missing username or password"}), 400

    user = User.query.filter_by(username=username).first()
    if user is None or not user.check_password(password):
        return jsonify({"msg": "Bad credentials"}), 400

    access_token = create_access_token(identity=user.id)
    refresh_token = create_refresh_token(identity=user.id)
    ret = {
        'access_token': access_token,
        'refresh_token': refresh_token
    }
    return jsonify(ret), 200


@blueprint.route('/refresh', methods=['POST'])
@jwt_refresh_token_required
def refresh():
    current_user = get_jwt_identity()
    ret = {
        'access_token': create_access_token(identity=current_user)
    }
    return jsonify(ret), 200
``` | closed | 2018-03-11T07:06:16Z | 2018-03-13T20:05:01Z | https://github.com/vimalloc/flask-jwt-extended/issues/130 | [] | Kalesberg | 1 |
mlfoundations/open_clip | computer-vision | 191 | Hyperparams used to train specific model configs | Hi team,
I am trying to reproduce your model's numbers on the LAION-400M dataset and am unable to get the same results. For example, I obtain 42% ImageNet zero-shot top-1 validation accuracy compared to the 47% you report with the VIT-B32 quickgelu config. Could you share the other hyperparameters you used to obtain those results so that I can run training with the exact same setup, or better yet, share the exact command you used for the reported runs? I would really appreciate it.
| closed | 2022-10-14T20:45:51Z | 2022-11-05T21:13:30Z | https://github.com/mlfoundations/open_clip/issues/191 | [] | vasusharma | 2 |
agronholm/anyio | asyncio | 146 | Object stream randomly drops items | This issue concerns `master@23803be`
Memory object streams appear to randomly drop items in the following scenario (across all backends):
```python
import anyio


async def receiver(r):
    while True:
        async with anyio.move_on_after(0.1):
            print(await r.receive())


async def main():
    s, r = anyio.create_memory_object_stream()
    async with anyio.create_task_group() as tg:
        await tg.spawn(receiver, r)
        for i in range(10):
            await anyio.sleep(0.2)
            await s.send(i)
        await tg.cancel_scope.cancel()


anyio.run(main)
```
Whereas the equivalent Trio code works as expected (every item is received and printed):
```python
import trio


async def receiver(r):
    while True:
        with trio.move_on_after(0.1):
            print(await r.receive())


async def main():
    s, r = trio.open_memory_channel(max_buffer_size=0)
    async with trio.open_nursery() as n:
        n.start_soon(receiver, r)
        for i in range(10):
            await trio.sleep(0.2)
            await s.send(i)
        n.cancel_scope.cancel()


trio.run(main)
``` | closed | 2020-08-13T00:16:07Z | 2020-08-16T18:40:39Z | https://github.com/agronholm/anyio/issues/146 | [
"bug"
] | mjwestcott | 11 |
flairNLP/flair | nlp | 2,703 | Low GPU utilization on NER model training | Hi,
I am training an `NER` model on a large dataset using `flair`, and the GPU utilization is only 20-30% during training.
This is my model layer & settings
```
2022-04-04 14:20:21,426 ----------------------------------------------------------------------------------------------------
2022-04-04 14:20:21,427 Model: "SequenceTagger(
(embeddings): StackedEmbeddings(
(list_embedding_0): CharacterEmbeddings(
(char_embedding): Embedding(275, 25)
(char_rnn): LSTM(25, 25, bidirectional=True)
)
(list_embedding_1): BytePairEmbeddings(model=1-bpe-multi-100000-50)
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(embedding2nn): Linear(in_features=650, out_features=650, bias=True)
(rnn): LSTM(650, 256, batch_first=True, bidirectional=True)
(linear): Linear(in_features=512, out_features=23, bias=True)
(beta): 1.0
(weights): None
(weight_tensor) None
)"
2022-04-04 14:20:21,430 ----------------------------------------------------------------------------------------------------
2022-04-04 14:20:21,430 Corpus: "Corpus: 2948930 train + 2000 dev + 842571 test sentences"
2022-04-04 14:20:21,431 ----------------------------------------------------------------------------------------------------
2022-04-04 14:20:21,432 Parameters:
2022-04-04 14:20:21,433 - learning_rate: "0.1"
2022-04-04 14:20:21,434 - mini_batch_size: "32"
2022-04-04 14:20:21,434 - patience: "3"
2022-04-04 14:20:21,435 - anneal_factor: "0.5"
2022-04-04 14:20:21,436 - max_epochs: "5"
2022-04-04 14:20:21,436 - shuffle: "True"
2022-04-04 14:20:21,438 - train_with_dev: "False"
2022-04-04 14:20:21,440 - batch_growth_annealing: "False"
2022-04-04 14:20:21,440 ----------------------------------------------------------------------------------------------------
2022-04-04 14:20:21,441 Model training base path: "addressparser_data\100000\model"
2022-04-04 14:20:21,442 ----------------------------------------------------------------------------------------------------
2022-04-04 14:20:21,442 Device: cuda:0
2022-04-04 14:20:21,443 ----------------------------------------------------------------------------------------------------
2022-04-04 14:20:21,443 Embeddings storage mode: none
2022-04-04 14:20:21,446 ----------------------------------------------------------------------------------------------------
2022-04-04 15:04:22,217 epoch 1 - iter 9215/92155 - loss 0.12749600 - samples/sec: 111.71 - lr: 0.100000
2022-04-04 15:44:59,370 epoch 1 - iter 18430/92155 - loss 0.10006989 - samples/sec: 121.05 - lr: 0.100000
2022-04-04 16:30:37,388 epoch 1 - iter 27645/92155 - loss 0.09042490 - samples/sec: 107.74 - lr: 0.100000
2022-04-04 17:17:21,879 epoch 1 - iter 36860/92155 - loss 0.08745284 - samples/sec: 105.19 - lr: 0.100000
2022-04-04 17:59:28,028 epoch 1 - iter 46075/92155 - loss 0.08517210 - samples/sec: 116.78 - lr: 0.100000
2022-04-04 18:39:09,191 epoch 1 - iter 55290/92155 - loss 0.08944734 - samples/sec: 123.89 - lr: 0.100000
2022-04-04 19:23:13,211 epoch 1 - iter 64505/92155 - loss 0.09197077 - samples/sec: 111.57 - lr: 0.100000
```
I have `NVIDIA® TESLA® M60` GPU with 8GB RAM.
cuda version: `11.4`
<img width="457" alt="image" src="https://user-images.githubusercontent.com/8901901/161622040-dab0105e-9a1c-419c-91ec-99a007dfc9f8.png">
Any suggestions to improve the GPU utilization and speed up the training?
<img width="200" alt="image" src="https://media4.giphy.com/media/3oz8xOu5Gw81qULRh6/giphy.gif"> | closed | 2022-04-04T20:04:58Z | 2022-08-29T09:47:37Z | https://github.com/flairNLP/flair/issues/2703 | [
"wontfix"
] | selva221724 | 3 |
python-restx/flask-restx | api | 27 | @api.errorhandler doesn't work | Thank you for forking this project. It does not seem like this issue is addressed yet nor did I see an existing issue mentioning it, so porting it over to this project. I believe these 3 issues capture the problem well enough, but if you need additional information, just shout.
https://github.com/noirbizarre/flask-restplus/issues/764
https://github.com/noirbizarre/flask-restplus/issues/744
https://github.com/noirbizarre/flask-restplus/issues/693
BTW, the workaround using `PROPAGATE_EXCEPTIONS` did not work for me.
Thanks | open | 2020-01-31T00:54:01Z | 2023-12-20T09:12:56Z | https://github.com/python-restx/flask-restx/issues/27 | [
"bug",
"needs_info"
] | lifehackett | 12 |
awtkns/fastapi-crudrouter | fastapi | 5 | Renaming all references of Pydantic BaseModels to schemas. | This would help to avoid confusion. The function prototype below is frankly confusing and should be standardized.
```python
SQLAlchemyCRUDRouter(
    model=Potato,
    db_model=PotatoModel,
    db=get_db,
    create_schema=PotatoCreate,
    prefix='potato'
)
```
| closed | 2020-12-29T10:01:58Z | 2021-01-03T22:38:51Z | https://github.com/awtkns/fastapi-crudrouter/issues/5 | [
"good first issue"
] | awtkns | 0 |
scikit-learn/scikit-learn | python | 30,904 | PowerTransformer overflow warnings | ### Describe the bug
I'm running into overflow warnings using PowerTransformer in some not-very-extreme scenarios. I've been able to find at least one boundary of the problem, where a vector of `[[1]] * 354 + [[0]] * 1` works fine, while `[[1]] * 355 + [[0]] * 1` throws up ("overflow encountered in multiply"). Also, an additional warning starts happening at `[[1]] * 359 + [[0]] * 1` ("overflow encountered in reduce").
Admittedly, I haven't looked into the underlying math of Yeo-Johnson, so an overflow might make sense in that light. (If that's the case, though, perhaps this is an opportunity for a clearer warning?)
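For what it's worth, the warning is consistent with the Yeo-Johnson likelihood computation: on nearly-constant data the optimizer can select an extreme lambda, and squaring the transformed values while computing their variance then exceeds float64 range, which matches the `overflow encountered in multiply` inside numpy's variance helpers. A standalone sketch (the lambda value of 600 is a hypothetical stand-in for whatever the optimizer actually picks):

```python
import numpy as np

# Near-constant data like the failing case: 355 ones and a single zero.
x = np.array([1.0] * 355 + [0.0])


def yeo_johnson(x, lmbda):
    # Yeo-Johnson transform for non-negative x with lmbda != 0.
    return ((x + 1.0) ** lmbda - 1.0) / lmbda


# With an extreme lambda, the transformed ones are ~1e177; the squared
# deviations computed inside np.var overflow float64 and the result is inf.
t = yeo_johnson(x, 600.0)
var = np.var(t)  # emits RuntimeWarning: overflow encountered in multiply
print(np.isinf(var))  # True
```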
### Steps/Code to Reproduce
```python
import sys
from sklearn.preprocessing import PowerTransformer
for n in range(350, 360):
    print(f"[[1]] * {n}, [[0]] * 1", file=sys.stderr)
    _ = PowerTransformer().fit_transform([[1]] * n + [[0]] * 1)
    print(file=sys.stderr)
```
### Expected Results
```
[[1]] * 350, [[0]] * 1
[[1]] * 351, [[0]] * 1
[[1]] * 352, [[0]] * 1
[[1]] * 353, [[0]] * 1
[[1]] * 354, [[0]] * 1
[[1]] * 355, [[0]] * 1
[[1]] * 356, [[0]] * 1
[[1]] * 357, [[0]] * 1
[[1]] * 358, [[0]] * 1
[[1]] * 359, [[0]] * 1
```
### Actual Results
```
[[1]] * 350, [[0]] * 1
[[1]] * 351, [[0]] * 1
[[1]] * 352, [[0]] * 1
[[1]] * 353, [[0]] * 1
[[1]] * 354, [[0]] * 1
[[1]] * 355, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 356, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 357, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 358, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 359, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:205: RuntimeWarning: overflow encountered in reduce
ret = umr_sum(x, axis, dtype, out, keepdims=keepdims, where=where)
```
### Versions
```shell
System:
python: 3.11.9 (main, May 16 2024, 15:17:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
executable: /Users/*****/.pyenv/versions/3.11.9/envs/disposable/bin/python
machine: macOS-15.2-arm64-arm-64bit
Python dependencies:
sklearn: 1.6.1
pip: 24.0
setuptools: 65.5.0
numpy: 2.2.3
scipy: 1.15.2
Cython: None
pandas: None
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /Users/*****/.pyenv/versions/3.11.9/envs/disposable/lib/python3.11/site-packages/sklearn/.dylibs/libomp.dylib
version: None
``` | open | 2025-02-26T01:45:05Z | 2025-02-28T11:39:21Z | https://github.com/scikit-learn/scikit-learn/issues/30904 | [] | rcgale | 1 |
tensorflow/tensor2tensor | deep-learning | 1,229 | *help* Using the (universal) transformer as library | ### Description
Hi, I'm quite new to tensor2tensor and have been trying to use the binaries and read the code in this repository. I wonder if anyone can provide an example of training and using the universal transformer via function calls, without having to use the binary script? I'm trying to use the universal transformer without the decoder, and I understand how it works from the paper and the code in the model/research directory. I just can't figure out how to use it.
### Environment information
```
OS: Arch Linux with Linux 4.18
GPU: Nvidia GTX 850M 4GB
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
tensor2tensor==1.11.0
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-gpu==1.12.0
tensorflow-probability==0.5.0
$ python -V
Python 3.6.6
```
| open | 2018-11-15T12:30:48Z | 2018-11-15T12:31:10Z | https://github.com/tensorflow/tensor2tensor/issues/1229 | [] | gregory112 | 0 |
httpie/cli | python | 793 | Stream mode not working for backend process | I'm using the stream mode to get event updates from some web sites (for example, Google's Firebase real-time database). The solution works perfectly when the process runs in console. But it does not work well as a background process.
When run as a background process, it works while the console that started it remains open. Once that console is closed, the process keeps running but no longer receives updates.
Following is the script: (there are updates in /tmp/fb.log until the console is closed)
# httpie is used to handle streaming events from Firebase
FB_REQUEST_URL="$FB_BASE_URL/$endpoint.json?auth=$FB_KEY"
/usr/bin/http --stream "$FB_REQUEST_URL" Accept:'text/event-stream' | \
while read -r line ; do
echo "$line" >> /tmp/fb.log
done | closed | 2019-07-24T13:36:45Z | 2019-07-24T19:35:09Z | https://github.com/httpie/cli/issues/793 | [] | gadget1999 | 1 |
django-oscar/django-oscar | django | 3,509 | Add ability to make wishlists public/shareable | The existing wishlist functionality in Oscar is incomplete - the models allow for wishlists to be "public" or "shared", but the UI does not allow for making a wishlist public/shared.
1. *Public wishlists* - we should allow users to make wishlists "public" which means that they can share links to them with others. My feeling is that "public" in this case should still require the viewer to be logged in, so that these views are not exposed to abuse and SEO implications. This should be relatively straightforward to implement, and has a clear use case.
2. *Shared wishlists* - it seems the original intention behind this was to have lists that can collaboratively be managed by multiple users. The use case for this isn't clear to me, and I think needs elaboration before we can implement. | closed | 2020-09-24T05:29:04Z | 2023-05-12T09:14:42Z | https://github.com/django-oscar/django-oscar/issues/3509 | [
"☼ Enhancement"
] | solarissmoke | 2 |
arogozhnikov/einops | tensorflow | 250 | Investigating effect of einops on torch.compile | ## TLDR for newcomers
see [instructions on using einops with torch.compile](https://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops)
<br />
## Original issue
**Describe the bug**
Einops seems to be preventing a good torch.compile optimization. No performance gain is observed (it's even worse) when using einops where we do observe an improvement with torch operations
**Reproduction steps**
Steps to reproduce the behavior:
```python
import torch
import torch.nn as nn
from einops import rearrange
import numpy as np
import time


def run_network(network, x, num_runs=10):
    times = []
    for _ in range(num_runs):
        start = time.time()
        network(x)
        times.append(time.time() - start)
    print(f"Average time: {np.mean(times)}, std: {np.std(times)}")


torch.set_float32_matmul_precision('high')


class FusedMLP(nn.Sequential):
    def __init__(
        self,
        dim_model: int,
        dropout: float,
        activation: torch.nn,
        hidden_layer_multiplier: int = 4,
    ):
        super().__init__(
            nn.Linear(dim_model, dim_model * hidden_layer_multiplier),
            activation(),
            nn.Dropout(dropout),
            nn.Linear(dim_model * hidden_layer_multiplier, dim_model),
        )


class CrossAttentionOpTorch(nn.Module):
    def __init__(self, attention_dim, num_heads, dim_q, dim_kv, use_biases=True):
        super().__init__()
        self.dim_q = dim_q
        self.dim_kv = dim_kv
        self.attention_dim = attention_dim
        self.num_heads = num_heads
        self.use_biases = use_biases
        self.q = nn.Linear(dim_q, attention_dim, bias=use_biases)
        self.k = nn.Linear(dim_kv, attention_dim, bias=use_biases)
        self.v = nn.Linear(dim_kv, attention_dim, bias=use_biases)
        self.out = nn.Linear(attention_dim, dim_q, bias=use_biases)

    def forward(self, x_to, x_from=None):
        if x_from is None:
            x_from = x_to
        q = self.q(x_to)
        k = self.k(x_from)
        v = self.v(x_from)
        q = q.view(q.shape[0], q.shape[1], self.num_heads, self.attention_dim // self.num_heads).permute(0, 2, 1, 3)
        k = k.view(k.shape[0], k.shape[1], self.num_heads, self.attention_dim // self.num_heads).permute(0, 2, 1, 3)
        v = v.view(v.shape[0], v.shape[1], self.num_heads, self.attention_dim // self.num_heads).permute(0, 2, 1, 3)
        x = torch.nn.functional.scaled_dot_product_attention(q, k, v)
        x = x.permute(0, 2, 1, 3).contiguous().view(x.shape[0], x.shape[2], self.attention_dim)
        x = self.out(x)
        return x


class CrossAttentionOpEinops(nn.Module):
    def __init__(self, attention_dim, num_heads, dim_q, dim_kv, use_biases=True):
        super().__init__()
        self.dim_q = dim_q
        self.dim_kv = dim_kv
        self.attention_dim = attention_dim
        self.num_heads = num_heads
        self.use_biases = use_biases
        self.q = nn.Linear(dim_q, attention_dim, bias=use_biases)
        self.k = nn.Linear(dim_kv, attention_dim, bias=use_biases)
        self.v = nn.Linear(dim_kv, attention_dim, bias=use_biases)
        self.out = nn.Linear(attention_dim, dim_q, bias=use_biases)

    def forward(self, x_to, x_from=None):
        if x_from is None:
            x_from = x_to
        q = self.q(x_to)
        k = self.k(x_from)
        v = self.v(x_from)
        q = rearrange(q, "b n (h d) -> b h n d", h=self.num_heads)
        k = rearrange(k, "b n (h d) -> b h n d", h=self.num_heads)
        v = rearrange(v, "b n (h d) -> b h n d", h=self.num_heads)
        x = torch.nn.functional.scaled_dot_product_attention(q, k, v)
        x = rearrange(x, "b h n d -> b n (h d)")
        x = self.out(x)
        return x


class SelfAttentionBlock(nn.Module):
    def __init__(
        self,
        dim_qkv: int,
        num_heads: int,
        attention_dim: int = None,
        mlp_multiplier: int = 4,
        dropout: float = 0.0,
        stochastic_depth: float = 0.0,
        use_einops: bool = False,
    ):
        super().__init__()
        self.initial_ln = nn.LayerNorm(dim_qkv, eps=1e-6)
        attention_dim = dim_qkv if attention_dim is None else attention_dim
        if use_einops:
            self.sa = CrossAttentionOpEinops(attention_dim, num_heads, dim_qkv, dim_qkv)
        else:
            self.sa = CrossAttentionOpTorch(attention_dim, num_heads, dim_qkv, dim_qkv)
        self.middle_ln = nn.LayerNorm(dim_qkv, eps=1e-6)
        self.ffn = FusedMLP(
            dim_model=dim_qkv,
            dropout=dropout,
            activation=nn.GELU,
            hidden_layer_multiplier=mlp_multiplier,
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        tokens = tokens + self.sa(self.initial_ln(tokens))
        tokens = tokens + self.ffn(self.middle_ln(tokens))
        return tokens


class TransformerEncoder(nn.Sequential):
    def __init__(
        self,
        num_layers: int,
        dim_qkv: int,
        num_heads: int,
        attention_dim: int = None,
        mlp_multiplier: int = 4,
        dropout: float = 0.0,
        stochastic_depth: float = 0.0,
        use_einops: bool = False,
    ):
        layers = []
        for _ in range(num_layers):
            layers.append(
                SelfAttentionBlock(
                    dim_qkv=dim_qkv,
                    num_heads=num_heads,
                    attention_dim=attention_dim,
                    mlp_multiplier=mlp_multiplier,
                    dropout=dropout,
                    stochastic_depth=stochastic_depth,
                    use_einops=use_einops,
                )
            )
        super().__init__(*layers)


device = torch.device("cuda:0")

transformer_einops = TransformerEncoder(
    20, 512, 16, mlp_multiplier=4, use_einops=True
).to(device)
transformer_torch = TransformerEncoder(
    20, 512, 16, mlp_multiplier=4, use_einops=False
).to(device)

x = torch.randn(64, 256, 512).to(device)

optimized_transformer_einops = torch.compile(transformer_einops, mode="max-autotune")
optimized_transformer_einops(x)
optimized_transformer_torch = torch.compile(transformer_torch, mode="max-autotune")
optimized_transformer_torch(x)

run_network(transformer_einops, x)
run_network(optimized_transformer_einops, x)
run_network(transformer_torch, x)
run_network(optimized_transformer_torch, x)
```
Output:
```bash
Average time: 0.036614489555358884, std: 0.024173271625475713
Average time: 0.06763033866882324, std: 0.020439297576520278
Average time: 0.03728628158569336, std: 0.022657191693774118
Average time: 0.0188093900680542, std: 0.000180285787445628
```
**Expected behavior**
As you can observe, without einops `torch.compile` gives a ~50% speed-up, whereas with einops it results in a ~2x slowdown.
Einops is awesome and I hope we find a way to solve this issue, as it is key for PyTorch 2.0 compatibility.
**Your platform**
Einops 0.6, pytorch 2.0, Nvidia 3090
| closed | 2023-04-12T16:40:55Z | 2023-07-10T16:41:01Z | https://github.com/arogozhnikov/einops/issues/250 | [] | nicolas-dufour | 30 |
blacklanternsecurity/bbot | automation | 1,869 | Tag Events with MITRE TTPs | **Description**
For each module, add an optional property denoting which MITRE ATT&CK TTP is associated with the events being generated. These should be hard-coded into the module's properties where appropriate.
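On the consuming side, a tag like this would let downstream tooling bucket a scan's output by technique. A hypothetical sketch of what that enables (event shapes simplified relative to real BBOT output):

```python
from collections import defaultdict

# Hypothetical emitted events carrying the proposed "mitre_ttp" field.
events = [
    {"event_type": "VULNERABILITY", "data": {"mitre_ttp": "T1078.001"}},
    {"event_type": "FINDING", "data": {"mitre_ttp": "T1078.001"}},
    {"event_type": "TECHNOLOGY", "data": {}},
]

# Group event types by technique; events without a tag fall into "untagged".
by_ttp = defaultdict(list)
for e in events:
    by_ttp[e["data"].get("mitre_ttp", "untagged")].append(e["event_type"])

print(dict(by_ttp))
# {'T1078.001': ['VULNERABILITY', 'FINDING'], 'untagged': ['TECHNOLOGY']}
```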
Example:
```
class badsecrets(BaseModule):
    watched_events = ["HTTP_RESPONSE"]
    produced_events = ["FINDING", "VULNERABILITY", "TECHNOLOGY"]
    flags = ["active", "safe", "web-basic"]
    meta = {
        "description": "Library for detecting known or weak secrets across many web frameworks",
        "created_date": "2022-11-19",
        "author": "@liquidsec",
        "mitre_ttp": "T1078.001",
    }
```
Result:
```
{
    "data": {
        "host": "evilcorp.com",
        "severity": "INFO",
        "description": "asdf",
        "mitre_ttp": "T1078.001",
    },
    "event_type": "VULNERABILITY",
    ...
}
``` | open | 2024-10-18T13:05:50Z | 2025-02-28T15:02:41Z | https://github.com/blacklanternsecurity/bbot/issues/1869 | [
"enhancement"
] | kerrymilan | 0 |
wger-project/wger | django | 1,811 | Fiber included in carb amounts in US/Canada, excluded in EU | In the United States and most of the Americas, the nutrition label lists total carbohydrate content (including fiber), as required by FDA regulations. In the European Union, the nutrition label lists carbohydrate and fiber separately.
Not sure how to handle this, but I'm quite sure some of our calculations are wrong because of it.
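Whatever the storage decision ends up being, the conversion itself is simple. A sketch of normalizing a US-style label value to the EU convention (function name and region codes are hypothetical, values in grams):

```python
def normalize_carbs(total_carbs_g: float, fiber_g: float, label_region: str) -> float:
    """Return available (EU-style) carbohydrates in grams.

    US/Canada labels report total carbohydrate *including* fiber;
    EU labels report carbohydrate *excluding* fiber.
    """
    if label_region == "US":
        return total_carbs_g - fiber_g
    return total_carbs_g  # EU-style labels already exclude fiber


print(normalize_carbs(30.0, 5.0, "US"))  # 25.0
print(normalize_carbs(30.0, 5.0, "EU"))  # 30.0
```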
OFF has a corresponding ticket here https://github.com/openfoodfacts/openfoodfacts-server/issues/5675 | open | 2024-11-06T15:11:34Z | 2024-11-07T11:48:43Z | https://github.com/wger-project/wger/issues/1811 | [] | Dieterbe | 1 |
wq/django-rest-pandas | rest-api | 34 | Calling `PandasBaseRenderer().render(data)` throws `TypeError` | For personal reasons, I wanted to call the `PandasCSVRenderer` on a dataframe outside of a view.
Because `renderer_context` is defaulted to `None`, the following code will throw `TypeError: argument of type 'NoneType' is not iterable`:
```python
from rest_pandas import renderers
data_frame = pandas.DataFrame(some_data)
print renderers.PandasCSVRenderer().render(data_frame)
>>>TypeError: argument of type 'NoneType' is not iterable
```
I suggest two options to fix the following code:
https://github.com/wq/django-rest-pandas/blob/edd3225acb21fcb0e36d8889d7bdfa5dedc5d67b/rest_pandas/renderers.py#L30
1. Change the if statement:
```python
if renderer_context and 'response' in renderer_context:
```
2. Add the following at the top of the function:
```python
if renderer_context is None:
    renderer_context = {}
...
``` | closed | 2018-03-23T20:09:09Z | 2019-03-28T06:54:46Z | https://github.com/wq/django-rest-pandas/issues/34 | [] | arthurio | 2 |
deezer/spleeter | tensorflow | 532 | Some questions about fine-tuning | Is the complete model open source? I encountered some problems during fine-tuning and noticed the following issue: https://github.com/deezer/spleeter/issues/32.
So I'm sorry to disturb you.
Looking forward to your answer
| open | 2020-12-09T10:26:11Z | 2020-12-09T10:26:11Z | https://github.com/deezer/spleeter/issues/532 | [
"question"
] | DaerTaeKook | 0 |
STVIR/pysot | computer-vision | 179 | How can I train the alexnetlegacy model? | Why is the size of the alexnetlegacy model I trained not the same as the model downloaded from GitHub? | closed | 2019-09-17T02:29:05Z | 2019-10-09T03:34:36Z | https://github.com/STVIR/pysot/issues/179 | [] | root12321 | 2 |
521xueweihan/HelloGitHub | python | 2,447 | Open-source self-recommendation: SmartPET-Feeder | ## Recommended Project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/smart2pet/SmartPET-Feeder
<!-- Please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning) -->
- Category: Python and Arduino C
<!-- Please describe what it does in about 20 characters, like an article headline, so it is clear at a glance -->
- Project title: SmartPET automatic pet feeder
<!-- What is this project, what can it be used for, what features does it have or what pain point does it solve, what scenarios is it suited to, and what can beginners learn from it? Length 32-256 characters -->
- Project description: Feeds pets automatically, solving the pet-feeding problem
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights: accurate, with higher precision than ordinary automatic feeders on the market (Precise); simple to operate (Easy); punctual (Time-based); can dispense food offline.
Ordinary automatic feeders rely on weight sensors, which causes a deviation of about ±5 g. This project uses a regression algorithm instead; real-world tests show the deviation is generally below ±2 g.
- Sample code: (optional)
- Screenshots: (optional) gif/png/jpg
- Planned updates: provide major, minor, and patch updates, and add new features.
- Note: for anything not covered here, see the project README.
| closed | 2022-12-04T12:32:07Z | 2024-01-24T08:22:31Z | https://github.com/521xueweihan/HelloGitHub/issues/2447 | [
"Python 项目"
] | smart2pet | 9 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 873 | Can't seem to install torch, displays "errored out with exit status 1" | I used pip install torch, and got this:
ERROR: Command errored out with exit status 1:
command: 'c:\users\admin\appdata\local\programs\python\python37-32\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Admin\\AppData\\Local\\Temp\\pip-install-jm6frbtm\\torch_2fdde634afcd4c2ea030032be969e505\\setup.py'"'"'; __file__='"'"'C:\\Users\\Admin\\AppData\\Local\\Temp\\pip-install-jm6frbtm\\torch_2fdde634afcd4c2ea030032be969e505\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Admin\AppData\Local\Temp\pip-record-qh8_xizr\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\admin\appdata\local\programs\python\python37-32\Include\torch'
cwd: C:\Users\Admin\AppData\Local\Temp\pip-install-jm6frbtm\torch_2fdde634afcd4c2ea030032be969e505\
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Admin\AppData\Local\Temp\pip-install-jm6frbtm\torch_2fdde634afcd4c2ea030032be969e505\setup.py", line 265, in <module>
description="Tensors and Dynamic neural networks in Python with strong GPU acceleration",
File "c:\users\admin\appdata\local\programs\python\python37-32\lib\site-packages\setuptools\__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "c:\users\admin\appdata\local\programs\python\python37-32\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\users\admin\appdata\local\programs\python\python37-32\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\users\admin\appdata\local\programs\python\python37-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\Admin\AppData\Local\Temp\pip-install-jm6frbtm\torch_2fdde634afcd4c2ea030032be969e505\setup.py", line 99, in run
self.run_command('build_deps')
File "c:\users\admin\appdata\local\programs\python\python37-32\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\admin\appdata\local\programs\python\python37-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\Admin\AppData\Local\Temp\pip-install-jm6frbtm\torch_2fdde634afcd4c2ea030032be969e505\setup.py", line 51, in run
from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\admin\appdata\local\programs\python\python37-32\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Admin\\AppData\\Local\\Temp\\pip-install-jm6frbtm\\torch_2fdde634afcd4c2ea030032be969e505\\setup.py'"'"'; __file__='"'"'C:\\Users\\Admin\\AppData\\Local\\Temp\\pip-install-jm6frbtm\\torch_2fdde634afcd4c2ea030032be969e505\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Admin\AppData\Local\Temp\pip-record-qh8_xizr\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\admin\appdata\local\programs\python\python37-32\Include\torch' Check the logs for full command output. | closed | 2021-10-15T18:23:57Z | 2021-10-16T08:43:00Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/873 | [] | ControlAlternateDelete | 1 |
Kanaries/pygwalker | pandas | 239 | When deploying pygwalker over Streamlit the save button is missing | | closed | 2023-09-24T15:05:32Z | 2023-09-29T02:23:51Z | https://github.com/Kanaries/pygwalker/issues/239 | [] | JeevankumarDharmalingam | 14 |
plotly/dash | flask | 3,036 | [Feature Request] Optionally pass errors raised from a callback specific error handler to the global error handler | Thanks so much for your interest in Dash!
Before posting an issue here, please check the Dash [community forum](https://community.plotly.com/c/dash) to see if the topic has already been discussed. The community forum is also great for implementation questions. When in doubt, please feel free to just post the issue here :)
**Is your feature request related to a problem? Please describe.**
Currently if I define `on_error` for both the `Dash()` class and for a specific callback, the callback-specific `on_error` overwrites the global `on_error`. It would be nice to have an option to chain these error handlers, so that exceptions raised from the callback-specific `on_error` are passed on to the global `on_error`.
Example I can think of:
- callback specific on_error: handles user incorrectly filling in data
- global on_error: catches unexpected errors (ie. bugs) and notifies the developer
**Describe the solution you'd like**
Perhaps a `bool` argument to the `callback()` decorator that would enable passing uncaught exceptions from the local `on_error` to the global `on_error`, and the same argument to the `Dash()` class which would be used as a default for all callbacks?
**Describe alternatives you've considered**
- Wrapping the callback-specific on_error in a try/except block and calling the global error handler manually.
- Wrapping the body of a callback in a try/except block and calling the callback-specific on_error manually.
I think both of these approaches are unnecessary boilerplate.
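For concreteness, the first alternative looks roughly like this (every name below is a hypothetical stand-in, not part of the Dash API):

```python
# Sketch of the try/except chaining boilerplate described above; the helper
# names are hypothetical stand-ins, not part of the Dash API.
developer_notifications = []

def notify_developer(err):            # global concern: unexpected bugs
    developer_notifications.append(repr(err))

def handle_user_input_error(err):     # local concern: bad user input
    if not isinstance(err, ValueError):
        raise err                     # not a user-input problem: escalate

def global_on_error(err):
    notify_developer(err)

def callback_on_error(err):
    try:
        handle_user_input_error(err)
    except Exception as unhandled:
        global_on_error(unhandled)

callback_on_error(ValueError("bad field"))  # handled locally
callback_on_error(KeyError("missing"))      # escalated to the global handler
print(developer_notifications)              # only the KeyError reaches it
```

Having the framework do this chaining would remove exactly this wrapper from every callback.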
| open | 2024-10-15T09:59:29Z | 2024-10-15T09:59:29Z | https://github.com/plotly/dash/issues/3036 | [] | tlauli | 0 |
Esri/arcgis-python-api | jupyter | 1,701 | Downloading CSV files via arcgis.gis.Item.download() failing with unexpected permission error | Downloading CSV files via arcgis.gis.Item.download() failing with unexpected permission error.
First noticed this when downloading an Administrative Report item in an ArcGIS Online Notebook (runtime Standard - 8.0; ArcGIS API for Python version 2.1.0.2.) The issue began occurring after the October 2023 ArcGIS Online update. The code had been running fine before this as a daily scheduled task. Seems to impact any downloading any CSV file though.
One way to reproduce the issue is to generate an Administrative Report, such as a daily-scoped Item report, via the ArcGIS Online GUI. From the report's item detail page, you can successfully download the CSV using the Download button. However, attempting to programmatically retrieve the same report via the ArcGIS API for Python using arcgis.gis.Item.download() generates a CSV which contains a JSON error message (e.g., Error: You do not have permissions to access this resource or perform this operation.)
Updating the ArcGIS Online Notebook to the latest version of the runtime, ArcGIS Notebook Python 3 Standard - 9.0, does not fix the issue. | closed | 2023-10-28T14:47:36Z | 2023-10-30T09:06:45Z | https://github.com/Esri/arcgis-python-api/issues/1701 | [] | knoopum | 1 |
Skyvern-AI/skyvern | automation | 1,114 | Suggestion: Skyvern may benefit from "use search on Discourse" forum heuristic | There are few forums using https://www.discourse.org/ , seems like popular framework.
When I analyze run I made - it was run on your app.skyern.com platform, that way you have full access to logs, I am fine with that, as this is search for laptops with eInk/ePaper displays or eInk/ePaper display for Framework laptop:
https://app.skyvern.com/tasks/tsk_321735444878741662/diagnostics
I gave it limit 40 steps to see how it loops,
When I took fast look at traces, I have impression it loop around idea of trying to click "search" button and fill "text box" and couldn't do it on Discourse.org playform on page https://community.frame.work/t/replaceable-e-ink-display/13327 .
Given Discourse is popular site I was thinking, that maybe reporting this trace maybe interesting, and maybe Skyvern could benefit from hardcoded heuristics for different sites.
Maybe you could made separate repository , just for those heuristics, so community could contribute and Skyvern could use them? - just ideas. Feel free to Close this bug :).
Discussion: Discussions: https://github.com/Skyvern-AI/skyvern/discussions/1115 | open | 2024-11-03T14:16:46Z | 2024-11-03T14:19:50Z | https://github.com/Skyvern-AI/skyvern/issues/1114 | [] | gwpl | 0 |
littlecodersh/ItChat | api | 977 | itchat carries a risk of WeChat rule violations | Before submitting, please make sure you have checked the following!
- [ ] You can log in to your WeChat account in a browser, but cannot log in using `itchat`
- [ ] I have read and followed the instructions in the [documentation][document]
- [ ] Your issue has not already been reported in [issues][issues]; otherwise, please report it under the existing issue
- [ ] This issue is really about `itchat`, not another project.
- [ ] If your issue concerns stability, consider trying the [itchatmp][itchatmp] project, which has minimal requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the full log here]
```
Your itchat version: `[fill in the version number here]`. (Obtainable via `python -c "import itchat;print(itchat.__version__)"`)
Any other content or a more detailed description of the issue can be added below:
> [your content]
[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| open | 2023-02-09T03:19:32Z | 2023-02-09T03:19:32Z | https://github.com/littlecodersh/ItChat/issues/977 | [] | ghost | 0 |
aimhubio/aim | data-visualization | 3,085 | Bug report | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To reproduce
<!-- Reproduction steps. -->
### Expected behavior
<!-- Fill in expected behavior. -->
### Environment
- Aim Version (e.g., 3.0.1)
- Python version
- pip version
- OS (e.g., Linux)
- Any other relevant information
### Additional context
<!-- Add any other context about the problem here. -->
| open | 2024-01-07T11:03:43Z | 2024-01-07T11:03:43Z | https://github.com/aimhubio/aim/issues/3085 | [
"type / bug",
"help wanted"
] | for4nous | 0 |
scikit-learn/scikit-learn | python | 30,194 | Rename `frozen.FrozenEstimator` to `frozen.Frozen` | Looking through all our estimators, none of them have the word "Estimator" besides `BaseEstimator` and `MetaEstimatorMixin`. I think we can shorten the meta-estimator name to `Frozen`.
CC @adrinjalali @scikit-learn/core-devs | closed | 2024-11-01T20:49:47Z | 2024-11-07T08:19:16Z | https://github.com/scikit-learn/scikit-learn/issues/30194 | [
"API",
"Blocker",
"RFC"
] | thomasjpfan | 15 |
MaartenGr/BERTopic | nlp | 1,079 | question with topics_over_time | hello~ GREAT thanks for your elegant model~
### However, there seems to be something wrong with topics_over_time when I try drawing my own graph of topics over time. The language is Chinese.
I find the lines always run horizontally, and every topic frequency is 1.
```python
vectorizer = CountVectorizer(tokenizer=tokenize_zh, stop_words=stopwords)
sentence_model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
embeddings = sentence_model.encode(contents, show_progress_bar=False)
topic_model = BERTopic(
    language="multilingual",
    embedding_model=sentence_model,
    vectorizer_model=vectorizer
)
topics, _ = topic_model.fit_transform(contents, embeddings)
topics_over_time = topic_model.topics_over_time(contents, timestamps)
topic_model.visualize_topics_over_time(topics_over_time, topics=[0, 2, 3, 4, 5, 61])
```
## And here's the result:

Then I tried your sample code; to my surprise, things still go wrong:

### No fluctuation! Please tell me what's going wrong.
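One thing I noticed while poking at this (just a guess, so please correct me): if every document has a unique timestamp, each topic occurs exactly once per time point, so all frequencies flatten to 1. Binning the timestamps, e.g. `topic_model.topics_over_time(contents, timestamps, nr_bins=20)`, should aggregate the counts again. A plain-Python illustration of the effect:

```python
from collections import Counter

# 30 documents, each with a unique timestamp: every per-timestamp count is 1,
# which is exactly the flat line at y=1 in the plots above.
timestamps = [f"2023-01-{day:02d}" for day in range(1, 31)]
per_timestamp = Counter(timestamps)
print(set(per_timestamp.values()))  # {1}

# Bucketing timestamps (what nr_bins does) restores visible frequencies.
nr_bins = 3
bin_index = [i * nr_bins // len(timestamps) for i in range(len(timestamps))]
per_bin = Counter(bin_index)
print(dict(per_bin))  # {0: 10, 1: 10, 2: 10}
```

If the real timestamps are (nearly) all distinct, this would explain the constant frequency of 1.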
| closed | 2023-03-08T08:53:38Z | 2023-05-23T09:32:52Z | https://github.com/MaartenGr/BERTopic/issues/1079 | [] | MrRabbit12 | 5 |
google/trax | numpy | 1,552 | Cannot create simple image pipeline with trax.data. | ### Description
Apologies if this is the wrong place to raise such issue:
Trying to follow the [documentation](https://trax-ml.readthedocs.io/en/latest/trax.data.html#module-trax.data.inputs) on data pipelines in trax I was trying to create simple image pipeline that loads and preprocesses [MNIST](https://www.tensorflow.org/datasets/catalog/mnist) dataset.
The following example will work as expected:
```python
import trax
data_pipeline = trax.data.Serial(
trax.data.TFDS('mnist', keys=("image", "label"), train=True)
)
example = data_pipeline()
data, label = next(example)
print(data.shape) # (28, 28, 1)
print(label.shape) # ()
```
Now, consider applying mock preprocessing on the data: `lambda x, y: (x, y)`. According to the mentioned documentation, the following should work:
```python
data_pipeline = trax.data.Serial(
trax.data.TFDS('mnist', keys=("image", "label"), train=True),
lambda g: map(lambda x, y: (x, y), g)
)
```
However, upon calling:
```python
data, label = next(example)
print(data.shape)
print(label.shape)
```
an error will be raised:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-98-36d413261aac> in <module>()
----> 1 data, label = next(example)
2 print(data.shape)
3 print(label.shape)
TypeError: <lambda>() missing 1 required positional argument: 'y'
```
Is this a bug or is my approach wrong?
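While waiting for an answer I noticed that plain Python reproduces this: `map` with a single iterable passes each `(image, label)` pair as one tuple argument, so a two-argument lambda fails. A minimal sketch (no trax needed), assuming the generator yields `(image, label)` tuples:

```python
# Stand-in for the TFDS generator: it yields (image, label) tuples.
pairs = [("img0", 0), ("img1", 1)]

# Mirrors the failing pipeline step: map() passes ONE argument (the tuple).
try:
    next(map(lambda x, y: (x, y), pairs))
except TypeError as err:
    print(err)  # <lambda>() missing 1 required positional argument: 'y'

# Working variant: accept a single tuple and unpack it inside the body.
first = next(map(lambda xy: (xy[0], xy[1]), pairs))
print(first)  # ('img0', 0)
```

If that is indeed the cause, `lambda g: map(lambda xy: (xy[0], xy[1]), g)` should work inside `trax.data.Serial` too.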
### Environment information
Google Colab
```
OS: Ubuntu 18.04
$ pip freeze | grep trax
trax==1.3.7
$ pip freeze | grep tensor
mesh-tensorflow==0.1.18
tensorboard==2.4.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-datasets==4.0.1
tensorflow-estimator==2.4.0
tensorflow-gcs-config==2.4.0
tensorflow-hub==0.11.0
tensorflow-metadata==0.28.0
tensorflow-probability==0.12.1
tensorflow-text==2.4.3
$ pip freeze | grep jax
jax==0.2.10
jaxlib==0.1.62+cuda110
$ python -V
# Python 3.7.10
```
### For bugs: reproduction and error logs
# Steps to reproduce:
To reproduce the error one can execute the following:
```python
!pip install -qq -U trax
import trax
data_pipeline = trax.data.Serial(
trax.data.TFDS('mnist', keys=("image", "label"), train=True),
lambda g: map(lambda x, y: (x, y), g)
)
example = data_pipeline()
data, label = next(example)
print(data.shape)
print(label.shape)
```
# Error logs:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-98-36d413261aac> in <module>()
----> 1 data, label = next(example)
2 print(data.shape)
3 print(label.shape)
TypeError: <lambda>() missing 1 required positional argument: 'y'
```
Best regards,
Sebastian | closed | 2021-03-20T11:51:44Z | 2021-03-28T18:18:04Z | https://github.com/google/trax/issues/1552 | [] | sebastian-sz | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,366 | [Bug]: An exception occurred during image training | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
<img width="1685" alt="image" src="https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/32756573/54886b92-214a-41d6-a5d6-83c0c4dd4848">
<img width="1428" alt="image" src="https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/32756573/11da8945-d3fc-4c03-8471-cee251702e90">
### Steps to reproduce the problem
Python 3.10.6. Training does not work normally, and I don't understand why, as this is my first time using it.
### What should have happened?
Training should have run normally. I don't understand what went wrong, as this is my first time using it.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-03-24-14-32.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14735760/sysinfo-2024-03-24-14-32.json)
### Console logs
```Shell
Applying attention optimization: sub-quadratic... done.
*** Error completing request
*** Arguments: ('task(6g8fb4tbp2697jt)', '', '0.00001', 1, 1, '/Users/gaoxin/Downloads/\xa0图片/爱人/', 'textual_inversion', 512, 512, False, 100000, 'disabled', '0.1', False, 0, 'once', False, 500, 500, 'style_filewords.txt', False, '', '', 20, 'DPM++ 2M Karras', 7, -1, 512, 512) {}
Traceback (most recent call last):
File "/Users/gaoxin/Downloads/wwwroot/stable-diffusion-webuiV2/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/gaoxin/Downloads/wwwroot/stable-diffusion-webuiV2/modules/call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "/Users/gaoxin/Downloads/wwwroot/stable-diffusion-webuiV2/modules/hypernetworks/ui.py", line 25, in train_hypernetwork
hypernetwork, filename = modules.hypernetworks.hypernetwork.train_hypernetwork(*args)
File "/Users/gaoxin/Downloads/wwwroot/stable-diffusion-webuiV2/modules/hypernetworks/hypernetwork.py", line 477, in train_hypernetwork
textual_inversion.validate_train_inputs(hypernetwork_name, learn_rate, batch_size, gradient_step, data_root, template_file, template_filename, steps, save_hypernetwork_every, create_image_every, log_directory, name="hypernetwork")
File "/Users/gaoxin/Downloads/wwwroot/stable-diffusion-webuiV2/modules/textual_inversion/textual_inversion.py", line 373, in validate_train_inputs
assert model_name, f"{name} not selected"
AssertionError: hypernetwork not selected
```
### Additional information
_No response_ | open | 2024-03-24T14:34:56Z | 2024-03-25T01:38:04Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15366 | [
"bug-report"
] | KevinHubs | 1 |
plotly/plotly.py | plotly | 4,455 | y axis labels does not appear in scatter plot when using plotly.express with 2 inputs | Hello Plotly,
I experience the following problem:
when trying to generate a figure of type 'line' with multiple traces using plotly.express
(the traces are defined implicitly, by passing a list of lists as the y argument),
the resulting figure indeed includes 2 traces as expected;
it DOES label the x axis correctly but DOESN'T label the y axis correctly; the y-axis label is set to 'value' for some reason.
See attached example code and the resulting graph (saved as a screenshot).
MOREOVER, you can see the same phenomenon in the plotly tutorial
https://www.youtube.com/watch?v=GGL6U0k8WYA&t=622s
at minutes 08:50-09:00 of the video. Notice the y-axis label in the video.
Thank you
--------------------Example code--------------
```
import plotly.express as px
pxFig = px.line(x=[0,1,2,3],y=[[10,11,12,13],[12,13,14,15]], labels=dict(x='xLabel', y='yLabel'), title='my Title')
pxFig.show()
```
-------------------------------
Attached image

| closed | 2023-12-08T18:51:24Z | 2024-01-18T21:25:16Z | https://github.com/plotly/plotly.py/issues/4455 | [
"bug",
"sev-2"
] | npcomplete2023 | 6 |
koxudaxi/datamodel-code-generator | fastapi | 1,904 | How to change endpoints naming template? | I run the comand:
```
datamodel-codegen --url http://localhost:8000/openapi.json --output src/server/models/language_server_rpc.py --openapi-scopes schemas paths tags parameters --output-model-type pydantic.BaseModel
```
And endpoint params are named like this: `ApiV2WorkspacesWorkspaceIdMapDependenciesPostResponse`
Is it possible to change that naming template? | open | 2024-04-07T17:27:43Z | 2024-04-07T17:27:43Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1904 | [] | funnydman | 0 |
deepset-ai/haystack | machine-learning | 8,733 | Cohere ChatGenerator - support for Tool | closed | 2025-01-16T14:10:13Z | 2025-01-28T12:41:08Z | https://github.com/deepset-ai/haystack/issues/8733 | [
"P2"
] | anakin87 | 1 | |
ludwig-ai/ludwig | data-science | 3,894 | Impossibility to use a tokenizer with auto_transformer | I want to use [this model](https://huggingface.co/ibm/MoLFormer-XL-both-10pct) as an encoder. As you can see from the description, the model can be loaded like:
```
model = AutoModel.from_pretrained("ibm/MoLFormer-XL-both-10pct", deterministic_eval=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ibm/MoLFormer-XL-both-10pct", trust_remote_code=True)
```
I try to load it using
```
encoder: auto_transformer
pretrained_model_name_or_path: ibm/MoLFormer-XL-both-10pct
```
It results in `RuntimeError: Caught exception during model preprocessing: Tokenizer class MolformerTokenizer does not exist or is not currently imported.` This is not surprising, because this model does not use the specific `MolformerTokenizer` but `AutoTokenizer` instead.
However, the [documentation](https://ludwig.ai/latest/configuration/features/text_features/#preprocessing) says that `"If a text feature's encoder specifies a huggingface model, then the tokenizer for that model will be used automatically."`.
How can I load the tokenizer for this model?
| open | 2024-01-17T17:08:37Z | 2024-10-21T18:49:36Z | https://github.com/ludwig-ai/ludwig/issues/3894 | [
"bug",
"llm"
] | sergsb | 4 |
netbox-community/netbox | django | 18,032 | Server error after updated object | ### Deployment Type
Self-hosted
### Triage priority
I volunteer to perform this work (if approved)
### NetBox Version
v4.0.7
### Python Version
3.11
### Steps to Reproduce
I have an issue when I try to access the Virtual Device Contexts main page:
Server Error
There was a problem with your request. Please contact an administrator.
The complete exception is provided below:
<class 'django.core.exceptions.FieldError'>
Cannot resolve keyword '_name' into field. Choices are: bookmarks, comments, created, custom_field_data, description, device, device_id, id, identifier, interface_count, interfaces, journal_entries, last_updated, name, primary_ip4, primary_ip4_id, primary_ip6, primary_ip6_id, status, tagged_items, tags, tenant, tenant_id
Python version: 3.11.6
NetBox version: 4.0.7
Plugins: None installed
I can list VDCs using the API call /api/dcim/virtual-device-contexts/
I can access a specific VDC by ID using both the GUI and the API
### Expected Behavior
The GUI is supposed to display the list of all VDCs
### Observed Behavior
Server Error
There was a problem with your request. Please contact an administrator.
The complete exception is provided below:
<class 'django.core.exceptions.FieldError'>
Cannot resolve keyword '_name' into field. Choices are: bookmarks, comments, created, custom_field_data, description, device, device_id, id, identifier, interface_count, interfaces, journal_entries, last_updated, name, primary_ip4, primary_ip4_id, primary_ip6, primary_ip6_id, status, tagged_items, tags, tenant, tenant_id
Python version: 3.11.6
NetBox version: 4.0.7
Plugins: None installed | closed | 2024-11-16T19:52:23Z | 2025-03-06T03:09:04Z | https://github.com/netbox-community/netbox/issues/18032 | [
"type: bug",
"status: accepted",
"status: under review",
"severity: low"
] | JeanKean-art | 7 |
plotly/dash | plotly | 2,456 | Pass a dash component to label property of dcc.Tab |
## Why is this feature important?
It is currently not possible to pass a Dash component to the label property of a dcc.Tab. Being able to do so would enable users to have images in the tab itself and then target that image with a tooltip or other functionality via a callback.
## Describe the acceptance criteria
Being able to pass a dash component to the label property of dcc.Tab
End result could produce something like this:
<img width="703" alt="Photo1" src="https://user-images.githubusercontent.com/98440270/225378254-66dc14d1-df42-4238-bbfb-613e88be2af8.png">
## Additional context
I was able to create a workaround for this by using a mix of the Dash Mantine Components tab hierarchy, the Dash Mantine Components Avatar, and the Dash Bootstrap Components Tooltip.
"feature",
"P3"
] | jwcomp4 | 3 |
mage-ai/mage-ai | data-science | 5,610 | Dynamic Routes Not Loading Correctly After Deployment on Exported Static Build | I have encountered an issue with the application after deploying a static build. The application works correctly in the development environment (3000) but fails for certain dynamic routes when running on port 6789. I have added new dynamic routes, and while they function correctly during development, they show a "404 Not Found" error on port 6789. | open | 2024-12-11T11:20:25Z | 2024-12-11T11:20:25Z | https://github.com/mage-ai/mage-ai/issues/5610 | [] | puneeth-jadhav | 0 |
sammchardy/python-binance | api | 1,288 | Placing a future limit order to take partial profit - APIError (code=-1022): Signature for this request is not valid | Please help - I am going crazy trying to place a limit order for the partial closure of the open position.
I keep getting the error "APIError (code=-1022): Signature for this request is not valid."
I have a short position on ETHUSDT and I want to place an order that takes profit from 1/3 of the position at a certain price:
```python
order_confirmation = binance_client.futures_create_order(
    symbol="ETHUSDT",
    side="BUY",
    type=binance_client.FUTURE_ORDER_TYPE_STOP_MARKET,  # on this one I already tried everything out there
    quantity='0.014',
    timeInForce="GTC",
    reduceOnly="true",
    closePosition='false',
    price='1512.12',
    stopPrice='1512.12',
    recvWindow=600000,
    timestamp=timestamp
)
```
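While fiddling with this I drafted the variant below (hedged guesses only, not an official answer): the Binance docs cap recvWindow at 60000 ms, so 600000 may itself be rejected; python-binance normally supplies and signs its own timestamp, so passing one manually can desynchronize the signature; and STOP_MARKET / TAKE_PROFIT_MARKET conditional orders trigger at stopPrice and take no price field. The actual API call is commented out because I have not verified it against a real account:

```python
# Hedged sketch of a partial take-profit on a short position.
# Differences from the failing call: no manual timestamp, no oversized
# recvWindow, no price field, and a take-profit (not stop) trigger type.
params = dict(
    symbol="ETHUSDT",
    side="BUY",                  # buying back part of a short
    type="TAKE_PROFIT_MARKET",
    quantity="0.014",            # roughly 1/3 of the position
    stopPrice="1512.12",         # trigger price
    reduceOnly="true",           # must never open a new position
)
# order_confirmation = binance_client.futures_create_order(**params)
print(sorted(params))
```

If anyone can confirm which of these parameters actually breaks the signature, please say so.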
I tried many different combinations and solutions I could find on the web. There is no problem with the signature / api_key or secret_key because other types of orders (e.g. limit orders to open new positions) work fine with the same code. What could be the issue here? | closed | 2023-02-11T16:11:54Z | 2023-02-12T13:20:57Z | https://github.com/sammchardy/python-binance/issues/1288 | [] | arsh-charlesriver | 0 |
taverntesting/tavern | pytest | 457 | Nested lists with parametrize causing BadSchemaError |
```
marks:
- parametrize:
key:
- foo
- bar
- biz
vals:
-
- c93a50-16cf7cad928-44
- - c93a50-16cf7cad928-4e
- c93a50-16cf7cad928-4g
- - c93a50-16cf7cad928-4m
- c93a50-16cf7cad928-4o
-
- c93a50-16cfd24f2f7-76
- - c93a50-16cfd24f2f7-8d
- c93a50-16cfd24f2f7-8f
- - c93a50-16cfd24f2f7-b1
- c93a50-16cfd24f2f7-b3
``` | closed | 2019-09-30T16:11:51Z | 2019-11-19T17:43:58Z | https://github.com/taverntesting/tavern/issues/457 | [] | jsfehler | 1 |
gradio-app/gradio | python | 10,632 | `gr.Progress` not displayed correctly on `gr.Dataframe` | ### Describe the bug
The progress bar is not displayed fully on Dataframes, see screenshot.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def foo(progress=gr.Progress()):
import time
progress(0, desc="Starting")
time.sleep(1)
for i in progress.tqdm(range(1, 101), desc="Progress"):
time.sleep(1)
return [["finished"]]
with gr.Blocks() as blocks:
m = gr.Matrix(headers=["test"])
gr.Button("Foo").click(foo, show_progress=True, outputs=m)
blocks.launch()
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
gradio==5.16.1
```
### Severity
I can work around it | closed | 2025-02-19T16:15:56Z | 2025-03-07T10:52:20Z | https://github.com/gradio-app/gradio/issues/10632 | [
"bug",
"💾 Dataframe"
] | Josef-Haupt | 0 |
aiortc/aiortc | asyncio | 227 | Access to local ice candidates | I'm trying to figure out the right way to share ice candidates with my peers but can't seem to find a public API for doing so.
It seems I could monitor the ICE gatherers of the peer connection for gathering state changes and then share the current set of local candidates from that instance, but access to this stuff seems to be considered private (leading dunder grammar).
In JS land I use the peer connection's `icecandidate` event for this and then the gathering state change event to flush a buffered collection of ice candidates so my signalling is a little smoother. It would be great if this was possible in aiortc too.
nerfstudio-project/nerfstudio | computer-vision | 3,044 | Cannot access GDrive link for test data | **Describe the bug**
The link for the test data returns 404 Not Found
**To Reproduce**
When using the command "ns-download-data nerfstudio --capture-name=poster", I get an error because the link is broken. Accessing it through a browser, I get a 404 Not Found error.
**Expected behavior**
Can you update the link for test data, please!
| closed | 2024-04-05T03:56:42Z | 2024-04-05T04:37:11Z | https://github.com/nerfstudio-project/nerfstudio/issues/3044 | [] | PAD2003 | 0 |
iperov/DeepFaceLab | machine-learning | 5,256 | every new update makes DFL worse and worse and worse | Hello, after these new updates, DFL has only gotten worse..
it works 10 times slower
face extraction: 1000 faces takes 70 minutes
XSeg training freezes after 200 iterations
training is slow
We can't buy a new PC and new cards after every one of your updates ))))
Releases from last year, from 11.2020, worked GREAT, but now we can't download them...
Why don't you share the older releases, not only the 12/2020 one?? Releases from before December worked great
Sorry.. I respect you, but now DFL is not useful for many, many people... | open | 2021-01-22T05:17:47Z | 2023-06-08T21:42:28Z | https://github.com/iperov/DeepFaceLab/issues/5256 | [] | tembel123456 | 2 |
apragacz/django-rest-registration | rest-api | 93 | Phone_number and OTP authentication | Can we use Mobile_Number or OTP instead of username in the Login API?
| closed | 2019-11-28T09:54:28Z | 2019-11-29T12:24:52Z | https://github.com/apragacz/django-rest-registration/issues/93 | [
"type:question"
] | vikasgirdhar30 | 5 |
hyperspy/hyperspy | data-visualization | 2,857 | Forward compatibility file reading broken? | Files saved using `RELEASE_next_minor` (`Rnm`) can no longer be opened in `RELEASE_next_patch` (`Rnp`). I'm guessing this is related to the recently merged ragged array improvements in https://github.com/hyperspy/hyperspy/pull/2842.
Example:
In `Rnm`:
```python
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal2D(np.ones((100, 100)))
s.save("000_test_saving.hspy", overwrite=True)
```
Switch to `Rnp`:
```python
import hyperspy.api as hs
s = hs.load("000_test_saving.hspy")
```
Gives the error:
```python
hyperspy/hyperspy/io.py in load(filenames, signal_type, stack, stack_axis, new_axis_name, lazy, convert_units, escape_square_brackets, stack_metadata, load_original_metadata, show_progressba
r, **kwds)
421 else:
422 # No stack, so simply we load all signals in all files separately
--> 423 objects = [load_single_file(filename, lazy=lazy, **kwds) for filename in filenames]
424
425 if len(objects) == 1:
hyperspy/hyperspy/io.py in <listcomp>(.0)
421 else:
422 # No stack, so simply we load all signals in all files separately
--> 423 objects = [load_single_file(filename, lazy=lazy, **kwds) for filename in filenames]
424
425 if len(objects) == 1:
hyperspy/hyperspy/io.py in load_single_file(filename, **kwds)
474 try:
475 # Try and load the file
--> 476 return load_with_reader(filename=filename, reader=reader, **kwds)
477
478 except BaseException:
hyperspy/hyperspy/io.py in load_with_reader(filename, reader, signal_type, convert_units, load_original_metadata, **kwds)
503 if signal_type is not None:
504 signal_dict['metadata']["Signal"]['signal_type'] = signal_type
--> 505 signal = dict2signal(signal_dict, lazy=lazy)
506 folder, filename = os.path.split(os.path.abspath(filename))
507 filename, extension = os.path.splitext(filename)
hyperspy/hyperspy/io.py in dict2signal(signal_dict, lazy)
667 # If not defined, all dimension are categorised as signal
668 signal_dimension = signal_dict["data"].ndim
--> 669 signal = assign_signal_subclass(signal_dimension=signal_dimension,
670 signal_type=signal_type,
671 dtype=signal_dict['data'].dtype,
hyperspy/hyperspy/_signals/signal2d.py in __init__(self, *args, **kw)
318
319 def __init__(self, *args, **kw):
--> 320 super().__init__(*args, **kw)
321 if self.axes_manager.signal_dimension != 2:
322 self.axes_manager.set_signal_dimension(2)
hyperspy/hyperspy/signal.py in __init__(self, data, **kwds)
2172 self.learning_results = LearningResults()
2173 kwds['data'] = data
-> 2174 self._load_dictionary(kwds)
2175 self._plot = None
2176 self.inav = SpecialSlicersSignal(self, True)
hyperspy/hyperspy/signal.py in _load_dictionary(self, file_data_dict)
2438 if 'axes' not in file_data_dict:
2439 file_data_dict['axes'] = self._get_undefined_axes_list()
-> 2440 self.axes_manager = AxesManager(
2441 file_data_dict['axes'])
2442 if 'metadata' not in file_data_dict:
hyperspy/hyperspy/axes.py in __init__(self, axes_list)
786 obj : The AxesManager that the event belongs to.
787 """, arguments=['obj'])
--> 788 self.create_axes(axes_list)
789 # set_signal_dimension is called only if there is no current
790 # view. It defaults to spectrum
hyperspy/hyperspy/axes.py in create_axes(self, axes_list)
973 axes_list.sort(key=lambda x: x['index_in_array'])
974 for axis_dict in axes_list:
--> 975 self._append_axis(**axis_dict)
976
977 def _update_max_index(self):
hyperspy/hyperspy/axes.py in _append_axis(self, *args, **kwargs)
1036
1037 def _append_axis(self, *args, **kwargs):
-> 1038 axis = DataAxis(*args, **kwargs)
1039 axis.axes_manager = self
1040 self._axes.append(axis)
TypeError: __init__() got an unexpected keyword argument '_type'
```
@ericpre | closed | 2021-11-18T10:45:04Z | 2022-01-26T13:50:27Z | https://github.com/hyperspy/hyperspy/issues/2857 | [
"type: regression",
"type: bug?",
"release: next minor"
] | magnunor | 15 |
tensorpack/tensorpack | tensorflow | 1,214 | Improved Wasserstein GAN example does not have n_critic hyperparameter | As the title says: in the WGAN-GP (improved WGAN) example, the paper specifies updating the critic n_critic=5 times for each generator update. This is not part of the current [example](https://github.com/tensorpack/tensorpack/blob/master/examples/GAN/Improved-WGAN.py).
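For reference, the schedule itself is tiny; a framework-agnostic sketch (the three callables are hypothetical placeholders, not Tensorpack API):

```python
# WGAN-GP update schedule from Gulrajani et al. (2017): n_critic critic
# steps per generator step. The three callables are hypothetical hooks.
N_CRITIC = 5

def train(num_generator_steps, critic_step, generator_step, sample_batch):
    for _ in range(num_generator_steps):
        for _ in range(N_CRITIC):           # 5 critic updates...
            critic_step(sample_batch())
        generator_step(sample_batch())      # ...then 1 generator update

calls = {"critic": 0, "gen": 0}
train(
    num_generator_steps=3,
    critic_step=lambda batch: calls.__setitem__("critic", calls["critic"] + 1),
    generator_step=lambda batch: calls.__setitem__("gen", calls["gen"] + 1),
    sample_batch=lambda: None,
)
print(calls)  # {'critic': 15, 'gen': 3}
```

It would be great if the example exposed this ratio as a configurable hyperparameter.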
I'm still learning Tensorpack, and this was exactly why I looked up the example, but it wasn't there :(. | closed | 2019-05-27T07:22:28Z | 2019-05-27T07:43:33Z | https://github.com/tensorpack/tensorpack/issues/1214 | [
"examples"
] | Baukebrenninkmeijer | 2 |
piccolo-orm/piccolo | fastapi | 144 | Creating benchmarks for piccolo ORM | I'm usually not a fan of benchmarks but I think it'd be a good idea to have some benchmarks comparing `piccolo` with other `sync`/`async` ORMs.
It will also help with cases like #143 comparing `piccolo` with itself with different configurations. | open | 2021-07-31T11:13:54Z | 2021-08-06T14:32:57Z | https://github.com/piccolo-orm/piccolo/issues/144 | [] | aminalaee | 7 |
widgetti/solara | jupyter | 383 | slider tick_Labels | I am new to Solara. When I use the Solara slider, I cannot find the tick_labels option, and I want to know how to use it. Another issue: how can I show the min and max values on the slider by default? Streamlit shows the min and max values on its slider. | closed | 2023-11-16T03:04:06Z | 2024-09-26T13:28:21Z | https://github.com/widgetti/solara/issues/383 | [] | by0717 | 1 |
apify/crawlee-python | web-scraping | 598 | Deprecated variables in git_cliff_core::changelog | ```
WARN git_cliff_core::changelog > Variables ["commit.github", "commit.gitea", "commit.gitlab", "commit.bitbucket"]
are deprecated and will be removed in the future. Use `commit.remote` instead.
```
This warning indicates that the variables currently in use are deprecated. To avoid future issues, it is better to refactor these variables in the `cliff.toml` config. | closed | 2024-10-17T04:42:02Z | 2024-10-22T08:46:19Z | https://github.com/apify/crawlee-python/issues/598 | [
"t-tooling"
] | Telomelonia | 0 |
keras-team/keras | machine-learning | 20,393 | AttributeError: 'NoneType' object has no attribute 'shape' | https://github.com/tensorflow/tensorflow/issues/77826
TensorFlow version: 2.17.0
Transformers version: 4.46.0.dev0
Keras version: 3.6.0
This problem can be solved by using TensorFlow version 2.11, but that does not help because I'm trying to use TensorFlow version 2.17. The existing BERT model is implemented for version 2.17, and it also does not work in TensorFlow version 2.11...
Therefore, I would like to solve this problem and make the entire code work in version 2.17.
| closed | 2024-10-22T14:12:25Z | 2024-11-21T02:04:54Z | https://github.com/keras-team/keras/issues/20393 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | Kim-William | 4 |
flasgger/flasgger | api | 365 | validation traceback when having cascading $ref in definitions | I have a single yaml file for all the paths and definitions I need (see test_ref.yaml)
validation for the test endpoint works great but I get a traceback when trying to validate test2.
The only difference is that one parameter is defined as a $ref...
```
127.0.0.1 - - [18/Feb/2020 19:27:53] "POST /test2 HTTP/1.1" 500 -
Traceback (most recent call last):
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/dev/venv/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/dev/venv/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/home/dev/venv/lib/python3.6/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/dev/venv/lib/python3.6/site-packages/flasgger/utils.py", line 253, in wrapper
**validate_args
File "/home/dev/venv/lib/python3.6/site-packages/flasgger/utils.py", line 412, in validate
main_def = __replace_ref(main_def, relative_path, all_definitions)
File "/home/dev/venv/lib/python3.6/site-packages/flasgger/utils.py", line 272, in __replace_ref
new_value[key] = __replace_ref(value, relative_path, all_definitions)
File "/home/dev/venv/lib/python3.6/site-packages/flasgger/utils.py", line 272, in __replace_ref
new_value[key] = __replace_ref(value, relative_path, all_definitions)
File "/home/dev/venv/lib/python3.6/site-packages/flasgger/utils.py", line 279, in __replace_ref
with open(file_ref_path) as file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/jolin/dev/#/definitions/SubType'
```
test_ref.yaml :
```
swagger: '2.0'
################################################################################
# API Information #
################################################################################
info:
version: '1.0'
title: test
description: |
test "recursive" $ref definitions for input parameters
################################################################################
# Host, Base Path, Schemes and Content Types #
################################################################################
# The host (name or ip) serving the API
host: localhost:5000
# The base path on which the API is served, relative to the host. Will be prefixed to all paths. Used to control versioning
basePath: /
# The transfer protocol of the API
schemes:
- http
# Format of bodies a client can send (Content-Type)
consumes:
- application/json
# Format of the responses to the client (Accepts)
produces:
- application/json
################################################################################
# Paths #
################################################################################
paths:
/test:
post:
tags:
- test-test
operationId: testPOST
consumes:
- application/json
produces:
- application/json
parameters:
- in: body
name: request
description: request
required: true
schema:
$ref: '#/definitions/TestRequest'
responses:
'200':
description: OK
schema:
type: array
items:
$ref: '#/definitions/Test'
'201':
description: Created
'401':
description: Unauthorized
'403':
description: Forbidden
'404':
description: Not Found
deprecated: false
/test2:
post:
tags:
- test2
operationId: test2UsingPOST
consumes:
- application/json
produces:
- application/json
parameters:
- in: body
name: request
description: request
required: true
schema:
$ref: '#/definitions/Test2Request'
responses:
'200':
description: OK
schema:
type: array
items:
$ref: '#/definitions/Test'
'201':
description: Created
'401':
description: Unauthorized
'403':
description: Forbidden
'404':
description: Not Found
deprecated: false
definitions:
Test:
type: object
properties:
capacity:
type: integer
format: int32
description: the capacity
contactName:
type: string
description: Contact name
title: Test
TestRequest:
type: object
properties:
a:
description: a string parameter
type: string
p:
type: integer
description: an integer parameter
s:
type: object
properties:
p1:
type: string
p2:
type: integer
p3:
type: string
format: 'date-time'
title: TestRequest
Test2Request:
type: object
properties:
a:
description: a string parameter
type: string
p:
type: integer
description: an integer parameter
s:
$ref: '#/definitions/SubType'
title: Test2Request
SubType:
type: object
properties:
p1:
type: string
p2:
type: integer
p3:
type: string
format: 'date-time'
title: SubType
```
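Since the inline `s` object in `TestRequest` validates fine, a possible workaround until nested local `$ref`s are resolved correctly is to inline `SubType` into `Test2Request` (sketch):

```yaml
  Test2Request:
    type: object
    properties:
      a:
        description: a string parameter
        type: string
      p:
        type: integer
        description: an integer parameter
      s:
        type: object        # inlined copy of SubType, avoids the nested $ref
        properties:
          p1:
            type: string
          p2:
            type: integer
          p3:
            type: string
            format: 'date-time'
    title: Test2Request
```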
test_ref.py :
```
from flask import Flask, jsonify, request as flask_request
from flasgger import swag_from, Swagger
swagger_description_file = 'test_ref.yaml'
if __name__ == '__main__':
app = Flask(__name__)
swagger = Swagger(app, template_file=swagger_description_file)
@app.route('/test', methods=['POST'])
@swag_from(swagger.template, definition='TestRequest', validation=True)
def test():
req = flask_request.json
print('req:', req)
# validate(req, None, swagger_description)
return 'ok'
@app.route('/test2', methods=['POST'])
@swag_from(swagger.template, definition='Test2Request', validation=True)
def test2():
req = flask_request.json
print('req:', req)
# validate(req, None, swagger_description)
return 'ok'
app.run(debug=True)
``` | open | 2020-02-18T18:39:24Z | 2020-02-22T17:49:49Z | https://github.com/flasgger/flasgger/issues/365 | [] | rejoc | 2 |
graphql-python/gql | graphql | 479 | Use of @skip and @ignore directives with gql.dsl | I've been composing queries with gql.dsl and need to use the `@include` directive in a query. Other than https://github.com/graphql-python/gql/issues/278 I don't see any reference to directives.
Are directives supported with `gql.dsl` and if so what's the syntax? | open | 2024-05-17T01:10:52Z | 2024-05-17T18:00:06Z | https://github.com/graphql-python/gql/issues/479 | [
"type: feature"
] | david-waterworth | 1 |
Python3WebSpider/ProxyPool | flask | 127 | Problem when starting with Docker | proxypool | 2021-11-05 06:37:02.319 | INFO | proxypool.processors.getter:run:36 - crawler <public.xiladaili.XiladailiCrawler object at 0x7f2df4901160> to get proxy
proxypool | 2021-11-05 06:37:02.319 | INFO | proxypool.crawlers.base:crawl:30 - fetching http://www.xiladaili.com/
proxypool | 2021-11-05 06:37:02.320 | DEBUG | proxypool.processors.tester:run:76 - 0 proxies to test
proxypool | 2021-11-05 06:37:02.320 | DEBUG | proxypool.processors.tester:run:79 - testing proxies use cursor 0, count 20
proxypool | 2021-11-05 06:37:22.342 | DEBUG | proxypool.scheduler:run_tester:32 - tester loop 18 start...
proxypool | 2021-11-05 06:37:22.342 | INFO | proxypool.processors.tester:run:74 - stating tester...
proxypool | 2021-11-05 06:37:22.343 | DEBUG | proxypool.processors.tester:run:76 - 0 proxies to test
proxypool | 2021-11-05 06:37:22.343 | DEBUG | proxypool.processors.tester:run:79 - testing proxies use cursor 0, count 20
proxypool | 2021-11-05 06:37:36.345 | ERROR | proxypool.processors.getter:run:37 - An error has been caught in function 'run', process 'MainProcess' (9), thread 'MainThread' (139835400836928):
proxypool | Traceback (most recent call last):
proxypool |
proxypool | File "run.py", line 12, in <module>
proxypool | getattr(Scheduler(), f'run_{args.processor}')()
proxypool | └ <class 'proxypool.scheduler.Scheduler'>
proxypool |
proxypool | File "/app/proxypool/scheduler.py", line 48, in run_getter
proxypool | getter.run()
proxypool | │ └ <function Getter.run at 0x7f2df4c17730>
proxypool | └ <proxypool.processors.getter.Getter object at 0x7f2df48f55c0>
proxypool |
proxypool | > File "/app/proxypool/processors/getter.py", line 37, in run
proxypool | for proxy in crawler.crawl():
proxypool | │ └ <function BaseCrawler.crawl at 0x7f2df570b950>
proxypool | └ <public.xiladaili.XiladailiCrawler object at 0x7f2df4901160>
proxypool |
proxypool | File "/app/proxypool/crawlers/base.py", line 31, in crawl
proxypool | html = self.fetch(url)
proxypool | │ │ └ 'http://www.xiladaili.com/'
proxypool | │ └ <function BaseCrawler.fetch at 0x7f2df570b8c8>
proxypool | └ <public.xiladaili.XiladailiCrawler object at 0x7f2df4901160>
proxypool |
proxypool | File "/usr/local/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
proxypool | return Retrying(*dargs, **dkw).call(f, *args, **kw)
proxypool | │ │ │ │ │ └ {}
proxypool | │ │ │ │ └ (<public.xiladaili.XiladailiCrawler object at 0x7f2df4901160>, 'http://www.xiladaili.com/')
proxypool | │ │ │ └ <function BaseCrawler.fetch at 0x7f2df570b840>
proxypool | │ │ └ {'stop_max_attempt_number': 3, 'retry_on_result': <function BaseCrawler.<lambda> at 0x7f2df5708e18>, 'wait_fixed': 2000}
proxypool | │ └ ()
proxypool | └ <class 'retrying.Retrying'>
proxypool | File "/usr/local/lib/python3.6/site-packages/retrying.py", line 214, in call
proxypool | raise RetryError(attempt)
proxypool | │ └ Attempts: 3, Value: None
proxypool | └ <class 'retrying.RetryError'>
proxypool |
proxypool | retrying.RetryError: RetryError[Attempts: 3, Value: None]
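The final `RetryError[Attempts: 3, Value: None]` means `fetch()` returned `None` on all three attempts: the `retry_on_result` hook rejects `None` results, and after `stop_max_attempt_number=3` the `retrying` library gives up. That usually points at the target site being unreachable from inside the container rather than at a code bug. A pure-Python stand-in illustrating that mechanism (a sketch for explanation, not the crawler's code):

```python
def call_with_result_retries(func, attempts=3, reject=lambda r: r is None):
    """Mimic retrying's retry_on_result: retry while the result is rejected."""
    result = None
    for _ in range(attempts):
        result = func()
        if not reject(result):
            return result
    # retrying raises RetryError("Attempts: N, Value: ...") at this point
    raise RuntimeError(f"Attempts: {attempts}, Value: {result}")
```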
| closed | 2021-11-05T06:40:18Z | 2021-12-30T06:02:04Z | https://github.com/Python3WebSpider/ProxyPool/issues/127 | [
"bug"
] | tt421011884 | 4 |
stanfordnlp/stanza | nlp | 580 | "AnnotationException: Could not handle incoming annotation" Problem [QUESTION] | Greeting,
I am new to CoreNLP enviroment and trying run the example code given on documentation. However, I got two errors as follows;
First code:
```python
from stanza.server import CoreNLPClient

with CoreNLPClient(
        annotators=['tokenize', 'ssplit', 'pos', 'ner'],
        timeout=30000,
        memory='2G',
        be_quiet=True) as client:
    anno = client.annotate(text)
```
> 2020-12-30 16:40:53 INFO: Writing properties to tmp file: corenlp_server-a15136448b834f79.props
2020-12-30 16:40:53 INFO: Starting server with command: java -Xmx2G -cp C:\Users\fatih\stanza_corenlp\* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 30000 -threads 5 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-a15136448b834f79.props -annotators tokenize,ssplit,pos,ner -preload -outputFormat serialized
```
Traceback (most recent call last):
File "C:\Users\fatih\anaconda3\lib\site-packages\stanza\server\client.py", line 446, in _request
r.raise_for_status()
File "C:\Users\fatih\anaconda3\lib\site-packages\requests\models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:9000/?properties=%7B%27annotators%27%3A+%27tokenize%2Cssplit%2Cpos%2Cner%27%2C+%27outputFormat%27%3A+%27serialized%27%7D&resetDefault=false
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<ipython-input-6-2fbdcdb77b41>", line 6, in <module>
anno = client.annotate(text)
File "C:\Users\fatih\anaconda3\lib\site-packages\stanza\server\client.py", line 514, in annotate
r = self._request(text.encode('utf-8'), request_properties, reset_default, **kwargs)
File "C:\Users\fatih\anaconda3\lib\site-packages\stanza\server\client.py", line 452, in _request
raise AnnotationException(r.text)
AnnotationException: Could not handle incoming annotation
```
What am I doing wrong? It's on Windows, with Anaconda and Spyder.
| closed | 2020-12-30T13:46:23Z | 2021-04-11T20:47:03Z | https://github.com/stanfordnlp/stanza/issues/580 | [
"question"
] | fatihbozdag | 38 |
pbugnion/gmaps | jupyter | 336 | How to add a point on Google Map using python's gmaps? | I am trying to add a Point on Google Map in Jupyter Notebook as follows:
```python
import gmaps
API_Key = 'CJJUIIEaSyBpv6a' # This is not the legit key, I have not provided a real API key to avoid misuse and for security reasons
gmaps.configure(api_key=API_Key)
centroid_lat, centroid_long = (82.301581497344834, 110.89898182803991)
center_coordinates = (centroid_lat, centroid_long)
fig = gmaps.figure(center=center_coordinates, zoom_level=13.5)
P = gmaps.Point(center_coordinates)
point_layer = gmaps.drawing_layer(features=[P])
fig.add_layer(point_layer)
fig
```
However, I am getting following error:
```bash
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-35-baba23e890bf> in <module>
10 #driving_layer = gmaps.heatmap_layer(coordinates)
11 P = gmaps.Point(center_coordinates)
---> 12 point_layer = gmaps.drawing_layer(features=[P])
13 fig.add_layer(driving_layer)
14 fig
.
.
.
~/anaconda3/envs/dbn/lib/python3.7/site-packages/ipykernel/jsonutil.py in json_clean(obj)
195
196 # we don't understand it, it's probably an unserializable object
--> 197 raise ValueError("Can't clean for JSON: %r" % obj)
ValueError: Can't clean for JSON: <gmaps.geotraitlets.Point object at 0x7fb6f1e3a490>
```
I don't understand the error fully. Please help if you have experience using gmaps.
Thanks. | open | 2020-04-03T00:47:12Z | 2020-04-03T00:47:12Z | https://github.com/pbugnion/gmaps/issues/336 | [] | rahulbhadani | 0 |
kizniche/Mycodo | automation | 695 | SHT3x Reset | Add software reset to SHT3x input code
Ref: https://github.com/kizniche/Mycodo/issues/692#issuecomment-533349295
https://github.com/ralf1070/Adafruit_Python_SHT31/blob/master/Adafruit_SHT31.py
```python
def reset(self):
self._writeCommand(SHT31_SOFTRESET)
time.sleep(0.01) # Wait the required time
```
Could be used to fix this intermittent issue with the SHT35:
```
2019-09-18 10:53:45,362 - ERROR - mycodo.inputs.sht31_6162b51f - InputModule.get_measurement() method raised IOError: [Errno 121] Remote I/O error
Traceback (most recent call last):
File "/var/mycodo-root/mycodo/inputs/base_input.py", line 122, in read
self._measurements = self.get_measurement()
File "/home/pi/Mycodo/mycodo/inputs/sht31.py", line 146, in get_measurement
self.value_set(1, self.sensor.read_humidity())
File "/home/pi/Mycodo/env/src/adafruit-sht31/Adafruit_SHT31.py", line 133, in read_humidity
(temperature, humidity) = self.read_temperature_humidity()
File "/home/pi/Mycodo/env/src/adafruit-sht31/Adafruit_SHT31.py", line 112, in read_temperature_humidity
buffer = self._device.readList(0, 6)
File "/var/mycodo-root/env/lib/python3.7/site-packages/Adafruit_GPIO/I2C.py", line 134, in readList
results = self._bus.read_i2c_block_data(self._address, register, length)
File "/var/mycodo-root/env/lib/python3.7/site-packages/Adafruit_PureIO/smbus.py", line 215, in read_i2c_block_data
ioctl(self._device.fileno(), I2C_RDWR, request)
OSError: [Errno 121] Remote I/O error
```
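A minimal recovery wrapper along these lines might look as follows (a sketch, not Mycodo's actual input code; it assumes a sensor object exposing the `read_temperature_humidity()` and `reset()` methods shown above):

```python
import time

def read_with_reset(sensor, retries=3):
    """Read temperature/humidity, soft-resetting the SHT3x on I2C errors."""
    last_err = None
    for _ in range(retries):
        try:
            return sensor.read_temperature_humidity()
        except OSError as err:  # e.g. [Errno 121] Remote I/O error
            last_err = err
            sensor.reset()       # SHT31_SOFTRESET + 10 ms wait (see snippet above)
            time.sleep(0.05)     # extra settle time before retrying
    raise last_err
```

If the soft reset clears the stuck bus, the next read succeeds and the intermittent `Remote I/O error` no longer kills the measurement cycle.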
Need to test. | closed | 2019-09-20T17:36:00Z | 2019-09-21T16:50:00Z | https://github.com/kizniche/Mycodo/issues/695 | [] | kizniche | 39 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,281 | AttributeError: module 'sqlalchemy.orm' has no attribute 'DeclarativeBase' | ```
___________________________________ ERROR collecting tests/test_config.py ____________________________________
__pypackages__/3.11/lib/_pytest/runner.py:341: in from_call
result: Optional[TResult] = func()
__pypackages__/3.11/lib/_pytest/runner.py:372: in <lambda>
call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
__pypackages__/3.11/lib/_pytest/python.py:531: in collect
self._inject_setup_module_fixture()
__pypackages__/3.11/lib/_pytest/python.py:545: in _inject_setup_module_fixture
self.obj, ("setUpModule", "setup_module")
__pypackages__/3.11/lib/_pytest/python.py:310: in obj
self._obj = obj = self._getobj()
__pypackages__/3.11/lib/_pytest/python.py:528: in _getobj
return self._importtestmodule()
__pypackages__/3.11/lib/_pytest/python.py:617: in _importtestmodule
mod = import_path(self.path, mode=importmode, root=self.config.rootpath)
__pypackages__/3.11/lib/_pytest/pathlib.py:567: in import_path
importlib.import_module(module_name)
/usr/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1204: in _gcd_import
???
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
__pypackages__/3.11/lib/_pytest/assertion/rewrite.py:186: in exec_module
exec(co, module.__dict__)
tests/test_config.py:5: in <module>
from util.main import (register_commands, register_error_handlers,
src/util/main.py:19: in <module>
from command.db import drop, init, load, migrate, shell, url
src/command/db.py:14: in <module>
from model.main import Mail, Role, Text, User
src/model/main.py:11: in <module>
from util.extensions import db
src/util/extensions.py:10: in <module>
from flask_sqlalchemy import SQLAlchemy
__pypackages__/3.11/lib/flask_sqlalchemy/__init__.py:5: in <module>
from .extension import SQLAlchemy
__pypackages__/3.11/lib/flask_sqlalchemy/extension.py:40: in <module>
t.Type[sa_orm.DeclarativeBase],
E AttributeError: module 'sqlalchemy.orm' has no attribute 'DeclarativeBase'
```
Environment:
- Python version: 3.11
- Flask-SQLAlchemy version: 3.1.1
- SQLAlchemy version: 2.0.23
| closed | 2023-11-24T18:18:33Z | 2023-12-10T01:03:24Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1281 | [] | InoreNeronI | 2 |
unionai-oss/pandera | pandas | 1,257 | Support pydantic v2 | **Is your feature request related to a problem? Please describe.**
This feature should add support for pydantic v2, which will also require updates to the fastapi `UploadFile` type defined in the `pandera.typing.fastapi` module.
**Describe the solution you'd like**
All unit tests should pass with a pydantic>=2 installation with no syntax changes in the pandera code.
**Describe alternatives you've considered**
NA
**Additional context**
NA
| closed | 2023-07-09T17:53:12Z | 2023-10-20T07:05:38Z | https://github.com/unionai-oss/pandera/issues/1257 | [
"enhancement"
] | cosmicBboy | 6 |
vitalik/django-ninja | pydantic | 1,119 | Traversing the data structures to create an HTTP API client | I love ninja and have been using it in all my projects for a few years now.
As part of my workflow, I implement a command line python HTTP client for every single API endpoint I create. This is needed to fulfill unusual login/cryptographic/websocket requirements that can't be handled by Curl, Hurl and company, but also very useful for development, debugging and testing (this client is scriptable and outputs performance stats).
Naturally there is a common library of code shared among projects and then the implementation of the individual api client calls. My goal is to replace the implementation of the individual calls by running within the django/ninja context and traversing the Ninja data structures to obtain the **urls**, **http methods**, **schemas** and **url parameters** programmatically. If I manage this to a satisfactory degree, I would likely release the whole thing as Free Software after some polishing.
I welcome guidance in what objects/methods/functions can give me that information if we start for example from the context of running `manage.py shell` on a working ninja project.
Many thanks. | closed | 2024-04-05T09:29:36Z | 2024-04-05T18:25:02Z | https://github.com/vitalik/django-ninja/issues/1119 | [] | scrwghub | 3 |
ray-project/ray | deep-learning | 51,620 | [RLlib] Incorrect error message for improper registering of custom env | ### What happened + What you expected to happen
If a custom env is used and it is not registered properly, the `ERR_MSG_INVALID_ENV_DESCRIPTOR` error message from `rllib.utils.error.py` is raised. The message is incorrect, as it tells the user to use `tune.register()` and the old `config(env)` style to set the env in the config.
### Versions / Dependencies
Ray 2.43
Python 3.9
OS : Ubuntu 22.04
### Reproduction script
Just use a custom env and run training without registering it.
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-03-22T19:33:06Z | 2025-03-24T01:57:10Z | https://github.com/ray-project/ray/issues/51620 | [
"bug",
"triage",
"rllib"
] | ashwinsnambiar | 1 |
cupy/cupy | numpy | 8,092 | Batched `cupy.sum` on short reduction axes are slow | ### Description
Using `cub::SegmentedReduce::Sum` to find batched reduce-sum with short reduction axes seems to degrade performance (cc: @leofang)
### To Reproduce
Script:
```py
import cupy
from cupy import testing
from cupyx.profiler import benchmark
from cupy._core import _accelerator
n = 24
for i in range(n + 1):
shape = (2 ** i, 2 ** (n - i))
x = testing.shaped_random(shape, xp=cupy, dtype=cupy.float32)
f = lambda: x.sum(axis=1)
for acc in (["cub"], []):
name=f'{shape=}, {acc=}'.ljust(32)
_accelerator.set_routine_accelerators(acc)
_accelerator.set_reduction_accelerators(acc)
perf = benchmark(f, (), n_warmup=1, n_repeat=20, name=name)
print(perf)
```
Result:
```
shape=(1, 16777216), acc=['cub']: CPU: 26.774 us +/- 15.314 (min: 20.986 / max: 92.476) us GPU-0: 2429.798 us +/- 14.812 (min: 2419.712 / max: 2492.416) us
shape=(1, 16777216), acc=[] : CPU: 29.575 us +/- 12.893 (min: 25.245 / max: 85.183) us GPU-0: 15859.610 us +/- 25.041 (min: 15814.656 / max: 15921.152) us
shape=(2, 8388608), acc=['cub'] : CPU: 22.423 us +/- 2.995 (min: 20.244 / max: 31.022) us GPU-0: 1226.445 us +/- 3.578 (min: 1221.632 / max: 1235.968) us
shape=(2, 8388608), acc=[] : CPU: 27.899 us +/- 9.635 (min: 24.382 / max: 69.227) us GPU-0: 8306.074 us +/- 9.421 (min: 8291.328 / max: 8338.432) us
shape=(4, 4194304), acc=['cub'] : CPU: 21.922 us +/- 2.638 (min: 20.004 / max: 29.159) us GPU-0: 626.893 us +/- 2.834 (min: 623.616 / max: 634.880) us
shape=(4, 4194304), acc=[] : CPU: 25.847 us +/- 2.671 (min: 24.650 / max: 36.779) us GPU-0: 4175.360 us +/- 4.562 (min: 4166.656 / max: 4188.160) us
shape=(8, 2097152), acc=['cub'] : CPU: 38.684 us +/- 76.913 (min: 19.650 / max: 373.826) us GPU-0: 348.774 us +/- 77.878 (min: 327.680 / max: 688.128) us
shape=(8, 2097152), acc=[] : CPU: 25.308 us +/- 2.838 (min: 23.652 / max: 36.125) us GPU-0: 2139.699 us +/- 5.365 (min: 2128.896 / max: 2151.424) us
shape=(16, 1048576), acc=['cub']: CPU: 20.951 us +/- 1.941 (min: 19.680 / max: 28.707) us GPU-0: 182.016 us +/- 1.993 (min: 180.224 / max: 189.440) us
shape=(16, 1048576), acc=[] : CPU: 25.276 us +/- 2.068 (min: 23.885 / max: 33.285) us GPU-0: 1096.909 us +/- 4.116 (min: 1090.560 / max: 1108.992) us
shape=(32, 524288), acc=['cub'] : CPU: 20.958 us +/- 1.712 (min: 19.784 / max: 27.497) us GPU-0: 109.619 us +/- 2.476 (min: 106.496 / max: 117.760) us
shape=(32, 524288), acc=[] : CPU: 25.534 us +/- 2.463 (min: 23.556 / max: 34.311) us GPU-0: 563.558 us +/- 3.876 (min: 558.080 / max: 577.536) us
shape=(64, 262144), acc=['cub'] : CPU: 22.116 us +/- 3.914 (min: 19.856 / max: 36.777) us GPU-0: 78.694 us +/- 3.822 (min: 74.752 / max: 92.160) us
shape=(64, 262144), acc=[] : CPU: 25.496 us +/- 2.592 (min: 23.864 / max: 35.152) us GPU-0: 295.373 us +/- 3.097 (min: 291.840 / max: 306.176) us
shape=(128, 131072), acc=['cub']: CPU: 20.917 us +/- 1.745 (min: 19.735 / max: 27.680) us GPU-0: 69.786 us +/- 2.029 (min: 68.608 / max: 77.824) us
shape=(128, 131072), acc=[] : CPU: 25.511 us +/- 2.386 (min: 23.795 / max: 34.476) us GPU-0: 174.541 us +/- 2.518 (min: 172.032 / max: 184.320) us
shape=(256, 65536), acc=['cub'] : CPU: 21.537 us +/- 1.787 (min: 20.550 / max: 28.584) us GPU-0: 68.045 us +/- 1.874 (min: 66.560 / max: 75.776) us
shape=(256, 65536), acc=[] : CPU: 26.720 us +/- 2.939 (min: 24.254 / max: 36.635) us GPU-0: 111.821 us +/- 2.978 (min: 108.544 / max: 121.856) us
shape=(512, 32768), acc=['cub'] : CPU: 21.592 us +/- 1.662 (min: 20.574 / max: 28.206) us GPU-0: 68.250 us +/- 1.840 (min: 66.560 / max: 75.776) us
shape=(512, 32768), acc=[] : CPU: 27.014 us +/- 2.458 (min: 24.568 / max: 35.970) us GPU-0: 111.155 us +/- 1.956 (min: 108.544 / max: 117.760) us
shape=(1024, 16384), acc=['cub']: CPU: 21.647 us +/- 1.748 (min: 20.584 / max: 28.635) us GPU-0: 69.069 us +/- 2.135 (min: 67.584 / max: 77.824) us
shape=(1024, 16384), acc=[] : CPU: 27.259 us +/- 3.538 (min: 24.602 / max: 37.223) us GPU-0: 96.205 us +/- 3.598 (min: 93.184 / max: 106.496) us
shape=(2048, 8192), acc=['cub'] : CPU: 21.751 us +/- 1.749 (min: 20.611 / max: 28.420) us GPU-0: 68.403 us +/- 1.820 (min: 66.560 / max: 75.776) us
shape=(2048, 8192), acc=[] : CPU: 26.338 us +/- 2.323 (min: 24.458 / max: 35.013) us GPU-0: 91.494 us +/- 2.319 (min: 89.088 / max: 100.352) us
shape=(4096, 4096), acc=['cub'] : CPU: 21.637 us +/- 1.823 (min: 20.432 / max: 28.816) us GPU-0: 68.403 us +/- 1.606 (min: 66.560 / max: 74.752) us
shape=(4096, 4096), acc=[] : CPU: 26.932 us +/- 2.779 (min: 24.665 / max: 36.798) us GPU-0: 97.434 us +/- 2.676 (min: 95.232 / max: 107.520) us
shape=(8192, 2048), acc=['cub'] : CPU: 21.714 us +/- 2.099 (min: 20.487 / max: 29.103) us GPU-0: 69.222 us +/- 2.158 (min: 67.584 / max: 77.824) us
shape=(8192, 2048), acc=[] : CPU: 26.533 us +/- 2.480 (min: 24.702 / max: 35.654) us GPU-0: 108.134 us +/- 2.517 (min: 106.496 / max: 117.760) us
shape=(16384, 1024), acc=['cub']: CPU: 21.689 us +/- 1.934 (min: 20.536 / max: 29.293) us GPU-0: 77.158 us +/- 1.950 (min: 75.776 / max: 84.992) us
shape=(16384, 1024), acc=[] : CPU: 27.076 us +/- 3.201 (min: 24.885 / max: 36.887) us GPU-0: 139.622 us +/- 3.177 (min: 137.216 / max: 149.504) us
shape=(32768, 512), acc=['cub'] : CPU: 21.522 us +/- 1.818 (min: 20.387 / max: 28.492) us GPU-0: 76.698 us +/- 2.170 (min: 74.752 / max: 84.992) us
shape=(32768, 512), acc=[] : CPU: 26.370 us +/- 2.306 (min: 24.647 / max: 34.772) us GPU-0: 213.043 us +/- 2.368 (min: 210.944 / max: 222.208) us
shape=(65536, 256), acc=['cub'] : CPU: 39.122 us +/- 76.225 (min: 20.477 / max: 371.281) us GPU-0: 128.051 us +/- 77.065 (min: 108.544 / max: 463.872) us
shape=(65536, 256), acc=[] : CPU: 26.882 us +/- 2.808 (min: 24.816 / max: 36.989) us GPU-0: 202.240 us +/- 2.660 (min: 199.680 / max: 211.968) us
shape=(131072, 128), acc=['cub']: CPU: 21.769 us +/- 2.015 (min: 20.487 / max: 29.721) us GPU-0: 195.789 us +/- 2.234 (min: 194.560 / max: 204.800) us
shape=(131072, 128), acc=[] : CPU: 27.187 us +/- 3.214 (min: 24.456 / max: 37.588) us GPU-0: 194.970 us +/- 3.096 (min: 192.512 / max: 206.848) us
shape=(262144, 64), acc=['cub'] : CPU: 21.995 us +/- 1.813 (min: 20.775 / max: 28.899) us GPU-0: 361.882 us +/- 2.007 (min: 360.448 / max: 369.664) us
shape=(262144, 64), acc=[] : CPU: 28.027 us +/- 5.230 (min: 24.513 / max: 47.760) us GPU-0: 194.611 us +/- 3.373 (min: 191.488 / max: 203.776) us
shape=(524288, 32), acc=['cub'] : CPU: 22.112 us +/- 1.843 (min: 20.612 / max: 28.637) us GPU-0: 688.538 us +/- 2.084 (min: 687.104 / max: 696.320) us
shape=(524288, 32), acc=[] : CPU: 28.239 us +/- 6.005 (min: 24.802 / max: 52.128) us GPU-0: 208.589 us +/- 3.860 (min: 205.824 / max: 221.184) us
shape=(1048576, 16), acc=['cub']: CPU: 22.190 us +/- 1.817 (min: 20.727 / max: 28.973) us GPU-0: 1351.373 us +/- 2.150 (min: 1349.632 / max: 1359.872) us
shape=(1048576, 16), acc=[] : CPU: 26.513 us +/- 2.789 (min: 24.741 / max: 36.449) us GPU-0: 197.530 us +/- 2.746 (min: 195.584 / max: 206.848) us
shape=(2097152, 8), acc=['cub'] : CPU: 22.394 us +/- 1.879 (min: 20.600 / max: 29.345) us GPU-0: 2661.018 us +/- 6.646 (min: 2652.160 / max: 2675.712) us
shape=(2097152, 8), acc=[] : CPU: 26.793 us +/- 2.864 (min: 24.771 / max: 36.733) us GPU-0: 161.946 us +/- 2.883 (min: 159.744 / max: 172.032) us
shape=(4194304, 4), acc=['cub'] : CPU: 23.579 us +/- 2.721 (min: 21.729 / max: 32.016) us GPU-0: 5252.250 us +/- 8.810 (min: 5236.736 / max: 5270.528) us
shape=(4194304, 4), acc=[] : CPU: 26.353 us +/- 2.672 (min: 24.312 / max: 35.980) us GPU-0: 144.640 us +/- 2.687 (min: 142.336 / max: 154.624) us
shape=(8388608, 2), acc=['cub'] : CPU: 23.547 us +/- 2.921 (min: 21.738 / max: 35.030) us GPU-0: 10224.179 us +/- 18.385 (min: 10207.232 / max: 10266.624) us
shape=(8388608, 2), acc=[] : CPU: 27.276 us +/- 3.147 (min: 24.392 / max: 38.883) us GPU-0: 138.445 us +/- 3.232 (min: 135.168 / max: 150.528) us
shape=(16777216, 1), acc=['cub']: CPU: 23.375 us +/- 3.250 (min: 21.518 / max: 33.582) us GPU-0: 20320.819 us +/- 34.711 (min: 20283.392 / max: 20376.575) us
shape=(16777216, 1), acc=[] : CPU: 28.918 us +/- 5.404 (min: 25.087 / max: 46.885) us GPU-0: 137.882 us +/- 5.254 (min: 134.144 / max: 155.648) us
```
### Installation
None
### Environment
```
A100
```
### Additional Information
_No response_ | open | 2024-01-09T06:59:52Z | 2025-01-26T18:21:24Z | https://github.com/cupy/cupy/issues/8092 | [
"cat:performance"
] | asi1024 | 1 |
django-cms/django-cms | django | 7,777 | [BUG] Placeholder inherit raising "NoneType should implement get_template" if used on an apphooked page | <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
Using Django CMS 4 upgraded from Django CMS 3.11.4.
In a template from an apphooked page using `{% placeholder "Image rotator" %}` causes `NoneType should implement get_template` error. It is raised by `line 373, in get_declared_placeholders_for_obj`.
I must point out it happens on a page which does not have a parent but since it may be relocated to a subpage inheritance is needed because it should inherit the placeholder from parent if the one exists.
I have the old Django CMS 3 `PlaceholderField` in my custom's app models but it is not rendered in my list template where the issue arises. It should call a CMS placeholder not related to my app whatsoever (it is not defined in my models.py as a PlaceholderField).
I experience the same problem with another apphooked app's page **not having** the `PlaceholderField` in its `models.py`. It is however, strange, that in my other apphooked page the default template (list) renders as expected but the detail page raises the same error. Please note in my other app there is no `PlaceholderField`.
Using the inheritance in "regular" pages does not raise the error.
<!--
If this is a security issue stop immediately and follow the instructions at:
http://docs.django-cms.org/en/latest/contributing/development-policies.html#reporting-security-issues
-->
## Steps to reproduce
1. Create a page
2. Create an apphook on a created page
3. Make a default template for the apphook that extends `{% block content %}` from `base.html` and has a `{% placeholder "Image rotator" inherit %}` template tag (the placeholder name does not matter)
4. When visiting the page the issue arises
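For step 3, a minimal apphook template could look like this (a sketch; the base template and block names are assumptions):

```django
{% extends "base.html" %}
{% load cms_tags %}

{% block content %}
    {% placeholder "Image rotator" inherit %}
{% endblock %}
```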
<!--
Clear steps describing how to reproduce the issue.
Steps to reproduce the behavior:
1. Go to '...'
5. Click on '....'
6. Scroll down to '....'
7. See error
-->
## Expected behaviour
The page should inherit its parent page placeholder (show image rotator in my case).
<!--
A clear and concise description of what you expected to happen.
-->
## Actual behaviour
Placeholder from the parent page (if it exists) is not inherited, error arises, also if the page does not have a parent the result is the same - an error.
<!--
A clear and concise description of what is actually happening.
-->
## Additional information (CMS/Python/Django versions)
Django CMS 4.1.0
Python 3.8.13
Django 4.2.9
<!--
Add any other context about the problem such as environment,
CMS/Python/Django versions, logs etc. here.
-->
## Do you want to help fix this issue?
<!--
The django CMS project is managed and kept alive by its open source community and is backed by the [django CMS Association](https://www.django-cms.org/en/about-us/). We therefore welcome any help and are grateful if people contribute to the project. Please use 'x' to check the items below.
-->
* [x] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue.
| closed | 2024-01-22T20:46:42Z | 2024-01-23T09:31:19Z | https://github.com/django-cms/django-cms/issues/7777 | [] | aacimov | 2 |
mirumee/ariadne | graphql | 120 | Change configuration approach for GraphQL strategy objects from inheritance to init kwargs | Currently Ariadne relies on inheritance for `GraphQL` behaviour customization. This could be replaced with list of kwargs containing configuration options, eg:
```python
app = GraphQL(
    schema,
    context=None,            # Optional[Any]; may be a callable
    root_resolver=None,      # Optional[Callable]
    error_handler=None,      # Optional[Callable]
    debug=False,
    validators=[
        validate_max_query_depth(10),
        validate_max_query_width(80),
        validate_max_query_cost(100),
    ],
)
``` | closed | 2019-03-25T18:02:49Z | 2019-05-06T17:03:16Z | https://github.com/mirumee/ariadne/issues/120 | [
"enhancement",
"roadmap"
] | rafalp | 1 |
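A dependency-free sketch of what the proposed kwargs-based constructor could look like (class and parameter names mirror the proposal above; this is not Ariadne's actual API):

```python
# Configuration via __init__ kwargs instead of subclassing GraphQL.
class GraphQL:
    def __init__(self, schema, *, context=None, root_resolver=None,
                 error_handler=None, debug=False, validators=()):
        self.schema = schema
        self.context = context
        self.root_resolver = root_resolver
        self.error_handler = error_handler
        self.debug = debug
        # each validator is a callable applied to incoming queries
        self.validators = list(validators)

app = GraphQL("schema", debug=True, validators=[lambda query: True])
assert app.debug is True
assert len(app.validators) == 1
```

The upside of this shape is that behaviour can be customized at construction time without the caller needing to know the class hierarchy.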
marshmallow-code/apispec | rest-api | 300 | When using the example DocPlugin(), to_dict() paths contain an instance of Path. |
```
'paths': OrderedDict([(<apispec.core.Path object at 0x10665cda0>,
OrderedDict([('get',
{'responses': {200: {'description': 'A '
'list '
'of '
'all '
'appcodes',
'schema': {'$ref': '#/definitions/appcode'}}},
'summary': 'Gets all appcode',
'tags': ['appcode']})]))]),
```
Wanted
```
'paths': OrderedDict([('/appcode',
OrderedDict([('get',
{'responses': {200: {'description': 'A '
'list '
'of '
'all '
'appcodes',
'schema': {'$ref': '#/definitions/appcode'}}},
'summary': 'Gets all appcode',
'tags': ['appcode']})]))]),
```
Managed to find a fix:
# core.py (before)
```
def update(self, path):
if path.path:
self.path = path.path
self.operations.update(path.operations)
```
# core.py (after)
```
def update(self, path):
if path.path:
self.path = path.path.path
self.operations.update(path.operations)
```
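A minimal, self-contained reproduction of the key mix-up being fixed here (the `Path` stand-in below is an assumption for illustration, not apispec's real class):

```python
# A Path object used as a dict key versus its .path string.
class Path:
    def __init__(self, path=None, operations=None):
        self.path = path
        self.operations = operations or {}

p = Path("/appcode", {"get": {"summary": "Gets all appcode"}})

buggy = {p: p.operations}        # bug: the Path *instance* becomes the key
fixed = {p.path: p.operations}   # fix: the "/appcode" string was intended

assert "/appcode" not in buggy
assert "/appcode" in fixed
```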
| closed | 2018-10-05T05:17:13Z | 2018-10-08T01:18:40Z | https://github.com/marshmallow-code/apispec/issues/300 | [] | justai-net | 2 |
bauerji/flask-pydantic | pydantic | 37 | How can I give custom params for Request.get_json(force=False, silent=False, cache=True)? | I saw that the `get_json()` method is called without any params at this line:
https://github.com/bauerji/flask_pydantic/blob/45987a2d4474b5c9423e057a66d98863468fa0bd/flask_pydantic/core.py#L176
but Flask supports some custom params; see the docs:
https://tedboy.github.io/flask/generated/generated/flask.Request.get_json.html
Currently, an error occurs when a POST request has no request body:
```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>400 Bad Request</title>\n<h1>Bad Request</h1>\n<p>Failed to decode JSON object: Expecting value: line 1 column 1 (char 0)</p><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>400 Bad Request</title>\n<h1>Bad Request</h1>\n<p>Failed to decode JSON object: Expecting value: line 1 column 1 (char 0)</p>
```
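For illustration, a small sketch of the behaviour the `silent` flag controls in Flask's `get_json()` (stdlib-only; it mimics rather than calls Flask):

```python
import json

def get_json(raw_body, silent=False):
    # Mimics Flask's Request.get_json() handling of a missing/invalid body:
    # raise (Flask aborts with a 400) unless silent=True, which returns None.
    try:
        return json.loads(raw_body)
    except (TypeError, ValueError):
        if silent:
            return None
        raise

assert get_json('{"a": 1}') == {"a": 1}
assert get_json("", silent=True) is None   # empty POST body no longer fails
```

If the library forwarded such kwargs to `request.get_json()`, an empty body could fall through to normal validation instead of producing the 400 page shown above.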
| closed | 2021-06-15T14:19:35Z | 2021-10-28T10:25:44Z | https://github.com/bauerji/flask-pydantic/issues/37 | [] | gwind | 1 |
gunthercox/ChatterBot | machine-learning | 2,380 | return in finally can swallow exceptions | In https://github.com/gunthercox/ChatterBot/blob/4ff8af28567ed446ae796d37c246bb6a14032fe7/chatterbot/logic/unit_conversion.py#L164 there is a `return` statement in a `finally` block, which would swallow any in-flight exception.
This means that if a `BaseException` (such as `KeyboardInterrupt`) is raised from the `try` body, or any exception is raised from one of the `except:` clauses, it will not propagate on as expected.
See also https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions.
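A minimal demonstration of the pitfall described above:

```python
def swallow():
    try:
        raise KeyboardInterrupt          # in-flight BaseException
    finally:
        return "done"                    # return in finally discards it

# No exception escapes -- the interrupt is silently swallowed:
assert swallow() == "done"
```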
| closed | 2024-10-24T13:42:45Z | 2025-03-01T21:09:39Z | https://github.com/gunthercox/ChatterBot/issues/2380 | [] | iritkatriel | 2 |
deezer/spleeter | tensorflow | 388 | [Discussion] Output in FLAC or other formats | Hi,
I was wondering if it is possible to export the output files in formats other than WMA. I would be interested in having the files as FLAC; is it already possible to do so?
I thank you in advance for your time and comprehension towards my request. | closed | 2020-05-22T12:33:33Z | 2020-05-22T12:51:53Z | https://github.com/deezer/spleeter/issues/388 | [
"question"
] | leops95 | 3 |
nltk/nltk | nlp | 2,648 | Pos-tag returns strange output for 'as' | Any insights on how to handle 'as'?
nltk.pos_tag('as')
[('a', 'DT'), ('s', 'NN')]
I also tried to check wordnet synsets, most of the synsets were chemical terms and only the last one was my desired synset: Synset('equally.r.01) r , any insights on how i can isolate this specific meaning and use it for pos_tag, etc? | closed | 2020-12-29T19:09:37Z | 2021-07-30T07:48:27Z | https://github.com/nltk/nltk/issues/2648 | [
"resolved"
] | mina1987 | 2 |
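Regarding the `pos_tag('as')` output above: `pos_tag` iterates over its argument, so a bare string is tagged character by character; passing a token list (`nltk.pos_tag(['as'])`) is the intended call shape. A stdlib-only sketch of the iteration behaviour (the stub below only demonstrates iteration, it is not nltk):

```python
def tag(tokens):
    # pos_tag-style APIs loop over their argument; a bare string
    # therefore yields single characters, not words
    return [(t, "TAG") for t in tokens]

assert tag("as") == [("a", "TAG"), ("s", "TAG")]   # what the report shows
assert tag(["as"]) == [("as", "TAG")]              # intended call shape
```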
falconry/falcon | api | 2,143 | [Need Help] How does a route match all addresses under a certain path | How does a route match all addresses under a certain path? It's not in the documentation.
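One likely answer (an assumption about intent): Falcon's `app.add_sink(handler, prefix)` routes every path matching a regex prefix to a single responder, which covers "everything under a path". A dependency-free sketch of that matching rule:

```python
import re

# A sink prefix like "/files" matches /files itself and anything below
# it, but not unrelated paths that merely share the leading characters.
sink_prefix = re.compile(r"/files(/.*)?$")

assert sink_prefix.match("/files")
assert sink_prefix.match("/files/2024/report.pdf")
assert not sink_prefix.match("/filesystem")
```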
| closed | 2023-03-19T13:38:59Z | 2023-03-20T18:44:26Z | https://github.com/falconry/falcon/issues/2143 | [] | Fgaoxing | 0 |
autokey/autokey | automation | 244 | GUI not starting | ## Classification:
Bug
## Reproducibility:
Always
## Version
AutoKey version: 0.95.2-1.fc29
Used GUI (Gtk, Qt, or both): Gtk
Installed via: dnf
Linux Distribution: Fedora 29
## Summary
Interface won't show up.
## Steps to Reproduce (if applicable)
- $ autokey-gtk --verbose
## Expected Results
- Application should start
## Actual Results
- No interface created.
```
https://pastebin.com/XyNx27Cd
```
## Notes
 | closed | 2019-02-01T20:02:29Z | 2019-04-21T11:38:19Z | https://github.com/autokey/autokey/issues/244 | [
"enhancement",
"documentation",
"easy fix"
] | ProkopRandacek | 2 |
marcomusy/vedo | numpy | 796 | AttributeError: module 'numpy' has no attribute 'bool' | Hi there,
After an update of numpy I have run into an error.
I am trying to run the following code,
```python
fitshape = # list of lists
splinecoors = np.asarray(fitshape)
ssp = vedo.Line(splinecoors).lw(4).c("red4")
```
and, I have the following error:
```
# cut many bits of stack trace through my code
ssp = vedo.Line(splinecoors).lw(4).c("red4")
File "/[xxx]/.venv/lib/python3.8/site-packages/vedo/shapes.py", line 489, in __init__
ppoints.SetData(utils.numpy2vtk(np.asarray(p0, dtype=float), dtype=float))
File "/[xxx]/.venv/lib/python3.8/site-packages/vedo/utils.py", line 224, in numpy2vtk
varr = numpy_to_vtk(arr.astype(dtype), deep=deep)
File "/[xxx]/.venv/lib/python3.8/site-packages/vtkmodules/util/numpy_support.py", line 164, in numpy_to_vtk
arr_dtype = get_numpy_array_type(vtk_typecode)
File "/[xxx]/.venv/lib/python3.8/site-packages/vtkmodules/util/numpy_support.py", line 94, in get_numpy_array_type
return get_vtk_to_numpy_typemap()[vtk_array_type]
File "/[xxx]/.venv/lib/python3.8/site-packages/vtkmodules/util/numpy_support.py", line 74, in get_vtk_to_numpy_typemap
_vtk_np = {vtkConstants.VTK_BIT:numpy.bool,
File "/[xxx]/.venv/lib/python3.8/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'bool'
```
I guess it is related to #770, related to numpy deprecating/removing a bunch of things...?
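A quick illustration of the incompatibility, plus the usual stop-gaps (pinning `numpy<1.24` or upgrading vtk; both are assumptions about what fits this stack):

```python
import numpy as np

# numpy >= 1.24 removed the long-deprecated `np.bool` alias that this
# vtk build still references; plain bool / np.bool_ keep working.
arr = np.array([True, False], dtype=bool)
assert arr.dtype == np.bool_
```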
Versions:
- python 3.8.16
- vedo 2023.4.3
- numpy 1.24.0 | closed | 2023-01-24T16:01:25Z | 2023-01-24T17:06:26Z | https://github.com/marcomusy/vedo/issues/796 | [] | antmatyjajo | 2 |
proplot-dev/proplot | data-visualization | 358 | When used with seaborn, legend draws error: two legend label boxes appear. | <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
I was using proplot as the layout tool and seaborn as the plotting tool.
Everything was fine until I used ax.legend(loc='b') to add a legend box below the plot. Two legend boxes appeared! One is drawn by seaborn itself, and the other is drawn by proplot.
I tried adding legend=False to sns.lineplot(); however, this makes proplot unable to get all the legend information (because seaborn only attaches the explicit label, not the automatically generated legend entries, to the handles).
How can I solve this problem?
I can turn off the proplot legend, but then I can't adjust the position of the legend box!
### Steps to reproduce
```python
fig, axs = pplt.subplots(ncols = 1, nrows = 2, refaspect=2, refwidth=5, sharey=False)
axs.format(abc='a', abcloc='ul')
sns.lineplot(ax=axs[0], data=global_mean, x="year", y="LAI", hue="Dataset", legend=False)
axs[0].format(ylabel='Global Mean LAI ($\mathregular{m^2/m^2}$)')
sns.lineplot(ax=axs[1], data=global_annomaly, x="year", y="LAI", hue="Dataset")
sns.lineplot(ax=axs[1], data=global_annomaly, x="year", y="LAI", color="black", linewidth=2.2, label="Multi-dataset Mean")
axs[1].format(ylabel="Global LAI Anomaly ($\mathregular{m^2/m^2}$)")
axs[1].legend(loc='b')
fig.show()
```
**Expected behavior**:
Only one legend box on the bottom of the plot.
**Actual behavior**:
<img width="592" alt="image" src="https://user-images.githubusercontent.com/8275836/165816879-55ef9232-f633-4959-ac34-aba9e1806ad4.png">
**Another Example**:
Adding legend=False to sns.lineplot()
```python
fig, axs = pplt.subplots(ncols = 1, nrows = 2, refaspect=2, refwidth=5, sharey=False)
axs.format(abc='a', abcloc='ul')
sns.lineplot(ax=axs[0], data=global_mean, x="year", y="LAI", hue="Dataset", legend=False)
axs[0].format(ylabel='Global Mean LAI ($\mathregular{m^2/m^2}$)')
sns.lineplot(ax=axs[1], data=global_annomaly, x="year", y="LAI", hue="Dataset", legend=False)
sns.lineplot(ax=axs[1], data=global_annomaly, x="year", y="LAI", color="black", linewidth=2.2, label="Multi-dataset Mean", legend=False)
axs[1].format(ylabel="Global LAI Anomaly ($\mathregular{m^2/m^2}$)")
axs[1].legend(loc='b')
fig.show()
```
<img width="590" alt="image" src="https://user-images.githubusercontent.com/8275836/165818062-21aab30f-2fd8-44c1-ac8b-b62014547d7c.png">
### Proplot version
3.4.3
0.9.5 | open | 2022-04-28T18:04:24Z | 2023-03-29T09:16:23Z | https://github.com/proplot-dev/proplot/issues/358 | [
"integration"
] | moulai | 1 |
tqdm/tqdm | pandas | 771 | Doesn't work with docker-compose | ```
import time
from tqdm import tqdm
for i in tqdm(range(50), desc='debug'):
time.sleep(0.1)
```
Then in terminal I see nothing until the loop has finished, and it just prints:
`debug: 100% 50/50 [00:05<00:00, 9.81it/s]`
My docker compose file has `tty: true` and `os.isatty(sys.stdout.fileno())` returns true
| closed | 2019-07-06T07:12:52Z | 2024-06-23T13:24:24Z | https://github.com/tqdm/tqdm/issues/771 | [
"invalid ⛔",
"question/docs ‽",
"to-merge ↰"
] | justin-pierce | 12 |
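For the tqdm/docker-compose record above, a commonly suggested compose sketch (the service name and layout are assumptions): allocating a TTY together with disabling Python's output buffering usually makes the bar update live under `docker-compose up`:

```yaml
# Assumed service layout; PYTHONUNBUFFERED forces unbuffered stdout so
# tqdm's carriage-return updates reach the compose log stream immediately.
services:
  app:
    build: .
    tty: true
    environment:
      - PYTHONUNBUFFERED=1
```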
sinaptik-ai/pandas-ai | pandas | 1,470 | use pandas instead of RestrictedPandas | ### System Info
pandas 1.5.3
pandasai 2.4.0
### 🐛 Describe the bug
I want the agent to use panda instead of RestrictedPandas. What should be done
raise AttributeError(f"'{name}' is not allowed in RestrictedPandas")
AttributeError: 'DateOffset' is not allowed in RestrictedPandas | closed | 2024-12-12T04:21:51Z | 2025-01-20T10:08:50Z | https://github.com/sinaptik-ai/pandas-ai/issues/1470 | [
"bug"
] | XJTU-JP | 6 |
plotly/jupyter-dash | dash | 73 | Using on SageMaker Studio | I'd like to use this on AWS Sagemaker Studio, but I think Sagemaker Studio is JupyterLab 1.
Has anyone been able to get either the inline or jupyterlab modes working on Sagemaker Studio? | closed | 2022-01-05T03:04:35Z | 2024-05-23T17:18:39Z | https://github.com/plotly/jupyter-dash/issues/73 | [] | tonyreina | 1 |
seleniumbase/SeleniumBase | pytest | 2,973 | uc_gui_handle_cf() started behaving strangely | Hey, seems like `uc_gui_handle_cf()` stopped working correctly (watch the video); maybe there were some changes in the DOM, can you check please? Using the latest sb upgrade, macOS, Python 3.11. Testing on this url: https://dexscreener.com/
Other functions, `uc_gui_click_captcha()` and `uc_gui_click_cf()`, work well, but I need to use that one.
https://github.com/user-attachments/assets/010faa12-a059-439b-b041-bb7281656f3e
| closed | 2024-07-29T18:02:50Z | 2024-09-03T17:32:39Z | https://github.com/seleniumbase/SeleniumBase/issues/2973 | [
"feature or fix already exists",
"UC Mode / CDP Mode",
"Fun"
] | vmolostvov | 3 |
wkentaro/labelme | computer-vision | 1,353 | Labelme Crash | ### Provide environment information
Windows 11 22H2
Labelme 5.3.1
### What OS are you using?
Windows 11 22H2
### Describe the Bug
When the image files are on an SSD, a software crash occurs when saving JSON files. If using a mechanical hard drive, there is no problem. This situation has been replicated on four computers; old versions of Labelme also crash.
SSD (C:)
Mechanical Hard Drive (D:)
### Expected Behavior
No Problem
### To Reproduce
1.Open Dir
2.Create Polygons
3.Save
4.Software Crash | open | 2023-11-14T03:48:12Z | 2024-01-29T17:55:57Z | https://github.com/wkentaro/labelme/issues/1353 | [
"issue::bug"
] | LLL151 | 1 |
jwkvam/celluloid | matplotlib | 19 | animation.save() is going into an infinite loop | I am trying to save my animation using ```animation.save()```. Before, it was working fine, but now it goes into an infinite loop; the file size is 20 KB, and after 1 min it drops to 0 bytes.
Same is the case with Google Colab:
```HTML(animation.to_html5_video())```
is just running without any output.

| open | 2020-10-05T01:42:42Z | 2020-10-05T01:57:32Z | https://github.com/jwkvam/celluloid/issues/19 | [] | saibhaskar24 | 0 |
akurgat/automating-technical-analysis | streamlit | 25 | Over resource limits on Streamlit Cloud | Hey there :wave: Just wanted to let you know that [your app on Streamlit Cloud deployed from this repo](https://akurgat-automating-technical-analysis-trade-qn1uzx.streamlit.app/akurgat/automating-technical-analysis/Trade.py) has gone over its resource limits. Access to the app is temporarily limited. Visit the app to see more details and possible solutions. | closed | 2024-10-24T16:36:24Z | 2024-10-25T12:27:04Z | https://github.com/akurgat/automating-technical-analysis/issues/25 | [] | JAEP2 | 1 |
google-deepmind/sonnet | tensorflow | 33 | AttributeError: 'module' object has no attribute 'AbstractModule' when running python train.py in deepmind/dnc | closed | 2017-05-20T15:30:48Z | 2017-05-21T02:13:41Z | https://github.com/google-deepmind/sonnet/issues/33 | [] | b03201003 | 4 | |
deeppavlov/DeepPavlov | nlp | 1,140 | English NER Demo vs github version | Hi,
I tried ner_ontonotes_bert model from http://docs.deeppavlov.ai/en/master/features/models/ner.html.
And followed the instructions.
ner_model([mysentence]) works.
But unfortunately, I am not getting the same tags for the same sentence, as nice as I get from the demo page: https://demo.deeppavlov.ai/#/en/ner
Many tags are missing, or incorrect.
Could someone tell me what the problem is?
Thank you! | closed | 2020-02-28T09:34:11Z | 2020-02-28T11:05:57Z | https://github.com/deeppavlov/DeepPavlov/issues/1140 | [] | farmeh | 2 |
replicate/cog | tensorflow | 1,718 | Can't build and push image to replicate | `cog push`
returns
```
#2 ERROR: r8.im/...:latest: not found
------
> [internal] load metadata for r8.im/...:latest:
------
Dockerfile:1
--------------------
1 | >>> FROM r8.im/...
2 | COPY .cog/openapi_schema.json .cog
3 |
--------------------
ERROR: failed to solve: r8.im/...: failed to resolve source metadata for r8.im/...:latest: r8.im/...:latest: not found
ⅹ Failed to add labels to image: exit status 1
```
while `cog predict -i` executes successfully
Looks related to this
`cog --version = 0.9.9` | closed | 2024-06-06T12:11:03Z | 2024-06-17T23:58:54Z | https://github.com/replicate/cog/issues/1718 | [] | Theodotus1243 | 16 |
yt-dlp/yt-dlp | python | 11,804 | extractor/Smotrim fix for site updates | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
russia
### Provide a description that is worded well enough to be understood
.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://smotrim.ru/channel/4
[debug] Command-line config: ['-vU', 'https://smotrim.ru/channel/4']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.13 from yt-dlp/yt-dlp [542166962]
[debug] Lazy loading extractors is disabled
[debug] Python 3.12.8 (CPython x86_64 64bit) - Linux-6.12.3-amd64-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1-3 (setts), ffprobe 7.1-3
[debug] Optional libraries: Cryptodome-3.20.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.13 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.13 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://smotrim.ru/channel/4
[generic] 4: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 4: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://smotrim.ru/channel/4
Traceback (most recent call last):
File "/srv/git/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/srv/git/yt-dlp/yt_dlp/YoutubeDL.py", line 1759, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/srv/git/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/srv/git/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://smotrim.ru/channel/4
```
| open | 2024-12-13T15:48:29Z | 2024-12-13T15:48:29Z | https://github.com/yt-dlp/yt-dlp/issues/11804 | [
"site-bug",
"triage"
] | vstavrinov | 0 |
lukasmasuch/streamlit-pydantic | streamlit | 7 | Displaying Input Sections in Columns | <!--
Thanks for requesting a feature 🙌 ❤️
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. Also, be sure to check our documentation first.
-->
**Feature description:**
As far as I understand at the moment User Input will all be displayed in one colum. As seen here

But instead i would like to display the interactive fields in multile colums, As seen here:

Maybe one could specify the number of input columns as name-value pair:
```
input_data = sp.pydantic_input(
"model_input", model=ExampleModel, width=2
)
```
Additionaly if the width value equals None, the Number of Collums could be automaticly determined to approximate for e.g. a 3 by 4 aspect ratio. If the window size is to narrow streamlit rearranges the elements anyway.
<!---
Provide a detailed description of the feature or improvement you are proposing. What specific solution would you like? What is the expected behaviour?
Add any other context, screenshots, or code snippets about the feature request here as well.
-->
**Problem and motivation:**
The motivation is clearly to make apps with a lot of user input more organised and utilize more of wide screen displays.
This is espacially sensible, since streamlit-pydantic is a usefull tool espacially in applications with a lot of user input, to make code more organised and modular.
<!---
Why is this change important to you? What is the problem this feature would solve? How would you use it? How can it benefit other users?
-->
**Is this something you're interested in working on?**
Yes, although I am totally inexperienced
<!--- Yes or No -->
| closed | 2021-08-23T12:10:37Z | 2023-01-17T01:59:03Z | https://github.com/lukasmasuch/streamlit-pydantic/issues/7 | [] | BenediktPrusas | 7 |
nltk/nltk | nlp | 3,136 | 'str' object has no attribute 'decode' | Hi,
I'm trying to run the code in Figure 3.4 from the NLTK book. In my Jupyter notebook, the code is shown below:
`
# -*- coding: utf-8 -*-
import re
sent = """
Przewiezione przez Niemców pod koniec II wojny światowej na Dolny Śląsk, zostały odnalezione po 1945 r. na terytorium Polski.
"""
u = sent.decode('utf8')
u.lower
`
but the Jupyter returns an error of no attribute 'decode'. Any idea how to fix this? | closed | 2023-03-18T23:15:01Z | 2023-08-07T21:53:01Z | https://github.com/nltk/nltk/issues/3136 | [] | edmondium | 2 |
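For the record above: in Python 3, `str` is already Unicode, so `sent.decode('utf8')` fails; `decode()` lives on `bytes` (the NLTK book snippet targets Python 2). A minimal illustration:

```python
# In Python 3, str is text; encode/decode round-trips through bytes.
sent = "Przewiezione przez Niemców pod koniec II wojny światowej"
assert sent.lower().startswith("przewiezione")   # no decode needed

raw = sent.encode("utf8")            # str -> bytes
assert raw.decode("utf8") == sent    # bytes -> str is where decode() lives
```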
autokey/autokey | automation | 963 | Bump action versions in workflow files | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [X] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
update
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
Update the action versions in the [pages.yml](https://github.com/autokey/autokey/blob/master/.github/workflows/pages.yml) file, the [build.yml](https://github.com/autokey/autokey/blob/master/.github/workflows/build.yml) file, and the [python-test.yml](https://github.com/autokey/autokey/blob/master/.github/workflows/python-test.yml) file.
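A sketch of the kind of bump intended (the exact versions below are assumptions; check each action's latest release before pinning):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-python@v5
    with:
      python-version: "3.x"
```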
### Can the issue be reproduced?
None
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
| closed | 2024-11-08T15:46:46Z | 2024-11-08T21:30:27Z | https://github.com/autokey/autokey/issues/963 | [
"development"
] | Elliria | 0 |
deepinsight/insightface | pytorch | 2,369 | RetinaFace Pretrained Model for Commercial Use | Can the RetinaFace [pretrained Model](https://github.com/deepinsight/insightface/tree/master/detection/retinaface#retinaface-pretrained-models) be used for commercial purposes?
| open | 2023-07-13T15:51:54Z | 2023-07-13T15:52:11Z | https://github.com/deepinsight/insightface/issues/2369 | [] | htcml | 0 |
robinhood/faust | asyncio | 366 | Global cache to share configuration data | ## Checklist
- [ ] I have included information about relevant versions
- [ ] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
Currently there is no global cache support in Faust
## Expected behavior
I need to store some app preferences in a shared table which always has up to date data.
I need to be able to update the table from any of the nodes and be able to get the updated values anywhere in the app.
## Actual behavior
Not supported.
## Full traceback
```pytb
Paste the full traceback (if there is any)
```
# Versions
* Python version
* Faust version
* Operating system
* Kafka version
* RocksDB version (if applicable)
| closed | 2019-06-19T08:19:06Z | 2019-09-19T19:45:15Z | https://github.com/robinhood/faust/issues/366 | [] | apapikyan | 2 |
sktime/sktime | scikit-learn | 7,551 | [ENH] Remove `gluonts` dependency in `gluonts_ListDataset_panel` mtype | The `gluonts_ListDataset_panel` data type specification does not require `gluonts` objects, as it is composed entirely of `python` base types and `pandas` types.
Therefore we should be able to remove the `gluonts` dependencies. To do this:
* look at the `PanelGluontsList` class and replace `gluonts` imports by equivalent checks that do not require `gluonts`
* remove `gluonts` from the `python_dependencies` tag | closed | 2024-12-21T13:49:07Z | 2025-01-06T20:35:22Z | https://github.com/sktime/sktime/issues/7551 | [
"good first issue",
"module:datatypes",
"enhancement"
] | fkiraly | 1 |
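Referring to the sktime issue above, a hedged sketch of a `gluonts`-free check (the field names follow the gluonts `ListDataset` convention of `start`/`target` entries; adjust to the actual mtype spec):

```python
# Duck-typed check: a "ListDataset" is a plain list of dicts carrying
# at least "start" and "target" keys, so no gluonts import is needed.
def is_gluonts_list_dataset(obj):
    return isinstance(obj, list) and all(
        isinstance(entry, dict) and {"start", "target"} <= entry.keys()
        for entry in obj
    )

assert is_gluonts_list_dataset([{"start": "2020-01", "target": [1, 2, 3]}])
assert not is_gluonts_list_dataset([{"target": [1, 2, 3]}])
assert not is_gluonts_list_dataset("not a dataset")
```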
MycroftAI/mycroft-core | nlp | 2,864 | "Hey MyCroft" used instead of alternate wake word selected | **Describe the bug**
"Hey Jarvis" was selected as the wake word during device pairing
Device only responds to "Hey MyCroft"
**To Reproduce**
Steps to reproduce the behavior:
Run the docker version of MyCroft on linux mint
Pair the device and select "Hey Jarvis" as the wake word
Speak "Hey Jarvis" nothing happens
Speak "Hey MyCroft" "Who was Abraham Lincoln?" - the device speaks/reads the response
**Expected behavior**
I expect "Hey Jarvis" to initiate recognition the way "Hey MyCroft" does
**Log files**
I'm not home right now, but I can submit these later if helpful.
**Environment (please complete the following information):**
- Device type: [e.g. Raspberry Pi, Mark 1, desktop]
- OS: [e.g. Ubuntu, Picroft]
- Mycroft-core version: [e.g. 20.08]
- Other versions: [e.g. Adapt v0.3.7]
| closed | 2021-03-17T21:48:53Z | 2024-09-08T08:29:54Z | https://github.com/MycroftAI/mycroft-core/issues/2864 | [
"bug"
] | chrisamow | 6 |
AirtestProject/Airtest | automation | 930 | After raising the app targetSdkVersion to API level 30, the APP's elements cannot be captured | After raising the app targetSdkVersion to API level 30, the APP's elements can no longer be captured - any help appreciated.
https://developer.android.com/about/versions/pie/android-9.0-changes-28 | closed | 2021-06-28T13:28:56Z | 2021-09-30T07:43:30Z | https://github.com/AirtestProject/Airtest/issues/930 | [] | zhangqiongxiu | 1 |