| repo_name (string, len 9-75) | topic (30 classes) | issue_number (int, 1-203k) | title (string, len 1-976) | body (string, len 0-254k) | state (2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38-105) | labels (list, len 0-9) | user_login (string, len 1-39) | comments_count (int, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
erdewit/ib_insync | asyncio | 387 | possible documentation error with updateEvent | First, thanks for developing this library. It has proved very useful.
The API documentation (https://ib-insync.readthedocs.io/api.html) states the following:
```
updateEvent (): Is emitted after a network packet has been handeled.
barUpdateEvent (bars: BarDataList, hasNewBar: bool): Emits the bar list that has been updated in real time. If a new bar has been added then hasNewBar is True, when the last bar has changed it is False.
```
However, in the code the BarDataList class only has an updateEvent attribute, meaning it's not possible to attach a barUpdateEvent to such an object.
It seems that updateEvent is overloaded depending on the data type it is attached to. This is slightly confusing at first because it conflicts with the documentation above. Furthermore, when overloaded on a specific data type, updateEvent can actually take arguments, further contradicting the documentation.
It seems the intention is to attach an updateEvent callback to a specific instance of a data type, but if generic behaviour across all instances of such events is desired, a callback can instead be attached to the ib instance for the event type that should trigger it. For example:
```
bars = self.ib.reqHistoricalData(...)  # arguments omitted
# either of the below triggers self.on_new_bar() when a barUpdateEvent occurs
self.ib.barUpdateEvent += self.on_new_bar
bars.updateEvent += self.on_new_bar
```
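For what it's worth, the two subscription styles can be mimicked with a tiny stand-in event class (illustrative only; this is not ib_insync's actual `eventkit` implementation, and the names below are made up):

```python
# Minimal stand-in for an eventkit-style event: "+=" attaches a callback,
# emit() calls every attached callback with the emitted arguments.
class Event:
    def __init__(self):
        self._callbacks = []

    def __iadd__(self, callback):
        self._callbacks.append(callback)
        return self

    def emit(self, *args):
        for cb in self._callbacks:
            cb(*args)

received = []

# Global event on the "ib" object: fires for every bar list.
bar_update_event = Event()
# Per-instance event on one particular BarDataList.
bars_update_event = Event()

bar_update_event += lambda bars, has_new_bar: received.append(("global", has_new_bar))
bars_update_event += lambda bars, has_new_bar: received.append(("instance", has_new_bar))

# A new bar arriving would emit on both.
bar_update_event.emit("bars", True)
bars_update_event.emit("bars", True)
```

Either way, the callback signature has to match whatever the event actually emits.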
It's not obvious from the documentation which paradigm should be used and why. | closed | 2021-07-16T14:55:16Z | 2021-07-29T14:02:05Z | https://github.com/erdewit/ib_insync/issues/387 | [] | laker-93 | 1 |
kennethreitz/responder | flask | 217 | ModuleNotFoundError: No module named 'starlette.debug' | I got the following error when running the docs examples with Responder 1.1.1:
```
Traceback (most recent call last):
File "myapp/api.py", line 1, in <module>
import responder
File "/home/julien/.local/share/virtualenvs/python-spa-starter-Pu8Jxsg6/lib/python3.7/site-packages/responder/__init__.py", line 1, in <module>
from .core import *
File "/home/julien/.local/share/virtualenvs/python-spa-starter-Pu8Jxsg6/lib/python3.7/site-packages/responder/core.py", line 1, in <module>
from .api import API
File "/home/julien/.local/share/virtualenvs/python-spa-starter-Pu8Jxsg6/lib/python3.7/site-packages/responder/api.py", line 16, in <module>
from starlette.debug import DebugMiddleware
ModuleNotFoundError: No module named 'starlette.debug'
```
with this code
```python
import responder
api = responder.API()
@api.route("/{greeting}")
async def greet_world(req, resp, *, greeting):
resp.text = f"{greeting}, world!"
if __name__ == '__main__':
api.run()
``` | closed | 2018-11-09T13:31:18Z | 2018-11-15T10:50:42Z | https://github.com/kennethreitz/responder/issues/217 | [] | jkermes | 5 |
AlexMathew/scrapple | web-scraping | 49 | Add tests | Tests verifying CSV output from run/generate need to be added (see #44 and #46).
| closed | 2015-03-15T12:50:46Z | 2017-02-24T08:44:15Z | https://github.com/AlexMathew/scrapple/issues/49 | [] | AlexMathew | 0 |
sktime/pytorch-forecasting | pandas | 1,099 | when use weight param in TimeSeriesDataSet, raised RuntimeError: The size of tensor a (128) must match the size of tensor b (12) at non-singleton dimension 1 | - PyTorch-Forecasting version:0.10.1
- PyTorch version: 1.11.0
- Python version:3.7.4
- Operating System: centos
### Expected behavior
I executed the following code, adding a weight column to emphasize important samples, and expected an improved result:

```python
TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    time_varying_unknown_reals=["value"],
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
    target_normalizer="auto",
    weight="weight",  # the parameter in question
)
```
### Actual behavior
However, it raised this error:

```
RuntimeError: The size of tensor a (128) must match the size of tensor b (12) at non-singleton dimension 1
```

I think there is an error in the current code:

```
File /pathtopython/site-packages/pytorch_forecasting/metrics.py:1006, in MASE.update(self, y_pred, target, encoder_target, encoder_lengths)
   1004 # weight samples
   1005 if weight is not None:
-> 1006     losses = losses * weight.unsqueeze(-1)
```

I printed the shapes of losses and weight, and they were the same. When I changed the line to `losses = losses * weight`, the error no longer occurred, and an improved result was obtained.
Please check if this is a problem.
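The shape mismatch can be reproduced with a pure-Python sketch of the broadcasting rule (not pytorch-forecasting code; the shapes are assumed from the error message, with losses and weight both of shape (batch=128, horizon=12), so `weight.unsqueeze(-1)` makes (128, 12, 1)):

```python
# NumPy/PyTorch-style broadcasting: align shapes from the trailing dimension;
# each pair of sizes must match or one of them must be 1.
def broadcast_shapes(a, b):
    n = max(len(a), len(b))
    a = (1,) * (n - len(a)) + a
    b = (1,) * (n - len(b)) + b
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):  # sizes must match or one must be 1
            raise ValueError(f"cannot broadcast {a} with {b}")
        out.append(max(x, y))
    return tuple(out)

print(broadcast_shapes((128, 12), (128, 12)))   # losses * weight: fine
try:
    broadcast_shapes((128, 12), (128, 12, 1))   # losses * weight.unsqueeze(-1)
except ValueError as err:
    print(err)                                  # mirrors the RuntimeError
```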
| open | 2022-08-12T05:46:04Z | 2023-04-04T10:34:39Z | https://github.com/sktime/pytorch-forecasting/issues/1099 | [] | tuhulihongbing | 3 |
Lightning-AI/pytorch-lightning | data-science | 19,803 | Current FSDPPrecision does not support custom scaler for 16-mixed precision | ### Bug description

`self.precision` here inherits from parent class `Precision`, so it is always "32-true"

The subsequent definition of `self.scaler` also ends up assigning `None`: even when a custom `scaler` is passed, it is only kept when `precision == "16-mixed"`, which never holds here.

Is this intentional or a bug?
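The pattern being described can be reduced to a minimal pure-Python sketch (simplified and hypothetical; not Lightning's actual code):

```python
class Precision:
    # parent default, analogous to the inherited `self.precision`
    precision = "32-true"

class FSDPPrecisionSketch(Precision):
    def __init__(self, scaler=None):
        # `self.precision` still reads the inherited "32-true" here, so the
        # "16-mixed" branch never runs and a custom scaler is silently dropped
        self.scaler = scaler if self.precision == "16-mixed" else None

p = FSDPPrecisionSketch(scaler=object())
print(p.precision, p.scaler)  # 32-true None
```

If that matches the real control flow, either the subclass should set `precision` before the check, or the check should use the requested precision rather than the inherited default.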
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-23T04:14:07Z | 2024-04-23T04:14:07Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19803 | [
"bug",
"needs triage"
] | SongzhouYang | 0 |
pyg-team/pytorch_geometric | deep-learning | 9,786 | Graphormer-GD Implementation | ### 🚀 The feature, motivation and pitch
Dear pyg community,
I propose implementing the Graphormer-GD transformer layer from the paper [Rethinking the Expressive Power of GNNs via Graph Biconnectivity](https://arxiv.org/abs/2301.09505) in PyG. The implementation would be similar to, e.g., GraphSAGE. There is already an implementation on GitHub; however, it does not use PyG. See here for reference: https://github.com/lsj2408/Graphormer-GD
I'm happy to take this on; however, I first want to check whether a PR like this would be accepted. Let me know what you think!
### Alternatives
_No response_
### Additional context
_No response_ | open | 2024-11-14T16:01:17Z | 2024-11-14T16:01:17Z | https://github.com/pyg-team/pytorch_geometric/issues/9786 | [
"feature"
] | goelzva | 0 |
microsoft/nni | data-science | 5,298 | cannot fix the mask of the interdependent layers | **Describe the issue**:
When I pruned a segmentation model and saved mask.pth, the mask could not be applied to the model's new architecture during speed-up.
**Environment**:
- NNI version:2.0
- Training service (local|remote|pai|aml|etc):local
- Client OS:ubuntu
- Server OS (for remote mode only):
- Python version:3.7
- PyTorch/TensorFlow version:pytorch
- Is conda/virtualenv/venv used?:yes
- Is running in Docker?:no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
```
mask_conflict.py, line 195, in fix_mask_conflict
    assert shape[0] % group == 0
AssertionError
```

When I print shape[0] and group:

```
32 32
32 32
16 1
96 96
96 96
24 320
```

The group (320) is bigger than shape[0] (24).
How can I fix this problem?
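The failing check can be reproduced in isolation with the printed values (illustrative only, not nni's code):

```python
# Reproducing the check from fix_mask_conflict with the printed
# (shape[0], group) pairs; only the last pair trips the assertion.
pairs = [(32, 32), (32, 32), (16, 1), (96, 96), (96, 96), (24, 320)]

failing = [(s, g) for s, g in pairs if s % g != 0]
print(failing)  # [(24, 320)]: a 24-channel tensor cannot be split into 320 groups
```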
**How to reproduce it?**: | closed | 2022-12-23T11:12:32Z | 2023-02-24T02:36:58Z | https://github.com/microsoft/nni/issues/5298 | [] | sungh66 | 10 |
microsoft/MMdnn | tensorflow | 900 | shape inference error in depthwise_conv | Platform: ubuntu
MMdnn version: 0.3.1
Source framework with version: Tensorflow 1.13.1
Pre-trained model: mobilenet_v1
from https://github.com/tensorflow/models/tree/master/research/slim/nets
Running scripts: mmtoir
Description: shape inference error when parsing depthwise_conv (converting a tf-slim model to an IR file); the output shape's channel count doesn't match the depthwise_conv kernel's channel count.

| open | 2020-10-10T05:05:07Z | 2020-10-12T06:44:29Z | https://github.com/microsoft/MMdnn/issues/900 | [] | xianyuggg | 2 |
gradio-app/gradio | data-visualization | 10,171 | Allow specifying custom dictionary keys for event input. | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
When using dictionaries as event inputs in Gradio, we can only use component objects themselves as dictionary keys. This makes it difficult to separate event handling functions from component creation code, as the functions need direct access to the component objects.
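As a pure-Python sketch of the ergonomics (no Gradio APIs used; `Component` below is a hypothetical stand-in for a Gradio component object):

```python
class Component:
    def __init__(self, label):
        self.label = label

name_box = Component("name")
age_box = Component("age")

# Today: the handler keys into the input dict by component *object*,
# so it must have the component objects in scope.
def handler_today(data):
    return f"{data[name_box]} is {data[age_box]}"

# Proposed: string keys decouple the handler from component creation.
def handler_proposed(data):
    return f"{data['name']} is {data['age']}"

values = {name_box: "Ada", age_box: 36}
print(handler_today(values))                                      # Ada is 36
print(handler_proposed({c.label: v for c, v in values.items()}))  # Ada is 36
```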
**Describe the solution you'd like**
Allow specifying custom string keys when using dictionaries as event inputs. | open | 2024-12-10T20:16:39Z | 2024-12-10T20:19:57Z | https://github.com/gradio-app/gradio/issues/10171 | [
"enhancement",
"needs designing"
] | L9qmzn | 0 |
axnsan12/drf-yasg | rest-api | 395 | Add support to include generated or custom openapi schema into UI view's HTML output | Is there a way to set the `swagger-ui` `spec` property using the generated schema instead of the `url` property when loading swagger-ui via `drf-yasg`?
By including the schema in the HTML file directly, the generated HTML page with swagger-ui should be able to be "saved" for offline reading, as well as avoid a 2nd request back to the server for the schema.
I'm coming from using `django-rest-swagger` which already does this (https://github.com/marcgibbons/django-rest-swagger/blob/master/rest_framework_swagger/templates/rest_framework_swagger/index.html#L61), rather than making `swagger-ui` fetch the schema in a 2nd request. I know this would have implications for the `REFETCH_SCHEMA_*` settings, but we don't use those currently.
If I find a clean solution, I'll try to submit a PR.
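As a rough sketch of the idea in plain Python (hypothetical template; drf-yasg's real templates and settings differ):

```python
import json
from string import Template

# Serialize the generated schema straight into the rendered page instead of
# having swagger-ui fetch it by URL in a second request.
schema = {"swagger": "2.0", "info": {"title": "My API", "version": "v1"}}

page = Template("""<script>
const ui = SwaggerUIBundle({
  dom_id: "#swagger-ui",
  spec: $spec  // inline schema; no second request, page works offline
});
</script>""").substitute(spec=json.dumps(schema))

print('"swagger": "2.0"' in page)
```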
| open | 2019-06-27T18:15:26Z | 2025-03-07T12:16:23Z | https://github.com/axnsan12/drf-yasg/issues/395 | [
"triage"
] | devmonkey22 | 1 |
robotframework/robotframework | automation | 4,626 | Inconsistent argument conversion when using `None` as default value with Python 3.11 and earlier | there is something strange happening with Python 3.11 and type hints and auto conversion.
When you have a Type Hint Any and default Value is None in 3.11 'none' is always converted to None while in 3.10. it stays a string.
Given that Test:
```robotframework
*** Settings ***
Library mylib.py
*** Test Cases ***
Test
Myfunc none none
Test 2
Myfunc Hello Hello
```
And That Library
```python
from typing import Any, Optional, Union
def myfunc(value, assertion_expected: Any = None):
assert value == assertion_expected, (
"Assertion failed: "
f"{value} ({type(value)}) != {assertion_expected} ({type(assertion_expected)})"
)
```
It fails in Python 3.11 and passes in 3.10
`def myfunc(value, assertion_expected: Optional[Any] = None):`
The Optional[Any] fixes it.
Also Libdoc behaves differently and only shows <Any> as Type hint. | closed | 2023-01-31T18:55:54Z | 2023-03-15T12:50:24Z | https://github.com/robotframework/robotframework/issues/4626 | [
"bug",
"priority: medium",
"alpha 1"
] | Snooz82 | 6 |
mage-ai/mage-ai | data-science | 5,043 | Add agnostic path for project files | **DBT config yaml**
Working with the magician on different operating systems (Windows, Unix, macOS), we discovered that the paths to dbt models are saved differently in YAML files depending on the OS, for example `dbts\run_all.yaml` on Windows. Is it possible to make path formation independent of the operating system, at least when reading the config or running the pipeline?
https://files.slack.com/files-pri/T03GK6PEQP6-F0727SEC9FF/image.png
https://files.slack.com/files-pri/T03GK6PEQP6-F0727SBMN0M/image.png
**Solution**
Using pathlib instead of os.path
Refactoring File class methods ([git](https://github.com/mage-ai/mage-ai/blob/e84116411f24b98711a8095f291c1a9898c1cd49/mage_ai/data_preparation/models/file.py#L40))
**Example**
```
from pathlib import Path, PureWindowsPath
from platform import system

class AgnosticPath(Path):
    def __new__(cls, *args, **kwargs):
        new_path = PureWindowsPath(*args).parts
        if (system() != "Windows") and (len(new_path) > 0) and (new_path[0] in ("/", "\\")):
            new_path = ("/", *new_path[1:])
        return super().__new__(Path, *new_path, **kwargs)
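
# --- Sketch: a function-based variant of the same idea (not Mage's code) ---
# Parse the stored string with Windows rules, then rebuild a native path, so
# "dbts\\run_all.yaml" and "dbts/run_all.yaml" resolve identically on any OS.
from pathlib import Path, PureWindowsPath

def agnostic_path(raw):
    return Path(*PureWindowsPath(raw).parts)

# agnostic_path("dbts\\run_all.yaml") == Path("dbts", "run_all.yaml")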
``` | open | 2024-05-08T07:09:54Z | 2024-05-08T07:09:54Z | https://github.com/mage-ai/mage-ai/issues/5043 | [] | FelixMiksonAramMeem | 0 |
microsoft/nni | pytorch | 5,660 | nni norm_pruning example error | Hello!
I am trying to locally run a pytorch version of one of the pruning examples (nni/examples/compression/pruning/norm_pruning.py).
However, it seems that the config_list that is generate from the function "auto_set_denpendency_group_ids" has some problem.
**The config_list is:**
[{'op_names': ['layer3.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '0152545ff8de4d14a8cfe727bf9769d1',
'internal_metric_block': 1},
{'op_names': ['layer1.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '5c913fb3076441e2af16c32c03758329',
'internal_metric_block': 1},
{'op_names': ['layer2.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '497a228f19e047d8a26fa94cc97fbabf',
'internal_metric_block': 1},
{'op_names': ['layer4.0.downsample.0'],
'sparse_ratio': 0.5,
'dependency_group_id': 'ea22b181139c4199b090c4e702d85083',
'internal_metric_block': 1},
{'op_names': ['layer3.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '60ccd2e1a186412e89d256682007b2f7',
'internal_metric_block': 1},
{'op_names': ['layer4.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '70578023ad6e48c1b14ef44d5e6a0c3f',
'internal_metric_block': 1},
{'op_names': ['conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '01d9e39d16e94df4838ea98275f5d445',
'internal_metric_block': 1},
{'op_names': ['layer1.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '01d9e39d16e94df4838ea98275f5d445',
'internal_metric_block': 1},
{'op_names': ['layer3.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '0cbb55f71a484d64b775d7d82380d0dd',
'internal_metric_block': 1},
{'op_names': ['layer4.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': 'ea22b181139c4199b090c4e702d85083',
'internal_metric_block': 1},
{'op_names': ['layer1.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '01d9e39d16e94df4838ea98275f5d445',
'internal_metric_block': 1},
{'op_names': ['layer2.0.downsample.0'],
'sparse_ratio': 0.5,
'dependency_group_id': '3a2f40dea91340f296b0e40049ee1b57',
'internal_metric_block': 1},
{'op_names': ['layer2.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': 'aa5de9115c5141aeb1736ed8d9f479fd',
'internal_metric_block': 1},
{'op_names': ['layer4.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': 'ea22b181139c4199b090c4e702d85083',
'internal_metric_block': 1},
{'op_names': ['layer2.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '3a2f40dea91340f296b0e40049ee1b57',
'internal_metric_block': 1},
{'op_names': ['layer3.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '60ccd2e1a186412e89d256682007b2f7',
'internal_metric_block': 1},
{'op_names': ['layer4.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '526922a4a69d4a46b2fdbf937f8283dc',
'internal_metric_block': 1},
{'op_names': ['layer1.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': 'f1dbbaba5cce46698e3efbcd84d48e4c',
'internal_metric_block': 1},
{'op_names': ['layer3.0.downsample.0'],
'sparse_ratio': 0.5,
'dependency_group_id': '60ccd2e1a186412e89d256682007b2f7',
'internal_metric_block': 1},
{'op_names': ['layer2.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '3a2f40dea91340f296b0e40049ee1b57',
'internal_metric_block': 1}]
**The error I get:**
Or(And({Or('sparsity', 'sparsity_per_layer'): And(<class 'float'>, <function <lambda> at 0x7d11a9a048b0>), Optional('op_types'): And(['Conv2d', 'Linear'], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a98b8ca0>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cb520>), Optional('op_partial_names'): [<class 'str'>]}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cbeb0>), And({'exclude': <class 'bool'>, Optional('op_types'): And(['Conv2d', 'Linear'], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cbe20>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cbd90>), Optional('op_partial_names'): [<class 'str'>]}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cb490>), And({'total_sparsity': And(<class 'float'>, <function <lambda> at 0x7d11a9a07be0>), Optional('max_sparsity_per_layer'): {<class 'str'>: <class 'float'>}, Optional('op_types'): And(['Conv2d', 'Linear'], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a7429ea0>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a7428790>)}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a7428430>)) did not validate {'op_names': ['layer3.1.conv1'], 'sparse_ratio': 0.5, 'dependenc...
Missing key: Or('sparsity', 'sparsity_per_layer')
Missing key: 'exclude'
Missing key: 'total_sparsity'
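For comparison, an entry shaped the way this schema apparently expects might look like the following; this is a guess from the `Missing key` lines above, not verified against a specific nni version:

```python
# The schema validates old-style keys ('sparsity' / 'total_sparsity' /
# 'exclude'), not the 'sparse_ratio' entries shown in the config_list above.
config_list = [{"op_names": ["layer3.1.conv1"], "sparsity": 0.5}]
print(sorted(config_list[0].keys()))
```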
Would be happy to understand what I am missing.
Thanks a lot!
Noy
| closed | 2023-08-13T13:32:48Z | 2023-08-17T07:30:54Z | https://github.com/microsoft/nni/issues/5660 | [] | NoyLalzary | 0 |
aio-libs/aiomysql | asyncio | 592 | Any plans to support PyMySQL-1.0.2 | Current version aiomysql require PyMySQL<=0.9.3,>=0.9
Do exist any planning when will be supported PyMySQL > 1.0.0
for example for TLS support. | closed | 2021-06-16T15:34:55Z | 2022-01-13T17:35:56Z | https://github.com/aio-libs/aiomysql/issues/592 | [
"enhancement",
"pymysql"
] | jpVm5jYYRE1VIKL | 3 |
pywinauto/pywinauto | automation | 433 | How should I find and use custom controls of WPF application? | I am testing a WPF app that contains some **custom controls**. I could not find how to work with them in the documentation.
How should I do this?
| open | 2017-11-01T09:46:12Z | 2017-11-01T20:09:02Z | https://github.com/pywinauto/pywinauto/issues/433 | [
"enhancement",
"New Feature"
] | fangchaooo | 2 |
yunjey/pytorch-tutorial | pytorch | 75 | captioning code doesn't run | Hi. I am getting the error below. I hope someone can help @jtoy
```
python sample.py --image='png/example.png'
Traceback (most recent call last):
File "sample.py", line 97, in <module>
main(args)
File "sample.py", line 61, in main
sampled_ids = decoder.sample(feature)
File "/home/shas/sandbox/pytorch-tutorial/tutorials/03-advanced/image_captioning/model.py", line 62, in sample
hiddens, states = self.lstm(inputs, states) # (batch_size, 1, hidden_size),
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/modules/rnn.py", line 91, in forward
output, hidden = func(input, self.all_weights, hx)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 343, in forward
return func(input, *fargs, **fkwargs)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 243, in forward
nexth, output = func(input, hidden, weight)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 83, in forward
hy, output = inner(input, hidden[l], weight[l])
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 112, in forward
hidden = inner(input[i], hidden, *weight)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 30, in LSTMCell
gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 449, in linear
return state(input, weight) if bias is None else state(input, weight, bias)
File "/home/shas/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.py", line 10, in forward
output.addmm_(0, 1, input, weight.t())
RuntimeError: matrices expected, got 3D, 2D tensors at /opt/conda/conda-bld/pytorch_1501972792122/work/pytorch-0.1.12/torch/lib/TH/generic/THTensorMath.c:1232
```
I have tried with both `pytorch=0.1.12` and `pytorch=0.2`. None work! | closed | 2017-10-24T01:57:34Z | 2018-05-10T08:56:17Z | https://github.com/yunjey/pytorch-tutorial/issues/75 | [] | ShasTheMass | 3 |
xonsh/xonsh | data-science | 4,934 | xonfig web fails with AssertionError | Steps to reproduce on macOS 12.5.1:
* brew install xonsh
* xonfig web
The webpage doesn't load for a long time; then an error appears in the console: `Failed to parse color-display ParseError('syntax error: line 1, column 0')`
Details:
<details>
```
(base) maye@Michaels-Mini ~/.config $ xonfig web
Web config started at 'http://localhost:8421'. Hit Crtl+C to stop.
ERROR:root:Failed to format Xonsh code AssertionError("wrong color format 'noinherit'"). 'bw'
Traceback (most recent call last):
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/xonsh_data.py", line 194, in render_colors
display = html_format(token_stream, style=style)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/xonsh_data.py", line 51, in html_format
proxy_style = xonsh_style_proxy(XonshStyle(style))
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/pyghooks.py", line 453, in xonsh_style_proxy
class XonshStyleProxy(Style):
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/pygments/style.py", line 112, in __new__
ndef[4] = colorformat(styledef[3:])
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/pygments/style.py", line 79, in colorformat
assert False, "wrong color format %r" % text
AssertionError: wrong color format 'noinherit'
ERROR:root:Failed to parse color-display ParseError('syntax error: line 1, column 0'). 'import sys\necho "Welcome $USER on" @(sys.platform)\n\ndef func(x=42):\n d = {"xonsh": True}\n return d.get("xonsh") and you\n\n# This is a comment\n![env | uniq | sort | grep PATH]\n'
127.0.0.1 - - [08/Sep/2022 20:52:46] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [08/Sep/2022 20:52:47] "GET /js/bootstrap.min.css HTTP/1.1" 200 -
127.0.0.1 - - [08/Sep/2022 20:52:47] code 404, message File not found
127.0.0.1 - - [08/Sep/2022 20:52:47] "GET /js/xonsh_sticker.svg HTTP/1.1" 404 -
127.0.0.1 - - [08/Sep/2022 20:52:47] code 404, message File not found
127.0.0.1 - - [08/Sep/2022 20:52:47] "GET /favicon.ico HTTP/1.1" 404 -
127.0.0.1 - - [08/Sep/2022 20:52:47] code 404, message File not found
127.0.0.1 - - [08/Sep/2022 20:52:47] "GET /apple-touch-icon-precomposed.png HTTP/1.1" 404 -
127.0.0.1 - - [08/Sep/2022 20:52:47] code 404, message File not found
127.0.0.1 - - [08/Sep/2022 20:52:47] "GET /apple-touch-icon.png HTTP/1.1" 404 -
```
```

</details>

and then the webpage loads visually, but any interaction gets a "Safari can't load server" message, because the process died in the terminal.
A second launch of `xonfig web` throws a huge error:
```
/xonsh/webconfig stable $ xonfig web
Web config started at 'http://localhost:8421'. Hit Crtl+C to stop.
--- Logging error ---
Traceback (most recent call last):
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/xonsh_data.py", line 194, in render_colors
display = html_format(token_stream, style=style)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/xonsh_data.py", line 51, in html_format
proxy_style = xonsh_style_proxy(XonshStyle(style))
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/pyghooks.py", line 453, in xonsh_style_proxy
class XonshStyleProxy(Style):
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/pygments/style.py", line 112, in __new__
ndef[4] = colorformat(styledef[3:])
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/pygments/style.py", line 79, in colorformat
assert False, "wrong color format %r" % text
AssertionError: wrong color format 'noinherit'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/logging/__init__.py", line 1103, in emit
stream.write(msg + self.terminator)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/base_shell.py", line 182, in write
self.mem.write(s)
ValueError: I/O operation on closed file.
Call stack:
File "/opt/homebrew/bin/xonsh", line 8, in <module>
sys.exit(main())
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/main.py", line 468, in main
sys.exit(main_xonsh(args))
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/main.py", line 512, in main_xonsh
shell.shell.cmdloop()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/ptk_shell/shell.py", line 407, in cmdloop
self.default(line, raw_line)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/base_shell.py", line 389, in default
exc_info = run_compiled_code(code, self.ctx, None, "single")
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/codecache.py", line 63, in run_compiled_code
func(code, glb, loc)
File "<stdin>", line 1, in <module>
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/built_ins.py", line 196, in subproc_captured_hiddenobject
return xonsh.procs.specs.run_subproc(cmds, captured="hiddenobject", envs=envs)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/specs.py", line 896, in run_subproc
return _run_specs(specs, cmds)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/specs.py", line 931, in _run_specs
command.end()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 458, in end
self._end(tee_output=tee_output)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 466, in _end
for _ in self.tee_stdout():
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 368, in tee_stdout
for line in self.iterraw():
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 255, in iterraw
proc.wait()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/proxies.py", line 820, in wait
r = self.f(self.args, stdin, stdout, stderr, spec, spec.stack)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/lazyasd.py", line 80, in __call__
return obj(*args, **kwargs)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/cli_utils.py", line 676, in __call__
result = dispatch(
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/cli_utils.py", line 425, in dispatch
return _dispatch_func(func, ns)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/cli_utils.py", line 398, in _dispatch_func
return func(**kwargs)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/xonfig.py", line 703, in _web
main.main(_args[1:])
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 183, in main
serve(ns.browser)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 169, in serve
httpd.serve_forever()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 237, in serve_forever
self._handle_request_noblock()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 316, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 347, in process_request
self.finish_request(request, client_address)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/server.py", line 658, in __init__
super().__init__(*args, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 747, in __init__
self.handle()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/server.py", line 432, in handle
self.handle_one_request()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/server.py", line 420, in handle_one_request
method()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 87, in do_GET
route = self._get_route("get")
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 84, in _get_route
return route_cls(url=url, params=params, xsh=XSH)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/routes.py", line 91, in __init__
self.colors = dict(xonsh_data.render_colors())
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/xonsh_data.py", line 196, in render_colors
logging.error(
Message: 'Failed to format Xonsh code AssertionError("wrong color format \'noinherit\'"). \'bw\''
Arguments: ()
--- Logging error ---
Traceback (most recent call last):
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/routes.py", line 74, in get_display
display = t.etree.fromstring(display)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/xml/etree/ElementTree.py", line 1342, in XML
parser.feed(text)
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/logging/__init__.py", line 1103, in emit
stream.write(msg + self.terminator)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/base_shell.py", line 182, in write
self.mem.write(s)
ValueError: I/O operation on closed file.
Call stack:
File "/opt/homebrew/bin/xonsh", line 8, in <module>
sys.exit(main())
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/main.py", line 468, in main
sys.exit(main_xonsh(args))
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/main.py", line 512, in main_xonsh
shell.shell.cmdloop()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/ptk_shell/shell.py", line 407, in cmdloop
self.default(line, raw_line)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/base_shell.py", line 389, in default
exc_info = run_compiled_code(code, self.ctx, None, "single")
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/codecache.py", line 63, in run_compiled_code
func(code, glb, loc)
File "<stdin>", line 1, in <module>
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/built_ins.py", line 196, in subproc_captured_hiddenobject
return xonsh.procs.specs.run_subproc(cmds, captured="hiddenobject", envs=envs)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/specs.py", line 896, in run_subproc
return _run_specs(specs, cmds)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/specs.py", line 931, in _run_specs
command.end()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 458, in end
self._end(tee_output=tee_output)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 466, in _end
for _ in self.tee_stdout():
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 368, in tee_stdout
for line in self.iterraw():
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/pipelines.py", line 255, in iterraw
proc.wait()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/procs/proxies.py", line 820, in wait
r = self.f(self.args, stdin, stdout, stderr, spec, spec.stack)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/lazyasd.py", line 80, in __call__
return obj(*args, **kwargs)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/cli_utils.py", line 676, in __call__
result = dispatch(
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/cli_utils.py", line 425, in dispatch
return _dispatch_func(func, ns)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/cli_utils.py", line 398, in _dispatch_func
return func(**kwargs)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/xonfig.py", line 703, in _web
main.main(_args[1:])
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 183, in main
serve(ns.browser)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 169, in serve
httpd.serve_forever()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 237, in serve_forever
self._handle_request_noblock()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 316, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 347, in process_request
self.finish_request(request, client_address)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/server.py", line 658, in __init__
super().__init__(*args, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socketserver.py", line 747, in __init__
self.handle()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/server.py", line 432, in handle
self.handle_one_request()
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/server.py", line 420, in handle_one_request
method()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 89, in do_GET
return self.render_get(route)
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/main.py", line 75, in render_get
body = t.to_str(route.get()) # type: ignore
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/tags.py", line 125, in to_str
txt = b"".join(_to_str()).decode()
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/tags.py", line 116, in _to_str
for idx, el in enumerate(elems):
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/routes.py", line 145, in get
cols = list(self.get_cols())
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/routes.py", line 107, in get_cols
self.to_card(name, display),
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/routes.py", line 100, in to_card
self.get_display(display),
File "/opt/homebrew/Cellar/xonsh/0.13.1/libexec/lib/python3.10/site-packages/xonsh/webconfig/routes.py", line 76, in get_display
logging.error(f"Failed to parse color-display {ex!r}. {display!r}")
Message: 'Failed to parse color-display ParseError(\'syntax error: line 1, column 0\'). \'import sys\\necho "Welcome $USER on" @(sys.platform)\\n\\ndef func(x=42):\\n d = {"xonsh": True}\\n return d.get("xonsh") and you\\n\\n# This is a comment\\n![env | uniq | sort | grep PATH]\\n\''
Arguments: ()
127.0.0.1 - - [08/Sep/2022 20:57:36] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [08/Sep/2022 20:57:36] code 404, message File not found
127.0.0.1 - - [08/Sep/2022 20:57:36] "GET /js/xonsh_sticker.svg HTTP/1.1" 404 -
```
Ok, so much for my yearly check-in with xonsh. See you next year! :)
</details> | closed | 2022-09-08T18:59:50Z | 2022-09-15T12:55:24Z | https://github.com/xonsh/xonsh/issues/4934 | [
"mac osx",
"xonfig",
"xonfig-web"
] | michaelaye | 7 |
BeanieODM/beanie | asyncio | 318 | Why Input model need to be init with beanie? | Hi I'm using beanie with fastapi.
this is my model:
```
class UserBase(Document):
username: str | None
parent_id: str | None
role_id: str | None
payment_type: str | None
disabled: bool | None
note: str | None
access_token: str | None
class UserIn(UserBase):
username: str
password: str
disabled: bool = False
class User(UserBase):
username: str
password: str
disabled: bool = False
salt: str
class Settings:
name = "users"
class UserOut(UserBase):
note: str | None
access_token: str | None
```
my routes:
```
USERS = APIRouter()
@USERS.post('/users', response_model=UserOut)
async def post_user(user: UserIn):
    user_to_create = User(**user.dict(), salt=get_salt())
    await user_to_create.save()
    user_to_response = UserOut(**user_to_create.dict())
return user_to_response
```
my beanie init function:
```
async def init():
# Create Motor client
client = motor.motor_asyncio.AsyncIOMotorClient(
f"mongodb://{getenv('MONGO_USER')}:{getenv('MONGO_PASSWORD')}"
f"@{getenv('MONGO_HOST')}/{getenv('MONGO_DATABASE')}?"
f"replicaSet={getenv('MONGO_REPLICA_SET')}"
f"&authSource={getenv('MONGO_DATABASE')}"
)
# Init beanie
await init_beanie(database=client[f"{getenv('MONGO_DATABASE')}"],
document_models=[User])
```
If I don't include UserIn in the init_beanie call and then call /users, I get this error:
```
INFO: 172.16.16.7:55512 - "POST /users HTTP/1.0" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
raise exc
File "/usr/local/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 217, in app
solved_result = await solve_dependencies(
File "/usr/local/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 557, in solve_dependencies
) = await request_body_to_args( # body_params checked above
File "/usr/local/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 692, in request_body_to_args
v_, errors_ = field.validate(value, values, loc=loc)
File "pydantic/fields.py", line 857, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 1074, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 1121, in pydantic.fields.ModelField._apply_validators
File "pydantic/class_validators.py", line 313, in pydantic.class_validators._generic_validator_basic.lambda12
File "pydantic/main.py", line 686, in pydantic.main.BaseModel.validate
File "/usr/local/lib/python3.10/site-packages/beanie/odm/documents.py", line 138, in __init__
self.get_motor_collection()
File "/usr/local/lib/python3.10/site-packages/beanie/odm/interfaces/getters.py", line 13, in get_motor_collection
return cls.get_settings().motor_collection
File "/usr/local/lib/python3.10/site-packages/beanie/odm/documents.py", line 779, in get_settings
raise CollectionWasNotInitialized
beanie.exceptions.CollectionWasNotInitialized
```
Then I put UserIn in the init_beanie call, and everything works.
```
# Init beanie
await init_beanie(database=client[f"{getenv('MONGO_DATABASE')}"],
document_models=[User,UserIn])
```
I just want the UserIn model to validate input data, yet beanie tries to treat it as a db-backed model.
Models not mapped to db should not have anything to do with the db. Right?
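For what it's worth, here is a sketch of a restructuring that avoids the error: the input/output models stay plain pydantic, and only the persisted model inherits Document (field names taken from the issue; `Optional[...]` is used instead of `str | None` only so the sketch also runs on Python < 3.10):

```python
from typing import Optional
from pydantic import BaseModel

class UserBase(BaseModel):
    # Shared optional fields.
    username: Optional[str] = None
    parent_id: Optional[str] = None
    note: Optional[str] = None

class UserIn(UserBase):
    # Request validation stays plain pydantic: beanie never needs to
    # initialize this class, so CollectionWasNotInitialized goes away.
    username: str
    password: str
    disabled: bool = False

# Only the persisted model inherits beanie's Document (shown as a
# comment so the sketch runs without a database):
#
#     class User(Document, UserBase):
#         username: str
#         password: str
#         salt: str
#
#         class Settings:
#             name = "users"

user = UserIn(username="alice", password="secret")
print(user.username, user.disabled)  # alice False
```

With this split, `init_beanie(document_models=[User])` is sufficient, and FastAPI can still use UserIn/UserOut for validation.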
| closed | 2022-07-29T04:42:10Z | 2022-07-30T05:58:07Z | https://github.com/BeanieODM/beanie/issues/318 | [] | nghianv19940 | 3 |
joeyespo/grip | flask | 161 | automatically open in a browser | Easily done with [`webbrowser`](https://docs.python.org/2/library/webbrowser.html), see [`antigravity.py`](https://github.com/python/cpython/blob/master/Lib/antigravity.py) for example usage. ;-)
``` python
import webbrowser
webbrowser.open("http://localhost:6419/")
```
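Since the serve call blocks, a hedged sketch is to schedule the browser on a timer before entering the serve loop (the one-second delay is an assumption, not anything grip guarantees):

```python
import threading
import webbrowser

def open_soon(url, delay=1.0, opener=webbrowser.open):
    """Schedule `opener(url)` on a background thread after `delay` seconds.

    Call it *before* entering the blocking serve loop; by the time the
    timer fires, the server should be listening.  `opener` is injectable
    purely so the helper can be exercised without launching a browser.
    """
    timer = threading.Timer(delay, opener, args=(url,))
    timer.daemon = True  # don't keep the process alive after Ctrl-C
    timer.start()
    return timer
```

Usage would be something like `open_soon("http://localhost:6419/")` immediately before the blocking serve call.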
| closed | 2016-01-21T21:20:51Z | 2016-01-21T22:07:36Z | https://github.com/joeyespo/grip/issues/161 | [
"already-implemented"
] | chadwhitacre | 2 |
TracecatHQ/tracecat | pydantic | 61 | Add PII redacting loggers | Related to #62 | closed | 2024-04-18T01:51:31Z | 2024-07-02T20:46:55Z | https://github.com/TracecatHQ/tracecat/issues/61 | [] | daryllimyt | 0 |
lanpa/tensorboardX | numpy | 450 | add_image() won't work without NVIDIA GPU | **Describe the bug**
When calling add_image() for a CPU FloatTensor, it still fails in case there is no NVIDIA GPU available. The reason for this is an assert in summary.py which will check whether the tensor is an instance of torch.cuda.FloatTensor. This causes the following behaviour:
```
python test.py
Traceback (most recent call last):
File "test.py", line 7, in <module>
writer.add_image("tag", a, 1)
File "/home/ubuntu/.local/lib/python3.6/site-packages/tensorboardX/writer.py", line 277, in add_image
self.file_writer.add_summary(image(tag, img_tensor), global_step)
File "/home/ubuntu/.local/lib/python3.6/site-packages/tensorboardX/summary.py", line 157, in image
assert isinstance(tensor, np.ndarray) or isinstance(tensor, torch.cuda.FloatTensor) or isinstance(tensor, torch.FloatTensor), 'input tensor should be one of numpy.ndarray, torch.cuda.FloatTensor, torch.FloatTensor'
File "/home/ubuntu/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 162, in _lazy_init
_check_driver()
File "/home/ubuntu/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 82, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
```
**Minimal runnable code to reproduce the behavior**
```
import torch
from tensorboardX import SummaryWriter
a = torch.FloatTensor(3, 1, 1)
a.zero_()
writer = SummaryWriter()
writer.add_image("tag", a, 1)
```
**Expected behavior**
add_image() works successfully for CPU FloatTensor even if no NVIDIA GPU is installed.
**Environment**
tensorboard (1.13.1)
tensorboard-pytorch (0.7.1)
tensorflow (1.13.1)
tensorflow-estimator (1.13.0)
torch (1.1.0)
torchvision (0.3.0)
**Additional context**
The issue is fixed if the order of checks in the assert is modified: thanks to the short-circuit behaviour of **or**, torch.cuda.FloatTensor is never checked once the tensor has already been confirmed to be a CPU FloatTensor:
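The effect of the reordering can be demonstrated without torch installed, using a stand-in class whose `isinstance` check raises the way `torch.cuda.FloatTensor` does on driverless machines (the classes below are illustrative stand-ins, not torch API):

```python
class _LazyCudaMeta(type):
    # Mimics torch.cuda.FloatTensor on a machine without an NVIDIA
    # driver: merely *checking* isinstance triggers lazy CUDA
    # initialisation, which raises.
    def __instancecheck__(cls, obj):
        raise AssertionError("Found no NVIDIA driver on your system.")

class FakeCudaFloatTensor(metaclass=_LazyCudaMeta):
    pass

class FakeCpuFloatTensor:
    pass

def original_check(x):
    # cuda type inspected first -> blows up on CPU-only machines
    return isinstance(x, FakeCudaFloatTensor) or isinstance(x, FakeCpuFloatTensor)

def fixed_check(x):
    # CPU type first: `or` short-circuits, the cuda type is never touched
    return isinstance(x, FakeCpuFloatTensor) or isinstance(x, FakeCudaFloatTensor)

t = FakeCpuFloatTensor()
print(fixed_check(t))            # True
try:
    original_check(t)
except AssertionError as e:
    print("original order:", e)
```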
`assert isinstance(tensor, np.ndarray) or isinstance(tensor, torch.FloatTensor) or isinstance(tensor, torch.cuda.FloatTensor), 'input tensor should be one of numpy.ndarray, torch.FloatTensor, torch.cuda.FloatTensor'` | closed | 2019-06-16T16:33:50Z | 2019-10-21T11:57:38Z | https://github.com/lanpa/tensorboardX/issues/450 | [] | blDraX | 1 |
geopandas/geopandas | pandas | 3,338 | BUG: to_postgis broken using sqlalchemy 1.4 | - [X ] I have checked that this issue has not already been reported.
- [X ] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Problem description
I'm trying to inject a geodataframe content into my postgis db using to_postgis method, geopandas 0.14.4 and sqlalchemy 1.4.51.
Whatever I do it fails with "geometry (geometry(POINT,4326)) not a string" error
Minimal example (taken from an old ticket about to_postgis failing with the same "not a string" error):
```python
import geopandas as gpd
from shapely.geometry import Point
from sqlalchemy import create_engine
engine = create_engine('postgresql://XXX@XXX/XXX')
gpd.GeoDataFrame([], geometry=[Point(10,10)], crs='epsg:4326').to_postgis('test_table', engine)
```
I tried changing the con parameter to an engine, a connection, and so on, and using dtype, but I've never been able to make it work.
I also get the infamous warning "UserWarning: pandas only supports SQLAlchemy connectable ... Please consider using SQLAlchemy.", which is odd given that I am already using SQLAlchemy, so I don't know why it's complaining.
After a while I figured out that the same example works fine with SQLAlchemy 2 (but I would like to stay on SQLAlchemy 1.4).
As far as I can tell, SQLAlchemy 1.4 is still supposed to be supported (but then the warning should be changed to **Please consider using SQLAlchemy 2.**).
#### Expected Output
#### Output of ``geopandas.show_versions()``
<details>
geopandas : 0.14.4
numpy : 1.26.4
pandas : 2.2.2
pyproj : 3.6.1
shapely : 2.0.4
fiona : 1.9.6
geoalchemy2: 0.15.1
geopy : None
matplotlib : 3.9.0
mapclassify: None
pygeos : None
pyogrio : None
psycopg2 : 2.9.9 (dt dec pq3 ext lo64)
pyarrow : None
rtree : 1.2.0
</details>
| closed | 2024-06-12T15:45:29Z | 2024-06-13T15:22:29Z | https://github.com/geopandas/geopandas/issues/3338 | [
"bug",
"needs triage"
] | fvallee-bnx | 2 |
plotly/dash-cytoscape | dash | 192 | [BUG] Elements positions don't match specification in preset layout | #### Description
https://github.com/plotly/dash-cytoscape/assets/101562106/7e767f5a-9edb-4cb5-b160-a4324b7ce2d3
- We have a Dash App with two main callbacks and a container where we can see 3 different cytoscape graphs - only one at a time, the difference between the three graphs is the number of nodes (20, 30, 40)
- Callback 1 saves the current position of the nodes and links (it saves the whole `elements` property) in a dcc.Store. That dcc.Store data property is a dict with one item for each cytoscape graph, so positions for each of the three graphs can be saved at the same time (saving positions of graph 2 doesn't overwrite saved positions for graph 1)
```
@app.callback(
Output('store', 'data'),
Input('save1, 'n_clicks'),
State('cyto', 'elements'),
State('number', 'value'),
State('store', 'data'),
)
def savemapstate(clicks,elements, number, store):
if clicks is None:
raise PreventUpdate
else:
store[number] = elements
return store
```
- Callback 2 modifies the `elements` and `layout` properties of a cytoscape graph based on either (1) default value defined as a global variable, if the user clicks 'Reset' (2) saved value
```
@app.callback(
Output('cyto', 'elements'),
Output('cyto', 'layout'),
Input('update', 'n_clicks'),
Input('reset', 'n_clicks'),
State('number', 'value'),
State('store', 'data'),
prevent_initial_call=True
)
def updatemapstate(click1, click2, number, store):
triggered_id = callback_context.triggered[0]['prop_id'].split('.')[0]
if click1 is None and click2 is None:
raise PreventUpdate
else:
if "update" in triggered_id:
elements = store[number]
layout = {
'name': 'preset',
'fit': True,
}
elif "reset" in triggered_id:
elements = initial_data[number] # initial_data is a global variable (dict)
layout = {
'name': 'concentric',
'fit': True,
'minNodeSpacing': 100,
'avoidOverlap': True,
'startAngle': 50,
}
return elements, layout
```
- When a user tries to update the graph using the saved state with the preset layout, some node positions don't match the saved ones (see the video above).
- Sometimes it only happens when the number of nodes is >20; if the panning has been changed (the user has "dragged" the whole graph before saving the positions) the issue is worse (more nodes are displaced) and it can happen with <20 nodes.
- Related issue: https://github.com/plotly/dash-cytoscape/issues/175
- With this app, we can also reproduce this issue: https://github.com/plotly/dash-cytoscape/issues/159
#### Code to Reproduce
Env: Python 3.8.12
requirements.txt:
```
dash-design-kit==1.6.8
dash==2.3.1 # it happens with 2.10.2 too
dash_cytoscape==0.2.0 # it happens with 0.3.0 too
pandas
gunicorn==20.0.4
pandas>=1.1.5
flask==2.2.5
```
Complete app code:
```
from dash import Dash, html, dcc, Input, Output, State, callback_context
from dash.exceptions import PreventUpdate
import dash_cytoscape as cyto
import json
import random
app = Dash(__name__)
data1 = [
{'data': {'id': f'{i}', 'label': f'Node {i}'}, 'position': {'x': 100*random.uniform(0,2), 'y': 100*random.uniform(0,2)}}
for i in range(20)] + [
{'data': {'id':f'link1-{i}','source': '1', 'target': f'{i}'}}
for i in range(2,20)
]
data2 = [
{'data': {'id': f'{i}', 'label': f'Node {i}'}, 'position': {'x': 100*random.uniform(0,2), 'y': 100*random.uniform(0,2)}}
for i in range(30)] + [
{'data': {'id':f'link1-{i}','source': '1', 'target': f'{i}'}}
for i in range(2,30)
]
data3 = [
{'data': {'id': f'{i}', 'label': f'Node {i}'}, 'position': {'x': 100*random.uniform(0,2), 'y': 100*random.uniform(0,2)}}
for i in range(40)] + [
{'data': {'id':f'link1-{i}','source': '1', 'target': f'{i}'}}
for i in range(2,40)
]
initial_data = {'1':data1, '2':data2, '3':data3}
app.layout = html.Div([
html.Div([
dcc.Dropdown(['1','2','3'], '1', id='number'),
html.Button(id='save', children='Save'),
html.Button(id='update', children='Update'),
html.Button(id='reset', children='Reset'),
dcc.Store(id='store', data={'1':[], '2':[], '3':[]})
],
style = {'width':'300px'}),
html.Div(
children=cyto.Cytoscape(
id='cyto',
layout={'name': 'concentric',},
panningEnabled=True,
zoom=0.5,
zoomingEnabled=True,
elements=[],
)
)
])
@app.callback(
Output('store', 'data'),
Input('save', 'n_clicks'),
State('cyto', 'elements'),
State('number', 'value'),
State('store', 'data'),
)
def savemapstate(clicks,elements, number, store):
if clicks is None:
raise PreventUpdate
else:
store[number] = elements
return store
@app.callback(
Output('cyto', 'elements'),
Output('cyto', 'layout'),
Input('update', 'n_clicks'),
Input('reset', 'n_clicks'),
State('number', 'value'),
State('store', 'data'),
prevent_initial_call=True
)
def updatemapstate(click1, click2, number, store):
triggered_id = callback_context.triggered[0]['prop_id'].split('.')[0]
if click1 is None and click2 is None:
raise PreventUpdate
else:
if "update" in triggered_id:
elements = store[number]
layout = {
'name': 'preset',
'fit': True,
}
elif "reset" in triggered_id:
elements = initial_data[number]
layout = {
'name': 'concentric',
'fit': True,
'minNodeSpacing': 100,
'avoidOverlap': True,
'startAngle': 50,
}
return elements, layout
if __name__ == '__main__':
app.run_server(debug=True)
```
#### Workaround
- Returning in the callback a new cytoscape graph with a new id. If we return a cytoscape with the same id, the issue still happens.
- To keep saving the `elements` (=use them as an Input in a callback) we can use pattern-matching callbacks
```
from dash import Dash, html, dcc, Input, Output, State, callback_context, ALL
from dash.exceptions import PreventUpdate
import dash_cytoscape as cyto
import json
import random
app = Dash(__name__)
data1 = [
{'data': {'id': f'{i}', 'label': f'Node {i}'}, 'position': {'x': 100*random.uniform(0,2), 'y': 100*random.uniform(0,2)}}
for i in range(20)] + [
{'data': {'id':f'link1-{i}','source': '1', 'target': f'{i}'}}
for i in range(2,20)
]
data2 = [
{'data': {'id': f'{i}', 'label': f'Node {i}'}, 'position': {'x': 100*random.uniform(0,2), 'y': 100*random.uniform(0,2)}}
for i in range(30)] + [
{'data': {'id':f'link1-{i}','source': '1', 'target': f'{i}'}}
for i in range(2,30)
]
data3 = [
{'data': {'id': f'{i}', 'label': f'Node {i}'}, 'position': {'x': 100*random.uniform(0,2), 'y': 100*random.uniform(0,2)}}
for i in range(40)] + [
{'data': {'id':f'link1-{i}','source': '1', 'target': f'{i}'}}
for i in range(2,40)
]
initial_data = {'1':data1, '2':data2, '3':data3}
app.layout = html.Div([
html.Div([
dcc.Dropdown(['1','2','3'], '1', id='number'),
html.Button(id='save', children='Save'),
html.Button(id='update', children='Update'),
html.Button(id='reset', children='Reset'),
dcc.Store(id='store', data={'1':[], '2':[], '3':[]})
],
style = {'width':'300px'}),
html.Div(
id='cyto-card',
children=[],
),
])
@app.callback(
Output('store', 'data'),
Input('save', 'n_clicks'),
State({'type':'cyto', 'index':ALL}, 'elements'),
State('number', 'value'),
State('store', 'data'),
)
def savemapstate(clicks, elements, number, store):
if clicks is None:
raise PreventUpdate
else:
store[number] = elements[0]
return store
@app.callback(
Output('cyto-card', 'children'),
Input('update', 'n_clicks'),
Input('reset', 'n_clicks'),
State('number', 'value'),
State('store', 'data'),
prevent_initial_call=True
)
def updatemapstate(click1, click2, number, store):
triggered_id = callback_context.triggered[0]['prop_id'].split('.')[0]
if click1 is None and click2 is None:
raise PreventUpdate
else:
if "update" in triggered_id:
elements = store[number]
layout = {
'name': 'preset',
}
elif "reset" in triggered_id:
elements = initial_data[number]
layout = {
'name': 'concentric',
'fit': True,
'minNodeSpacing': 100,
'avoidOverlap': True,
'startAngle': 50,
}
n = sum(filter(None, [click1, click2]))
cyto_return = cyto.Cytoscape(
id={'type':'cyto', 'index':n},
layout=layout,
panningEnabled=True,
zoom=0.5,
zoomingEnabled=True,
elements=elements,
)
return cyto_return
if __name__ == '__main__':
app.run_server(debug=True)
```
| open | 2023-07-21T14:25:15Z | 2023-07-21T14:25:15Z | https://github.com/plotly/dash-cytoscape/issues/192 | [] | celia-lm | 0 |
serengil/deepface | machine-learning | 519 | Mediapipe imported but no output | Hi Mr. Sefik,
I am trying to create embeddings of a dataset using mediapipe and facenet.
The model and detector are built successfully, but there is no progress in the embeddings task (see image below):
<img width="569" alt="Screen Shot 2022-07-22 at 18 02 40" src="https://user-images.githubusercontent.com/46068423/180455738-da2ba1bf-2e4a-4ddb-a6d6-fd9c80935b44.png">
After waiting for a long time, the program remains at 0 progress and doesn't throw any errors to resolve.
What is the issue here?
| closed | 2022-07-22T14:03:56Z | 2022-07-24T11:09:40Z | https://github.com/serengil/deepface/issues/519 | [
"dependencies"
] | MKJ52 | 4 |
ultralytics/ultralytics | machine-learning | 19,063 | Transfer Learning from YOLOv9 to YOLOv11 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello YOLO team,
I have a YOLOv9 segmentation model trained on a large custom dataset, and I’m now looking to train a YOLOv11 keypoint detection model on a smaller dataset.
I want to leverage transfer learning from my YOLOv9 model to YOLOv11 for keypoint detection. Will the following command work for that?
yolo pose train data=custom.yaml model=yolo11n-pose.yaml pretrained=custom_yolo9model.pt ...
Same question for transferring from a keypoint YOLOv8 model to a keypoint YOLOv11 model?
### Additional
_No response_ | open | 2025-02-04T11:37:07Z | 2025-02-04T16:44:22Z | https://github.com/ultralytics/ultralytics/issues/19063 | [
"question",
"pose"
] | VelmCoder | 2 |
jupyter/docker-stacks | jupyter | 1,530 | Latest scipy-notebook:lab-3.1.18 docker image actually contains lab 3.2.0 | The latest docker image for `jupyter/scipy-notebook:lab-3.1.18` does not contain Jupyter Lab 3.1.18, but 3.2.0. This is an issue if one needs to stay on 3.1.x for some reason.
**What docker image you are using?**
`jupyter/scipy-notebook:lab-3.1.18`, specifically `jupyter/scipy-notebook@sha256:4aa1a2cc3126d490c8c6a5505c534623176327ddafdba1efbb4ac52e8dd05e81`
**What complete docker command do you run to launch the container (omitting sensitive values)?**
`docker run --rm -ti jupyter/scipy-notebook:lab-3.1.18 jupyter --version | grep lab`
**What do you expect to happen?**
The Jupyter Lab version should be 3.1.18, as per the docker tag.
**What actually happens?**
The Jupyter Lab version is 3.2.0. | closed | 2021-11-15T12:48:43Z | 2022-07-05T11:31:18Z | https://github.com/jupyter/docker-stacks/issues/1530 | [
"type:Bug"
] | carlosefr | 8 |
alteryx/featuretools | data-science | 2,591 | Add document of how to do basic aggregation on non-default data type [e.g. max(datetime_field) group by] | Hi,
I am using featuretools (version 1.27). I read the docs and also searched here,
but I still struggle to find how to do simple things like SELECT MIN(datetime_field_1) FROM table.
I also checked list_primitives(); the time-related ones don't seem to be what I need.
I can do this for numeric fields, but it seems I can't do it on Datetime fields.
For example, could we add a tutorial on obtaining each team's earliest and latest 'timestamp' values?
```python
import pandas as pd
import featuretools as ft
# Define team stats
team_stats = pd.DataFrame({
"id": [0, 1, 2, 3],
"team_id": [100, 200, 100, 200],
"game_id": [0, 0, 1, 1],
})
# Define player stats
player_stats = pd.DataFrame({
"id": [0, 1, 2, 3, 4, 5, 6, 7],
"team_id": [100, 100, 200, 200, 100, 100, 200, 200],
"game_id": [0, 0, 0, 0, 1, 1, 1, 1],
"player_id": [0, 1, 2, 3, 0, 1, 2, 3],
"goals": [1, 2, 0, 1, 2, 0, 1, 2],
"minutes_played": [30, 45, 10, 50, 10, 40, 25, 45],
"timestamp": pd.date_range("Jan 1, 2019", freq="1h", periods=8),
})
# Create new index columns for use in relationship
team_stats["created_idx"] = team_stats["team_id"].astype("string") + "-" + team_stats["game_id"].astype("string")
player_stats["created_idx"] = player_stats["team_id"].astype("string") + "-" + player_stats["game_id"].astype("string")
# Drop these columns from the players table since we no longer need them in this example.
# This prevents Featuretools from generating features from these numeric columns
player_stats = player_stats.drop(columns=["team_id", "game_id"])
# Create the EntitySet and add dataframes
es = ft.EntitySet()
es.add_dataframe(dataframe=team_stats, dataframe_name="teams", index="created_idx")
es.add_dataframe(dataframe=player_stats, dataframe_name="players", index="id",time_index='timestamp')
# Add the relationship
es.add_relationship("teams", "created_idx", "players", "created_idx")
# Run DFS using the `Sum` aggregation primitive
# Use ignore columns to prevent generation of features from player_id
fm, features = ft.dfs(entityset=es,
target_dataframe_name="teams",
agg_primitives=["sum","min","count"],
ignore_columns={"players": ["player_id"]})
```
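For reference while the tutorial doesn't exist, here is a plain-pandas workaround (not featuretools API) for the MIN/MAX-timestamp-per-group question, using a toy frame shaped like `player_stats` above:

```python
import pandas as pd

# Toy frame shaped like `player_stats` above (two team-game groups).
player_stats = pd.DataFrame({
    "created_idx": ["100-0", "100-0", "200-0", "200-0"],
    "timestamp": pd.to_datetime([
        "2019-01-01 00:00", "2019-01-01 01:00",
        "2019-01-01 02:00", "2019-01-01 03:00",
    ]),
})

# SELECT MIN(timestamp), MAX(timestamp) ... GROUP BY created_idx
agg = player_stats.groupby("created_idx")["timestamp"].agg(["min", "max"])
print(agg)
```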
| closed | 2023-07-26T01:20:21Z | 2023-08-21T14:50:51Z | https://github.com/alteryx/featuretools/issues/2591 | [
"documentation"
] | wanalytics8 | 3 |
pyjanitor-devs/pyjanitor | pandas | 745 | [DOC] Check that all functions have a `return` and `raises` section within docstrings | # Brief Description of Fix
Currently, the docstrings for some functions are lacking a `return` description and (where applicable) a `raises` description.
I would like to propose a change such that the docstrings for all functions within pyjanitor have a valid `return` section and (where applicable) a valid `raises` section.
**Requires a look at all functions** (not just the provided examples below).
# Relevant Context
- [Good example of complete docstring - janitor.complete](https://ericmjl.github.io/pyjanitor/reference/janitor.functions/janitor.complete.html)
- Examples for missing `returns`:
- [Link to documentation page - finance.convert_currency](https://ericmjl.github.io/pyjanitor/reference/finance.html)
- [Link to exact file to be edited - finance.py](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/finance.py)
- [Link to documentation page - functions.join_apply](https://ericmjl.github.io/pyjanitor/reference/janitor.functions/janitor.join_apply.html)
- [Link to exact file to be edited - functions.py](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)
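To make the ask concrete, a docstring with both sections in the Sphinx style used by janitor.complete might look like this (the function and its parameters are hypothetical stand-ins, not pyjanitor API):

```python
def clean_names(strings, case="lower"):
    """Standardise an iterable of column names.

    (Illustrative stand-in only -- the function and parameters are
    hypothetical, not pyjanitor API.)

    :param strings: Iterable of column names to clean.
    :param case: Target case, either ``"lower"`` or ``"upper"``.
    :returns: A list of cleaned column names.
    :raises ValueError: If ``case`` is not a supported value.
    """
    if case not in ("lower", "upper"):
        raise ValueError(f"case must be 'lower' or 'upper', got {case!r}")
    transform = str.lower if case == "lower" else str.upper
    return [transform(s.strip().replace(" ", "_")) for s in strings]

print(clean_names([" My Col ", "Total Goals"]))  # ['my_col', 'total_goals']
```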
| closed | 2020-09-13T01:34:28Z | 2020-10-03T13:15:27Z | https://github.com/pyjanitor-devs/pyjanitor/issues/745 | [
"good first issue",
"docfix",
"being worked on",
"hacktoberfest"
] | loganthomas | 8 |
abhiTronix/vidgear | dash | 59 | How to set framerate with ScreenGear | ## Question
I would like to set the frame rate with which the ScreenGear module acquires the frames from my monitor.
How can I do it?
### Other details:
If I grab a small frame, I get a slowed-down video effect (i.e. I record for 10 seconds but the written video is 1 minute long), since ScreenGear can grab many fps, which are then written at the fixed fps of the WriteGear module (e.g. 25).
On the other hand, if I grab a large frame (e.g. my whole 4K monitor), I get a sped-up video effect (i.e. I record for 30 seconds but the written video is 5 seconds long), since it can only grab a few fps because of the large images.
How can I manage this?
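If the capture backend offers no FPS option, one backend-agnostic workaround is to pace the grab loop yourself so the writer receives, on average, exactly the FPS it was configured for. A minimal sketch (the commented ScreenGear/WriteGear usage names are assumptions, not part of this snippet):

```python
import time

class FramePacer:
    """Sleep just enough between iterations to hit a fixed target FPS."""

    def __init__(self, fps: float):
        self.interval = 1.0 / fps
        self.next_tick = time.monotonic()

    def wait(self) -> None:
        self.next_tick += self.interval
        delay = self.next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        else:
            # The loop body is slower than the target FPS; resynchronize
            # instead of accumulating a growing backlog.
            self.next_tick = time.monotonic()

# Hypothetical usage around the capture/write loop:
# pacer = FramePacer(fps=25)
# while recording:
#     frame = stream.read()   # ScreenGear
#     writer.write(frame)     # WriteGear configured for 25 fps
#     pacer.wait()
```

With small frames this throttles the over-fast grabs down to the target rate; with frames too large to reach the target rate, the only real fix is a lower grab resolution or a lower writer FPS.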
| closed | 2019-10-24T09:23:46Z | 2019-11-13T06:34:55Z | https://github.com/abhiTronix/vidgear/issues/59 | [
"INVALID :stop_sign:",
"QUESTION :question:"
] | FrancescoRossi1987 | 3 |
aeon-toolkit/aeon | scikit-learn | 2,129 | [ENH] k-Spectral Centroid (k-SC) clusterer | ### Describe the feature or idea you want to propose
A popular clusterer in the TSCL literature is the k-Spectral Centroid (k-SC). A link to the paper can be found at: https://dl.acm.org/doi/10.1145/1935826.1935863.
### Describe your proposed solution
It would be a nice addition to aeon. I propose the Lloyd's variant initially, adding the incremental version (also in the paper) at a later date.
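For orientation, the core of k-SC is its scale-invariant distance; below is a minimal pure-Python sketch of that distance only (the paper's additional minimization over time shifts, and the centroid update, are omitted):

```python
import math

def ksc_scale_distance(x, y):
    """Scale-invariant core of the k-SC distance (Yang & Leskovec, 2011):
    d(x, y) = min over alpha of ||x - alpha * y|| / ||x||, where the optimal
    scaling has the closed form alpha = (x . y) / ||y||^2."""
    dot = sum(a * b for a, b in zip(x, y))
    y_norm_sq = sum(b * b for b in y)
    alpha = dot / y_norm_sq if y_norm_sq else 0.0
    residual = math.sqrt(sum((a - alpha * b) ** 2 for a, b in zip(x, y)))
    return residual / math.sqrt(sum(a * a for a in x))

print(ksc_scale_distance([1, 2, 3], [2, 4, 6]))  # 0.0 (same shape, different scale)
```

Two series that differ only by a scale factor have distance zero, which is exactly the invariance that makes k-SC attractive for time-series shapes.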
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | closed | 2024-10-02T11:57:14Z | 2024-10-11T12:20:33Z | https://github.com/aeon-toolkit/aeon/issues/2129 | [
"enhancement",
"clustering",
"implementing algorithms"
] | chrisholder | 0 |
graphql-python/graphene | graphql | 1,398 | Undefined arguments always passed to resolvers | Contrary to what's described [here](https://docs.graphene-python.org/en/latest/types/objecttypes/#graphql-argument-defaults), it looks like all arguments with unset values are passed to the resolvers in graphene-3:
This is the query I am defining:
```
class Query(graphene.ObjectType):
hello = graphene.String(required=True, name=graphene.String())
def resolve_hello(parent, info, **kwargs):
return str(kwargs)
```
which I submit as:
```
{
hello
}
```
The result is:
```
{
"data": {
"hello": "{'name': None}"
}
}
```
The expected return value is:
```
{
"data": {
"hello": "{}"
}
}
```
which is what we're getting with graphene-2.
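As a stopgap on graphene-3 (a sketch inside the resolver itself, not a library-provided option), the unset arguments can be dropped before use:

```python
def resolve_hello(parent, info, **kwargs):
    # Drop arguments that arrived as None so the behavior matches graphene-2.
    # Caveat: this also drops an explicit `name: null` sent by the client,
    # which is indistinguishable from an unset argument here.
    kwargs = {k: v for k, v in kwargs.items() if v is not None}
    return str(kwargs)

print(resolve_hello(None, None, name=None))   # {}
print(resolve_hello(None, None, name="bob"))  # {'name': 'bob'}
```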
My environment:
```
graphene==3.0
graphql-core==3.1.7
graphql-relay==3.1.0
graphql-server==3.0.0b4
```
| closed | 2022-01-05T23:12:11Z | 2022-08-27T18:41:03Z | https://github.com/graphql-python/graphene/issues/1398 | [
"🐛 bug"
] | stabacco | 3 |
apachecn/ailearning | python | 352 | The code doesn't even compile... | In MachineLearning/src/py3.x/5.Logistic/logistic.py:
```
# random.uniform(x, y) randomly generates the next real number within [x, y]; x is the minimum of the range and y is the maximum.
rand_index = int(np.random.uniform(0, len(data_index)))
h = sigmoid(np.sum(data_mat[dataIndex[randIndex]] * weights))
error = class_labels[dataIndex[randIndex]] - h
weights = weights + alpha * error * data_mat[dataIndex[randIndex]]
del(data_index[rand_index])
```
In this snippet, the names dataIndex and randIndex are written incorrectly. | closed | 2018-04-13T06:54:20Z | 2018-04-15T06:25:40Z | https://github.com/apachecn/ailearning/issues/352 | [] | ljp215 | 2 |
blacklanternsecurity/bbot | automation | 1,682 | Integrating with additional scanners | **Description**
Hey there, love this tool, I have some ideas/additions which I would build myself if I only had the time.... :
- The Nuclei tool is run with the default setting of stopping a scan of a target after it is unreachable for 30 requests; if you set this number a little higher (say 100), in my experience that keeps you from aborting scans that do not need to be stopped.
- I saw that wpscan is implemented; in my experience wpscan requires an API key, but you can get the same functionality as premium wpscan with nuclei for free, using the following set of templates on WordPress hosts: https://github.com/topscoder/nuclei-wordfence-cve
- Some internetdb vulnerabilities are verified, as in proven. You could add these as vulnerabilities instead of findings: https://www.shodan.io/search/facet?query=net%3A0%2F0&facet=vuln.verified
- retirejs would be a great addition for JavaScript vulnerabilities
| open | 2024-08-20T14:54:11Z | 2025-02-06T00:14:58Z | https://github.com/blacklanternsecurity/bbot/issues/1682 | [
"enhancement"
] | joostgrunwald | 6 |
jowilf/starlette-admin | sqlalchemy | 146 | Enhancement: not display sensitive data in relation form | Is there any opportunity to not display fields according to persmision level for the users?

For now there is a safety concern: some sensitive information is displayed regardless of who the user is.
Even if I try to restrict the field as much as possible, it still displays the information in the relation field of another view:
```
@dataclass
class PasswordField(StringField):
"""A StringField, except renders an `<input type="password">`."""
input_type: str = "password"
class_: str = "field-password form-control"
exclude_from_list: Optional[bool] = True
exclude_from_detail: Optional[bool] = True
exclude_from_create: Optional[bool] = True
exclude_from_edit: Optional[bool] = True
searchable: Optional[bool] = False
orderable: Optional[bool] = False
```
```
class User(MyModelView):
fields = [
Employee.id,
Employee.login,
PasswordField("password"),
        Employee.role, #some relation
Employee.notes, #another relation
]
```

| closed | 2023-03-29T17:26:10Z | 2023-04-03T15:46:43Z | https://github.com/jowilf/starlette-admin/issues/146 | [
"enhancement"
] | Ilya-Green | 5 |
dask/dask | pandas | 11,416 | Significant slowdown in Numba compiled functions from Dask 2024.8.1 | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
In [sgkit](https://github.com/sgkit-dev/sgkit), we use a lot of Numba compiled functions in Dask Array `map_blocks` calls, and we noticed a significant (approx 4x) slowdown in performance when running the test suite (see https://github.com/sgkit-dev/sgkit/issues/1267).
**Minimal Complete Verifiable Example**:
`dask-slowdown-min.py`:
```python
from numba import guvectorize
import numpy as np
import dask.array as da
@guvectorize(
[
"void(int8[:], int8[:])",
"void(int16[:], int16[:])",
"void(int32[:], int32[:])",
"void(int64[:], int64[:])",
],
"(n)->(n)", nopython=True
)
def inc(x, res):
for i in range(x.shape[0]):
res[i] = x[i] + 1
if __name__ == "__main__":
for i in range(3):
a = da.ones((10000, 1000, 10), chunks=(1000, 1000, 10), dtype=np.int8)
res = da.map_blocks(inc, a, dtype=np.int8).compute()
print(i)
```
With the latest version of Dask:
```shell
pip install 'dask[array]' numba
time python dask-slowdown-min.py
1
2
3
python dask-slowdown-min.py 2.61s user 0.21s system 40% cpu 6.929 total
```
With Dask 2024.8.0:
```shell
pip install -U 'dask[array]==2024.8.0'
time python dask-slowdown-min.py
0
1
2
python dask-slowdown-min.py 0.62s user 0.13s system 99% cpu 0.752 total
```
**Anything else we need to know?**:
I ran a git bisect and it looks like the problem was introduced in 1d771959509d09c34195fa19d9ae8446ae3a8726 (#11320).
I'm not sure what the underlying problem is, but I noticed that the slow version is compiling Numba functions many times compared to the older version:
```shell
# Dask latest version
NUMBA_DEBUG=1 python dask-slowdown-min.py | grep 'DUMP inc' | wc -l
152
# Dask 2024.8.0
NUMBA_DEBUG=1 python dask-slowdown-min.py | grep 'DUMP inc' | wc -l
8
```
**Environment**:
- Dask version: 2024.8.1 and later
- Python version: Python 3.11
- Operating System: macOS
- Install method (conda, pip, source): pip
| closed | 2024-10-07T15:55:42Z | 2024-10-08T10:17:48Z | https://github.com/dask/dask/issues/11416 | [
"needs triage"
] | tomwhite | 2 |
jonaswinkler/paperless-ng | django | 1,110 | [BUG] BadSignature('Signature "..." does not match') on document import | **Describe the bug**
When trying to import PDFs into a fresh installation of Paperless-ng on Archlinux, the procedure gets stuck after uploading and produces below lines in the scheduler log.
**To Reproduce**
Steps to reproduce the behavior:
1. Clean installation of paperless-ng
2. Launch the webserver, consumer and scheduler
3. Drop a PDF onto the web interface
4. See error
**Expected behavior**
Process the imported PDF
**Webserver logs**
```
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 351, in pusher
task = SignedPackage.loads(task[1])
File "/usr/lib/python3.9/site-packages/django_q/signing.py", line 25, in loads
return signing.loads(
File "/usr/lib/python3.9/site-packages/django_q/core_signing.py", line 40, in loads
base64d = force_bytes(TimestampSigner(key, salt=salt).unsign(s, max_age=max_age))
File "/usr/lib/python3.9/site-packages/django_q/core_signing.py", line 75, in unsign
result = super(TimestampSigner, self).unsign(value)
File "/usr/lib/python3.9/site-packages/django_q/core_signing.py", line 60, in unsign
raise BadSignature('Signature "%s" does not match' % sig)
django.core.signing.BadSignature: Signature "FA9yQBJYTC2IPm6pryl5MV0kF4R9t1-SlWVX8W_i5BI" does not match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/logging/__init__.py", line 1083, in emit
msg = self.format(record)
File "/usr/lib/python3.9/logging/__init__.py", line 927, in format
return fmt.format(record)
File "/usr/lib/python3.9/logging/__init__.py", line 663, in format
record.message = record.getMessage()
File "/usr/lib/python3.9/logging/__init__.py", line 367, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "/usr/bin/paperless-manage", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3.9/site-packages/django_q/management/commands/qcluster.py", line 22, in handle
q.start()
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 75, in start
self.sentinel.start()
File "/usr/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/lib/python3.9/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 71, in _launch
code = process_obj._bootstrap(parent_sentinel=child_r)
File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 165, in __init__
self.start()
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 169, in start
self.spawn_cluster()
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 243, in spawn_cluster
self.pusher = self.spawn_pusher()
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 198, in spawn_pusher
return self.spawn_process(pusher, self.task_queue, self.event_out, self.broker)
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 194, in spawn_process
p.start()
File "/usr/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/lib/python3.9/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 71, in _launch
code = process_obj._bootstrap(parent_sentinel=child_r)
File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 353, in pusher
logger.error(e, traceback.format_exc())
Message: BadSignature('Signature "FA9yQBJYTC2IPm6pryl5MV0kF4R9t1-SlWVX8W_i5BI" does not match')
Arguments: ('Traceback (most recent call last):\n File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 351, in pusher\n task = SignedPackage.loads(task[1])\n File "/usr/lib/python3.9/site-packages/django_q/signing.py", line 25, in loads\n return signing.loads(\n File "/usr/lib/python3.9/site-packages/django_q/core_signing.py", line 40, in loads\n base64d = force_bytes(TimestampSigner(key, salt=salt).unsign(s, max_age=max_age))\n File "/usr/lib/python3.9/site-packages/django_q/core_signing.py", line 75, in unsign\n result = super(TimestampSigner, self).unsign(value)\n File "/usr/lib/python3.9/site-packages/django_q/core_signing.py", line 60, in unsign\n raise BadSignature(\'Signature "%s" does not match\' % sig)\ndjango.core.signing.BadSignature: Signature "FA9yQBJYTC2IPm6pryl5MV0kF4R9t1-SlWVX8W_i5BI" does not match\n',)
```
**Relevant information**
- Host OS of the machine running paperless: Archlinux
- Browser: Firefox
- Version 1.4.4
- Installation method: bare metal
- Any configuration changes you made in `paperless.conf`:
```
PAPERLESS_REDIS=redis://localhost:6379
PAPERLESS_CONSUMPTION_DIR=/var/lib/paperless/consume
PAPERLESS_DATA_DIR=/var/lib/paperless/data
PAPERLESS_MEDIA_ROOT=/var/lib/paperless/media
PAPERLESS_SECRET_KEY=46c8d71de1c077e702bdkuzg42guk24
```
| closed | 2021-06-10T09:16:26Z | 2021-06-19T08:15:42Z | https://github.com/jonaswinkler/paperless-ng/issues/1110 | [] | amo13 | 3 |
miguelgrinberg/python-socketio | asyncio | 453 | stopping socketio with Ctrl+C KeyboardInterrupt on sio.wait() | hi, @miguelgrinberg
I have a socketio client below.
This code has a background task func() that records audio input and encodes it as base64 chunks.
I want it to stop and exit when I press `CTRL+C` once.
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import threading
import pyaudio
import base64
import socketio
def main():
RATE = 16000
CHUNK = int(RATE / 10)
sio = socketio.Client()
sio.connect('http://localhost:10001')
@sio.on('asr')
def asr(data):
print('asr:%s' % data)
audio = pyaudio.PyAudio()
options = {
"format":pyaudio.paInt16,
"channels":1,
"rate":RATE,
"input":True,
"frames_per_buffer":CHUNK,
}
stream = audio.open(**options)
def func():
n = 0
data = {
"lang":"ko-KR",
"rate":RATE,
"intr":True,
}
sio.emit('asr', data)
while True:
chunk = stream.read(CHUNK)
print("chunk[%d]@%d" % (len(chunk), n + 1))
chunk = base64.b64encode(bytes(chunk)).decode()
sio.emit('asr', chunk)
n += 1
#threading.Thread(target=func).start()
sio.start_background_task(func)
try:
print("waiting..")
sio.wait()
except KeyboardInterrupt as error:
#sio.emit('asr', {"command":"disconnect"})
sio.disconnect()
print("sio.connected:%s" % sio.connected)
stream.stop_stream()
stream.close()
print("end")
if __name__ == '__main__':
llv = logging.DEBUG
logging.getLogger('requests').setLevel(llv)
logging.getLogger('websocket').setLevel(llv)
logging.getLogger('socketIO-client-2').setLevel(llv)
logging.getLogger('urllib3.connectionpool').setLevel(llv)
logfmt = "%(name)-8s [%(levelname)-5s] %(message)-120s (%(filename)s:%(lineno)d)"
#logging.basicConfig(level=logging.DEBUG, format=logfmt)
main()
```
But this app does not exit when I press `CTRL+C` once.
I have to press `CTRL+C` once more.
And I finally see `original_signal_handler()`.
It seems that socketio thread (or worker) is still working after connection disconnected and closed.
```
waiting..
chunk[3200]@1
chunk[3200]@2
chunk[3200]@3
...
chunk[3200]@23
chunk[3200]@24
asr:{'idx': 1, 'transcript': '안녕하세요', 'is_final': False}
chunk[3200]@25
chunk[3200]@26
chunk[3200]@27
...
chunk[3200]@35
chunk[3200]@36
asr:{'idx': 2, 'transcript': '안녕하세요', 'is_final': True}
chunk[3200]@37
chunk[3200]@38
...
chunk[3200]@44
chunk[3200]@45
^Csio.connected:False
end
^CException ignored in: <module 'threading' from '/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py'>
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 1307, in _shutdown
lock.acquire()
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 25, in signal_handler
return original_signal_handler(sig, frame)
File "/usr/local/lib/python3.7/site-packages/engineio/client.py", line 41, in signal_handler
return original_signal_handler(sig, frame)
KeyboardInterrupt
Interrupt: Press ENTER or type command to continue
```
Please help me, how can I fix this problem and complete the feature?
always thank you.
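One pattern that usually helps here (a generic sketch, independent of the socketio API) is to make the background loop cooperative: have it poll a stop event instead of looping forever, set that event once the wait is interrupted, and only then tear down the connection and the audio stream:

```python
import threading
import time

stop_event = threading.Event()

def worker():
    # Stand-in for the audio loop: exits promptly once asked to stop.
    while not stop_event.is_set():
        time.sleep(0.01)   # real work: chunk = stream.read(CHUNK); sio.emit(...)

t = threading.Thread(target=worker)
t.start()
try:
    time.sleep(0.05)       # stand-in for sio.wait(); Ctrl+C would land here
finally:
    stop_event.set()       # ask the loop to finish its current iteration
    t.join(timeout=2)
    print("worker alive:", t.is_alive())  # worker alive: False
    # ...then sio.disconnect(), stream.stop_stream(), stream.close(), etc.
```

Applied to the script above, `func()` would check the event in its `while` condition, and the `except KeyboardInterrupt` branch would set the event and join the task before calling `sio.disconnect()`.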
| closed | 2020-03-30T14:23:26Z | 2020-06-30T22:47:52Z | https://github.com/miguelgrinberg/python-socketio/issues/453 | [
"question"
] | BRIDGE-AI | 1 |
explosion/spaCy | data-science | 13,405 | Code example discrepancy for `Span.lemma_` in API docs | Hello spaCy team,
I found a small discrepancy in the documentation.
---
<!-- Describe the problem or suggestion here. If you've found a mistake and you know the answer, feel free to submit a pull request straight away: https://github.com/explosion/spaCy/pulls -->
The attribute `lemma_` for a `Span` is described as follows in the API docs:
> The span's lemma. Equivalent to `"".join(token.text_with_ws for token in span)`.
### Suggested Change
The equivalent code example should not contain `token.text_with_ws` in the comprehension, but `token.lemma_ + token.whitespace_`:
```diff
- | `lemma_` | The span's lemma. Equivalent to `"".join(token.text_with_ws for token in span)`. ~~str~~ |
+ | `lemma_` | The span's lemma. Equivalent to `"".join(token.lemma_ + token.whitespace_ for token in span).strip()`. ~~str~~ |
````
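The difference is easy to see with stand-in objects (a sketch; these are plain dataclasses, not real spaCy `Token`s):

```python
from dataclasses import dataclass

@dataclass
class FakeToken:
    text: str
    lemma_: str
    whitespace_: str

    @property
    def text_with_ws(self) -> str:
        return self.text + self.whitespace_

span = [FakeToken("running", "run", " "), FakeToken("dogs", "dog", "")]

as_documented = "".join(t.text_with_ws for t in span)                   # surface text
as_suggested = "".join(t.lemma_ + t.whitespace_ for t in span).strip()  # lemmas
print(as_documented)  # running dogs
print(as_suggested)   # run dog
```

The expression currently in the docs reproduces the surface text, not the lemma.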
## Which page or section is this issue related to?
<!-- Please include the URL and/or source. -->
* Docs: [API Docs / `Span`](https://spacy.io/api/span#attributes)
* Code: [`website/docs/api/span.mdx` / Line 564](https://github.com/explosion/spaCy/blob/5f8a398/website/docs/api/span.mdx?plain=1#L564) | open | 2024-04-01T09:22:56Z | 2024-04-12T14:21:41Z | https://github.com/explosion/spaCy/issues/13405 | [
"docs"
] | schorfma | 1 |
InstaPy/InstaPy | automation | 5,817 | HELP 'account' might be temporarily blocked from following | Hi there,
I'm running InstaPy on my Linux server, and on average I'm getting a temporary block or ban once a day saying "'account' might be temporarily blocked from following". All I can do is request a new password via email. Once I change the password, my Instagram account works again. Has anybody had similar problems? Thanks | closed | 2020-10-08T17:01:47Z | 2020-12-13T01:49:19Z | https://github.com/InstaPy/InstaPy/issues/5817 | [
"wontfix"
] | PepeRender | 8 |
xuebinqin/U-2-Net | computer-vision | 23 | RuntimeWarning: invalid value encountered in true_divide | Your work is so great, thank you for sharing your code!
I tried to run inference on some images using your model and your code.
Almost everything is good, but with some images I receive a warning:
data_loader.py:197: RuntimeWarning: invalid value encountered in true_divide
image = image/np.max(image)
Like this image: https://drive.google.com/file/d/1iFTb29lu3cWQzrMMdMB3y03Fcoqd7Gkg/view?usp=sharing
I do not know why this happens. I mean, what happens in the data_loader.py file to produce this warning? Could the warning affect the quality of the result?
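That warning typically fires when `np.max(image)` is 0 (e.g. a fully black image after loading), so `image/np.max(image)` computes 0/0 for every element and fills the array with NaN, which can indeed affect the downstream result. A small sketch of the mechanism (assuming that is the trigger for your image):

```python
import warnings
import numpy as np

image = np.zeros((2, 2))  # stands in for a fully black input image

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    out = image / np.max(image)  # np.max(image) == 0, so every element is 0/0

print(caught[0].category.__name__)  # RuntimeWarning
print(np.isnan(out).all())          # True
```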
| closed | 2020-05-19T11:25:43Z | 2020-06-09T13:27:34Z | https://github.com/xuebinqin/U-2-Net/issues/23 | [] | LeCongThuong | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,404 | Bypass CloudFlare new version | To bypass Cloudflare you need Chrome version 115; it seems Cloudflare does not like versions prior to 115, but with 115 the captcha is bypassed (changing the user agent won't work) | closed | 2023-07-20T14:35:32Z | 2023-07-31T14:31:04Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1404 | [] | devCanario | 32 |
SciTools/cartopy | matplotlib | 1,553 | Zoom in GUI on a global map | ### Description
<!-- Please provide a general introduction to the issue/proposal. -->
Zooming does not seem to work in the matplotlib GUI when the map is global.
However, it works when `set_extent` has been applied beforehand.
<!--
If you are reporting a bug, attach the *entire* traceback from Python.
If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc.
If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy
questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy
-->
#### Code to reproduce
````python
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_feature(cfeature.LAND.with_scale('110m'))
plt.show()
````
#### Traceback
```
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
### Cartopy version
Latest from master
### conda list
```
```
### pip list
```
```
</details>
| open | 2020-05-08T13:12:48Z | 2020-05-11T14:36:43Z | https://github.com/SciTools/cartopy/issues/1553 | [
"Type: Bug"
] | stefraynaud | 5 |
flasgger/flasgger | flask | 528 | While testing by 'make test', ImportError: cannot import name 'safe_str_cmp' from 'werkzeug.security' | [ImportError: cannot import name 'safe_str_cmp' from 'werkzeug.security'](https://stackoverflow.com/questions/71652965/importerror-cannot-import-name-safe-str-cmp-from-werkzeug-security)
The issue above can be resolved by the linked suggestion, but that only leads to another error, shown below:
`ERROR tests/test_examples.py - json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)`
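For context on the first error: Werkzeug 2.1 removed `safe_str_cmp`, and its changelog points to the standard library's `hmac.compare_digest` as the replacement. A sketch of an equivalent helper (shim name chosen to match the old import):

```python
from hmac import compare_digest

# Stand-in for werkzeug.security.safe_str_cmp on Werkzeug >= 2.1:
def safe_str_cmp(a, b) -> bool:
    # compare_digest needs both arguments as the same type; encode str inputs.
    if isinstance(a, str):
        a = a.encode("utf-8")
    if isinstance(b, str):
        b = b.encode("utf-8")
    return compare_digest(a, b)

print(safe_str_cmp("secret", "secret"))  # True
print(safe_str_cmp("secret", "Secret"))  # False
```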
Could this sequence of errors be identified and addressed? | open | 2022-04-18T07:53:33Z | 2022-05-14T07:42:19Z | https://github.com/flasgger/flasgger/issues/528 | [] | ikjun-jang | 2 |
microsoft/nni | tensorflow | 5,526 | Enas Tutorial is not up-to-date | I cannot run https://github.com/microsoft/nni/blob/v2.10/examples/nas/oneshot/enas/search.py
It gives me an error on this line, `from nni.nas.pytorch import mutables`, indicating there is no mutables in nni.nas.pytorch. When I change it, it gives me further errors because `MutableScope` (see https://github.com/microsoft/nni/blob/c31d2574cb418acfb80c17bb2bd03531556325bd/examples/nas/oneshot/enas/micro.py#L62) has been deleted since v2.9.
**Environment**:
- NNI version: v2.10
- Training service (local):
- Client OS: Ubuntu20.04
- Python version: 3.10
- PyTorch version: 1.10+cu113
- Is conda/virtualenv/venv used?: No
- Is running in Docker?: Yes
**Configuration**:
- Search space: ENAS | closed | 2023-04-21T19:04:06Z | 2023-05-09T08:23:28Z | https://github.com/microsoft/nni/issues/5526 | [] | dzk9528 | 14 |
babysor/MockingBird | pytorch | 232 | Questions about training on dialects | I would like to ask how to create a dialect dataset and properly train a model from it. The goal is to train **Shanghainese** and other Wu-family dialects | closed | 2021-11-24T06:16:07Z | 2022-03-07T15:28:37Z | https://github.com/babysor/MockingBird/issues/232 | [] | ycMia | 8 |
waditu/tushare | pandas | 1,527 | Data collection: the method resu0 = list(df.ix[c_len - 1 - j]) in Init_StockAll_Sp.py fails | Official documentation link: https://waditu.com/document/1?doc_id=63
In the Init_StockAll_Sp.py file provided by the data-collection module, the statement resu0 = list(df.ix[c_len - 1 - j]) fails, reporting that df has no ix attribute. In practice, changing it to resu0 = list(df.T[c_len - 1 - j]) lets it run through. Please help verify, thanks!
Personal ID: 431870
| open | 2021-03-24T02:19:52Z | 2021-03-24T02:20:19Z | https://github.com/waditu/tushare/issues/1527 | [] | zhizu2030 | 0 |
mars-project/mars | numpy | 2,867 | [BUG] ref count error | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
```
2022-03-25 12:31:18,361 ERROR processor.py:74 -- Unexpected error happens in <function TaskProcessor.decref_stage at 0x7fcfc7ebd0e0>
Traceback (most recent call last):
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 71, in inner
return await func(processor, *args, **kwargs)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 348, in decref_stage
await self._lifecycle_api.decref_chunks(decref_chunk_keys)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/services/lifecycle/api/oscar.py", line 134, in decref_chunks
return await self._lifecycle_tracker_ref.decref_chunks(chunk_keys)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 186, in send
return self._process_result_message(result)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 70, in _process_result_message
raise message.as_instanceof_cause()
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py", line 590, in send
result = await self._run_coro(message.message_id, coro)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py", line 343, in _run_coro
return await coro
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/oscar/api.py", line 115, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 371, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 365, in mars.oscar.core._BaseActor.__on_receive__
return await self._handle_actor_result(result)
File "mars/oscar/core.pyx", line 250, in _handle_actor_result
task_result = await coros[0]
File "mars/oscar/core.pyx", line 293, in mars.oscar.core._BaseActor._run_actor_async_generator
async with self._lock:
File "mars/oscar/core.pyx", line 294, in mars.oscar.core._BaseActor._run_actor_async_generator
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 297, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await gen.asend(res)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/services/lifecycle/supervisor/tracker.py", line 89, in decref_chunks
to_remove_chunk_keys = self._get_remove_chunk_keys(chunk_keys)
File "/home/admin/ray-pack/tmp/job/b8050080/pyenv/lib/python3.7/site-packages/mars/services/lifecycle/supervisor/tracker.py", line 81, in _get_remove_chunk_keys
assert ref_count >= 0
mars.oscar.backends.message.ErrorMessage.as_instanceof_cause.<locals>._MarsError: [address=ray://mars_cluster_1648179994/0/0, pid=100894]
2022-03-25 12:31:18,365 INFO processor.py:839 -- Processor OgL9bjFjOwr7JIJVrelZS1gA finished.
```
6. Minimized code to reproduce the error.
| open | 2022-03-25T05:59:53Z | 2022-03-25T05:59:53Z | https://github.com/mars-project/mars/issues/2867 | [] | chaokunyang | 0 |
apachecn/ailearning | python | 504 | Bayes tutorial: change at line 57 fixing a broken markdown image reference | In the Bayes document, the author left out a "]" in an image reference, causing the image reference to fail. I have submitted the fix in a new pull request. | closed | 2019-05-03T04:55:11Z | 2019-05-06T14:13:01Z | https://github.com/apachecn/ailearning/issues/504 | [] | Logenleedev | 0 |
sgl-project/sglang | pytorch | 4,236 | [Bug] SGLang QwQ Tool use with LibreChat agent fails | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 5. Please use English, otherwise it will be closed.
### Describe the bug
I'm running LibreChat in Docker and connecting to an SGLang server serving QwQ, launched as shown below. When I create an assistant message via the LibreChat agent, it fails during the tool call with the following:
```
pydantic_core._pydantic_core.ValidationError: 5 validation errors for ChatCompletionRequest
messages.1.ChatCompletionMessageGenericParam.content.str
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
messages.1.ChatCompletionMessageGenericParam.content.list[ChatCompletionMessageContentTextPart]
Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/list_type
messages.1.ChatCompletionMessageUserParam.role
Input should be 'user' [type=literal_error, input_value='assistant', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/literal_error
messages.1.ChatCompletionMessageUserParam.content.str
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
messages.1.ChatCompletionMessageUserParam.content.list[union[ChatCompletionMessageContentTextPart,ChatCompletionMessageContentImagePart]]
Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/list_type
```
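From the validation errors, the failing request apparently contains an assistant message whose `content` is null (typical of OpenAI-style tool-call turns). As a client-side experiment only (not a fix for either project), normalizing such messages before sending sidesteps the pydantic check. A sketch:

```python
def normalize_messages(messages):
    """Replace null content with an empty string so schemas that require
    content to be a string or a list (as in the error above) still validate."""
    fixed = []
    for msg in messages:
        msg = dict(msg)  # shallow copy; leave the caller's list untouched
        if msg.get("content") is None:
            msg["content"] = ""
        fixed.append(msg)
    return fixed

messages = [
    {"role": "user", "content": "What's the weather?"},
    {"role": "assistant", "content": None,  # tool-call turn with null content
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "get_weather", "arguments": "{}"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "sunny"},
]
print(normalize_messages(messages)[1]["content"] == "")  # True
```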
I'm not sure where to begin, or whether this is a LibreChat or SGLang issue.
```bash
$ python -m sglang.launch_server --model-path Qwen/QwQ-32B-AWQ --tp 2 --enable-p2p-check --host 0.0.0.0 --reasoning-parser deepseek-r1 --tool-call-parser qwen25 --port 30000
2025-03-09 11:41:25.067056: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1741534885.082405 79082 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1741534885.086827 79082 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-03-09 11:41:25.102197: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/home/john/Downloads/exit/lib/python3.11/site-packages/pandas/core/arrays/masked.py:60: UserWarning: Pandas requires version '1.3.6' or newer of 'bottleneck' (version '1.3.5' currently installed).
from pandas.core import (
WARNING 03-09 11:41:27 cuda.py:32] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
[2025-03-09 11:41:29] server_args=ServerArgs(model_path='Qwen/QwQ-32B-AWQ', tokenizer_path='Qwen/QwQ-32B-AWQ', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=None, device='cuda', served_model_name='Qwen/QwQ-32B-AWQ', chat_template=None, is_embedding=False, revision=None, host='0.0.0.0', port=30000, mem_fraction_static=0.87, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=2048, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, tp_size=2, stream_interval=1, stream_output=False, random_seed=855533300, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=False, decode_log_interval=40, api_key=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser='deepseek-r1', dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend='flashinfer', sampling_backend='flashinfer', grammar_backend='outlines', speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=5, speculative_eagle_topk=4, speculative_num_draft_tokens=8, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_cuda_graph=False, disable_cuda_graph_padding=False, enable_nccl_nvls=False, disable_outlines_disk_cache=False, 
disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=8, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=True, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser='qwen25', enable_hierarchical_cache=False, enable_flashinfer_mla=False, flashinfer_mla_disable_ragged=False, warmups=None, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False)
INFO 03-09 11:41:29 awq_marlin.py:109] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1741534892.349074 79125 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1741534892.353386 79125 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1741534892.399218 79124 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1741534892.399216 79123 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1741534892.403563 79123 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
E0000 00:00:1741534892.403590 79124 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
/home/john/Downloads/exit/lib/python3.11/site-packages/pandas/core/arrays/masked.py:60: UserWarning: Pandas requires version '1.3.6' or newer of 'bottleneck' (version '1.3.5' currently installed).
from pandas.core import (
/home/john/Downloads/exit/lib/python3.11/site-packages/pandas/core/arrays/masked.py:60: UserWarning: Pandas requires version '1.3.6' or newer of 'bottleneck' (version '1.3.5' currently installed).
from pandas.core import (
/home/john/Downloads/exit/lib/python3.11/site-packages/pandas/core/arrays/masked.py:60: UserWarning: Pandas requires version '1.3.6' or newer of 'bottleneck' (version '1.3.5' currently installed).
from pandas.core import (
WARNING 03-09 11:41:34 cuda.py:32] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
WARNING 03-09 11:41:34 cuda.py:32] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
WARNING 03-09 11:41:34 cuda.py:32] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
INFO 03-09 11:41:36 awq_marlin.py:109] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
INFO 03-09 11:41:36 awq_marlin.py:109] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
INFO 03-09 11:41:37 awq_marlin.py:109] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
[2025-03-09 11:41:37 TP0] Init torch distributed begin.
INFO 03-09 11:41:37 awq_marlin.py:109] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
[2025-03-09 11:41:37 TP1] Init torch distributed begin.
[2025-03-09 11:41:37 TP1] sglang is using nccl==2.21.5
[2025-03-09 11:41:37 TP0] sglang is using nccl==2.21.5
[2025-03-09 11:41:37 TP0] reading GPU P2P access cache from /home/john/.cache/sglang/gpu_p2p_access_cache_for_0,1.json
[2025-03-09 11:41:37 TP1] reading GPU P2P access cache from /home/john/.cache/sglang/gpu_p2p_access_cache_for_0,1.json
[2025-03-09 11:41:37 TP0] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
[2025-03-09 11:41:37 TP1] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
[2025-03-09 11:41:37 TP1] Init torch distributed ends. mem usage=0.15 GB
[2025-03-09 11:41:37 TP0] Init torch distributed ends. mem usage=0.15 GB
[2025-03-09 11:41:37 TP1] Load weight begin. avail mem=22.19 GB
[2025-03-09 11:41:37 TP0] Load weight begin. avail mem=23.16 GB
[2025-03-09 11:41:37 TP0] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-09 11:41:37 TP1] The following error message 'operation scheduled before its operands' can be ignored.
quant_method None
quant_method None
quant_method None
quant_method None
[2025-03-09 11:41:38 TP0] Using model weights format ['*.safetensors']
[2025-03-09 11:41:38 TP1] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 20% Completed | 1/5 [00:00<00:03, 1.14it/s]
Loading safetensors checkpoint shards: 40% Completed | 2/5 [00:01<00:02, 1.44it/s]
Loading safetensors checkpoint shards: 60% Completed | 3/5 [00:02<00:01, 1.08it/s]
Loading safetensors checkpoint shards: 80% Completed | 4/5 [00:03<00:01, 1.03s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:04<00:00, 1.10it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:04<00:00, 1.11it/s]
[2025-03-09 11:41:44 TP0] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=13.87 GB, mem usage=9.29 GB.
[2025-03-09 11:41:46 TP1] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=12.89 GB, mem usage=9.29 GB.
[2025-03-09 11:41:46 TP0] KV Cache is allocated. #tokens: 81982, K size: 5.00 GB, V size: 5.00 GB
[2025-03-09 11:41:46 TP1] KV Cache is allocated. #tokens: 81982, K size: 5.00 GB, V size: 5.00 GB
[2025-03-09 11:41:46 TP0] Memory pool end. avail mem=2.62 GB
[2025-03-09 11:41:46 TP1] Memory pool end. avail mem=1.64 GB
[2025-03-09 11:41:46 TP1] Capture cuda graph begin. This can take up to several minutes. avail mem=1.06 GB
[2025-03-09 11:41:46 TP0] Capture cuda graph begin. This can take up to several minutes. avail mem=2.04 GB
100%|████████████████████████████████████████████████████████████████████████████████| 4/4 [00:02<00:00, 1.40it/s]
[2025-03-09 11:41:49 TP0] Capture cuda graph end. Time elapsed: 2.86 s. avail mem=1.87 GB. mem usage=0.17 GB.
[2025-03-09 11:41:49 TP1] Capture cuda graph end. Time elapsed: 2.86 s. avail mem=0.89 GB. mem usage=0.17 GB.
[2025-03-09 11:41:49 TP0] max_total_num_tokens=81982, chunked_prefill_size=2048, max_prefill_tokens=16384, max_running_requests=2049, context_len=131072
[2025-03-09 11:41:49 TP1] max_total_num_tokens=81982, chunked_prefill_size=2048, max_prefill_tokens=16384, max_running_requests=2049, context_len=131072
[2025-03-09 11:41:49] INFO: Started server process [79082]
[2025-03-09 11:41:49] INFO: Waiting for application startup.
[2025-03-09 11:41:49] INFO: Application startup complete.
[2025-03-09 11:41:49] INFO: Uvicorn running on http://0.0.0.0:30000 (Press CTRL+C to quit)
[2025-03-09 11:41:50] INFO: 127.0.0.1:38388 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-03-09 11:41:50 TP0] Prefill batch. #new-seq: 1, #new-token: 6, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-09 11:41:55] INFO: 127.0.0.1:38400 - "POST /generate HTTP/1.1" 200 OK
[2025-03-09 11:41:55] The server is fired up and ready to roll!
[2025-03-09 11:42:19] INFO: 172.18.0.6:57454 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[2025-03-09 11:42:19 TP0] Prefill batch. #new-seq: 1, #new-token: 2048, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-09 11:42:19 TP0] Prefill batch. #new-seq: 1, #new-token: 214, #cached-token: 0, token usage: 0.02, #running-req: 0, #queue-req: 0,
[2025-03-09 11:42:23 TP0] Decode batch. #running-req: 1, #token: 2295, token usage: 0.03, gen throughput (token/s): 1.19, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:24 TP0] Decode batch. #running-req: 1, #token: 2335, token usage: 0.03, gen throughput (token/s): 46.58, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:25 TP0] Decode batch. #running-req: 1, #token: 2375, token usage: 0.03, gen throughput (token/s): 46.58, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:26 TP0] Decode batch. #running-req: 1, #token: 2415, token usage: 0.03, gen throughput (token/s): 46.53, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:26 TP0] Decode batch. #running-req: 1, #token: 2455, token usage: 0.03, gen throughput (token/s): 46.58, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:27 TP0] Decode batch. #running-req: 1, #token: 2495, token usage: 0.03, gen throughput (token/s): 46.57, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:28 TP0] Decode batch. #running-req: 1, #token: 2535, token usage: 0.03, gen throughput (token/s): 46.55, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:29 TP0] Decode batch. #running-req: 1, #token: 2575, token usage: 0.03, gen throughput (token/s): 46.53, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:30 TP0] Decode batch. #running-req: 1, #token: 2615, token usage: 0.03, gen throughput (token/s): 46.52, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:31 TP0] Decode batch. #running-req: 1, #token: 2655, token usage: 0.03, gen throughput (token/s): 46.53, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:32 TP0] Decode batch. #running-req: 1, #token: 2695, token usage: 0.03, gen throughput (token/s): 46.48, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:32 TP0] Decode batch. #running-req: 1, #token: 2735, token usage: 0.03, gen throughput (token/s): 46.38, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:33 TP0] Decode batch. #running-req: 1, #token: 2775, token usage: 0.03, gen throughput (token/s): 46.40, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:34 TP0] Decode batch. #running-req: 1, #token: 2815, token usage: 0.03, gen throughput (token/s): 46.34, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:35 TP0] Decode batch. #running-req: 1, #token: 2855, token usage: 0.03, gen throughput (token/s): 46.34, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:36 TP0] Decode batch. #running-req: 1, #token: 2895, token usage: 0.04, gen throughput (token/s): 46.35, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:37 TP0] Decode batch. #running-req: 1, #token: 2935, token usage: 0.04, gen throughput (token/s): 46.22, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:38 TP0] Decode batch. #running-req: 1, #token: 2975, token usage: 0.04, gen throughput (token/s): 46.16, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:38 TP0] Decode batch. #running-req: 1, #token: 3015, token usage: 0.04, gen throughput (token/s): 46.13, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:39 TP0] Decode batch. #running-req: 1, #token: 3055, token usage: 0.04, gen throughput (token/s): 46.19, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:40 TP0] Decode batch. #running-req: 1, #token: 3095, token usage: 0.04, gen throughput (token/s): 45.54, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:41 TP0] Decode batch. #running-req: 1, #token: 3135, token usage: 0.04, gen throughput (token/s): 44.70, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:42 TP0] Decode batch. #running-req: 1, #token: 3175, token usage: 0.04, gen throughput (token/s): 45.84, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:43 TP0] Decode batch. #running-req: 1, #token: 3215, token usage: 0.04, gen throughput (token/s): 45.92, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:44 TP0] Decode batch. #running-req: 1, #token: 3255, token usage: 0.04, gen throughput (token/s): 45.97, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:45 TP0] Decode batch. #running-req: 1, #token: 3295, token usage: 0.04, gen throughput (token/s): 45.93, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:45 TP0] Decode batch. #running-req: 1, #token: 3335, token usage: 0.04, gen throughput (token/s): 45.88, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:46 TP0] Decode batch. #running-req: 1, #token: 3375, token usage: 0.04, gen throughput (token/s): 45.68, largest-len: 0, #queue-req: 0,
[2025-03-09 11:42:46] INFO: 172.18.0.6:57454 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
[2025-03-09 11:42:46] ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/john/Downloads/exit/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/john/Downloads/exit/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/sglang/srt/entrypoints/http_server.py", line 495, in openai_v1_chat_completions
return await v1_chat_completions(_global_state.tokenizer_manager, raw_request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/sglang/srt/openai_api/adapter.py", line 1231, in v1_chat_completions
all_requests = [ChatCompletionRequest(**request_json)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/john/Downloads/exit/lib/python3.11/site-packages/pydantic/main.py", line 214, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 5 validation errors for ChatCompletionRequest
messages.1.ChatCompletionMessageGenericParam.content.str
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
messages.1.ChatCompletionMessageGenericParam.content.list[ChatCompletionMessageContentTextPart]
Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/list_type
messages.1.ChatCompletionMessageUserParam.role
Input should be 'user' [type=literal_error, input_value='assistant', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/literal_error
messages.1.ChatCompletionMessageUserParam.content.str
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
messages.1.ChatCompletionMessageUserParam.content.list[union[ChatCompletionMessageContentTextPart,ChatCompletionMessageContentImagePart]]
Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/list_type
```
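The validation errors all point at `messages.1`: an assistant turn whose `content` is `null`, which OpenAI-style clients commonly send for tool-call turns but which sglang's `ChatCompletionRequest` schema rejects. Until one side changes, a client-side shim that coerces `null` content to an empty string before the request reaches `/v1/chat/completions` appears to avoid the 500. A minimal sketch (the function name and payload are illustrative, not code from either project):

```python
# Illustrative workaround sketch: replace content=None (typical of assistant
# tool-call turns) with "" so the request passes sglang's pydantic schema.
# Not code from LibreChat or sglang; the names here are made up.

def sanitize_messages(messages):
    fixed = []
    for msg in messages:
        msg = dict(msg)  # shallow copy so the caller's list is untouched
        if msg.get("content") is None:
            msg["content"] = ""
        fixed.append(msg)
    return fixed

payload = {
    "messages": [
        {"role": "user", "content": "What's the weather?"},
        {"role": "assistant", "content": None,
         "tool_calls": [{"id": "call_1", "type": "function",
                         "function": {"name": "get_weather", "arguments": "{}"}}]},
    ]
}
payload["messages"] = sanitize_messages(payload["messages"])
print(repr(payload["messages"][1]["content"]))  # ''
```

If the OpenAI spec is the reference point, assistant `content` is arguably allowed to be absent when `tool_calls` is present, so the longer-term fix may be loosening sglang's schema rather than patching the client.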
### Reproduction
```bash
python -m sglang.launch_server --model-path Qwen/QwQ-32B-AWQ --tp 2 --enable-p2p-check --host 0.0.0.0 --reasoning-parser deepseek-r1 --tool-call-parser qwen25 --port 30000
```
### Environment
CUDA available: True
GPU 0,1: NVIDIA GeForce RTX 3090
GPU 0,1 Compute Capability: 8.6
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 12.0, V12.0.140
CUDA Driver Version: 560.35.03
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post4
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.2.post1+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.11
hf_transfer: 0.1.6
huggingface_hub: 0.29.2
interegular: 0.3.3
modelscope: 1.23.2
orjson: 3.10.1
packaging: 23.2
psutil: 5.9.5
pydantic: 2.10.6
multipart: 0.0.9
zmq: 26.2.1
uvicorn: 0.30.6
uvloop: 0.19.0
vllm: 0.6.5
openai: 1.65.4
tiktoken: 0.9.0
anthropic: 0.42.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB 0-5 0 N/A
GPU1 PHB X 0-5 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 1024
| open | 2025-03-09T16:03:00Z | 2025-03-10T05:40:01Z | https://github.com/sgl-project/sglang/issues/4236 | [] | JohnZolton | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 936 | Release 2.5.0 is not py27 compatible? | I use Airflow 1.10.9, which is expected to support py27, but it looks like your recent release breaks it.
Just let me know if it's intended; I will then open an issue for Airflow to pin `Flask-SQLAlchemy<2.5` to keep compatibility.
```
Traceback (most recent call last):
...
from airflow.bin.cli import CLIFactory
File "..py27/lib/python2.7/site-packages/airflow/bin/cli.py", line 71, in <module>
from airflow.www_rbac.app import cached_app as cached_app_rbac
File "../py27/lib/python2.7/site-packages/airflow/www_rbac/app.py", line 27, in <module>
from flask_appbuilder import AppBuilder, SQLA
File "../py27/lib/python2.7/site-packages/flask_appbuilder/__init__.py", line 4, in <module>
from .models.sqla import Model, Base, SQLA
File "../py27/lib/python2.7/site-packages/flask_appbuilder/models/sqla/__init__.py", line 5, in <module>
from flask_sqlalchemy import SQLAlchemy, DefaultMeta, _QueryProperty
File "../py27/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 39, in <module>
from threading import get_ident as _ident_func
ImportError: cannot import name get_ident
```
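For context, the usual py2/py3-compatible guard for this import looks like the sketch below (hedged; this is the general pattern, not necessarily the exact fix Flask-SQLAlchemy later shipped):

```python
try:
    from threading import get_ident  # Python 3 (threading.get_ident exists since 3.3)
except ImportError:
    # Python 2: the threading module has no public get_ident; the thread module does
    from thread import get_ident

print(get_ident())  # prints the current thread's identifier
```

With a shim like this the module imports cleanly on both interpreter lines.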
Environment:
- Python version: 2.7
- Flask-SQLAlchemy version: 2.5.0
| closed | 2021-03-18T18:31:32Z | 2021-04-02T00:19:50Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/936 | [] | POD666 | 2 |
roboflow/supervision | deep-learning | 790 | [LineZone] - allow per class counting | ### Description
Currently, [sv.LineZone](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/line_counter.py#L11) provides only aggregated counts - all classes are thrown into one bucket. In the past, many users have asked us to provide more granular - per class count. This can be achieved by adding `class_in_count` and `class_out_count` dictionaries that will store per-class counts.
### API
```python
class LineZone:
def __init__(
self,
start: Point,
end: Point,
triggering_anchors: Iterable[Position] = (
Position.TOP_LEFT,
Position.TOP_RIGHT,
Position.BOTTOM_LEFT,
Position.BOTTOM_RIGHT,
),
):
# Existing initialization code...
self.class_in_count: Dict[int, int] = {}
self.class_out_count: Dict[int, int] = {}
def trigger(self, detections: Detections) -> Tuple[np.ndarray, np.ndarray]:
crossed_in = np.full(len(detections), False)
crossed_out = np.full(len(detections), False)
# Required logic changes...
```
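A minimal sketch of the per-class bookkeeping described above, isolated from the line-crossing geometry that `trigger` already computes (the `record_crossing` helper and the sample crossings are illustrative, not part of the supervision API):

```python
from collections import defaultdict

# Per-class tallies, mirroring the proposed class_in_count / class_out_count.
class_in_count = defaultdict(int)
class_out_count = defaultdict(int)

def record_crossing(class_id: int, crossed_in: bool) -> None:
    """Update the tally for one detection that crossed the line."""
    if crossed_in:
        class_in_count[class_id] += 1
    else:
        class_out_count[class_id] += 1

for cid, went_in in [(0, True), (0, True), (2, False)]:
    record_crossing(cid, went_in)

print(dict(class_in_count), dict(class_out_count))  # {0: 2} {2: 1}
```

Using `defaultdict` avoids having to pre-register every class id before the first crossing.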
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will definitely speed up the review process. Each change must be tested by the reviewer. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | closed | 2024-01-26T13:20:45Z | 2024-10-01T09:45:59Z | https://github.com/roboflow/supervision/issues/790 | [
"enhancement",
"good first issue",
"api: linezone",
"Q2.2024"
] | SkalskiP | 10 |
predict-idlab/plotly-resampler | plotly | 65 | lttbc -> rounding errors when passing int-indexed data | fact: *lttbc* requires an int/float index as input and not a (datetime64) time-index.
As a result, [this code](https://github.com/predict-idlab/plotly-resampler/blob/fbf8d5ed9c3b29c8bd337868bcb63dd30ba49cde/plotly_resampler/aggregation/aggregators.py#L81) was written, where the time-index series is converted into an *int-index* representing the time in nanoseconds.
However, we observed that rounding errors occur because this *int-index* is internally converted by *lttbc* into a float index, after which we again derive an int index ➡️ rounding errors.
As a result, this [code adjustment was made](https://github.com/predict-idlab/plotly-resampler/commit/45bb304b9c3dae4f12ca150b1790cfacfdfdc54f), mitigating this rounding error in most cases.
Note that this is not 100% solved! Rounding errors can still occur. An ideal solution would be for lttbc to simply return the data index positions of the selected data points.
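The precision loss described above is easy to demonstrate in plain Python (whose `float` is an IEEE-754 double, the same representation used for the float index); the timestamp value below is illustrative:

```python
# A nanosecond-resolution timestamp (~2022) exceeds float64's 53-bit mantissa,
# so an int -> float -> int round-trip silently drops the low-order nanoseconds.
ts_ns = 1_652_000_000_123_456_789
roundtrip = int(float(ts_ns))
print(ts_ns - roundtrip)  # non-zero difference = the rounding error
```

At this magnitude the float64 spacing (ulp) is 256 ns, so the round-trip can be off by up to 128 ns.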
| closed | 2022-05-19T15:39:35Z | 2022-05-19T17:40:34Z | https://github.com/predict-idlab/plotly-resampler/issues/65 | [] | jonasvdd | 0 |
mitmproxy/pdoc | api | 524 | Add support for additional markdown extras / mermaid | #### Problem Description
GitHub supports rendering Markdown files with Mermaid diagrams, which aren't supported by default with `markdown2` and thus don't get translated correctly.
It seems like a pretty common use case to include your project's README in your module's top-level docstring. Since the diagram doesn't render, I've found that I need to maintain 2 copies of the README, one using the `mermaid` fence block syntax and another with a static `<img>` of the diagram (and making them both an `img` doesn't work since the `img` doesn't respond to GitHub's theme changes).
#### Proposal
This proposal requires bumping the vendored version of markdown2.
More recent versions have built-in support for rendering mermaid diagrams via the [mermaid extra](https://github.com/trentm/python-markdown2/wiki/mermaid). The extra requires some additional HTML stuff, but I don't think that needs to be included in the default template (people can add the necessary JavaScript themselves via a custom template, maybe with a doc explaining how to do it).
#### Alternatives
Rather than always enabling the `mermaid` extra and having to maintain the list of enabled `extras`, more recent versions of markdown2 also support enabling extras from within a markdown document via the [use-file-vars extra](https://github.com/trentm/python-markdown2/wiki/use-file-vars). Adding that one extra would let people customize the rendering of their markdown themselves rather than having to fork this project or find another solution.
"enhancement"
] | thearchitector | 2 |
pallets-eco/flask-wtf | flask | 364 | Why use referrer header for CSRF protection when you have synchronizer tokens? | I'm new to the security game. I'd understood that a synchronizer token and a referrer header were doing basically the same thing, and that a synchronizer token is more robust. What does a referrer header add that a synchronizer token doesn't address? | closed | 2019-04-09T15:11:39Z | 2021-07-13T00:38:36Z | https://github.com/pallets-eco/flask-wtf/issues/364 | [
"csrf"
] | knod | 9 |
tqdm/tqdm | pandas | 995 | I think the project name should be changed | I think we should use a meaningful English name, because I always forget this name | closed | 2020-06-26T16:02:37Z | 2020-08-19T20:29:08Z | https://github.com/tqdm/tqdm/issues/995 | [
"p0-bug-critical ☢",
"duplicate 🗐",
"invalid ⛔",
"question/docs ‽",
"c1-quick 🕐"
] | junknet | 2 |
AutoGPTQ/AutoGPTQ | nlp | 463 | [BUG] v0.5.1-release can't support aarch64 platform | **Describe the bug**
v0.4.2-release builds OK, but after updating to v0.5.1 it can't be compiled on an aarch64 host machine.
**Hardware details**
jetson xavier nx, 6 core.
aarch64 mainchine, can't support qigen extensions. just without -mavx, -mavx2,that for x86-64 CPU.
**Software version**
Version of relevant software such as operation system, cuda toolkit, python, auto-gptq, pytorch, transformers, accelerate, etc.
**To Reproduce**
Steps to reproduce the behavior:
git checkout v5.1.0-release
pip install -v .
**Expected behavior**
The build succeeds.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
| closed | 2023-12-03T03:00:16Z | 2023-12-07T11:31:28Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/463 | [
"bug"
] | st7109 | 3 |
SciTools/cartopy | matplotlib | 1,577 | Cartopy 0.18.0 fills the inside of a Line/MultiLineString shapefile | ### Description
I have a fault line shapefile that has a multilinestring geometry. When imported via Cartopy's ShapelyFeature() and Reader() functions, it comes out as a polygon with a fill instead of a line.
The issue has already been filed in 2017.
https://github.com/SciTools/cartopy/issues/856 , https://www.net-analysis.com/blog/cartopymultiline.html
It was said in the issue that `facecolor='none'` should work. I have tried `none`, `'none'`, and `"none"`, but nothing works.
<!--
If you are reporting a bug, attach the *entire* traceback from Python.
If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc.
If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy
questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy
-->
#### Code to reproduce
```
import os
import matplotlib.pyplot as plt
import geopandas as gpd
from geopandas import GeoDataFrame
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt
from cartopy.feature import ShapelyFeature
from cartopy.io import shapereader
from cartopy.io.shapereader import Reader
from shapely.geometry import MultiLineString
os.chdir(r'path')
proj = ccrs.PlateCarree()
bounds = [116.9283371, 126.90534668, 4.58693981, 21.07014084]
stamen_terrain = cimgt.Stamen('terrain-background')
fault_line = 'faultLines.shp' #MultiLineString
shp_feature = ShapelyFeature(Reader(fault_line).geometries(), ccrs.epsg(32651),
linewidth=1, color='black', facecolor='None')
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
ax.set_extent(bounds)
ax.add_image(stamen_terrain, 8)
ax.add_feature(shp_feature, zorder=1)
plt.show()
```
#### Result

<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Windows 10 Pro 1909 build
### Cartopy version
Cartopy 0.18.0
### conda list
Name Version Build Channel
alabaster 0.7.12 py_0 conda-forge
argh 0.26.2 py38_1001 conda-forge
astroid 2.4.1 py38h32f6830_0 conda-forge
atomicwrites 1.4.0 pyh9f0ad1d_0 conda-forge
attrs 19.3.0 py_0 conda-forge
autopep8 1.5.2 pyh9f0ad1d_0 conda-forge
babel 2.8.0 py_0 conda-forge
backcall 0.1.0 py_0 conda-forge
basemap 1.3.0 py38hcdd3ad8_2 conda-forge
bcrypt 3.1.7 py38h9de7a3e_1 conda-forge
bleach 3.1.5 pyh9f0ad1d_0 conda-forge
boost-cpp 1.72.0 h0caebb8_0 conda-forge
brotlipy 0.7.0 py38h1e8a9f7_1000 conda-forge
bzip2 1.0.8 hfa6e2cd_2 conda-forge
ca-certificates 2020.4.5.1 hecc5488_0 conda-forge
cartopy 0.18.0 py38h4990f70_0 conda-forge
certifi 2020.4.5.1 py38h32f6830_0 conda-forge
cffi 1.14.0 py38ha419a9e_0 conda-forge
cfitsio 3.470 he774522_5 conda-forge
chardet 3.0.4 py38h32f6830_1006 conda-forge
click 7.1.2 pyh9f0ad1d_0 conda-forge
click-plugins 1.1.1 py_0 conda-forge
cligj 0.5.0 py_0 conda-forge
cloudpickle 1.4.1 py_0 conda-forge
colorama 0.4.3 py_0 conda-forge
cryptography 2.9.2 py38hba49e27_0 conda-forge
curl 7.69.1 h1dcc11c_0 conda-forge
cycler 0.10.0 py_2 conda-forge
decorator 4.4.2 py_0 conda-forge
defusedxml 0.6.0 py_0 conda-forge
descartes 1.1.0 py_4 conda-forge
diff-match-patch 20181111 py_0 conda-forge
docutils 0.16 py38h32f6830_1 conda-forge
entrypoints 0.3 py38h32f6830_1001 conda-forge
expat 2.2.9 he025d50_2 conda-forge
ffmpeg 4.2 h6538335_0 conda-forge
fiona 1.8.13 py38h41bf4fa_1 conda-forge
flake8 3.7.9 py38h32f6830_1 conda-forge
freetype 2.10.1 ha9979f8_0 conda-forge
freexl 1.0.5 hd288d7e_1002 conda-forge
future 0.18.2 py38h32f6830_1 conda-forge
gdal 3.0.4 py38h3ba59e7_9 conda-forge
geopandas 0.7.0 py_1 conda-forge
geos 3.8.1 he025d50_0 conda-forge
geotiff 1.5.1 h3d29ae3_10 conda-forge
gettext 0.19.8.1 hb01d8f6_1002 conda-forge
glib 2.64.2 he4de6d7_0 conda-forge
hdf4 4.2.13 hf8e6fe8_1003 conda-forge
hdf5 1.10.6 nompi_ha405e13_100 conda-forge
icc_rt 2019.0.0 h0cc432a_1
icu 64.2 he025d50_1 conda-forge
idna 2.9 py_1 conda-forge
imageio 2.8.0 py_0 conda-forge
imageio-ffmpeg 0.4.2 py_0 conda-forge
imagesize 1.2.0 py_0 conda-forge
importlib-metadata 1.6.0 py38h32f6830_0 conda-forge
importlib_metadata 1.6.0 0 conda-forge
intel-openmp 2020.0 166
intervaltree 3.0.2 py_0 conda-forge
ipykernel 5.2.1 py38h5ca1d4c_0 conda-forge
ipython 7.14.0 py38h32f6830_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
isort 4.3.21 py38h32f6830_1 conda-forge
jedi 0.15.2 py38_0 conda-forge
jinja2 2.11.2 pyh9f0ad1d_0 conda-forge
jpeg 9c hfa6e2cd_1001 conda-forge
jsonschema 3.2.0 py38h32f6830_1 conda-forge
jupyter_client 6.1.3 py_0 conda-forge
jupyter_core 4.6.3 py38h32f6830_1 conda-forge
kealib 1.4.13 h3b59ab9_1 conda-forge
keyring 21.2.1 py38h32f6830_0 conda-forge
kiwisolver 1.2.0 py38heaebd3c_0 conda-forge
krb5 1.17.1 hdd46e55_0 conda-forge
lazy-object-proxy 1.4.3 py38h9de7a3e_2 conda-forge
libblas 3.8.0 15_mkl conda-forge
libcblas 3.8.0 15_mkl conda-forge
libclang 9.0.1 default_hf44288c_0 conda-forge
libcurl 7.69.1 h1dcc11c_0 conda-forge
libffi 3.2.1 h6538335_1007 conda-forge
libgdal 3.0.4 h6f60a84_9 conda-forge
libiconv 1.15 hfa6e2cd_1006 conda-forge
libkml 1.3.0 h7e985d0_1011 conda-forge
liblapack 3.8.0 15_mkl conda-forge
libnetcdf 4.7.4 nompi_h256d12c_104 conda-forge
libpng 1.6.37 hfe6a214_1 conda-forge
libpq 12.2 hd9aa61d_1 conda-forge
libsodium 1.0.17 h2fa13f4_0 conda-forge
libspatialindex 1.9.3 he025d50_3 conda-forge
libspatialite 4.3.0a h51df0ed_1038 conda-forge
libssh2 1.8.2 h642c060_2 conda-forge
libtiff 4.1.0 h885aae3_6 conda-forge
libwebp-base 1.1.0 hfa6e2cd_3 conda-forge
libxml2 2.9.10 h9ce36c8_0 conda-forge
lz4-c 1.9.2 h62dcd97_1 conda-forge
m2w64-expat 2.1.1 2
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gettext 0.19.7 2
m2w64-gmp 6.1.0 2
m2w64-libiconv 1.14 6
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
m2w64-xz 5.2.2 2
markupsafe 1.1.1 py38h9de7a3e_1 conda-forge
matplotlib 3.2.1 0 conda-forge
matplotlib-base 3.2.1 py38h1626042_0 conda-forge
mccabe 0.6.1 py_1 conda-forge
mistune 0.8.4 py38h9de7a3e_1001 conda-forge
mkl 2020.0 166
moviepy 1.0.1 py_0 conda-forge
msys2-conda-epoch 20160418 1
munch 2.5.0 py_0 conda-forge
natsort 7.0.1 pypi_0 pypi
nbconvert 5.6.1 py38h32f6830_1 conda-forge
nbformat 5.0.6 py_0 conda-forge
numpy 1.18.4 py38h72c728b_0 conda-forge
numpydoc 0.9.2 py_0 conda-forge
olefile 0.46 py_0 conda-forge
openjpeg 2.3.1 h57dd2e7_3 conda-forge
openssl 1.1.1g he774522_0 conda-forge
ospybook 1.0 pypi_0 pypi
owslib 0.19.2 py_1 conda-forge
packaging 20.1 py_0 conda-forge
pandas 1.0.3 py38he6e81aa_1 conda-forge
pandoc 2.9.2.1 0 conda-forge
pandocfilters 1.4.2 py_1 conda-forge
paramiko 2.7.1 py38_0 conda-forge
parso 0.5.2 py_0 conda-forge
pathtools 0.1.2 py_1 conda-forge
pcre 8.44 h6538335_0 conda-forge
pexpect 4.8.0 py38h32f6830_1 conda-forge
pickleshare 0.7.5 py38h32f6830_1001 conda-forge
pillow 7.1.2 py38h7011068_0 conda-forge
pip 20.1 pyh9f0ad1d_0 conda-forge
pluggy 0.13.1 py38h32f6830_1 conda-forge
poppler 0.87.0 h0cd1227_1 conda-forge
poppler-data 0.4.9 1 conda-forge
postgresql 12.2 he14cc48_1 conda-forge
proglog 0.1.9 py_0 conda-forge
proj 7.0.0 haa36216_3 conda-forge
prompt-toolkit 3.0.5 py_0 conda-forge
psutil 5.7.0 py38h9de7a3e_1 conda-forge
pycodestyle 2.5.0 py_0 conda-forge
pycparser 2.20 py_0 conda-forge
pydocstyle 5.0.2 py_0 conda-forge
pyepsg 0.4.0 py_0 conda-forge
pyflakes 2.1.1 py_0 conda-forge
pygments 2.6.1 py_0 conda-forge
pylint 2.5.2 py38h32f6830_0 conda-forge
pynacl 1.3.0 py38h2fa13f4_1001 conda-forge
pyopenssl 19.1.0 py_1 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyproj 2.6.1.post1 py38h1dd9442_0 conda-forge
pyqt 5.12.3 py38h7ae7562_3 conda-forge
pyqt5-sip 4.19.18 pypi_0 pypi
pyqtchart 5.12 pypi_0 pypi
pyqtwebengine 5.12.1 pypi_0 pypi
pyrsistent 0.16.0 py38h9de7a3e_0 conda-forge
pyshp 2.1.0 py_0 conda-forge
pysocks 1.7.1 py38h32f6830_1 conda-forge
python 3.8.2 h5fd99cc_7_cpython conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python-jsonrpc-server 0.3.4 pyh9f0ad1d_1 conda-forge
python-language-server 0.31.10 py38h32f6830_0 conda-forge
python_abi 3.8 1_cp38 conda-forge
pytz 2020.1 pyh9f0ad1d_0 conda-forge
pywin32 227 py38hfa6e2cd_0 conda-forge
pywin32-ctypes 0.2.0 py38h32f6830_1001 conda-forge
pyyaml 5.3.1 py38h9de7a3e_0 conda-forge
pyzmq 19.0.1 py38h77b9d75_0 conda-forge
qdarkstyle 2.8.1 pyh9f0ad1d_0 conda-forge
qt 5.12.5 h7ef1ec2_0 conda-forge
qtawesome 0.7.2 pyh9f0ad1d_0 conda-forge
qtconsole 4.7.3 pyh9f0ad1d_0 conda-forge
qtpy 1.9.0 py_0 conda-forge
requests 2.23.0 pyh8c360ce_2 conda-forge
rope 0.17.0 pyh9f0ad1d_0 conda-forge
rtree 0.9.4 py38h7ad75cc_1 conda-forge
scipy 1.3.2 py38h582fac2_0 conda-forge
setuptools 46.1.3 py38h32f6830_0 conda-forge
shapely 1.7.0 py38hbf43935_3 conda-forge
six 1.14.0 py_1 conda-forge
snowballstemmer 2.0.0 py_0 conda-forge
sortedcontainers 2.1.0 py_0 conda-forge
sphinx 3.0.3 py_0 conda-forge
sphinxcontrib-applehelp 1.0.2 py_0 conda-forge
sphinxcontrib-devhelp 1.0.2 py_0 conda-forge
sphinxcontrib-htmlhelp 1.0.3 py_0 conda-forge
sphinxcontrib-jsmath 1.0.1 py_0 conda-forge
sphinxcontrib-qthelp 1.0.3 py_0 conda-forge
sphinxcontrib-serializinghtml 1.1.4 py_0 conda-forge
spyder 4.1.3 py38h32f6830_0 conda-forge
spyder-kernels 1.9.1 py38h32f6830_0 conda-forge
sqlite 3.30.1 hfa6e2cd_0 conda-forge
tbb 2018.0.5 he980bc4_0 conda-forge
testpath 0.4.4 py_0 conda-forge
tiledb 1.7.7 h0b90766_1 conda-forge
tk 8.6.10 hfa6e2cd_0 conda-forge
toml 0.10.0 py_0 conda-forge
tornado 6.0.4 py38hfa6e2cd_0 conda-forge
tqdm 4.46.0 pyh9f0ad1d_0 conda-forge
traitlets 4.3.3 py38h32f6830_1 conda-forge
ujson 1.35 py38hb99c5c2_1002 conda-forge
urllib3 1.25.9 py_0 conda-forge
vc 14.1 h869be7e_1 conda-forge
vs2015_runtime 14.16.27012 h30e32a0_2 conda-forge
watchdog 0.10.2 py38_0 conda-forge
wcwidth 0.1.9 pyh9f0ad1d_0 conda-forge
webencodings 0.5.1 py_1 conda-forge
wheel 0.34.2 py_1 conda-forge
win_inet_pton 1.1.0 py38_0 conda-forge
wincertstore 0.2 py38_1003 conda-forge
wrapt 1.11.2 py38h9de7a3e_0 conda-forge
xerces-c 3.2.2 h6538335_1004 conda-forge
xz 5.2.5 h2fa13f4_0 conda-forge
yaml 0.2.4 he774522_0 conda-forge
yapf 0.29.0 py_0 conda-forge
zeromq 4.3.2 h6538335_2 conda-forge
zipp 3.1.0 py_0 conda-forge
zlib 1.2.11 h2fa13f4_1006 conda-forge
zstd 1.4.4 h9f78265_3 conda-forge
### pip list
```
```
</details>
| closed | 2020-06-02T05:23:21Z | 2020-06-03T06:39:23Z | https://github.com/SciTools/cartopy/issues/1577 | [] | miguel123-gis | 3 |
lanpa/tensorboardX | numpy | 482 | Input to tensorboard add_graph is not a Tensor | What should I do if the input to my model consists not only of Tensors but also of Python lists?
That is,
```python
model = Net()
add_graph(model, (Tensor1, Tensor2, list))
```
In this case, I get errors. How can I pass a list as an input to `add_graph`? | closed | 2019-08-10T07:02:39Z | 2019-10-23T15:53:35Z | https://github.com/lanpa/tensorboardX/issues/482 | [] | pcy1302 | 2 |
gradio-app/gradio | deep-learning | 10,181 | `gr.render` can not use `gr.Request` and `gr.EventData` | ### Describe the bug
I want to use the request value (`gr.Request`) in a render function.
But it raises the errors below. It also cannot use helpers like `gr.EventData`.
I can work around this issue by using a `gr.State`, but that is cumbersome and makes the code hard to read.
``` python
import gradio as gr
with gr.Blocks() as demo:
query_params = gr.State({})
@demo.load(outputs=[query_params])
def query_loader(request:gr.Request):
return request.query_params
@gr.render(inputs=[query_params], triggers=[query_params.change])
def query_render(query_params):
for key, value in query_params.items():
with gr.Row():
gr.Textbox(key)
gr.Textbox(value)
demo.launch()
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
@gr.render()
def query_render(request:gr.Request):
for key, value in request.query_params.items():
with gr.Row():
gr.Textbox(key)
gr.Textbox(value)
demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last):
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<11 lines>...
)
^
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/gradio/blocks.py", line 2043, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
)
^
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/gradio/blocks.py", line 1590, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fn, *processed_input, limiter=self.limiter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
func, args, abandon_on_cancel=abandon_on_cancel, limiter=limiter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/gradio/utils.py", line 865, in wrapper
response = f(*args, **kwargs)
File "/opt/anaconda3/envs/gradio/lib/python3.13/site-packages/gradio/renderable.py", line 80, in apply
self.fn(*args, **kwargs)
~~~~~~~^^^^^^^^^^^^^^^^^
TypeError: _render() missing 1 required positional argument: 'request'
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.8.0
gradio_client version: 1.5.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.7.0
audioop-lts: 0.2.1
fastapi: 0.115.6
ffmpy: 0.4.0
gradio-client==1.5.1 is not installed.
httpx: 0.28.1
huggingface-hub: 0.26.5
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.2.0
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.3
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.2
ruff: 0.8.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.1
huggingface-hub: 0.26.5
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
I can work around it | closed | 2024-12-11T14:29:25Z | 2024-12-13T18:50:55Z | https://github.com/gradio-app/gradio/issues/10181 | [
"bug"
] | BLESS11186 | 1 |
prkumar/uplink | rest-api | 129 | Add support for parsing JSON objects using `glom` | **Is your feature request related to a problem? Please describe.**
[`glom`](https://glom.readthedocs.io/en/latest/) is a library that provides a lot of neat functionality in terms of parsing nested structures, like JSON objects.
**Describe the solution you'd like**
We can introduce a `parser` argument for `@uplink.returns.json` that supports custom parsing strategies for JSON responses:
```python
@uplink.returns.json(key="a.b.c", parser="glom")
```
Further, `glom` provides support for converting dictionary-like objects into objects, which seems like a great fit for adding a [custom `ConverterFactory`](https://uplink.readthedocs.io/en/stable/dev/converters.html#writing-custom-json-converters) for `glom`, too.
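For illustration, the dotted-path lookup that a spec like `key="a.b.c"` implies can be sketched in a few lines of plain Python (glom's real spec language is much richer than this):

```python
from functools import reduce

def get_path(obj, path):
    """Follow a dotted key path like 'a.b.c' through nested mappings."""
    return reduce(lambda acc, key: acc[key], path.split("."), obj)

payload = {"a": {"b": {"c": 42}}}
print(get_path(payload, "a.b.c"))  # -> 42
```

glom adds on top of this defaults, fallbacks, object attribute access, and restructuring, which is what makes it attractive as a parser backend.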
**Additional context**
This issue is related to feedback provided by @liiight through the Uplink Gitter lobby: https://gitter.im/python-uplink/Lobby?at=5c1a04f7b4ef82024857910d. @liiight also proposed exposing a specific decorator for `glom`:
```python
@uplink.returns.glom('a.b.c')
```
| open | 2018-12-19T23:26:04Z | 2018-12-19T23:46:12Z | https://github.com/prkumar/uplink/issues/129 | [
"Feature Request",
"help wanted"
] | prkumar | 0 |
TencentARC/GFPGAN | deep-learning | 141 | FacialComponentDiscriminator | FacialComponentDiscriminator is a fully convolutional network whose output is a 20*20 tensor, which seems inconsistent with the usual form of a discriminator. Why wasn't a fully connected layer used so that the result is output as a scalar?
| open | 2022-01-06T06:53:07Z | 2022-01-08T12:17:43Z | https://github.com/TencentARC/GFPGAN/issues/141 | [] | StephanPan | 1 |
deepset-ai/haystack | nlp | 8,600 | Unify `DocumentSplitter` and `NLTKDocumentSplitter` | These two classes are very much alike. The only difference is that the `NLTKDocumentSplitter` uses NLTK's sentence boundary detection algorithm. We should merge those two into one single component.
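To make the difference concrete, a naive punctuation-based splitter can be sketched like this (illustrative only; not the component's actual implementation):

```python
import re

def naive_sentences(text: str) -> list[str]:
    """Split on '.', '!' or '?' followed by whitespace - no abbreviation handling."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

print(naive_sentences("Dr. Smith arrived. He was late!"))
# The false split after 'Dr.' is exactly what NLTK's punkt tokenizer avoids.
```

Exposing a switch between this kind of rule and NLTK's trained sentence boundary detection is what the unified component would offer.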
It could still be possible to give the user the choice to either use a naive approach for sentence boundary detection (e.g., ".") or, if they wish, use NLTK sentence boundary detection. | closed | 2024-12-03T11:15:47Z | 2024-12-12T14:22:29Z | https://github.com/deepset-ai/haystack/issues/8600 | [
"P2"
] | davidsbatista | 0 |
stanford-oval/storm | nlp | 322 | Question regarding the "Writing Purpose" box in the STORM website | Hi,
I noticed that the STORM website includes a feature allowing users to input additional information about the writing purpose. However, I couldn't find this feature in the provided source code when running CO-STORM locally.
I came across this issue from August 2024 (https://github.com/stanford-oval/storm/issues/150), where it’s mentioned that the writing purpose doesn't affect STORM's writing output. Is this still the case for CO-STORM? If CO-STORM does use the writing purpose to guide the background discussion, could you provide some guidance on how I can implement this feature locally?
Thanks!
| closed | 2025-02-20T19:55:14Z | 2025-03-08T09:02:19Z | https://github.com/stanford-oval/storm/issues/322 | [] | akilgiri-nrc | 1 |
huggingface/datasets | numpy | 6,814 | `map` with `num_proc` > 1 leads to OOM | ### Describe the bug
When running `map` on a parquet dataset loaded from the local machine, RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps in order to prevent this?
### Steps to reproduce the bug
```
ds = load_dataset("parquet", data_files=dataset_path, split="train")
ds = ds.shard(num_shards=4, index=0)
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
ds = ds.map(prepare_dataset,
num_proc=32,
writer_batch_size=1000,
keep_in_memory=False,
desc="preprocess dataset")
```
```
def prepare_dataset(batch):
# load audio
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=16000)
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(sample["array"].squeeze())
return batch
```
### Expected behavior
It shouldn't run into OOM problem.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0 | open | 2024-04-16T11:56:03Z | 2024-04-19T11:53:41Z | https://github.com/huggingface/datasets/issues/6814 | [] | bhavitvyamalik | 1 |
RobertCraigie/prisma-client-py | asyncio | 594 | Invalid code is generated for models with reserved names | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
A database with `class` as a model or field name will trigger this bug.
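A sketch of the kind of reserved-name escaping a client generator could apply (hypothetical; the `safe_field_name` helper is not part of Prisma Client Python):

```python
import keyword

def safe_field_name(name: str) -> str:
    """Append an underscore when a generated identifier collides with a Python keyword."""
    return name + "_" if keyword.iskeyword(name) else name

print(safe_field_name("class"), safe_field_name("title"))  # -> class_ title
```

Generating `class_` instead of `class` keeps the emitted module importable while leaving ordinary names untouched.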
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
model Class {
classID Int @id @default(autoincrement())
indicationID Int?
title String? @db.VarChar(512)
orderID Int?
content String? @db.Text
createdAt DateTime @db.Timestamp(0)
updatedAt DateTime @db.Timestamp(0)
Indication Indication? @relation(fields: [indicationID], references: [indicationID], onDelete: NoAction, onUpdate: NoAction, map: "Class_ibfk_1")
Progress Progress[]
@@index([indicationID], map: "indicationID")
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Python version: <!--[Run `python -V` to see your Python version]-->
- Prisma version:
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
```
| closed | 2022-11-17T07:21:38Z | 2022-11-21T09:20:52Z | https://github.com/RobertCraigie/prisma-client-py/issues/594 | [
"bug/2-confirmed",
"kind/bug",
"priority/high",
"level/unknown"
] | joezhoujinjing | 8 |
littlecodersh/ItChat | api | 415 | About staying online for a long time | "If you want to keep this project online for a very long time (several months, etc.), it is recommended to keep the phone connected to the internet."
As you said, the phone needs to stay connected to the internet, so if I don't touch the phone for several months, will it also fail to stay online for long?
This question may be unrelated to itchat itself, and I'm not sure whether it can be answered here; I just want to keep the bot running unattended.
"question"
] | yhu-aa | 6 |
huggingface/transformers | tensorflow | 36,155 | `TFViTModel` and `interpolate_pos_encoding=True` | ### System Info
- `transformers` version: 4.48.3
- Platform: Linux-5.15.0-1078-azure-x86_64-with-glibc2.35
- Python version: 3.11.0rc1
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.2
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Tensorflow version (GPU?): 2.16.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NO
- Using GPU in script?: YES
- GPU type: Tesla V100-PCIE-16GB
### Who can help?
@amyeroberts, @qubvel, @gante, @Rocketknight1
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
This simple script is used to create a Keras Model based on Vision Transformer `TFViTModel`.
I want to use higher resolution images than the default value of 224, as described in the documentation.
**Enabling `interpolate_pos_encoding=True` returns an error during fit.**
Using the default resolution and `interpolate_pos_encoding=False` makes the script work.
```
import tensorflow as tf
from transformers import ViTConfig, TFViTModel
config = ViTConfig(image_size=512)
base_model = TFViTModel(config).from_pretrained('google/vit-base-patch16-224')
inputs = tf.keras.Input((3, 512, 512), dtype='float32')
x = base_model.vit(inputs, interpolate_pos_encoding=True, training=True).pooler_output
output= tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs=[inputs], outputs=[output])
```
**Error code:**
```
OperatorNotAllowedInGraphError: in user code:
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 1398, in train_function *
return step_function(self, iterator)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 1370, in run_step *
outputs = model.train_step(data)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 1147, in train_step *
y_pred = self(x, training=True)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 565, in error_handler *
del filtered_tb
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 588, in __call__ *
return super().__call__(*args, **kwargs)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 565, in error_handler *
del filtered_tb
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/base_layer.py", line 1136, in __call__ *
outputs = call_fn(inputs, *args, **kwargs)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/functional.py", line 514, in call *
return self._run_internal_graph(inputs, training=training, mask=mask)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/functional.py", line 671, in _run_internal_graph *
outputs = node.layer(*args, **kwargs)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 560, in error_handler *
filtered_tb = _process_traceback_frames(e.__traceback__)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/base_layer.py", line 1136, in __call__ *
outputs = call_fn(inputs, *args, **kwargs)
File "/tmp/__autograph_generated_filepnn_cad_.py", line 162, in error_handler **
raise ag__.converted_call(ag__.ld(new_e).with_traceback, (ag__.ld(e).__traceback__,), None, fscope_1) from None
File "/tmp/__autograph_generated_filepnn_cad_.py", line 34, in error_handler
retval__1 = ag__.converted_call(ag__.ld(fn), tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope_1)
OperatorNotAllowedInGraphError: Exception encountered when calling layer 'vit' (type TFViTMainLayer).
in user code:
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/transformers/modeling_tf_utils.py", line 598, in run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/transformers/models/vit/modeling_tf_vit.py", line 595, in call *
embedding_output = self.embeddings(
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 560, in error_handler *
filtered_tb = _process_traceback_frames(e.__traceback__)
File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/base_layer.py", line 1136, in __call__ *
outputs = call_fn(inputs, *args, **kwargs)
File "/tmp/__autograph_generated_filepnn_cad_.py", line 162, in error_handler **
raise ag__.converted_call(ag__.ld(new_e).with_traceback, (ag__.ld(e).__traceback__,), None, fscope_1) from None
File "/tmp/__autograph_generated_filepnn_cad_.py", line 34, in error_handler
retval__1 = ag__.converted_call(ag__.ld(fn), tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope_1)
OperatorNotAllowedInGraphError: Exception encountered when calling layer 'embeddings' (type TFViTEmbeddings).
in user code:
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/transformers/models/vit/modeling_tf_vit.py", line 128, in call *
batch_size, num_channels, height, width = shape_list(pixel_values)
OperatorNotAllowedInGraphError: Iterating over a symbolic `tf.Tensor` is not allowed. You can attempt the following resolutions to the problem: If you are running in Graph mode, use Eager execution mode or decorate this function with @tf.function. If you are using AutoGraph, you can try decorating this function with @tf.function. If that does not work, then you may be using an unsupported feature or your source code may not be visible to AutoGraph. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
Call arguments received by layer 'embeddings' (type TFViTEmbeddings):
• pixel_values=tf.Tensor(shape=<unknown>, dtype=float32)
• interpolate_pos_encoding=True
• training=True
Call arguments received by layer 'vit' (type TFViTMainLayer):
• pixel_values=tf.Tensor(shape=<unknown>, dtype=float32)
• head_mask=None
• output_attentions=None
• output_hidden_states=None
• interpolate_pos_encoding=True
• return_dict=None
• training=True
File <command-6957984842183233>, line 18
8 #base_model.trainable = False
10 model.compile(optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-6),
11 loss={'output_qualidade': tf.keras.losses.BinaryCrossentropy(label_smoothing=0.1),
12 'output_armario': tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
(...)
15 'output_armario': tf.keras.metrics.AUC(curve='PR', multi_label=True, name='auc'),
16 'output_dano': tf.keras.metrics.AUC(curve='PR', multi_label=True, name='auc')})
---> 18 train_history = model.fit(x=train_generator,
19 epochs=110,
20 validation_data=val_generator,
21 validation_freq=1,
22 callbacks=[merge_metrics, early_stoping],
23 verbose=2)
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:578, in safe_patch.<locals>.safe_patch_function(*args, **kwargs)
568 try_log_autologging_event(
569 AutologgingEventLogger.get_logger().log_patch_function_start,
570 session,
(...)
574 kwargs,
575 )
577 if patch_is_class:
--> 578 patch_function.call(call_original, *args, **kwargs)
579 else:
580 patch_function(call_original, *args, **kwargs)
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:165, in PatchFunction.call(cls, original, *args, **kwargs)
163 @classmethod
164 def call(cls, original, *args, **kwargs):
--> 165 return cls().__call__(original, *args, **kwargs)
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:176, in PatchFunction.__call__(self, original, *args, **kwargs)
172 self._on_exception(e)
173 finally:
174 # Regardless of what happens during the `_on_exception` callback, reraise
175 # the original implementation exception once the callback completes
--> 176 raise e
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:169, in PatchFunction.__call__(self, original, *args, **kwargs)
167 def __call__(self, original, *args, **kwargs):
168 try:
--> 169 return self._patch_implementation(original, *args, **kwargs)
170 except (Exception, KeyboardInterrupt) as e:
171 try:
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:227, in with_managed_run.<locals>.PatchWithManagedRun._patch_implementation(self, original, *args, **kwargs)
224 if not mlflow.active_run():
225 self.managed_run = create_managed_run()
--> 227 result = super()._patch_implementation(original, *args, **kwargs)
229 if self.managed_run:
230 mlflow.end_run(RunStatus.to_string(RunStatus.FINISHED))
File /databricks/python/lib/python3.11/site-packages/mlflow/tensorflow/__init__.py:1334, in autolog.<locals>.FitPatch._patch_implementation(self, original, inst, *args, **kwargs)
1327 except Exception as e:
1328 _logger.warning(
1329 "Failed to log training dataset information to "
1330 "MLflow Tracking. Reason: %s",
1331 e,
1332 )
-> 1334 history = original(inst, *args, **kwargs)
1336 if log_models:
1337 _log_keras_model(history, args)
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:561, in safe_patch.<locals>.safe_patch_function.<locals>.call_original(*og_args, **og_kwargs)
558 original_result = original(*_og_args, **_og_kwargs)
559 return original_result
--> 561 return call_original_fn_with_event_logging(_original_fn, og_args, og_kwargs)
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:496, in safe_patch.<locals>.safe_patch_function.<locals>.call_original_fn_with_event_logging(original_fn, og_args, og_kwargs)
487 try:
488 try_log_autologging_event(
489 AutologgingEventLogger.get_logger().log_original_function_start,
490 session,
(...)
494 og_kwargs,
495 )
--> 496 original_fn_result = original_fn(*og_args, **og_kwargs)
498 try_log_autologging_event(
499 AutologgingEventLogger.get_logger().log_original_function_success,
500 session,
(...)
504 og_kwargs,
505 )
506 return original_fn_result
File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:558, in safe_patch.<locals>.safe_patch_function.<locals>.call_original.<locals>._original_fn(*_og_args, **_og_kwargs)
550 # Show all non-MLflow warnings as normal (i.e. not as event logs)
551 # during original function execution, even if silent mode is enabled
552 # (`silent=True`), since these warnings originate from the ML framework
553 # or one of its dependencies and are likely relevant to the caller
554 with set_non_mlflow_warnings_behavior_for_current_thread(
555 disable_warnings=False,
556 reroute_warnings=False,
557 ):
--> 558 original_result = original(*_og_args, **_og_kwargs)
559 return original_result
File /databricks/python/lib/python3.11/site-packages/tf_keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /databricks/python/lib/python3.11/site-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py:52, in py_func_from_autograph.<locals>.autograph_handler(*args, **kwargs)
50 except Exception as e: # pylint:disable=broad-except
51 if hasattr(e, "ag_error_metadata"):
---> 52 raise e.ag_error_metadata.to_exception(e)
53 else:
54 raise
```
### Expected behavior
Expected behavior consists in running the model fit/training. | closed | 2025-02-13T00:44:42Z | 2025-03-23T08:04:02Z | https://github.com/huggingface/transformers/issues/36155 | [
"TensorFlow",
"bug"
] | carlosg-m | 1 |
lepture/authlib | django | 29 | OAuth2ClientMixin requires a client_secret | 1. From https://play.authlib.org/
2. Select Apps
3. Select New Oauth2 Client
4. Fill in the fields without checking the box Confidential Client
5. Click Create
You get:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
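For reference, the guard one would expect around the secret looks roughly like this (a sketch with made-up names, not Authlib's actual code):

```python
from typing import Optional

def make_client_secret(is_confidential: bool, secret: Optional[str] = None) -> str:
    # Public (non-confidential) clients get an empty string instead of None,
    # so a NOT NULL client_secret column is never violated.
    if not is_confidential:
        return ""
    if not secret:
        raise ValueError("confidential clients require a client_secret")
    return secret

print(make_client_secret(False))           # prints an empty line
print(make_client_secret(True, "s3cr3t"))  # s3cr3t
```

With such a guard in place, step 4 above (creating a client without checking Confidential Client) would succeed instead of raising a 500.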
Seems like this is due to OAuth2ClientMixin requiring a client secret | closed | 2018-02-15T17:04:50Z | 2018-02-20T03:03:53Z | https://github.com/lepture/authlib/issues/29 | [] | ivonnemclaughlin | 1 |
cvat-ai/cvat | tensorflow | 8,526 | Delete an account from cvat.ai? | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
I cannot find any method to remove my account from the service.
### Describe the solution you'd like
Please add a button in the settings to delete the currently logged-in account.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-10-10T07:51:30Z | 2024-10-10T10:28:09Z | https://github.com/cvat-ai/cvat/issues/8526 | [
"enhancement"
] | jasonmbrown | 1 |
matterport/Mask_RCNN | tensorflow | 2,267 | Is it possible to use Mask R-CNN when objects of the same instance are in different parts of the image ? | I want to use Mask R-CNN to do instance segmentation on spectrogram images. As you may know, signals coming from the same source can make patterns in different parts of the spectrogram, simply because the source can emit at different times.
My question is: do you think Mask R-CNN would be able to handle such an instance-segmentation problem, that is to say, to perform detection and be able to assign the same instance number to several regions in the image?
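Representationally, nothing stops one instance id from covering disconnected pixels; whether a given Mask R-CNN training setup predicts that well is a separate question. A toy sketch of such a mask:

```python
import numpy as np

# One instance id (7) spread over two disconnected patches of the image.
mask = np.zeros((4, 8), dtype=np.int32)
mask[1, 1:3] = 7  # left patch of the instance
mask[2, 5:7] = 7  # right patch of the same instance
print(np.count_nonzero(mask == 7))  # 4
```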
| open | 2020-07-02T15:30:23Z | 2020-07-03T13:53:01Z | https://github.com/matterport/Mask_RCNN/issues/2267 | [] | YuriGagarine | 1 |
BlinkDL/RWKV-LM | pytorch | 225 | Finetuning RWKV-5-World-1B5-v2 model | How to train RWKV-5-World-1B5-v2 model | open | 2024-02-21T13:59:41Z | 2024-02-27T00:53:06Z | https://github.com/BlinkDL/RWKV-LM/issues/225 | [] | ArchanaNarayanan843 | 1 |
cvat-ai/cvat | pytorch | 9,217 | 500 error after login | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Login
Immediately after Login the following 500 error appears in a popup:
```
[2025-03-17 07:45:32,385] ERROR django.request: Internal Server Error: /api/requests
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
File "/opt/venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 40, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 522, in thread_handler
return func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/viewsets.py", line 124, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/cache.py", line 62, in _wrapper_view_func
response = view_func(request, *args, **kwargs)
File "/home/django/cvat/apps/engine/views.py", line 3779, in wrapper
return func(*args, **kwargs)
File "/home/django/cvat/apps/engine/views.py", line 3803, in list
user_jobs = self._get_rq_jobs(user_id)
File "/home/django/cvat/apps/engine/views.py", line 3745, in _get_rq_jobs
jobs = self._get_rq_jobs_from_queue(queue, user_id)
File "/home/django/cvat/apps/engine/views.py", line 3722, in _get_rq_jobs_from_queue
if job and is_rq_job_owner(job, user_id):
File "/home/django/cvat/apps/engine/rq.py", line 315, in is_rq_job_owner
return BaseRQMeta.for_job(rq_job).user.id == user_id
File "/home/django/cvat/apps/engine/rq.py", line 196, in user
return UserMeta(self.meta[RQJobMetaField.USER])
KeyError: 'user'
```
### Expected Behavior
No error message
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
Server version: 2.31.0
UI version: 2.31.0
``` | closed | 2025-03-17T07:50:50Z | 2025-03-17T15:43:54Z | https://github.com/cvat-ai/cvat/issues/9217 | [
"bug"
] | eporsche | 2 |
numpy/numpy | numpy | 27,944 | TYP: `np.ndarray.tolist` return type seems broken in numpy 2.2.0 | ### Describe the issue:
While upgrading CMasher's CI to numpy 2.2.0, I got a new typechecking error that seems spurious to me. No error is raised with numpy 2.1.3
### Reproduce the code example:
```python
import numpy as np

def to_rgb(in_) -> tuple[float, float, float]:
    # minimal mock for matplotlib.colors.to_rgb
    return (1., 2., 3.)

def foo(name: str, data: list[tuple[float, float, float]]) -> None:
    data_arr = np.array(data)
    if data_arr.dtype.kind == "S":
        cm_data = [str(x) for x in data_arr]
        colorlist = [to_rgb(_) for _ in cm_data]
    else:
        data_arr = np.atleast_2d(data_arr)
        colorlist = data_arr.tolist()
```
### Error message:
```shell
cmasher.py:15: error: Incompatible types in assignment (expression has type "str | list[str] | list[list[str]] | list[list[list[Any]]]", variable has type "list[tuple[float, float, float]]") [assignment]
Found 1 error in 1 file (checked 1 source file)
```
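Until the stubs are adjusted, one workaround is to narrow the `tolist()` result manually. This is just a sketch, assuming the array is known to be a 2-D float array:

```python
from typing import cast

import numpy as np

data = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
data_arr = np.atleast_2d(np.array(data))

# tolist() is annotated very loosely in numpy 2.2, so rebuild the
# concrete row shape and cast it back for the type checker.
colorlist = cast(
    "list[tuple[float, float, float]]",
    [tuple(row) for row in data_arr.tolist()],
)
print(colorlist[0])  # (1.0, 2.0, 3.0)
```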
### Python and NumPy Versions:
```
2.2.0
3.13.1 (main, Dec 4 2024, 16:25:14) [Clang 16.0.0 (clang-1600.0.26.4)]
```
### Type-checker version and settings:
```
mypy==1.13.0
```
no custom settings
### Additional typing packages.
Included with mypy, but not used here (as far as I'm aware)
```
typing-extensions==4.12.2
``` | closed | 2024-12-09T10:11:22Z | 2024-12-10T17:17:35Z | https://github.com/numpy/numpy/issues/27944 | [
"41 - Static typing"
] | neutrinoceros | 2 |
ydataai/ydata-profiling | pandas | 860 | jquery version is vulnerable to xss | **Describe the bug**
<!--
You are using a very old and vulnerable version of jquery which is bundled in with pandas profiling.
Full details are provided here of the vulnerability:
https://nvd.nist.gov/vuln/detail/CVE-2020-11023
Can you also advise on timescales for this change and what versions you will release this fix in please?
The fix has been patched in v3.5.0. Security scans are flagging this up in the following location:
jquery-1.12.4.min.js located at site-packages/pandas-profiling/report/presentation/flavours/html/templates/wrapper/assets
-->
**To Reproduce**
<!--
We would need to reproduce your scenario before being able to resolve it.
_Data:_
Please share your dataframe.
If the data is confidential, for example when it contains company-sensitive information, provide us with a synthetic or open dataset that produces the same error.
You should provide the DataFrame structure, for example by reporting the output of `df.info()`.
You can anonymize the column names if necessary.
_Code:_ Preferably, use this code format:
```python
"""
Test for issue XXX:
https://github.com/pandas-profiling/pandas-profiling/issues/XXX
"""
import pandas as pd
import pandas_profiling
def test_issueXXX():
df = pd.read_csv(r"<file>")
# Minimal reproducible code
```
-->
**Version information:**
<!--
Version information is essential in reproducing and resolving bugs. Please report:
* _Python version_: Your exact Python version.
* _Environment_: Where do you run the code? Command line, IDE (PyCharm, Spyder, IDLE etc.), Jupyter Notebook (Colab or local)
* _`pip`_: If you are using `pip`, run `pip freeze` in your environment and report the results. The list of packages can be rather long, you can use the snippet below to collapse the output.
<details><summary>Click to expand <strong><em>Version information</em></strong></summary>
<p>
```
<<< Put your version information here >>>
```
</p>
</details>
-->
**Additional context**
<!--
-->
| open | 2021-10-11T13:23:43Z | 2023-08-28T21:19:15Z | https://github.com/ydataai/ydata-profiling/issues/860 | [
"security 👮"
] | amerzon | 7 |
modelscope/modelscope | nlp | 955 | Model training cannot save checkpoints | On Ubuntu 22.04 with modelscope 1.17.1, when trying to train a model with the training script shown at
https://modelscope.cn/models/iic/cv_LightweightEdge_ocr-recognitoin-general_damo
the following error is reported:
```
2024-08-23 09:22:29,099 - modelscope - INFO - Checkpoints will be saved to /tmp/tmp8gzabwkm
2024-08-23 09:22:29,099 - modelscope - INFO - Text logs will be saved to /tmp/tmp8gzabwkm
2024-08-23 09:22:34,874 - modelscope - INFO - epoch [1][50/108] lr: 1.000e-03, eta: 0:02:59, iter_time: 0.114, data_load_time: 0.063, memory: 2488, loss: 3.0732
2024-08-23 09:22:40,319 - modelscope - INFO - epoch [1][100/108] lr: 1.000e-03, eta: 0:02:49, iter_time: 0.109, data_load_time: 0.064, memory: 2488, loss: 1.7033
Total test samples: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 3432/3432 [00:08<00:00, 399.12it/s]
2024-08-23 09:22:51,627 - modelscope - INFO - Saving checkpoint at 1 epoch
Traceback (most recent call last):
File "/home/liuzhaoyu/workspace/testlitetrain/test.py", line 94, in <module>
trainer.train()
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/trainer.py", line 711, in train
self.train_loop(self.train_dataloader)
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/trainer.py", line 1243, in train_loop
self.invoke_hook(TrainerStages.after_train_epoch)
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/trainer.py", line 1395, in invoke_hook
getattr(hook, fn_name)(self)
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/hooks/checkpoint/checkpoint_hook.py", line 177, in after_train_epoch
self._do_save(trainer, CheckpointStrategy.by_epoch)
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/hooks/checkpoint/checkpoint_hook.py", line 160, in _do_save
self._save_checkpoint(trainer, prefix)
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/hooks/checkpoint/checkpoint_hook.py", line 227, in _save_checkpoint
self.save_evaluate_results(trainer)
File "/home/liuzhaoyu/workspace/testlitetrain/lite/lib/python3.10/site-packages/modelscope/trainers/hooks/checkpoint/checkpoint_hook.py", line 217, in save_evaluate_results
f.write(json.dumps(trainer.metric_values))
File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type float32 is not JSON serializable
```
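The crash happens because `json.dumps` receives a numpy `float32` metric value. A minimal standalone sketch of the usual workaround (the names below are illustrative, not modelscope's actual code):

```python
import json

import numpy as np

def to_builtin(obj):
    # json.dumps cannot serialize numpy scalar types; unwrap them first.
    if isinstance(obj, np.generic):
        return obj.item()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

metrics = {"accuracy": np.float32(0.875)}
print(json.dumps(metrics, default=to_builtin))  # {"accuracy": 0.875}
```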
Other package information:
```
.0
certifi 2024.7.4
cffi 1.17.0
charset-normalizer 3.3.2
comm 0.2.2
crcmod 1.7
cryptography 43.0.0
datasets 2.21.0
debugpy 1.8.5
decorator 5.1.1
dill 0.3.8
edit-distance 1.0.6
exceptiongroup 1.2.2
executing 2.0.1
filelock 3.15.4
frozenlist 1.4.1
fsspec 2024.6.1
huggingface-hub 0.24.6
idna 3.7
ipykernel 6.29.5
ipython 8.26.0
jedi 0.19.1
Jinja2 3.1.4
jmespath 0.10.0
jupyter_client 8.6.2
jupyter_core 5.7.2
lmdb 1.5.1
MarkupSafe 2.1.5
matplotlib-inline 0.1.7
modelscope 1.17.1
mpmath 1.3.0
multidict 6.0.5
multiprocess 0.70.16
nest-asyncio 1.6.0
networkx 3.3
numpy 2.1.0
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.6.20
nvidia-nvtx-cu12 12.1.105
opencv-python 4.10.0.84
oss2 2.18.6
packaging 24.1
pandas 2.2.2
parso 0.8.4
pexpect 4.9.0
pillow 10.4.0
pip 22.0.2
platformdirs 4.2.2
prompt_toolkit 3.0.47
psutil 6.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
pyarrow 17.0.0
pycparser 2.22
pycryptodome 3.20.0
Pygments 2.18.0
python-dateutil 2.9.0.post0
pytz 2024.1
PyYAML 6.0.2
pyzmq 26.1.1
requests 2.32.3
setuptools 59.6.0
simplejson 3.19.3
six 1.16.0
sortedcontainers 2.4.0
stack-data 0.6.3
sympy 1.13.2
torch 2.4.0
torchaudio 2.4.0
torchvision 0.19.0
tornado 6.4.1
tqdm 4.66.5
traitlets 5.14.3
triton 3.0.0
typing_extensions 4.12.2
tzdata 2024.1
urllib3 2.2.2
wcwidth 0.2.13
xxhash 3.5.0
yarl 1.9.4
``` | closed | 2024-08-23T00:54:40Z | 2024-10-01T05:32:34Z | https://github.com/modelscope/modelscope/issues/955 | [
"Stale"
] | jprorikon | 2 |
tensorflow/tensor2tensor | machine-learning | 1,819 | Installation fails due to conflicting cloudpickle version | Hi, users are unable to run Tensor2tensor due to a dependency conflict on the _**cloudpickle**_ package. As shown in the following full dependency graph of Tensor2tensor, gym requires _**cloudpickle~=1.2.0**_, while tensorflow-probability requires _**cloudpickle>0.6.1**_.
According to pip's "first found wins" installation strategy, _**cloudpickle 1.4.1**_ is the version that actually gets installed. However, _**cloudpickle 1.4.1**_ does not satisfy _**cloudpickle~=1.2.0**_.
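The clash can be checked mechanically with the `packaging` library, using the ranges and the 1.4.1 version quoted above (a sketch, not part of the original report):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

gym_req = SpecifierSet(">=1.2.0,<1.4.0")  # what gym pins (cloudpickle~=1.2)
tfp_req = SpecifierSet(">0.6.1")          # what tensorflow-probability pins

installed = Version("1.4.1")
print(installed in gym_req)  # False, i.e. the reported conflict
print(installed in tfp_req)  # True
```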
### Dependency tree-----------
```
tensor2tensor - 1.15.5
| +- absl-py(install version:0.9.0 version range:*)
| +- bz2file(install version:0.98 version range:*)
| +- dopamine-rl(install version:3.0.1 version range:*)
| | +- absl-py (install version:0.9.0 version range:>=0.2.2)
| | +- gin-config (install version:0.3.0 version range:>=0.1.1)
| | | +- six (install version:1.14.0 version range:>=1.10.0)
| | +- gym (install version:0.14.0 version range:>=0.10.5)
| | | +- cloudpickle(install version:1.3.0 version range:>=1.2.0,<1.4.0)
| | | +- enum34(install version: version range:<.2,>=1.1.6)
| | | +- numpy(install version:1.18.2 version range:>=1.10.4)
| | | +- pyglet(install version:1.5.0 version range:>=1.4.0,<=1.5.0)
| | | +- scipy(install version:1.2.3 version range:*)
| | | +- six(install version:1.14.0 version range:*)
| | +- opencv-python (install version:4.1.0.25 version range:>=3.4.1.15)
| | +- Pillow (install version:7.1.1 version range:>=5.4.1)
| +- flask(install version:1.1.2 version range:*)
| | +- click(install version:7.1.1 version range:>=5.1)
| | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| +- future(install version:0.18.2 version range:*)
| +- gevent(install version:1.5.0 version range:*)
| +- gin-config(install version:0.3.0 version range:*)
| | +- six (install version:1.14.0 version range:>=1.10.0)
| +- google-api-python-client(install version:1.8.0 version range:*)
| | +- google-api-core(install version:1.17.0 version range:>=1.13.0,<2dev)
| | +- google-auth(install version:1.14.0 version range:>=1.4.1)
| | | +- cachetools(install version:4.1.0 version range:>=2.0.0,<5.0)
| | | +- pyasn1-modules(install version:0.2.8 version range:>=0.2.1)
| | | +- rsa(install version:4.0 version range:>=3.1.4,<4.1)
| | | | +- pyasn1(install version:0.4.8 version range:>=0.1.3)
| | | +- setuptools(install version:46.1.3 version range:>=40.3.0)
| | | +- six(install version:1.14.0 version range:>=1.9.0)
| | +- google-auth-httplib2(install version:0.0.3 version range:>=0.0.3)
| | | +- google-auth(install version:1.14.0 version range:*)
| | | | +- cachetools(install version:4.1.0 version range:>=2.0.0,<5.0)
| | | | +- pyasn1-modules(install version:0.2.8 version range:>=0.2.1)
| | | | +- rsa(install version:4.0 version range:>=3.1.4,<4.1)
| | | | +- setuptools(install version:46.1.3 version range:>=40.3.0)
| | | | +- six(install version:1.14.0 version range:>=1.9.0)
| | | +- httplib2(install version:0.17.2 version range:>=0.9.1)
| | +- httplib2(install version:0.18.1 version range:>=0.9.2,<1dev)
| | +- six(install version:1.14.0 version range:>=1.6.1,<2dev)
| | +- uritemplate(install version:3.0.1 version range:>=3.0.0,<4dev)
| +- gunicorn(install version:20.0.4 version range:*)
| | +- setuptools(install version:46.1.3 version range:>=3.0)
| +- gym(install version:0.14.0 version range:==0.14.0)
| | +- cloudpickle(install version:1.3.0 version range:>=1.2.0,<1.4.0)
| | +- enum34(install version: version range:<.2,>=1.1.6)
| | +- numpy(install version:1.18.2 version range:>=1.10.4)
| | +- pyglet(install version:1.5.0 version range:>=1.4.0,<=1.5.0)
| | +- scipy(install version:1.2.3 version range:*)
| | +- six(install version:1.14.0 version range:*)
| +- h5py(install version:2.10.0 version range:*)
| +- kfac(install version:0.2.0 version range:*)
| | +- numpy(install version:1.18.2 version range:*)
| | +- six(install version:1.14.0 version range:*)
| | +- tensorflow-probability(install version:0.7.0 version range:*)
| +- mesh-tensorflow(install version:0.1.13 version range:*)
| | +- absl-py(install version:0.9.0 version range:*)
| | +- future(install version:0.18.2 version range:*)
| | +- gin-config(install version:0.3.0 version range:*)
| | | +- six (install version:1.14.0 version range:>=1.10.0)
| | +- six(install version:1.14.0 version range:*)
| +- numpy(install version:1.18.2 version range:*)
| +- oauth2client(install version:4.1.3 version range:*)
| | +- httplib2(install version:0.17.2 version range:>=0.9.1)
| | +- pyasn1(install version:0.4.8 version range:>=0.1.7)
| | +- pyasn1-modules(install version:0.2.8 version range:>=0.0.5)
| | +- rsa(install version:4.0 version range:>=3.1.4)
| | | +- pyasn1(install version:0.4.8 version range:>=0.1.3)
| | +- six(install version:1.14.0 version range:>=1.6.1)
| +- opencv-python(install version:4.1.0.25 version range:*)
| +- Pillow(install version:7.1.1 version range:*)
| +- pypng(install version:0.0.20 version range:*)
| +- requests(install version:2.23.0 version range:*)
| | +- certifi(install version:2020.4.5.1 version range:>=2017.4.17)
| | +- chardet(install version:3.0.4 version range:>=3.0.2,<4)
| | +- idna(install version:2.9 version range:>=2.5,<3)
| | +- urllib3(install version:1.25.9 version range:>=1.21.1,<1.26)
| +- scipy(install version:1.2.3 version range:*)
| +- six(install version:1.14.0 version range:*)
| +- sympy(install version:1.5.1 version range:*)
| +- tensorflow-datasets(install version:3.0.0 version range:*)
| +- tensorflow-gan(install version:1.0.0.dev0 version range:*)
| +- tensorflow-probability(install version:0.7.0 version range:==0.7.0)
| | +- cloudpickle(install version:1.3.0 version range:>0.6.1)
| +- tqdm(install version:4.45.0 version range:*)
```
Thanks for your help.
Best,
Neolith
| open | 2020-05-29T10:57:53Z | 2020-06-22T08:22:55Z | https://github.com/tensorflow/tensor2tensor/issues/1819 | [] | NeolithEra | 2 |
MaartenGr/BERTopic | nlp | 1,124 | probably incorrect typings | https://github.com/MaartenGr/BERTopic/blob/1b25ba7810b44bcb9594b42f70787fc2d8bb7015/bertopic/_bertopic.py#L1315
`get_topics` actually returns a `Mapping`, where
- key is an `int`
- value is a `list[tuple[str, numpy.float64]]`, i.e. a list in which each entry is a tuple of the word and its score.
## Showing that the key is an `int`, not a `str`

so, the correct signature would be
```python
def get_topics(self) -> Mapping[int, List[Tuple[str, float]]]:
``` | closed | 2023-03-24T19:36:17Z | 2023-09-27T09:05:12Z | https://github.com/MaartenGr/BERTopic/issues/1124 | [] | demetrius-mp | 1 |
jupyterlab/jupyter-ai | jupyter | 482 | ollama |
### Problem
* I would like to integrate https://ollama.ai/ with Jupyter ai
### Proposed Solution
Add provider end point to ollama hosted service within locally or remotely.
| closed | 2023-11-19T21:41:40Z | 2024-07-10T22:06:18Z | https://github.com/jupyterlab/jupyter-ai/issues/482 | [
"enhancement"
] | sqlreport | 1 |
man-group/arctic | pandas | 749 | Rename _compression to something else. | This naming breaks the debugger with py3.6 and the latest lz4 library as it apparently tries to look for some class inside _compression but defaults to using our _compression module as it's in the scope. I am using a renamed version in the scope and it seems to be fine. | closed | 2019-04-26T13:26:01Z | 2019-06-06T10:28:36Z | https://github.com/man-group/arctic/issues/749 | [
"easy"
] | shashank88 | 1 |
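Editor's note: the shadowing described in this issue is general Python import behaviour. The sketch below reproduces it with a harmless invented module name (`helper`) instead of `_compression`, so nothing in the stdlib is disturbed:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Two directories each providing a module called "helper" -- a stand-in for
# the clash between arctic's _compression module and the stdlib one.
first = Path(tempfile.mkdtemp())
second = Path(tempfile.mkdtemp())
(first / "helper.py").write_text("WHO = 'first-on-path'\n")
(second / "helper.py").write_text("WHO = 'second-on-path'\n")

# Whichever directory appears earlier on sys.path wins the import, which is
# why a package-local _compression can mask the stdlib module of that name.
sys.path.insert(0, str(second))
sys.path.insert(0, str(first))
helper = importlib.import_module("helper")
print(helper.WHO)  # -> first-on-path
```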
lux-org/lux | pandas | 412 | Converting Timestamp: Error | Hi,
I am reading a CSV into my notebook as df_plot. When I do df_plot.head(), it reports that Timestamp may be temporal. So I followed the suggested template and also tried a suggestion on the Lux website; neither works for me.
See attached image from my csv file of the timestamp

So I tried this, as I believed that my timestamp was in the format dd-mm-yy hh:mm:ss
```
df_plot['Timestamp'] = pd.to_datetime(df_plot['Timestamp'], format="%d-%m-%y%h:%m:%s")
##df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
```
And get this error
```
ValueError: 'h' is a bad directive in format '%d-%m-%y%h:%m:%s'
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact)
455 try:
--> 456 values, tz = conversion.datetime_to_datetime64(arg)
457 dta = DatetimeArray(values, dtype=tz_to_dtype(tz))
pandas\_libs\tslibs\conversion.pyx in pandas._libs.tslibs.conversion.datetime_to_datetime64()
TypeError: Unrecognized value type: <class 'str'>
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_35372/1760194970.py in <module>
----> 1 df_plot['Timestamp'] = pd.to_datetime(df_plot['Timestamp'], format="%d-%m-%y%h:%m:%s")
2 ##df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
c:\xxx\Final_Project\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
799 result = result.tz_localize(tz)
800 elif isinstance(arg, ABCSeries):
--> 801 cache_array = _maybe_cache(arg, format, cache, convert_listlike)
802 if not cache_array.empty:
803 result = arg.map(cache_array)
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _maybe_cache(arg, format, cache, convert_listlike)
176 unique_dates = unique(arg)
177 if len(unique_dates) < len(arg):
--> 178 cache_dates = convert_listlike(unique_dates, format)
179 cache_array = Series(cache_dates, index=unique_dates)
180 return cache_array
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact)
458 return DatetimeIndex._simple_new(dta, name=name)
459 except (ValueError, TypeError):
--> 460 raise e
461
462 if result is None:
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact)
421 if result is None:
422 try:
--> 423 result, timezones = array_strptime(
424 arg, format, exact=exact, errors=errors
425 )
pandas\_libs\tslibs\strptime.pyx in pandas._libs.tslibs.strptime.array_strptime()
pandas\_libs\tslibs\strptime.pyx in pandas._libs.tslibs.strptime.array_strptime()
ValueError: 'h' is a bad directive in format '%d-%m-%y%h:%m:%s'
``` | closed | 2021-08-19T13:31:21Z | 2021-09-07T00:12:27Z | https://github.com/lux-org/lux/issues/412 | [] | conorwa | 1 |
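Editor's note: the bad-directive error above comes from the lowercase time codes. strptime-style directives are case-sensitive: `%H` is the hour, `%M` the minute, `%S` the second, while `%m` is the month. A stdlib-only sketch, assuming the column really follows a dd-mm-yy hh:mm:ss layout (the sample value is invented):

```python
from datetime import datetime

# %d-%m-%y parses the date part; %H:%M:%S (uppercase!) parses the time part.
# The same format string works in pd.to_datetime(..., format=...).
sample = "19-08-21 13:31:21"  # invented value in dd-mm-yy hh:mm:ss layout
parsed = datetime.strptime(sample, "%d-%m-%y %H:%M:%S")
print(parsed.isoformat())  # -> 2021-08-19T13:31:21
```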
encode/httpx | asyncio | 3,482 | http/2 not working correctly | Probably it is related to https://github.com/encode/httpx/discussions/3126, but when I follow the documentation and do:
```python
client = httpx.AsyncClient(http2=True, verify=False)
post_response = await client.post(post_url, headers=post_headers, cookies=post_cookies, data=post_data)
```
in the logs I see:
```
DEBUG [2025-01-25 14:02:16] httpx - load_ssl_context verify=False cert=None trust_env=True http2=False
```
So http2=False from the beginning; why? It was instructed to use http2=True. If I specify http1=False, http2=True, the connection is not made successfully. Curl and web browsers do HTTP/2 against the same server successfully. | closed | 2025-01-25T13:29:36Z | 2025-02-14T10:06:18Z | https://github.com/encode/httpx/issues/3482 | [] | ferror56 | 1 |
drivendataorg/cookiecutter-data-science | data-science | 419 | Clean up and update docs prior to 2.0.1 release | A few items for cleanup left over from v1, e.g., the instructions for deploying the docs are out of date. | closed | 2025-02-16T18:56:17Z | 2025-02-16T21:10:58Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/419 | [] | chrisjkuch | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,450 | AttributeError: module 'torchvision.transforms' has no attribute 'InterpolationMode' | System: Windows and Ubuntu
using Anaconda set up environment base on the `environment.yml`.
run CycleGAN train/test
```
Traceback (most recent call last):
File "D:/PycharmProjects/pytorch-CycleGAN-and-pix2pix/test.py", line 30, in <module>
from options.test_options import TestOptions
File "D:\PycharmProjects\pytorch-CycleGAN-and-pix2pix\options\test_options.py", line 1, in <module>
from .base_options import BaseOptions
File "D:\PycharmProjects\pytorch-CycleGAN-and-pix2pix\options\base_options.py", line 6, in <module>
import data
File "D:\PycharmProjects\pytorch-CycleGAN-and-pix2pix\data\__init__.py", line 15, in <module>
from data.base_dataset import BaseDataset
File "D:\PycharmProjects\pytorch-CycleGAN-and-pix2pix\data\base_dataset.py", line 81, in <module>
def get_transform(opt, params=None, grayscale=False, method=transforms.InterpolationMode.BICUBIC, convert=True):
AttributeError: module 'torchvision.transforms' has no attribute 'InterpolationMode'
```
| closed | 2022-07-07T21:39:38Z | 2024-03-03T12:11:50Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1450 | [] | jiatangz | 8 |
pytest-dev/pytest-django | pytest | 710 | Error with Django internal when using pytest >= 4.2 | Hello,
Since I updated pytest to 4.2, I've got errors inside Django internals for some tests.
These tests worked fine with 4.1 and still work fine with Django embedded unittests.
Here the error:
```
______________________________________________________________________________ UserPageCheckTest.test_get_view_user_by_organisation _______________________________________________________________________________
self = <drop.front.tests.test_pages.UserPageCheckTest testMethod=test_get_view_user_by_organisation>, url = <URLPattern 'user/<int:pk>/' [name='view_user']>, pk_required = True, login_required = True
def test_get_by_organisation(self, url=url, pk_required=pk_required, login_required=login_required):
> base_url = self.get(url, pk_required, login_required)
drop/front/tests/test_pages.py:69:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
drop/front/tests/test_pages.py:49: in get
self.client.force_login(self.user)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/test/client.py:611: in force_login
self._login(user, backend)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/test/client.py:624: in _login
login(request, user, backend)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/contrib/auth/__init__.py:132: in login
user_logged_in.send(sender=user.__class__, request=request, user=user)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/dispatch/dispatcher.py:175: in send
for receiver in self._live_receivers(sender)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/dispatch/dispatcher.py:175: in <listcomp>
for receiver in self._live_receivers(sender)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/contrib/auth/models.py:20: in update_last_login
user.save(update_fields=['last_login'])
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/contrib/auth/base_user.py:73: in save
super().save(*args, **kwargs)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/db/models/base.py:718: in save
force_update=force_update, update_fields=update_fields)
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/db/models/base.py:748: in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <User: admin>, raw = False, cls = <class 'drop.core.models.base.User'>, force_insert = False, force_update = False, using = 'default', update_fields = frozenset({'last_login'})
def _save_table(self, raw=False, cls=None, force_insert=False,
force_update=False, using=None, update_fields=None):
"""
Do the heavy-lifting involved in saving. Update or insert the data
for a single table.
"""
meta = cls._meta
non_pks = [f for f in meta.local_concrete_fields if not f.primary_key]
if update_fields:
non_pks = [f for f in non_pks
if f.name in update_fields or f.attname in update_fields]
pk_val = self._get_pk_val(meta)
if pk_val is None:
pk_val = meta.pk.get_pk_value_on_save(self)
setattr(self, meta.pk.attname, pk_val)
pk_set = pk_val is not None
if not pk_set and (force_update or update_fields):
raise ValueError("Cannot force an update in save() with no primary key.")
updated = False
# If possible, try an UPDATE. If that doesn't update anything, do an INSERT.
if pk_set and not force_insert:
base_qs = cls._base_manager.using(using)
values = [(f, None, (getattr(self, f.attname) if raw else f.pre_save(self, False)))
for f in non_pks]
forced_update = update_fields or force_update
updated = self._do_update(base_qs, using, pk_val, values, update_fields,
forced_update)
if force_update and not updated:
raise DatabaseError("Forced update did not affect any rows.")
if update_fields and not updated:
> raise DatabaseError("Save with update_fields did not affect any rows.")
E django.db.utils.DatabaseError: Save with update_fields did not affect any rows.
../../.pyenv/versions/drop/lib/python3.7/site-packages/django/db/models/base.py:816: DatabaseError
```
As you can see, the error occurs when updating the last login date inside Django, but it happens only for a few tests, which is very odd.
Here are my current requirements:
```
Django==2.1.7
Pillow==5.4.1
pytest==4.3.1
pytest-cov==2.6.1
pytest-django==3.4.8
pytest-env==0.6.2
```
Thanks for any help or insight provided. ;) | open | 2019-03-13T10:27:31Z | 2019-10-16T20:42:30Z | https://github.com/pytest-dev/pytest-django/issues/710 | [
"bug",
"needs-info"
] | debnet | 21 |
django-cms/django-cms | django | 7,909 | [BUG] AppHooked page does not have New draft button | <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
When an apphook is attached to a page, the page loses the *New draft* button. The workaround is to detach the apphook, edit the page settings (e.g. if you wish to change the page title), and attach the apphook again.
<!--
If this is a security issue stop immediately and follow the instructions at:
http://docs.django-cms.org/en/latest/contributing/development-policies.html#reporting-security-issues
-->
## Steps to reproduce
<!--
Clear steps describing how to reproduce the issue.
Steps to reproduce the behavior:
-->
1. Create a page
2. Attach an AppHook in Advanced settings
3. Publish the page
4. Try changing Page settings (you should go to the page and click on *New draft* button which is not there)
## Expected behaviour
One should be able to change page settings of an AppHooked page.
<!--
A clear and concise description of what you expected to happen.
-->
## Actual behaviour
Creating new draft to be able to edit page settings is impossible.
<!--
A clear and concise description of what is actually happening.
-->
## Screenshots
<!--If applicable, add screenshots to help explain your problem.
-->
<img width="208" alt="Screenshot 2024-05-05 at 13 28 47" src="https://github.com/django-cms/django-cms/assets/912266/f189e56b-083e-49c0-be53-5e85172c435b">
*Stvori* means *Create* in English.
## Additional information (CMS/Python/Django versions)
Django 4.2.11
Django CMS 4.1.1
Python 3.10.6
<!--
Add any other context about the problem such as environment,
CMS/Python/Django versions, logs etc. here.
-->
## Do you want to help fix this issue?
<!--
The django CMS project is managed and kept alive by its open source community and is backed by the [django CMS Association](https://www.django-cms.org/en/about-us/). We therefore welcome any help and are grateful if people contribute to the project. Please use 'x' to check the items below.
-->
* [x] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue.
| closed | 2024-05-05T11:30:09Z | 2024-05-07T15:17:42Z | https://github.com/django-cms/django-cms/issues/7909 | [] | aacimov | 7 |
raphaelvallat/pingouin | pandas | 207 | pingouin-stats.org unavailable | I'm experiencing some DNS issue with pingouin-stats.org. Please use the following temporary link to access the documentation:
https://s3.us-east-1.amazonaws.com/pingouin-stats.org/index.html | closed | 2021-10-28T23:13:52Z | 2021-10-30T16:35:27Z | https://github.com/raphaelvallat/pingouin/issues/207 | [
"bug :boom:",
"docs/testing :book:"
] | raphaelvallat | 2 |
PokeAPI/pokeapi | graphql | 470 | Search for a pokemon in another language | Hello there ! :)
For my application, I would need to do a search on the name of a Pokémon BUT in French.
Is there a way today to do this?
Otherwise, could it be added?
Thank you in advance ! :) | closed | 2020-01-17T01:34:57Z | 2020-02-17T03:31:26Z | https://github.com/PokeAPI/pokeapi/issues/470 | [] | MalronWall | 8 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 762 | AttributeError: 'FetchNode' object has no attribute 'update_state' | **Describe the bug**
In the file "scrapegraphai\nodes\fetch_node.py", neither the FetchNode class nor its parent class BaseNode defines a method named update_state(), although it is used in both FetchNode.handle_local_source() and FetchNode.handle_web_source().
Full error:
```
File "scrapegraphai\nodes\fetch_node.py", line 233, in handle_local_source
    return self.update_state(state, compressed_document)
           ^^^^^^^^^^^^^^^^^
AttributeError: 'FetchNode' object has no attribute 'update_state'
```
**Desktop (please complete the following information):**
- OS: Windows server 2022
- python library scrapegraphai
- Version 1.26.7
| closed | 2024-10-21T11:34:42Z | 2025-02-24T16:00:12Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/762 | [] | MahdiSepiRashidi | 11 |
DistrictDataLabs/yellowbrick | scikit-learn | 616 | Extend color selection on ResidualsPlot() to histogram | When running ResidualsPlot() visualizer, the `test_color` and `train_color` do not propagate down to the histogram. It would be nice if the histogram on the right reflected the color choices as well.
```python
visualizer = ResidualsPlot(regr_ols, test_color='#615d6c', train_color='#6F8AB7')
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
```
Results in a plot like this:

| closed | 2018-09-19T04:47:39Z | 2018-09-19T20:09:36Z | https://github.com/DistrictDataLabs/yellowbrick/issues/616 | [] | black-tea | 1 |
ycd/manage-fastapi | fastapi | 92 | Add Dockerfile support | closed | 2022-10-02T12:57:13Z | 2022-10-02T13:00:33Z | https://github.com/ycd/manage-fastapi/issues/92 | [] | Kludex | 1 | |
biolab/orange3 | data-visualization | 6,672 | Prediction widget crashes in at least two different ways |
**What's wrong?**
Linear regression on Mac crashes. I have no issues with classification or clustering. However, as soon as I connect Select Columns (1) to Predictions, the system crashes. I have forwarded the error messages too. I repeated exactly the same steps on my wife's Windows PC and it works perfectly. File is just columns with headers x, y, z and 12 instances. File (1), used for prediction, is a couple of instances taken from File, with ? used for the target. This way I made sure I had no transposition errors in the column headers. Furthermore, I selected the features to skip and keep for both File and File (1), as well as using Select Columns for both training and prediction. On Windows, using the same workflow and data, you get the results with no issues.
Orange 3.36.2
Apple M2 Pro 16 GB
Mac OS Sonoma 14.1.1
447 GB storage available
Thank you
By the way I love Orange
Regards
Houtan
**How can we reproduce the problem?**



**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
| open | 2023-12-11T12:08:17Z | 2023-12-15T08:15:37Z | https://github.com/biolab/orange3/issues/6672 | [
"bug",
"snack"
] | HoutanSadeghi | 1 |
wger-project/wger | django | 939 | Muscle overview missing | It seems the muscle overview in the workout page is missing:

| closed | 2022-01-04T15:13:45Z | 2022-01-05T09:56:48Z | https://github.com/wger-project/wger/issues/939 | [] | rolandgeider | 0 |
alirezamika/autoscraper | automation | 30 | build method with wanted_dict does not work. | Tested with autoscraper-1.1.6
When calling the build method with wanted_dict, the method misbehaves: it treats the searched string as an array of individual letters. The culprit is around l.204 of auto_scraper.py, where the data structure does not behave the same as it does with the wanted_list option.
Besides this, I find the work done so far super interesting and promising. Keep up the good work.
R. | closed | 2020-10-06T14:55:45Z | 2021-03-07T11:47:10Z | https://github.com/alirezamika/autoscraper/issues/30 | [] | romain-utelly | 3 |
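Editor's note: "treats the searched string as an array of individual letters" matches a classic Python gotcha: iterating over a bare string yields characters, while iterating over a list yields whole items. Whether that is the exact bug at line 204 would need checking against the source, but the mechanism is easy to show (the values are made up):

```python
wanted_list = ["Price: 19.99"]           # a list of strings
wanted_dict = {"price": "Price: 19.99"}  # values are bare strings

# Iterating a list of strings yields whole items...
list_items = [item for item in wanted_list]
# ...but iterating a bare string yields one character at a time.
dict_items = [item for item in wanted_dict["price"]]

print(list_items)      # -> ['Price: 19.99']
print(dict_items[:3])  # -> ['P', 'r', 'i']
```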
littlecodersh/ItChat | api | 294 | Is it possible to send messages to official accounts? | closed | 2017-03-22T05:27:49Z | 2017-03-22T07:48:37Z | https://github.com/littlecodersh/ItChat/issues/294 | [
"question"
] | 27hao | 1 | |
schemathesis/schemathesis | pytest | 2,089 | [BUG] ASCII can not be valid when generate test case with schemathesis. | ### Checklist
- [ ] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [ ] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [ ] I am using the latest version of Schemathesis
### Describe the bug
When I use Schemathesis to generate test cases, I use the settings below, which should avoid generating test cases with unrecognizable symbols coming from UTF-8 string generation.
generation_config=GenerationConfig(allow_x00=False, codec='ascii'),
But after I run it against my own schema, the generated test cases still contain unreadable characters, such as the text highlighted in the circled part of the screenshot below.

### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
1. Run this command 'run' in pycharm environment
2. See result above screenshot.
Clearly describe your expected outcome.
I would like the generated test cases not to contain characters that are difficult to understand.
### Environment
```
- OS: [e.g. Linux or Windows]
- Python version: [3.9.18]
- Schemathesis version: [3.25.4]
- Spec version: [e.g. Open API 3.0.2]
```
### Additional context
Below information is excerpted from help documentation.
Generating strings[](https://schemathesis.readthedocs.io/en/stable/data-generation.html#generating-strings)
In Schemathesis, you can control how strings are generated:
allow_x00 (default True): Determines whether to allow the generation of \x00 bytes within strings. It is useful to avoid rejecting tests as invalid by some web servers.
codec (default utf-8): Specifies the codec used for generating strings. It helps if you need to restrict the inputs to, for example, the ASCII range.
Global configuration[](https://schemathesis.readthedocs.io/en/stable/data-generation.html#global-configuration)
CLI:
$ st run --generation-allow-x00=false ...
$ st run --generation-codec=ascii ...
Python:
import schemathesis
from schemathesis import GenerationConfig
schema = schemathesis.from_uri(
"https://example.schemathesis.io/openapi.json",
generation_config=GenerationConfig(allow_x00=False, codec='ascii'),
)
This configuration sets the string generation to disallow \x00 bytes and use the ASCII codec for all strings.
| closed | 2024-03-05T07:44:32Z | 2024-04-08T19:40:42Z | https://github.com/schemathesis/schemathesis/issues/2089 | [
"Type: Bug"
] | jiejunsailor | 4 |
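Editor's note: one point worth separating out here is that `codec='ascii'` only restricts code points to 0-127, and that range still contains unprintable control characters, so ASCII-only generation can legitimately produce strings that look unreadable. A stdlib-only sketch:

```python
# ESC (\x1b) and BEL (\x07) are control characters *inside* the ASCII range.
noisy = "\x1b\x07abc"
encoded = noisy.encode("ascii")   # succeeds: every code point is < 128
print(encoded)                    # -> b'\x1b\x07abc'
print(noisy.isprintable())        # -> False: still unreadable on screen

# A genuinely non-ASCII character, by contrast, is rejected by the codec.
try:
    "\u0100".encode("ascii")
    rejected = False
except UnicodeEncodeError:
    rejected = True
print(rejected)  # -> True
```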
lundberg/respx | pytest | 200 | Incomplete `AllMockedAssertionError` description | Hello! Thank you for this useful library!
I've found that the `AllMockedAssertionError` message is incomplete. For example, the mismatched JSON in this snippet will make the test fail:
```python
import httpx
import pytest
@pytest.mark.asyncio
async def test_reproduce(respx_mock):
respx_mock.post("https://foo.bar/", json={"foo": "bar"}).mock(
return_value=httpx.Response(201)
)
async with httpx.AsyncClient() as client:
response = await client.post("https://foo.bar/", json={"baz": "quux"})
assert response.status_code == 201
```
But the error doesn't contain any information about the JSON difference:
```python
self = <respx.router.MockRouter object at 0x7f5a2778e860>
request = <Request('POST', 'https://foo.bar/')>
@contextmanager
def resolver(self, request: httpx.Request) -> Generator[ResolvedRoute, None, None]:
resolved = ResolvedRoute()
try:
yield resolved
if resolved.route is None:
# Assert we always get a route match, if check is enabled
if self._assert_all_mocked:
> raise AllMockedAssertionError(f"RESPX: {request!r} not mocked!")
E respx.models.AllMockedAssertionError: RESPX: <Request('POST', 'https://foo.bar/')> not mocked!
.venv/lib/python3.10/site-packages/respx/router.py:251: AllMockedAssertionError
```
| closed | 2022-02-05T11:46:24Z | 2022-05-24T11:36:04Z | https://github.com/lundberg/respx/issues/200 | [] | lxmnk | 2 |
svc-develop-team/so-vits-svc | pytorch | 97 | Found a small bug | The check on line 40 of resample.py triggered a long-standing Python limitation on my machine. The main problem is that my system has more than 60 CPU cores, which causes an error in the multiprocessing library.
It needs to be changed to the following code:
```
processs = 30 if cpu_count() > 60 else (cpu_count()-2 if cpu_count() > 4 else 1)
``` | closed | 2023-03-27T17:13:32Z | 2023-04-09T04:47:25Z | https://github.com/svc-develop-team/so-vits-svc/issues/97 | [
"not urgent"
] | tencentdosos | 1 |
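Editor's note: the proposed one-liner caps the worker count because Windows' multiprocessing layer can only wait on a limited number of handles (around 61), so naive `cpu_count()`-sized pools break on very-many-core machines. A pure-Python restating of the cap, using the threshold values from the snippet above:

```python
from multiprocessing import cpu_count

def resample_workers(n_cores):
    # Cap at 30 on very-many-core machines, leave 2 cores free on mid-sized
    # ones, and fall back to a single process on small machines.
    return 30 if n_cores > 60 else (n_cores - 2 if n_cores > 4 else 1)

for n in (2, 8, 64):
    print(n, "->", resample_workers(n))
# 2 -> 1, 8 -> 6, 64 -> 30

print("this machine:", resample_workers(cpu_count()))
```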
aimhubio/aim | data-visualization | 2,473 | Showing only one metric in the charts and no legend on metric names | ## 🐛 Bug
When I add multiple metrics by clicking the +metrics button and then click the search button, I can only see the charts of one metric on the webpage. Moreover, the legend does not show the metric name of each chart. But the table below the charts shows the selected metric names and their values.
### Expected behavior
Show each selected metric and label it in the legend.
### Environment
- Aim Version: 3.15.2
| closed | 2023-01-12T01:24:26Z | 2023-01-13T09:12:01Z | https://github.com/aimhubio/aim/issues/2473 | [
"type / bug",
"help wanted"
] | twni2016 | 2 |
huggingface/datasets | pytorch | 7,072 | nm | closed | 2024-07-25T17:03:24Z | 2024-07-25T20:36:11Z | https://github.com/huggingface/datasets/issues/7072 | [] | brettdavies | 0 | |
scikit-tda/kepler-mapper | data-visualization | 162 | Make keppler-mapper sparse compatible | I recently tried to use the Mapper algorithm for a graph with more than 1 million nodes. I, thus computed the adjacency matrix with the scipy sparse CSR format, which drastically reduces the memory usage.
However keppler-mapper does not currently work with sparse matrices.
I successfully modified a few lines and created a sparse-compatible version.
Would that be of interest to you for a pull request? | closed | 2019-04-04T15:43:56Z | 2019-04-09T07:49:16Z | https://github.com/scikit-tda/kepler-mapper/issues/162 | [] | retdop | 2 |