| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ShishirPatil/gorilla | api | 905 | [BFCL] When will it be possible to evaluate deepseek-r1? | When will it be possible to evaluate deepseek-r1? | closed | 2025-02-08T09:27:17Z | 2025-03-16T07:03:04Z | https://github.com/ShishirPatil/gorilla/issues/905 | [] | destinyyzy | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,699 | [Bug]: The WebUI fails to start | ### Checklist
- [X] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The WebUI suddenly fails to start without any error message. I need advice on how to diagnose the problem.
### Steps to reproduce the problem
run `webui-user.bat`
### What should have happened?
WebUI should start
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
The WebUI fails to start, running it with `--dump-sysinfo` does not do anything either.
### Console logs
```Shell
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --no-gradio-queue
2024-12-03 11:44:14.284059: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-12-03 11:44:15.187589: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
```
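Since the process exits silently, one generic way to capture whatever the launcher prints before dying is to tee its merged stdout/stderr into a log file. This is a hedged, generic sketch; the `webui-user.bat` invocation in the comment is illustrative, not part of the original report:

```python
import os
import subprocess
import tempfile

def run_and_capture(cmd: str) -> tuple[int, str]:
    """Run a shell command with stdout/stderr merged into a log file;
    return (exit code, captured output)."""
    fd, log_path = tempfile.mkstemp(suffix=".log")
    os.close(fd)
    with open(log_path, "w") as log:
        proc = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, shell=True)
    with open(log_path) as log:
        output = log.read()
    os.remove(log_path)
    return proc.returncode, output

# rc, out = run_and_capture("webui-user.bat")  # hypothetical usage on Windows
```

Even when a program prints nothing, a non-zero exit code in `rc` narrows down where it died.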
### Additional information
_No response_ | closed | 2024-12-03T09:17:46Z | 2024-12-03T11:06:26Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16699 | [
"bug-report"
] | Zueuk | 1 |
coqui-ai/TTS | python | 3,498 | [Bug] Windows Installation Guide not working at step #11 | ### Describe the bug
Following this guide:
https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system
I get an error at step 11, which asks me to run the following:
11. Run the following command (this differs from the command you get from [the PyTorch website](https://pytorch.org/get-started/locally/) because of [a known issue](https://github.com/pytorch/pytorch/issues/54172)):
.\Scripts\pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
The following error occurs:
```
PS D:\AI\TTS> pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.8.1+cu101 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu113, 1.11.0+cu115, 1.12.0, 1.12.0+cpu, 1.12.0+cu113, 1.12.0+cu116, 1.12.1, 1.12.1+cpu, 1.12.1+cu113, 1.12.1+cu116, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 2.0.0, 2.0.0+cpu, 2.0.0+cu117, 2.0.0+cu118, 2.0.1, 2.0.1+cpu, 2.0.1+cu117, 2.0.1+cu118, 2.1.0, 2.1.0+cpu, 2.1.0+cu118, 2.1.0+cu121, 2.1.1, 2.1.1+cpu, 2.1.1+cu118, 2.1.1+cu121, 2.1.2, 2.1.2+cpu, 2.1.2+cu118, 2.1.2+cu121)
ERROR: No matching distribution found for torch==1.8.1+cu101
```
What should I do?
### To Reproduce
Run `.\Scripts\pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
using what is recommended in https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system
```
### Additional context
_No response_ | closed | 2024-01-06T16:23:37Z | 2024-02-18T08:58:20Z | https://github.com/coqui-ai/TTS/issues/3498 | [
"bug",
"wontfix"
] | Nikanoru | 2 |
Gozargah/Marzban | api | 665 | 🍆 TED nane jende, kiram to ghabre madare jendat | Hi,
As there is a TELEGRAM_DEFAULT_VLESS_FLOW setting,
please add DASHBOARD_DEFAULT_VLESS_FLOW as well,
at least until the ability to set a custom VLESS flow for each inbound is added.
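For illustration, the requested setting would sit next to the existing one in the environment configuration. The variable name follows the requester's suggestion, and the value shown is only an assumption:

```
TELEGRAM_DEFAULT_VLESS_FLOW=xtls-rprx-vision
DASHBOARD_DEFAULT_VLESS_FLOW=xtls-rprx-vision
```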
Thanks. | closed | 2023-11-30T14:25:38Z | 2025-01-21T19:49:35Z | https://github.com/Gozargah/Marzban/issues/665 | [
"Feature"
] | APT-ZERO | 1 |
freqtrade/freqtrade | python | 10,686 | Why only can not short in spot mode? | <!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: Mac OS 15
* Python Version: 3.12.2 (`python -V`)
* CCXT version: 4.4.5 (`pip freeze | grep ccxt`)
* Freqtrade Version: 2024.9-dev-3bbc6cbab (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
As your description below (https://www.freqtrade.io/en/stable/leverage/#shorting)
> Shorting is not possible when trading with [trading_mode](https://www.freqtrade.io/en/stable/leverage/#leverage-trading-modes) set to spot. To short trade, trading_mode must be set to margin(currently unavailable) or [futures](https://www.freqtrade.io/en/stable/leverage/#futures), with [margin_mode](https://www.freqtrade.io/en/stable/leverage/#margin-mode) set to cross(currently unavailable) or [isolated](https://www.freqtrade.io/en/stable/leverage/#isolated-margin-mode)
I am just confused about why only can't short in spot mode? and for now, it can't support futures in Kraken but it also can't short in spot mode, that's means I can only long in Kraken, right?
| closed | 2024-09-22T00:59:03Z | 2024-09-22T12:43:13Z | https://github.com/freqtrade/freqtrade/issues/10686 | [
"Question",
"Non-spot"
] | winsonet | 2 |
K3D-tools/K3D-jupyter | jupyter | 450 | Snapshot HTML produces invalid html file with Jupyter Notebook version 7 |
* K3D version: 2.15.3
* Python version: 2.15.3
* Operating System: MS Windows 11
### Description
The Snapshot HTML produces an invalid file when using Jupyter Notebook Version: 7.1.1
The same code produces a valid file when using Jupyter Notebook version 6.5.4 (that file is significantly larger).
### Example files:
Run with Jupyter Notebook version 7.1.1 and click the Snapshot HTML control. The file that is produced does not open in Firefox or MS Edge. (The file produced with the earlier version of Jupyter Notebook is fine.)
I've attached the files produced from notebook 6.5.4 and 7.1.1:
[K3D-snapshot-v711.zip](https://github.com/K3D-tools/K3D-jupyter/files/14451531/K3D-snapshot-v711.zip)
[K3D-snapshot-v654.zip](https://github.com/K3D-tools/K3D-jupyter/files/14451533/K3D-snapshot-v654.zip)
| open | 2024-02-29T17:38:19Z | 2024-07-03T14:11:18Z | https://github.com/K3D-tools/K3D-jupyter/issues/450 | [
"Next release"
] | deankarlen | 0 |
jazzband/django-oauth-toolkit | django | 1,493 | Are there any open-source projects developed using django-oauth-toolkit? | <!-- What is your question? -->
| open | 2024-09-10T03:20:53Z | 2024-09-26T03:30:47Z | https://github.com/jazzband/django-oauth-toolkit/issues/1493 | [
"question"
] | dusens | 1 |
plotly/dash | dash | 2,862 | [BUG] callback dataflow display flickers at certain sizes |
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.17.0
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash_design_kit 1.10.2
dash-html-components 2.0.0
dash_mantine_components 0.14.3
dash-table 5.0.0
dash-testing-stub 0.0.2
```
- if frontend related, tell us your Browser, Version and OS
- MacOS Sonoma 14.4.1
- Chrome 124.0.6367.158
**Describe the bug**
- Running the World Bank example from Chapter 5 of https://github.com/DashBookProject/plotly-dash
- Opened the dataflow graph showing connections between callbacks
- Display flickered: appears that scrollbars are appearing and disappearing repeatedly.
**Expected behavior**
No flickering.
**Screenshots**
Movie attached
https://github.com/plotly/dash/assets/911566/36f8962e-1722-484b-ad46-764d5ce0f319
| closed | 2024-05-14T18:26:37Z | 2024-05-15T14:59:34Z | https://github.com/plotly/dash/issues/2862 | [
"bug"
] | gvwilson | 6 |
flairNLP/flair | pytorch | 3,113 | [Question]: Prepare test data for text classification | ### Question
Hi,
I would like to train a TextClassifier and have created tab-separated CSV files with the columns "text" and "label". I use CSVClassificationCorpus when building the corpus for training. After training I get the message "ACHTUNG! No gold labels and no all_predicted_values found! Could be an error in your corpus or how you initialize the trainer!" Now I wonder how the test data must be prepared; unfortunately, I can find nothing about this in the documentation.
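As a sketch of how the splits can be written, the helper below produces tab-separated files with a header. The assumption that `train.csv`, `dev.csv`, and `test.csv` must all live in the same folder with the same column layout is mine, based on how `CSVClassificationCorpus` maps columns; verify it against the Flair docs:

```python
import csv
import os

def write_split(path, rows):
    """Write (text, label) rows as a tab-separated file with a header row."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["text", "label"])
        writer.writerows(rows)

def write_corpus(folder, train, dev, test):
    """Write the three splits CSVClassificationCorpus is assumed to expect."""
    os.makedirs(folder, exist_ok=True)
    write_split(os.path.join(folder, "train.csv"), train)
    write_split(os.path.join(folder, "dev.csv"), dev)
    write_split(os.path.join(folder, "test.csv"), test)
```

With this layout, the corpus would presumably be built with `CSVClassificationCorpus(folder, {0: "text", 1: "label"}, delimiter="\t", skip_header=True)`; again, check the column map against the documentation.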
| closed | 2023-02-17T12:02:47Z | 2023-04-24T11:53:56Z | https://github.com/flairNLP/flair/issues/3113 | [
"question"
] | NDTanja | 4 |
pydantic/logfire | pydantic | 335 | `test_logfire_api` doesn't look at `logfire_api.__all__` | It just uses `logfire.__all__` in both versions of `test_runtime`. `logfire_api.__all__` doesn't actually exist at runtime when logfire isn't importable. | open | 2024-07-24T17:27:27Z | 2024-12-27T08:16:32Z | https://github.com/pydantic/logfire/issues/335 | [
"good first issue",
"Logfire API"
] | alexmojaki | 4 |
ContextLab/hypertools | data-visualization | 97 | allow legend to be computed from group kwarg | Currently, legends are set explicitly by passing a list to the `plot` function. However, they could also be set automatically by passing a `group` list to the `legend` kwarg
e.g.
`hyp.plot(data, group=labels, legend=labels)` | closed | 2017-04-18T14:55:28Z | 2017-06-02T16:51:19Z | https://github.com/ContextLab/hypertools/issues/97 | [
"enhancement",
"low priority",
"mozilla sprint",
"easy(ish)"
] | andrewheusser | 4 |
kizniche/Mycodo | automation | 809 | Enhancement - allow changing the default pi user | Leaving the default `pi` user enabled is a security risk; Mycodo should support removing or replacing it. | closed | 2020-08-08T02:18:12Z | 2020-08-08T21:19:02Z | https://github.com/kizniche/Mycodo/issues/809 | [] | stardawg | 3 |
wandb/wandb | tensorflow | 9,563 | [Bug-App]: Plotly figure rendered empty | ### Describe the bug
<!--- Describe your issue here --->
I am trying to log a simple plotly bar plot to wandb like shown in the minimal example below. Similar to https://github.com/wandb/wandb/issues/2191, the plot is rendered correctly when logged as an HTML but shows up empty when logged as plotly.
```python
import numpy as np
import plotly
import plotly.graph_objects as go

import wandb

if __name__ == "__main__":
    wandb.init(project=..., entity=..., job_type=...)

    bins = np.arange(4) / 2
    counts = np.arange(4) + 1
    fig = go.Figure(data=[go.Bar(
        x=bins,
        y=counts,
    )])

    wandb.log({"test_bar": wandb.Plotly(fig),
               "test_bar_html": wandb.Html(plotly.io.to_html(fig))})
```

I am not sure if the underlying issue is the same as in https://github.com/wandb/wandb/issues/2191. If this is the case and there really is nothing to be done on the wandb side, I think that at the very least, the documentation at https://docs.wandb.ai/guides/track/log/plots/#matplotlib-and-plotly-plots should contain a comment indicating that there have been known issues with this functionality for what is now almost 4 years. It would also be good to have some guidance on what exactly the issue identified in https://github.com/wandb/wandb/issues/2191 was and how to determine if one is facing the same issue.
Additionally, it would be good to know whether there is a workaround e.g. by pinning plotly to a specific version. Note that logging to HTML can easily lead to a 1000-fold increase in the size of the logged files and is not a satisfactory solution (nor is logging as image, losing the interactive functionality).
Relevant installed versions might be `wandb==0.19.3` and `plotly==6.0.0`.
<details>
<summary>.plotly.json logged to media/plotly</summary>
````json
{
"data": [
{
"x": {
"dtype": "f8",
"bdata": "AAAAAAAAAAAAAAAAAADgPwAAAAAAAPA/AAAAAAAA+D8="
},
"y": {
"dtype": "i4",
"bdata": "AQAAAAIAAAADAAAABAAAAA=="
},
"type": "bar"
}
],
"layout": {
"template": {
"data": {
"histogram2dcontour": [
{
"type": "histogram2dcontour",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
},
"colorscale": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
]
}
],
"choropleth": [
{
"type": "choropleth",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
],
"histogram2d": [
{
"type": "histogram2d",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
},
"colorscale": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
]
}
],
"heatmap": [
{
"type": "heatmap",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
},
"colorscale": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
]
}
],
"contourcarpet": [
{
"type": "contourcarpet",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
],
"contour": [
{
"type": "contour",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
},
"colorscale": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
]
}
],
"surface": [
{
"type": "surface",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
},
"colorscale": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
]
}
],
"mesh3d": [
{
"type": "mesh3d",
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
],
"scatter": [
{
"fillpattern": {
"fillmode": "overlay",
"size": 10,
"solidity": 0.2
},
"type": "scatter"
}
],
"parcoords": [
{
"type": "parcoords",
"line": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scatterpolargl": [
{
"type": "scatterpolargl",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"bar": [
{
"error_x": {
"color": "#2a3f5f"
},
"error_y": {
"color": "#2a3f5f"
},
"marker": {
"line": {
"color": "#E5ECF6",
"width": 0.5
},
"pattern": {
"fillmode": "overlay",
"size": 10,
"solidity": 0.2
}
},
"type": "bar"
}
],
"scattergeo": [
{
"type": "scattergeo",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scatterpolar": [
{
"type": "scatterpolar",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"histogram": [
{
"marker": {
"pattern": {
"fillmode": "overlay",
"size": 10,
"solidity": 0.2
}
},
"type": "histogram"
}
],
"scattergl": [
{
"type": "scattergl",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scatter3d": [
{
"type": "scatter3d",
"line": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
},
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scattermap": [
{
"type": "scattermap",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scattermapbox": [
{
"type": "scattermapbox",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scatterternary": [
{
"type": "scatterternary",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"scattercarpet": [
{
"type": "scattercarpet",
"marker": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
}
}
],
"carpet": [
{
"aaxis": {
"endlinecolor": "#2a3f5f",
"gridcolor": "white",
"linecolor": "white",
"minorgridcolor": "white",
"startlinecolor": "#2a3f5f"
},
"baxis": {
"endlinecolor": "#2a3f5f",
"gridcolor": "white",
"linecolor": "white",
"minorgridcolor": "white",
"startlinecolor": "#2a3f5f"
},
"type": "carpet"
}
],
"table": [
{
"cells": {
"fill": {
"color": "#EBF0F8"
},
"line": {
"color": "white"
}
},
"header": {
"fill": {
"color": "#C8D4E3"
},
"line": {
"color": "white"
}
},
"type": "table"
}
],
"barpolar": [
{
"marker": {
"line": {
"color": "#E5ECF6",
"width": 0.5
},
"pattern": {
"fillmode": "overlay",
"size": 10,
"solidity": 0.2
}
},
"type": "barpolar"
}
],
"pie": [
{
"automargin": true,
"type": "pie"
}
]
},
"layout": {
"autotypenumbers": "strict",
"colorway": [
"#636efa",
"#EF553B",
"#00cc96",
"#ab63fa",
"#FFA15A",
"#19d3f3",
"#FF6692",
"#B6E880",
"#FF97FF",
"#FECB52"
],
"font": {
"color": "#2a3f5f"
},
"hovermode": "closest",
"hoverlabel": {
"align": "left"
},
"paper_bgcolor": "white",
"plot_bgcolor": "#E5ECF6",
"polar": {
"bgcolor": "#E5ECF6",
"angularaxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": ""
},
"radialaxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": ""
}
},
"ternary": {
"bgcolor": "#E5ECF6",
"aaxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": ""
},
"baxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": ""
},
"caxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": ""
}
},
"coloraxis": {
"colorbar": {
"outlinewidth": 0,
"ticks": ""
}
},
"colorscale": {
"sequential": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
],
"sequentialminus": [
[
0,
"#0d0887"
],
[
0.1111111111111111,
"#46039f"
],
[
0.2222222222222222,
"#7201a8"
],
[
0.3333333333333333,
"#9c179e"
],
[
0.4444444444444444,
"#bd3786"
],
[
0.5555555555555556,
"#d8576b"
],
[
0.6666666666666666,
"#ed7953"
],
[
0.7777777777777778,
"#fb9f3a"
],
[
0.8888888888888888,
"#fdca26"
],
[
1,
"#f0f921"
]
],
"diverging": [
[
0,
"#8e0152"
],
[
0.1,
"#c51b7d"
],
[
0.2,
"#de77ae"
],
[
0.3,
"#f1b6da"
],
[
0.4,
"#fde0ef"
],
[
0.5,
"#f7f7f7"
],
[
0.6,
"#e6f5d0"
],
[
0.7,
"#b8e186"
],
[
0.8,
"#7fbc41"
],
[
0.9,
"#4d9221"
],
[
1,
"#276419"
]
]
},
"xaxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": "",
"title": {
"standoff": 15
},
"zerolinecolor": "white",
"automargin": true,
"zerolinewidth": 2
},
"yaxis": {
"gridcolor": "white",
"linecolor": "white",
"ticks": "",
"title": {
"standoff": 15
},
"zerolinecolor": "white",
"automargin": true,
"zerolinewidth": 2
},
"scene": {
"xaxis": {
"backgroundcolor": "#E5ECF6",
"gridcolor": "white",
"linecolor": "white",
"showbackground": true,
"ticks": "",
"zerolinecolor": "white",
"gridwidth": 2
},
"yaxis": {
"backgroundcolor": "#E5ECF6",
"gridcolor": "white",
"linecolor": "white",
"showbackground": true,
"ticks": "",
"zerolinecolor": "white",
"gridwidth": 2
},
"zaxis": {
"backgroundcolor": "#E5ECF6",
"gridcolor": "white",
"linecolor": "white",
"showbackground": true,
"ticks": "",
"zerolinecolor": "white",
"gridwidth": 2
}
},
"shapedefaults": {
"line": {
"color": "#2a3f5f"
}
},
"annotationdefaults": {
"arrowcolor": "#2a3f5f",
"arrowhead": 0,
"arrowwidth": 1
},
"geo": {
"bgcolor": "white",
"landcolor": "#E5ECF6",
"subunitcolor": "white",
"showland": true,
"showlakes": true,
"lakecolor": "white"
},
"title": {
"x": 0.05
},
"mapbox": {
"style": "light"
}
}
}
}
}
```
</details> | open | 2025-03-06T00:04:04Z | 2025-03-17T11:05:46Z | https://github.com/wandb/wandb/issues/9563 | [
"ty:bug",
"a:app"
] | jonasjuerss | 5 |
tflearn/tflearn | data-science | 585 | model.load file name needs to start with ./ | It appears that when loading a model file from the current dir we must prepend "./" to the file name:
```python
model.load("./marker-classifier.tfl")
```
The documentation needs to mention this.
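Until the docs mention it, a tiny guard can make the `./` prefix explicit. This helper is a hypothetical workaround, not part of tflearn:

```python
import os

def explicit_relative(path: str) -> str:
    """Prepend './' to bare relative filenames so the loader resolves them
    from the current directory; absolute and already-prefixed paths pass through."""
    if os.path.isabs(path) or path.startswith(("./", "../")):
        return path
    return "./" + path

# model.load(explicit_relative("marker-classifier.tfl"))  # hypothetical usage
```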
Reference: http://stackoverflow.com/a/41325679/1036017 | open | 2017-02-02T22:51:32Z | 2017-02-10T05:59:52Z | https://github.com/tflearn/tflearn/issues/585 | [] | bibhas2 | 1 |
apify/crawlee-python | automation | 560 | Unable to execute POST request with JSON payload | Example
```python
async def main() -> None:
    crawler = HttpCrawler()

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: HttpCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')
        response = context.http_response.read().decode('utf-8')
        context.log.info(f'Response: {response}')  # To see the response in the logs.

    # Prepare a POST request to the form endpoint.
    request = Request.from_url(
        url='https://httpbin.org/post',
        method='POST',
        headers={"content-type": "application/json"},
        data={
            'custname': 'John Doe',
            'custtel': '1234567890',
            'custemail': 'johndoe@example.com',
            'size': 'large',
            'topping': ['bacon', 'cheese', 'mushroom'],
            'delivery': '13:00',
            'comments': 'Please ring the doorbell upon arrival.',
        },
    )

    await crawler.run([request])
```
Current response format
```json
{
"args": {},
"data": "custname=John+Doe&custtel=1234567890&custemail=johndoe%40example.com&size=large&topping=bacon&topping=cheese&topping=mushroom&delivery=13%3A00&comments=Please+ring+the+doorbell+upon+arrival.",
"files": {},
"form": {},
"headers": {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.9",
"Content-Length": "190",
"Content-Type": "application/json",
"Host": "httpbin.org",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"X-Amzn-Trace-Id": "Root=1-66fc90cf-11644d4f5483e1096211b721"
},
"json": null,
"origin": "91.240.96.149",
"url": "https://httpbin.org/post"
}
```
Expected response format
```json
{
"args": {},
"data": "{\"custname\": \"John Doe\", \"custtel\": \"1234567890\", \"custemail\": \"johndoe@example.com\", \"size\": \"large\", \"topping\": [\"bacon\", \"cheese\", \"mushroom\"], \"delivery\": \"13:00\", \"comments\": \"Please ring the doorbell upon arrival.\"}",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate, br",
"Content-Length": "221",
"Content-Type": "application/json",
"Host": "httpbin.org",
"User-Agent": "python-httpx/0.27.2",
"X-Amzn-Trace-Id": "Root=1-66fc91c2-6db9989347fef25b150615e2"
},
"json": {
"comments": "Please ring the doorbell upon arrival.",
"custemail": "johndoe@example.com",
"custname": "John Doe",
"custtel": "1234567890",
"delivery": "13:00",
"size": "large",
"topping": [
"bacon",
"cheese",
"mushroom"
]
},
"origin": "91.240.96.149",
"url": "https://httpbin.org/post"
}
```
Both HTTPX and curl_impersonate allow creating a POST request with a JSON payload in two ways:
```python
data = {
    'custname': 'John Doe',
    'custtel': '1234567890',
    'custemail': 'johndoe@example.com',
    'size': 'large',
    'topping': ['bacon', 'cheese', 'mushroom'],
    'delivery': '13:00',
    'comments': 'Please ring the doorbell upon arrival.',
}

response = httpx.post(url, json=data)
response = httpx.post(url, data=json.dumps(data))
```
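Until `Request.from_url` accepts a `json` parameter or string `data`, the payload has to be pre-serialized by hand. A minimal, pure-stdlib sketch of the serialization step that is currently missing (the returned field names are illustrative, not Crawlee's API):

```python
import json

def json_post_fields(payload: dict) -> dict:
    """Serialize a dict the way `httpx.post(url, json=...)` would:
    UTF-8 JSON bytes plus matching content-type and content-length headers."""
    body = json.dumps(payload).encode("utf-8")
    return {
        "method": "POST",
        "headers": {
            "content-type": "application/json",
            "content-length": str(len(body)),
        },
        "body": body,
    }
```

Whether Crawlee's `Request` can carry these raw bytes is exactly what this issue asks for; the sketch only shows the serialization that is being skipped today.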
But we can't reproduce this behavior in Crawlee, because the [`json` parameter is not passed when the request is created](https://github.com/apify/crawlee-python/blob/master/src/crawlee/http_clients/_httpx.py#L177) and the [`data` parameter cannot be a string](https://github.com/apify/crawlee-python/blob/master/src/crawlee/_request.py#L130) | closed | 2024-10-02T00:48:27Z | 2024-10-24T11:44:12Z | https://github.com/apify/crawlee-python/issues/560 | [
"bug",
"t-tooling"
] | Mantisus | 2 |
pyro-ppl/numpyro | numpy | 1,191 | Clarify that `sample(..., obs_mask)` is only used for SVI | to avoid confusion as in this [forum thread](https://forum.pyro.ai/t/bayesian-imputation/3443). | closed | 2021-10-14T01:48:19Z | 2021-10-26T01:30:05Z | https://github.com/pyro-ppl/numpyro/issues/1191 | [
"documentation"
] | fehiepsi | 0 |
KaiyangZhou/deep-person-reid | computer-vision | 409 | Huge memory (RAM) consumption for evaluation stage | Please change `features = features.data.cpu()` to `features = features.cpu().clone()` in https://github.com/KaiyangZhou/deep-person-reid/blob/28430ec309d03908b757dc8610325c039dbaebbe/torchreid/engine/engine.py#L370 to avoid huge memory consumption during the evaluation stage (to see the difference, just evaluate MSMT17).
This is possibly the reason for the process being killed in https://github.com/KaiyangZhou/deep-person-reid/issues/104#issuecomment-538213289
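One plausible reason the `.clone()` matters: a tensor view keeps the whole underlying storage alive, while cloning copies only the slice. The same effect can be shown with plain Python buffers; this is an analogy for the assumed mechanism, not the torchreid code itself:

```python
big = bytearray(10_000_000)   # stands in for a large per-batch feature buffer
view = memoryview(big)[:10]   # a 10-byte view that pins the full 10 MB allocation
copy = bytes(view)            # an independent 10-byte copy

view.release()
del big                       # now only `copy` keeps any memory alive
```

Accumulating views (or tensors sharing storage) across all test batches would keep every batch buffer alive until evaluation ends, which matches the runaway RAM usage reported here.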
env
```
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce RTX 2080
Nvidia driver version: 440.118.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.3.2
[pip3] numpy==1.18.1
[pip3] pytorch-transformers==1.1.0
[pip3] torch==1.5.0a0+8f84ded
[pip3] torchreid==1.3.3
[pip3] torchtext==0.4.0
[pip3] torchvision==0.6.0a0
[conda] magma-cuda101 2.5.1 1 local
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] nomkl 3.0 0
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] torch 1.5.0a0+8f84ded pypi_0 pypi
[conda] torchtext 0.4.0 pypi_0 pypi
[conda] torchvision 0.6.0a0 pypi_0 pypi
Pillow (8.1.0)
```
| closed | 2021-01-22T06:01:59Z | 2021-01-22T09:07:31Z | https://github.com/KaiyangZhou/deep-person-reid/issues/409 | [] | denisvmedyantsev | 1 |
Nemo2011/bilibili-api | api | 180 | [Request] The API docs are missing the most basic HTTP request methods... | Suddenly wondering: why do some entries not indicate whether they are GET or POST, etc.?
~~Stunned; the task is quite large, but it is important, right?~~
Only a portion are affected; they are being filled in now. It looks like they were simply forgotten. | closed | 2023-02-02T08:26:39Z | 2023-02-02T09:26:39Z | https://github.com/Nemo2011/bilibili-api/issues/180 | [
"need"
] | z0z0r4 | 1 |
blacklanternsecurity/bbot | automation | 1,841 | After installing on Windows: error "Failed to set new ulimit: No module named 'resource'" | bbot.exe -t honda.com
______ _____ ____ _______
| ___ \| __ \ / __ \__ __|
| |___) | |__) | | | | | |
| ___ <| __ <| | | | | |
| |___) | |__) | |__| | | |
|______/|_____/ \____/ |_|
BIGHUGE BLS OSINT TOOL v2.0.1
www.blacklanternsecurity.com/bbot
Failed to set new ulimit: No module named 'resource'
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\users\administrator\.local\bin\bbot.exe\__main__.py", line 7, in <module>
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\cli.py", line 272, in main
asyncio.run(_main())
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\cli.py", line 168, in _main
scan.helpers.word_cloud.load()
^^^^^^^^^^^^
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\scanner\scanner.py", line 861, in helpers
return self.preset.helpers
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\scanner\preset\preset.py", line 560, in helpers
from bbot.core.helpers.helper import ConfigAwareHelper
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\core\helpers\helper.py", line 16, in <module>
from .depsinstaller import DepsInstaller
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\core\helpers\depsinstaller\__init__.py", line 1, in <module>
from .installer import DepsInstaller
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\bbot\core\helpers\depsinstaller\installer.py", line 13, in <module>
from ansible_runner.interface import run
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\ansible_runner\__init__.py", line 1, in <module>
from .utils.importlib_compat import importlib_metadata
File "C:\Users\Administrator\pipx\venvs\bbot\Lib\site-packages\ansible_runner\utils\__init__.py", line 9, in <module>
import fcntl
ModuleNotFoundError: No module named 'fcntl'
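Both `resource` (behind the ulimit warning) and `fcntl` (pulled in via `ansible_runner`) are POSIX-only stdlib modules that simply do not exist on Windows. The usual cross-platform pattern is an import guard like the sketch below; bbot's actual code may differ, and the function name here is an assumption:

```python
try:
    import resource  # POSIX-only; absent on Windows
except ImportError:
    resource = None

HAVE_RLIMIT = resource is not None

def try_raise_nofile(target: int) -> bool:
    """Best-effort bump of the open-files soft limit; a no-op where unsupported."""
    if not HAVE_RLIMIT:
        return False
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, new_soft), hard))
    except (ValueError, OSError):
        return False
    return True
```

With a guard like this, Windows installs would skip the ulimit step instead of crashing at import time.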
| closed | 2024-10-11T21:38:30Z | 2024-10-11T22:52:58Z | https://github.com/blacklanternsecurity/bbot/issues/1841 | [] | jinji9630 | 1 |
miguelgrinberg/Flask-Migrate | flask | 151 | Migrate model with multi-column UniqueConstraint | Hi, I'm trying to create a migration for the following model:
```
class Function(db.Model):
__tablename__ = 'function'
__table_args__ = tuple(UniqueConstraint('name', 'namespace', 'revision',
name='name_namespace_revision_unique_constraint'))
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(254), nullable=False)
code = db.Column(db.Text, nullable=False)
namespace = db.Column(db.String(254), default="all", nullable=False)
revision = db.Column(db.String(65), nullable=False)
created_at = db.Column(db.Date, default=_get_date)
updated_at = db.Column(db.Date, onupdate=_get_date)
def __init__(self, name, namespace, code, revision):
self.name = name
self.namespace = namespace
self.code = code
self.revision = revision
```
Flask-Migrate is generating the following migration for this model:
```
"""initial migration
Revision ID: 0485d7255905
Revises:
Create Date: 2017-03-01 17:16:07.538631
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '0485d7255905'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('function',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=254), nullable=False),
sa.Column('code', sa.Text(), nullable=False),
sa.Column('namespace', sa.String(length=254), nullable=False),
sa.Column('revision', sa.String(length=65), nullable=False),
sa.Column('created_at', sa.Date(), nullable=True),
sa.Column('updated_at', sa.Date(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('function')
# ### end Alembic commands ###
```
Note that the UniqueConstraint `name_namespace_revision_unique_constraint` is not being generated in the migration.
Additional info:
I'm using Flask-SQLAlchemy and python 3.6
If I manually add the uniqueconstraint creation in the migration, the next time I do ```db migrate``` a migration deleting this unique constraint is generated.
Any hints?
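For reference, a hedged sketch of the form that normally attaches the constraint: `__table_args__` as a literal tuple *containing* the `UniqueConstraint` (note the trailing comma), rather than `tuple(...)` called on it. Plain SQLAlchemy is used below so the sketch runs standalone; whether this is the root cause of the missing migration here is an assumption:

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Function(Base):
    __tablename__ = 'function'
    # A one-element tuple literal containing the constraint:
    __table_args__ = (
        UniqueConstraint('name', 'namespace', 'revision',
                         name='name_namespace_revision_unique_constraint'),
    )
    id = Column(Integer, primary_key=True)
    name = Column(String(254), nullable=False)
    namespace = Column(String(254), nullable=False)
    revision = Column(String(65), nullable=False)

# The constraint is now attached to the table, so autogenerate can see it:
names = {c.name for c in Function.__table__.constraints}
print('name_namespace_revision_unique_constraint' in names)  # -> True
```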
| closed | 2017-03-01T20:25:57Z | 2019-08-16T19:02:04Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/151 | [
"question"
] | felipejfc | 6 |
LibreTranslate/LibreTranslate | api | 719 | I am a simple man: I ctrl+f "Requirements" or "self-host" and find 0 information in your Readme.md | You say we can self-host, but the Readme.md needs to be updated to mention, in a reasonably easy-to-find fashion:
- what do I click to see steps on _how_ to self-host (as opposed to using the $29 / month portal.libretranslate.com api key).
- before I decide to start: what are roughly the (minimum) system requirements of using the language model on my computer?
Please and thank you.
PS: Because this is the internet, pardon me but I must say: don't tell me "it's somewhere in the docs lol" - imagine being an app/game on a store without this info, how that would look to hu-mans with lives. | closed | 2025-01-03T16:57:39Z | 2025-01-04T16:14:45Z | https://github.com/LibreTranslate/LibreTranslate/issues/719 | [
"enhancement"
] | tdbe | 2 |
plotly/dash | data-science | 2,992 | dcc.Graph rendering goes into infinite error loop when None is returned for Figure | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.18.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: MacOS/Linux/Windows
- Browser Chrome
- Version 128
**Describe the bug**
When running the below script (which has a bug: `graph_selections` returns `None` instead of a `Figure`) with python3, and displaying in the Chrome browser, the browser tab seems to lock up. If the developer tools are open, one can see the error count rapidly rising with the errors in the screenshot below being repeated over and over again in a tight loop.
```python
from dash import Dash, html, dcc, callback, Output, Input

app = Dash()

app.layout = html.Div([
    html.Button('RUN', id='run-btn', n_clicks=0),
    dcc.Graph(id='graph-container')
])


@callback(
    Output('graph-container', 'figure'),
    Input('run-btn', 'n_clicks'),
)
def graph_selections(n_clicks):
    print(n_clicks)


if __name__ == "__main__":
    app.run(port=8050, host='0.0.0.0', debug=True)
```
**Expected behavior**
An error message in the browser, describing the bad return from the callback.
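For reference, a hedged sketch of a fix, assuming the intent is simply that the callback must never return `None`: `dcc.Graph` also accepts a plain dict with `data`/`layout` keys as its `figure` value, so the corrected callback can be exercised without even importing plotly:

```python
def graph_selections(n_clicks):
    # Always hand dcc.Graph a valid figure value; a plain dict with
    # "data"/"layout" keys is accepted, so no plotly import is needed here.
    if not n_clicks:
        return {"data": [], "layout": {"title": "Click RUN"}}
    return {
        "data": [{"type": "scatter", "y": list(range(n_clicks))}],
        "layout": {"title": f"Run #{n_clicks}"},
    }

print(graph_selections(0)["layout"]["title"])  # -> Click RUN
print(graph_selections(3)["data"][0]["y"])     # -> [0, 1, 2]
```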
**Screenshots**
<img width="1230" alt="Screenshot 2024-09-09 at 12 28 28" src="https://github.com/user-attachments/assets/0352285a-b7c2-4139-89eb-ddf8eddeb2be">
| open | 2024-09-09T19:44:45Z | 2024-09-11T19:16:40Z | https://github.com/plotly/dash/issues/2992 | [
"bug",
"P3"
] | reggied | 0 |
fbdesignpro/sweetviz | pandas | 40 | compare_intra fails with KeyError: 'cannot use a single bool to index into setitem' | This seems to be a possible bug:
```
Date: Jul 22, 2020
platform: Macos
environment: conda custom environment
sweetviz version: sweetviz==1.0b3
np.__version__ # 1.18.4
pd.__version__ # 1.0.3
```
# MWE
```python
import numpy as np
import pandas as pd
import seaborn as sns
import sweetviz
df = sns.load_dataset('titanic')
display(df.head(2))
feat_cfg = sweetviz.FeatureConfig(skip="deck")
my_report = sweetviz.compare_intra(df,
df["sex"] == "male",
["Male", "Female"],
'survived',
feat_cfg)
my_report.show_html('compare_male_vs_female.html')
```
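For context, the bottom of the traceback in the Error section below can be reproduced in isolation (assuming recent pandas keeps the same guard): assigning via `.at`/`.loc` with a bare boolean label that is missing from the index is rejected, which is what happens when the value counts of the boolean condition are indexed by `True`/`False`:

```python
import pandas as pd

s = pd.Series([1, 2], index=["x", "y"])
try:
    s.at[True] = 0  # missing bool label: the setitem path rejects it
    outcome = "no error"
except (KeyError, TypeError) as exc:
    outcome = type(exc).__name__

print(outcome)
```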
# Error
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/pandas/core/series.py in _set_value(self, label, value, takeable)
1138 else:
-> 1139 self.index._engine.set_value(self._values, label, value)
1140 except (KeyError, TypeError):
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.set_value()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.set_value()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: True
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-2-6f7575a96558> in <module>
12 ["Male", "Female"],
13 'survived',
---> 14 feat_cfg)
15 my_report.show_html('compare_male_vs_female.html')
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/sweetviz/sv_public.py in compare_intra(source_df, condition_series, names, target_feat, feat_cfg, pairwise_analysis)
42 report = sweetviz.DataframeReport([data_true, names[0]], target_feat,
43 [data_false, names[1]],
---> 44 pairwise_analysis, feat_cfg)
45 return report
46
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/sweetviz/dataframe_report.py in __init__(self, source, target_feature_name, compare, pairwise_analysis, fc)
215 # start = time.perf_counter()
216 self.progress_bar.set_description(':' + f.source.name + '')
--> 217 self._features[f.source.name] = sa.analyze_feature_to_dictionary(f)
218 self.progress_bar.update(1)
219 # print(f"DONE FEATURE------> {f.source.name}"
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/sweetviz/series_analyzer.py in analyze_feature_to_dictionary(to_process)
97 # Explicitly show missing categories on each set
98 if compare_type == FeatureType.TYPE_CAT or compare_type == FeatureType.TYPE_BOOL:
---> 99 fill_out_missing_counts_in_other_series(to_process.compare_counts, to_process.source_counts)
100 fill_out_missing_counts_in_other_series(to_process.source_counts, to_process.compare_counts)
101 returned_feature_dict["compare"] = dict()
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/sweetviz/series_analyzer.py in fill_out_missing_counts_in_other_series(my_counts, other_counts)
43 if my_counts[to_fill].index.dtype.name == 'category':
44 my_counts[to_fill] = my_counts[to_fill].reindex(my_counts[to_fill].index.add_categories(key))
---> 45 my_counts[to_fill].at[key] = 0
46
47 def add_series_base_stats_to_dict(series: pd.Series, counts: dict, updated_dict: dict) -> dict:
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/pandas/core/indexing.py in __setitem__(self, key, value)
2192 key = list(self._convert_key(key, is_setter=True))
2193 key.append(value)
-> 2194 self.obj._set_value(*key, takeable=self._takeable)
2195
2196
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/pandas/core/series.py in _set_value(self, label, value, takeable)
1140 except (KeyError, TypeError):
1141 # set using a non-recursive method
-> 1142 self.loc[label] = value
1143
1144 return self
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/pandas/core/indexing.py in __setitem__(self, key, value)
669 key = com.apply_if_callable(key, self.obj)
670 indexer = self._get_setitem_indexer(key)
--> 671 self._setitem_with_indexer(indexer, value)
672
673 def _validate_key(self, key, axis: int):
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/pandas/core/indexing.py in _setitem_with_indexer(self, indexer, value)
870 else:
871
--> 872 indexer, missing = convert_missing_indexer(indexer)
873
874 if missing:
~/opt/miniconda3/envs/dataSc/lib/python3.7/site-packages/pandas/core/indexing.py in convert_missing_indexer(indexer)
2342
2343 if isinstance(indexer, bool):
-> 2344 raise KeyError("cannot use a single bool to index into setitem")
2345 return indexer, True
2346
KeyError: 'cannot use a single bool to index into setitem'
``` | closed | 2020-07-22T15:29:46Z | 2020-08-02T14:37:51Z | https://github.com/fbdesignpro/sweetviz/issues/40 | [
"bug"
] | bhishanpdl | 2 |
aminalaee/sqladmin | sqlalchemy | 624 | Customized sort_query | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
At the moment, the admin panel does not support sorting by objects, but this could be solved with a customizable `sort_query` method, similar to `list_query` and `search_query`.
### Describe the solution you would like.
You can move the sorting from the main list method to separate methods.
```python
async def list(self, request: Request) -> Pagination:
    ...
    sort_fields = self._get_sort_fields(request)
    stmt = self.sort_query(stmt, sort_fields)
    ...
```
A separate method for getting fields.
```python
def _get_sort_fields(self, request: Request) -> List[Tuple[str, bool]]:
    sort_by = request.query_params.get("sortBy", None)
    sort = request.query_params.get("sort", "asc")
    if sort_by:
        sort_fields = [(sort_by, sort == "desc")]
    else:
        sort_fields = self._get_default_sort()
    return sort_fields
```
And an overridable method that can be used at your discretion.
```python
def sort_query(self, stmt: Select, sort_fields: List[Tuple[str, bool]]) -> Select:
    for sort_field, is_desc in sort_fields:
        if is_desc:
            stmt = stmt.order_by(desc(sort_field))
        else:
            stmt = stmt.order_by(asc(sort_field))
    return stmt
```
The idea of custom methods solves many problems and gives great flexibility.
I haven't actually added anything, but it solves the problem with sorting by object.
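As a standalone illustration (plain SQLAlchemy Core; the table and sort fields below are made up), the proposed hook composes as expected:

```python
from sqlalchemy import asc, desc, column, select, table

def sort_query(stmt, sort_fields):
    # Same shape as the overridable method proposed above.
    for sort_field, is_desc in sort_fields:
        stmt = stmt.order_by(desc(sort_field) if is_desc else asc(sort_field))
    return stmt

users = table("users", column("id"), column("name"))
stmt = sort_query(select(users), [("name", True), ("id", False)])
print(str(stmt))
```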
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2023-09-21T08:54:40Z | 2023-09-22T09:27:59Z | https://github.com/aminalaee/sqladmin/issues/624 | [] | YarLikviD | 1 |
jupyter-book/jupyter-book | jupyter | 2,144 | Display math in admonition | ### Describe the bug
**issue**
I tried to put some content with display math inside an admonition block, but the output html form is incorrect.
### Reproduce the bug
I tried to put the following content
~~~
A linear map $f:\mathbb{C}\to\mathbb{C}$ is of the form
$$f\left(z\right)=az+b$$
where $a,b\in\mathbb{C}$.
~~~
in an admonition block
~~~
```{tip}
A linear map $f:\mathbb{C}\to\mathbb{C}$ is of the form
$$
f\left(z\right)=az+b
$$
where $a,b\in\mathbb{C}$.
````
~~~
However, the output in the HTML file is incorrect, as can be seen by comparing the upper part and the middle one in the screenshot:
<img width="552" alt="issue_prtscr" src="https://github.com/executablebooks/jupyter-book/assets/158507541/95987fcf-cd0a-47f0-aa05-fec5ce5576de">
The indentation of the line following the displayed part is wrong, and strange spacing was added around the inline math.
The lower part of the screenshot shows a simple fix: adding a blank line before the opening double dollar sign of the display math (adding a blank line after the display math works equally well)
~~~
```{tip}
A linear map $f:\mathbb{C}\to\mathbb{C}$ is of the form

$$
f\left(z\right)=az+b
$$
where $a,b\in\mathbb{C}$.
```
~~~
### List your environment
My environment is
```
Jupyter Book : 1.0.0
External ToC : 1.0.1
MyST-Parser : 2.0.0
MyST-NB : 1.0.0
Sphinx Book Theme : 1.1.2
Jupyter-Cache : 1.0.0
NbClient : 0.7.0
``` | open | 2024-04-23T04:56:37Z | 2024-04-23T16:19:26Z | https://github.com/jupyter-book/jupyter-book/issues/2144 | [
"bug"
] | tyl012 | 1 |
jina-ai/clip-as-service | pytorch | 59 | Other Encoding block will be released ? | As you linked your [blog](https://hanxiao.github.io/2018/06/24/4-Encoding-Blocks-You-Need-to-Know-Besides-LSTM-RNN-in-Tensorflow/#pooling-block) in the README, I read it : it was so interesting !! Thanks for sharing it.
---
Now, the main pooling strategies are `REDUCE_MEAN` and `REDUCE_MAX`, as described in the first part of your blog.
**Are you going to release other sequence encoding blocks?**
---
If I understood correctly, it seems difficult because the other strategies are based on CNNs, which need data to train on. (Am I right?) | closed | 2018-11-27T08:24:45Z | 2018-12-19T10:18:36Z | https://github.com/jina-ai/clip-as-service/issues/59 | [] | astariul | 1 |
vaexio/vaex | data-science | 1,296 | Is it possible to merge two dataframes? | Hi there,
In pandas I'm joining two dataframes using merge, basically this:
`new_df = pd.merge(df, df2, how='left', on="ticker")`
but it keeps running out of memory so I thought I'd try vaex for the same item but I'm unsure how to merge. I searched the documentation and couldn't find anything. I looked around the issues and it looks like people can merge but it's not being explained how.
Is it possible and if so, can you direct me to a resource so I can learn how to do it? | closed | 2021-04-01T01:50:26Z | 2021-04-05T20:12:38Z | https://github.com/vaexio/vaex/issues/1296 | [] | gautambak | 4 |
marimo-team/marimo | data-science | 3,781 | KaTeX Macro Support | ### Description
As outlined in https://github.com/marimo-team/marimo/discussions/1941, I would love to be able to have some Macro Support for KaTeX.
This would come in handy for repeated use of convoluted symbols and would really help us for teaching and presenting our stuff in notebooks.
Our current workflow for jupyter notebooks is a workaround, where we do
```python
import IPython.display
IPython.display.display_latex(IPython.display.Latex(filename="macros.tex"))
```
where `macros.tex` looks like
```
\newcommand{\rot}[1]{{\rm curl }\left( #1 \right)}
\newcommand{\Grad}[1]{{\rm Grad}\left( #1 \right)}
\newcommand{\Div}[1]{{\rm Div }\left( #1 \right)}
```
A feature like that would be great!
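Until native support lands, the mapping itself is mechanical. A hypothetical helper (the regex and function name below are made up) that turns a `macros.tex` like the one above into the `{name: body}` dict that KaTeX's `macros` render option expects:

```python
import re

# Matches lines like: \newcommand{\rot}[1]{...body...}
MACRO_RE = re.compile(r"\\newcommand\{(\\[A-Za-z]+)\}(?:\[\d+\])?\{(.*)\}")

def parse_macros(text: str) -> dict:
    macros = {}
    for line in text.splitlines():
        m = MACRO_RE.match(line.strip())
        if m:
            macros[m.group(1)] = m.group(2)
    return macros

sample = r"\newcommand{\rot}[1]{{\rm curl }\left( #1 \right)}"
print(parse_macros(sample))
```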
### Suggested solution
In an optimal case, we would love to be able to point to a file populated by KaTeX macro commands, which would then be available to use in all markdown cells without any additional import. | closed | 2025-02-13T12:51:39Z | 2025-02-14T07:18:32Z | https://github.com/marimo-team/marimo/issues/3781 | [
"enhancement"
] | claudiushaag | 3 |
ultralytics/ultralytics | machine-learning | 19,310 | Device selection on export on multi-gpu systems | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
Greetings! 🚀
Sorry for my English. I ran into an issue (latest version, February 19th) when choosing a GPU for export on an NVIDIA multi-GPU setup:
```python
import torch
from ultralytics import YOLO

# TRACK_HW and BATCH_SIZE are defined elsewhere in the original script
DEVICE0 = "cuda:1"
torch.set_default_device(device=DEVICE0)

with torch.cuda.device(device=DEVICE0):
    model = YOLO("yolo11m.pt")
    model.export(format="engine", half=True, imgsz=TRACK_HW, batch=BATCH_SIZE, dynamic=True, device=DEVICE0)
```
I selected the second GPU (:1), but got usage on the first (:0) one: nvidia-smi showed a full load on the first GPU and only a small one on the second.
utils/torch_utils.py
```python
if not cpu and not mps and torch.cuda.is_available():  # prefer GPU if available
    devices = device.split(",") if device else "0"  # i.e. "0,1" -> ["0", "1"]
    n = len(devices)  # device count
    if n > 1:  # multi-GPU
        if batch < 1:
            raise ValueError(
                "AutoBatch with batch<1 not supported for Multi-GPU training, "
                "please specify a valid batch size, i.e. batch=16."
            )
        if batch >= 0 and batch % n != 0:  # check batch_size is divisible by device_count
            raise ValueError(
                f"'batch={batch}' must be a multiple of GPU count {n}. Try 'batch={batch // n * n}' or "
                f"'batch={batch // n * n + n}', the nearest batch sizes evenly divisible by {n}."
            )
    space = " " * (len(s) + 1)
    for i, d in enumerate(devices):
        s += f"{'' if i == 0 else space}CUDA:{d} ({get_gpu_info(i)})\n"  # bytes to MB
    arg = "cuda:0"
```
The line that leads to the bug: `arg = "cuda:0"`
I suppose it should be: `arg = f"cuda:{device}"`
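To make the report concrete, here is a hypothetical, stripped-down sketch of the selection logic with the proposed fix applied (the function name and structure are made up; only the last line mirrors the suggested change):

```python
def select_device_arg(device: str) -> str:
    """Hypothetical reduction of the device-selection logic quoted above."""
    devices = device.lower().replace("cuda:", "").split(",") if device else ["0"]
    if len(devices) > 1:
        return "cuda:0"  # multi-GPU path unchanged
    return f"cuda:{devices[0]}"  # single GPU: honour the requested index

print(select_device_arg("cuda:1"))  # -> cuda:1
print(select_device_arg("0,1"))    # -> cuda:0
```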
### Environment
Package Version
------------------------- ------------
addict 2.4.0
aiohappyeyeballs 2.4.4
aiohttp 3.11.11
aiosignal 1.3.2
albucore 0.0.23
albumentations 2.0.2
annotated-types 0.7.0
anyio 4.8.0
attrs 25.1.0
bcrypt 4.2.1
certifi 2025.1.31
cffi 1.17.1
chardet 5.2.0
charset-normalizer 3.4.1
click 8.1.8
coloredlogs 15.0.1
contourpy 1.3.1
cryptography 44.0.0
cycler 0.12.1
fastapi 0.115.6
filelock 3.17.0
flatbuffers 25.1.24
fonttools 4.55.7
frozenlist 1.5.0
fsspec 2025.2.0
geographiclib 2.0
greenlet 3.1.1
h11 0.14.0
huggingface-hub 0.27.1
humanfriendly 10.0
idna 3.10
Jinja2 3.1.5
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jwt 1.3.1
kiwisolver 1.4.8
lap 0.5.12
lightning-utilities 0.11.9
MarkupSafe 3.0.2
matplotlib 3.10.0
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
networkx 3.4.2
numpy 2.1.1
nvidia-cublas-cu12 12.6.4.1
nvidia-cuda-cupti-cu12 12.6.80
nvidia-cuda-nvrtc-cu12 12.6.77
nvidia-cuda-runtime-cu12 12.6.77
nvidia-cudnn-cu12 9.5.1.17
nvidia-cufft-cu12 11.3.0.4
nvidia-curand-cu12 10.3.7.77
nvidia-cusolver-cu12 11.7.1.2
nvidia-cusparse-cu12 12.5.4.2
nvidia-cusparselt-cu12 0.6.3
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.6.85
nvidia-nvtx-cu12 12.6.77
onnx 1.17.0
onnxruntime-gpu 1.20.1
onnxslim 0.1.48
opencv-python 4.11.0.86
opencv-python-headless 4.11.0.86
openvino 2025.0.0
openvino-telemetry 2025.0.0
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 24.3.1
propcache 0.2.1
protobuf 5.29.3
psutil 6.1.1
psycopg2-binary 2.9.10
py-cpuinfo 9.0.0
pyarrow 19.0.0
pycparser 2.22
pydantic 2.10.5
pydantic_core 2.27.2
PyJWT 2.10.1
pyparsing 3.2.1
pysrt 1.1.2
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-magic 0.4.27
python-multipart 0.0.20
pytorch-lightning 2.5.0.post0
pytz 2025.1
PyYAML 6.0.2
pyzmq 26.2.0
ray 2.40.0
referencing 0.36.2
requests 2.32.3
rpds-py 0.22.3
safetensors 0.5.2
scipy 1.15.1
seaborn 0.13.2
setuptools 75.8.0
simsimd 6.2.1
six 1.17.0
sniffio 1.3.1
SQLAlchemy 2.0.37
sqlmodel 0.0.22
starlette 0.41.3
stringzilla 3.11.3
sympy 1.13.1
tensorboardX 2.6.2.2
tensorrt 10.7.0.post1
tensorrt_cu12 10.7.0.post1
tensorrt-cu12-bindings 10.7.0.post1
tensorrt-cu12-libs 10.7.0.post1
timm 1.0.14
torch 2.6.0+cu126
torch_tensorrt 2.6.0+cu126
torchaudio 2.6.0+cu126
TorchCodec 0.2.0+cu126
torchmetrics 1.0.3
torchvision 0.21.0+cu126
tqdm 4.67.1
triton 3.2.0
typing_extensions 4.12.2
tzdata 2025.1
ultralytics 8.3.76
ultralytics-thop 2.0.14
urllib3 2.3.0
uvicorn 0.34.0
websockets 14.2
wheel 0.45.1
yarl 1.18.3
### Minimal Reproducible Example
```python
import torch
from ultralytics import YOLO

# TRACK_HW and BATCH_SIZE are defined elsewhere in the original script
DEVICE0 = "cuda:1"
torch.set_default_device(device=DEVICE0)

with torch.cuda.device(device=DEVICE0):
    model = YOLO("yolo11m.pt")
    model.export(format="engine", half=True, imgsz=TRACK_HW, batch=BATCH_SIZE, dynamic=True, device=DEVICE0)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-19T10:30:55Z | 2025-02-20T18:39:51Z | https://github.com/ultralytics/ultralytics/issues/19310 | [
"exports"
] | liwtw | 4 |
JoeanAmier/XHS-Downloader | api | 55 | Using the Tampermonkey script, clicking "extract discovered post links" does nothing | <img width="274" alt="image" src="https://github.com/JoeanAmier/XHS-Downloader/assets/23499946/ed5c4edb-14e1-412e-a82f-8262b0466d67">
| open | 2024-02-27T10:12:31Z | 2024-02-28T11:45:01Z | https://github.com/JoeanAmier/XHS-Downloader/issues/55 | [] | qiyaozu | 1 |
pydata/pandas-datareader | pandas | 769 | Could not find a version that satisfies the requirement tensorflow, tensorflow download error | pip install --upgrade tensorflow

| closed | 2020-04-17T14:30:19Z | 2020-07-06T23:02:09Z | https://github.com/pydata/pandas-datareader/issues/769 | [] | ashutos-lab | 1 |
flasgger/flasgger | rest-api | 52 | Why docsstring in head method is not working in flasgger ? |
> Docstring in triple quotes is not working in the head method. I am using the package flasgger. I am not able to use a docstring in the head method for Swagger UI. However, it is working in patch, post, put, and get methods.
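One hedged workaround (not confirmed as a fix in flasgger itself): Flask registers a HEAD handler automatically for every GET route, so documenting a GET view may be enough, and HEAD requests will still be answered. A minimal self-contained check of that Flask behavior, before the original HEAD-only route below:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/flight/<flight_no>', methods=['GET'])
def get_flight(flight_no):
    return {'flight_no': flight_no}

# Flask serves HEAD for GET routes automatically:
client = app.test_client()
resp = client.head('/flight/ab123')
print(resp.status_code)
```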
```python
@app.route('/flight/<flight_no>', methods=['HEAD'])
def get_flight_exist(flight_no):
    """
    show Flight Existence
    This resource returns flight exist response
    ---
    tags:
      - hello
    parameters:
      - name: flight_no
        in: path
        type: string
        description: Flight_no
        required: true
    responses:
      '200':
        description: Flight data response
        schema:
          description: Flight object
          properties:
            flight_name:
              type: string
              description: name of the flight
            flight_no:
              type: string
              description: flight number
            total_seat:
              type: integer
          required:
            - flight_name
            - flight_no
            - total_seat
      '404':
        description: Flight not found
    """
    flight_data = mongo.db.flight_details
    info = flight_data.find_one({'flight_no': flight_no})
    if info:
        if request.headers['Accept'] == 'application/json':
            flight_exist_response = make_response()
            flight_exist_response.status_code = 200
            flight_exist_response.mimetype = 'application/json'
            return flight_exist_response
    else:
        flight_not_exist_response = make_response()
        flight_not_exist_response.status_code = 404
        flight_not_exist_response.mimetype = 'application/json'
        return flight_not_exist_response
``` | closed | 2017-03-17T07:57:22Z | 2017-03-22T16:57:43Z | https://github.com/flasgger/flasgger/issues/52 | [
"bug"
] | ravibhushan29 | 8 |
inventree/InvenTree | django | 8,690 | [PUI] Login broken with MFA enabled, template missing | ### Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find a similar issue
### Describe the bug*
Commit 24f433c9482a152463c85d5dad4a105ba1508fec broke MFA login via PUI, as `accounts/base.html` and its dependencies were removed.
### Steps to Reproduce
1. Try and log in with 2FA enabled
2. Instead of token prompt, experience HTTP 500
### Expected behaviour
1. Try and log in with 2FA enabled
2. Get prompted for token
3. Be logged in
Partial revert of 24f433c9482a152463c85d5dad4a105ba1508fec restores 2FA login:
```diff
diff --git a/src/backend/InvenTree/InvenTree/templatetags/inventree_extras.py b/src/backend/InvenTree/InvenTree/templatetags/inventree_extras.py
index 81d9b483e..0f7093722 100644
--- a/src/backend/InvenTree/InvenTree/templatetags/inventree_extras.py
+++ b/src/backend/InvenTree/InvenTree/templatetags/inventree_extras.py
@@ -1,5 +1,6 @@
"""This module provides template tags for extra functionality, over and above the built-in Django tags."""
+import os
import logging
from datetime import date, datetime
@@ -487,3 +488,38 @@ def admin_url(user, table, pk):
pass
return url
+
+@register.simple_tag()
+def get_color_theme_css(user):
+ """Return the custom theme .css file for the selected user."""
+ user_theme_name = get_user_color_theme(user)
+ # Build path to CSS sheet
+ inventree_css_sheet = os.path.join('css', 'color-themes', user_theme_name + '.css')
+
+ # Build static URL
+ inventree_css_static_url = os.path.join(settings.STATIC_URL, inventree_css_sheet)
+
+ return inventree_css_static_url
+
+
+@register.simple_tag()
+def get_user_color_theme(user):
+ """Get current user color theme."""
+ from common.models import ColorTheme
+
+ try:
+ if not user.is_authenticated:
+ return 'default'
+ except Exception:
+ return 'default'
+
+ try:
+ user_theme = ColorTheme.objects.filter(user_obj=user).get()
+ user_theme_name = user_theme.name
+ if not user_theme_name or not ColorTheme.is_valid_choice(user_theme):
+ user_theme_name = 'default'
+ except ColorTheme.DoesNotExist:
+ user_theme_name = 'default'
+
+ return user_theme_name
+
diff --git a/src/backend/InvenTree/common/models.py b/src/backend/InvenTree/common/models.py
index ff7411883..0beb58dd4 100644
--- a/src/backend/InvenTree/common/models.py
+++ b/src/backend/InvenTree/common/models.py
@@ -2257,3 +2257,57 @@ class BarcodeScanResult(InvenTree.models.InvenTreeModel):
         help_text=_('Was the barcode scan successful?'),
         default=False,
     )
+
+
+class ColorTheme(models.Model):
+    """Color Theme Setting."""
+
+    name = models.CharField(max_length=20, default='', blank=True)
+
+    user = models.CharField(max_length=150, unique=True)
+    user_obj = models.ForeignKey(User, on_delete=models.CASCADE, blank=True, null=True)
+
+    @classmethod
+    def get_color_themes_choices(cls):
+        """Get all color themes from static folder."""
+        color_theme_dir = (
+            django_settings.STATIC_COLOR_THEMES_DIR
+            if django_settings.STATIC_COLOR_THEMES_DIR.exists()
+            else django_settings.BASE_DIR.joinpath(
+                'InvenTree', 'static', 'css', 'color-themes'
+            )
+        )
+
+        if not color_theme_dir.exists():
+            logger.error(f'Theme directory "{color_theme_dir}" does not exist')
+            return []
+
+        # Get files list from css/color-themes/ folder
+        files_list = []
+
+        for file in color_theme_dir.iterdir():
+            files_list.append([file.stem, file.suffix])
+
+        # Get color themes choices (CSS sheets)
+        choices = [
+            (file_name.lower(), _(file_name.replace('-', ' ').title()))
+            for file_name, file_ext in files_list
+            if file_ext == '.css'
+        ]
+
+        return choices
+
+    @classmethod
+    def is_valid_choice(cls, user_color_theme):
+        """Check if color theme is valid choice."""
+        try:
+            user_color_theme_name = user_color_theme.name
+        except AttributeError:
+            return False
+
+        for color_theme in cls.get_color_themes_choices():
+            if user_color_theme_name == color_theme[0]:
+                return True
+
+        return False
+
diff --git a/src/backend/InvenTree/templates/account/base.html b/src/backend/InvenTree/templates/account/base.html
new file mode 100644
index 000000000..cec314513
--- /dev/null
+++ b/src/backend/InvenTree/templates/account/base.html
@@ -0,0 +1,121 @@
+{% load static %}
+{% load i18n %}
+{% load inventree_extras %}
+
+<!DOCTYPE html>
+<html lang="en">
+<head>
+
+<!-- Required meta tags -->
+<meta charset="utf-8">
+<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
+
+<!-- Favicon -->
+<link rel="apple-touch-icon" sizes="57x57" href="{% static 'img/favicon/apple-icon-57x57.png' %}">
+<link rel="apple-touch-icon" sizes="60x60" href="{% static 'img/favicon/apple-icon-60x60.png' %}">
+<link rel="apple-touch-icon" sizes="72x72" href="{% static 'img/favicon/apple-icon-72x72.png' %}">
+<link rel="apple-touch-icon" sizes="76x76" href="{% static 'img/favicon/apple-icon-76x76.png' %}">
+<link rel="apple-touch-icon" sizes="114x114" href="{% static 'img/favicon/apple-icon-114x114.png' %}">
+<link rel="apple-touch-icon" sizes="120x120" href="{% static 'img/favicon/apple-icon-120x120.png' %}">
+<link rel="apple-touch-icon" sizes="144x144" href="{% static 'img/favicon/apple-icon-144x144.png' %}">
+<link rel="apple-touch-icon" sizes="152x152" href="{% static 'img/favicon/apple-icon-152x152.png' %}">
+<link rel="apple-touch-icon" sizes="180x180" href="{% static 'img/favicon/apple-icon-180x180.png' %}">
+<link rel="icon" type="image/png" sizes="192x192" href="{% static 'img/favicon/android-icon-192x192.png' %}">
+<link rel="icon" type="image/png" sizes="32x32" href="{% static 'img/favicon/favicon-32x32.png' %}">
+<link rel="icon" type="image/png" sizes="96x96" href="{% static 'img/favicon/favicon-96x96.png' %}">
+<link rel="icon" type="image/png" sizes="16x16" href="{% static 'img/favicon/favicon-16x16.png' %}">
+<link rel="manifest" href="{% static 'img/favicon/manifest.json' %}">
+<meta name="msapplication-TileColor" content="#ffffff">
+<meta name="msapplication-TileImage" content="{% static 'img/favicon/ms-icon-144x144.png' %}">
+<meta name="theme-color" content="#ffffff">
+
+
+<!-- CSS -->
+<link rel="stylesheet" href="{% static 'fontawesome/css/brands.css' %}">
+<link rel="stylesheet" href="{% static 'fontawesome/css/solid.css' %}">
+<link rel="stylesheet" href="{% static 'bootstrap/css/bootstrap.min.css' %}">
+<link rel="stylesheet" href="{% static 'select2/css/select2.css' %}">
+<link rel="stylesheet" href="{% static 'select2/css/select2-bootstrap-5-theme.css' %}">
+<link rel="stylesheet" href="{% static 'css/inventree.css' %}">
+
+<link rel="stylesheet" href="{% get_color_theme_css request.user %}">
+
+<title>
+ {% inventree_title %} | {% block head_title %}{% endblock head_title %}
+</title>
+
+{% block extra_head %}
+{% endblock extra_head %}
+</head>
+
+<body class='login-screen' style='background: url({% inventree_splash %}); background-size: cover;'>
+
+ <div class='container-fluid'>
+ <div class='notification-area' id='alerts'>
+ <!-- Div for displayed alerts -->
+ </div>
+ </div>
+
+ <div class='main body-wrapper login-screen d-flex'>
+
+ <div class='login-container'>
+ <div class="row">
+ <div class='container-fluid'>
+
+ <div class='clearfix content-heading login-header d-flex flex-wrap'>
+ <img class="pull-left" src="{% inventree_logo %}" alt='{% trans "InvenTree logo" %}' width="60" height="60"/>
+ {% include "spacer.html" %}
+ <span class='float-right'><h3>{% inventree_title %}</h3></span>
+ </div>
+ </div>
+ <div class='container-fluid'>
+ <hr>
+ {% block content %}
+ {% endblock content %}
+ </div>
+ </div>
+
+ </div>
+
+ {% block extra_body %}
+ {% endblock extra_body %}
+ </div>
+
+
+
+<!-- general JS -->
+{% include "third_party_js.html" %}
+
+<script type='text/javascript' src='{% static "script/inventree/inventree.js" %}'></script>
+<script type='text/javascript' src='{% static "script/inventree/message.js" %}'></script>
+
+<script type='text/javascript'>
+
+$(document).ready(function () {
+
+ {% if messages %}
+ {% for message in messages %}
+ showMessage("{{ message }}");
+ {% endfor %}
+ {% endif %}
+
+ showCachedAlerts();
+
+ // Add brand icons for SSO providers, if available
+ $('.socialaccount_provider').each(function(i, obj) {
+ var el = $(this);
+ var tag = el.attr('brand_name');
+
+ var icon = window.FontAwesome.icon({prefix: 'fab', iconName: tag});
+
+ if (icon) {
+ el.prepend(`<span class='fab fa-${tag}'></span> `);
+ }
+ });
+
+});
+
+</script>
+
+</body>
+</html>
diff --git a/src/backend/InvenTree/templates/third_party_js.html b/src/backend/InvenTree/templates/third_party_js.html
new file mode 100644
index 000000000..b4d52b748
--- /dev/null
+++ b/src/backend/InvenTree/templates/third_party_js.html
@@ -0,0 +1,39 @@
+{% load static %}
+
+<!-- jquery -->
+<script type="text/javascript" src="{% static 'script/jquery_3.3.1_jquery.min.js' %}"></script>
+<script type='text/javascript' src="{% static 'script/jquery.form.min.js' %}"></script>
+<script type='text/javascript' src="{% static 'script/jquery-ui/jquery-ui.min.js' %}"></script>
+
+<!-- Bootstrap-->
+<script type="text/javascript" src="{% static 'bootstrap/js/bootstrap.bundle.min.js' %}"></script>
+
+<!-- Bootstrap Table -->
+<script defer type='text/javascript' src="{% static 'script/bootstrap/bootstrap-treeview.js' %}"></script>
+<script defer type='text/javascript' src='{% static "treegrid/js/jquery.treegrid.js" %}'></script>
+<script defer type='text/javascript' src='{% static "treegrid/js/jquery.treegrid.bootstrap3.js" %}'></script>
+<script defer type='text/javascript' src="{% static 'bootstrap-table/bootstrap-table.min.js' %}"></script>
+<script defer type='text/javascript' src='{% static "bootstrap-table/extensions/group-by-v2/bootstrap-table-group-by.min.js" %}'></script>
+<script defer type='text/javascript' src='{% static "bootstrap-table/extensions/filter-control/bootstrap-table-filter-control.min.js" %}'></script>
+<script defer type='text/javascript' src='{% static "bootstrap-table/extensions/treegrid/bootstrap-table-treegrid.min.js" %}'></script>
+<script defer type='text/javascript' src='{% static "bootstrap-table/extensions/custom-view/bootstrap-table-custom-view.min.js" %}'></script>
+
+<!-- fontawesome -->
+<script defer type='text/javascript' src="{% static 'fontawesome/js/solid.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'fontawesome/js/regular.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'fontawesome/js/brands.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'fontawesome/js/fontawesome.min.js' %}"></script>
+
+<!-- 3rd party general js -->
+<script defer type="text/javascript" src="{% static 'fullcalendar/main.min.js' %}"></script>
+<script defer type="text/javascript" src="{% static 'fullcalendar/locales-all.min.js' %}"></script>
+<script defer type="text/javascript" src="{% static 'select2/js/select2.full.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/moment.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/chart.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/chartjs-adapter-moment.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/clipboard.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'easymde/easymde.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/randomColor.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/html5-qrcode.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/qrcode.min.js' %}"></script>
+<script defer type='text/javascript' src="{% static 'script/purify.min.js' %}"></script>
```
### Deployment Method
- [ ] Docker
- [ ] Package
- [x] Bare metal
- [ ] Other - added info in Steps to Reproduce
### Version Information
InvenTree-Version: 0.18.0 dev
Django Version: 4.2.17
Commit Hash: ff69cf6
Commit Date: 2024-12-17
Commit Branch: master
Database: postgresql
Debug-Mode: False
Deployed using Docker: False
Platform: Linux-6.1.0-13-cloud-amd64-x86_64-with-glibc2.36
Installer: GIT
null
Active plugins: [{"name":"InvenTreeBarcode","slug":"inventreebarcode","version":"2.1.0"},{"name":"InvenTreeCoreNotificationsPlugin","slug":"inventreecorenotificationsplugin","version":"1.0.0"},{"name":"InvenTreeCurrencyExchange","slug":"inventreecurrencyexchange","version":"1.0.0"},{"name":"InvenTreeLabel","slug":"inventreelabel","version":"1.1.0"},{"name":"InvenTreeLabelMachine","slug":"inventreelabelmachine","version":"1.0.0"},{"name":"InvenTreeLabelSheet","slug":"inventreelabelsheet","version":"1.0.0"},{"name":"DigiKeyPlugin","slug":"digikeyplugin","version":"1.0.0"},{"name":"LCSCPlugin","slug":"lcscplugin","version":"1.0.0"},{"name":"MouserPlugin","slug":"mouserplugin","version":"1.0.0"},{"name":"TMEPlugin","slug":"tmeplugin","version":"1.0.0"},{"name":"Brother Labels","slug":"brother","version":"1.0.0"}]
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
```shell
``` | closed | 2024-12-17T11:33:22Z | 2024-12-19T20:21:24Z | https://github.com/inventree/InvenTree/issues/8690 | [
"bug",
"User Interface"
] | sur5r | 7 |
coqui-ai/TTS | deep-learning | 3,299 | [Bug] CUDA crash when running xtts inference in FastAPI for a streaming endpoint. | ### Describe the bug
I am using the code at https://github.com/hengjiUSTC/xtts-streaming-server/blob/main/server/main.py to build a FastAPI server for a streaming TTS service. I got the following error:
```
Traceback (most recent call last):
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 277, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 250, in listen_for_disconnect
message = await receive()
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 587, in receive
await self.message_event.wait()
File "/opt/conda/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f1252414c40
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
| return await self.app(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
| await super().__call__(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
| raise exc
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
| await self.app(scope, receive, _send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
| raise exc
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
| await self.app(scope, receive, sender)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
| raise e
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
| await self.app(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
| await route.handle(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
| await self.app(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
| await response(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
| async with anyio.create_task_group() as task_group:
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 658, in __aexit__
| raise BaseExceptionGroup(
| exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
| await func()
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response
| async for chunk in self.body_iterator:
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 63, in iterate_in_threadpool
| yield await anyio.to_thread.run_sync(_next, iterator)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 49, in run_sync
| return await get_async_backend().run_sync_in_worker_thread(
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2103, in run_sync_in_worker_thread
| return await future
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 823, in run
| result = context.run(func, *args)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 53, in _next
| return next(iterator)
| File "/home/ubuntu/xtts-streaming-server/server/main.py", line 147, in predict_streaming_generator
| for i, chunk in enumerate(chunks):
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
| response = gen.send(None)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 633, in inference_stream
| text_tokens = torch.IntTensor(self.tokenizer.encode(sent, lang=language)).unsqueeze(0).to(self.device)
| RuntimeError: CUDA error: an illegal memory access was encountered
| CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
| For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
| Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
|
+------------------------------------
```
### To Reproduce
Running https://github.com/hengjiUSTC/xtts-streaming-server/blob/main/server/main.py on an AWS g4dn.xlarge with a 16GB GPU and 8GB of CPU RAM, using the newest 0.20.6 release.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
TTS version 0.20.6
pytorch version 2.1.1 install with pip
CUDA version:
>>> print(torch.version.cuda)
12.1
CUDNN version:
>>> print(torch.backends.cudnn.version())
8905
python 3.10.9
OS Ubuntu
GPU: nvidia T4 16GB
```
### Additional context
I think the error comes from the xtts module when running for a long time. Does anyone have an idea why this is happening?
| closed | 2023-11-24T10:40:24Z | 2024-09-12T11:14:08Z | https://github.com/coqui-ai/TTS/issues/3299 | [
"bug"
] | hengjiUSTC | 8 |
PaddlePaddle/ERNIE | nlp | 23 | Run environment | 具体的运行环境能够介绍吗? | closed | 2019-03-16T13:10:47Z | 2019-03-19T13:18:00Z | https://github.com/PaddlePaddle/ERNIE/issues/23 | [] | cloudXia777 | 2 |
kizniche/Mycodo | automation | 1,189 | DFRobot PT1000 Temperature-Sensor - DFR0558 | As an alternative I2C-Method for the Atlas PT1000 Temperature Sensors, these ones from DFRobot only costs $25
would be great to have them as official input
https://www.dfrobot.com/product-1753.html
thanks | closed | 2022-05-12T14:30:34Z | 2022-05-20T02:37:46Z | https://github.com/kizniche/Mycodo/issues/1189 | [
"enhancement",
"Fixed and Committed"
] | snickers2k | 4 |
deepset-ai/haystack | machine-learning | 8,561 | AzureOCRDocumentConverter | I'm using the AzureOCRDocumentConverter and I'm struggling to understand how it can be useful.
How am I supposed to use the output documents of the AzureOCRDocumentConverter? All the extracted tables get flattened into the documents entry of the returned dictionary. Wouldn't it be better to convert the text extracted from tables into some markdown format? That way I could ingest it row by row into Elasticsearch. At the moment the table structure is completely lost in the returned dictionary and is basically useless; I can't chunk that.
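To make the suggestion concrete, here is a minimal sketch of the conversion I have in mind (the table structure below is hypothetical; it is not what the converter currently returns):

```python
def table_to_markdown(rows):
    # rows: list of rows, each a list of cell strings; first row is the header.
    header, *body = rows
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)


table = [["name", "value"], ["alpha", "1"], ["beta", "2"]]
md = table_to_markdown(table)
print(md)
```

Each body row of such a markdown table could then be indexed as its own Elasticsearch document.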
thanks a lot!
| closed | 2024-11-20T16:43:10Z | 2024-11-23T16:33:07Z | https://github.com/deepset-ai/haystack/issues/8561 | [] | CompareSan | 0 |
twopirllc/pandas-ta | pandas | 531 | FutureWarning: The series.append method is deprecated. Use pandas.concat instead. | **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
- 0.3.14b0
**Do you have _TA Lib_ also installed in your environment?**
- Nope
**Did you upgrade? Did the upgrade resolve the issue?**
- Yes
- Nope
**Describe the bug**
When calling `ta.mcgd()`, a `FutureWarning` is reported.
```
FutureWarning: The series.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
```
**To Reproduce**
`y_smooth = ta.mcgd(series, ma_len)`
**Expected behavior**
Use `pandas.concat` instead of the `series.append` method when calculating `mcgd()`.
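For reference, the deprecated call can be replaced one-for-one like this (a generic pandas sketch, not the actual pandas-ta internals):

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0])
s2 = pd.Series([3.0])

# Deprecated (warns on pandas 1.4+, removed in pandas 2.0):
# combined = s1.append(s2)
combined = pd.concat([s1, s2], ignore_index=True)
print(combined.tolist())  # [1.0, 2.0, 3.0]
```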
Thanks!
| open | 2022-05-11T04:31:07Z | 2024-01-25T02:08:30Z | https://github.com/twopirllc/pandas-ta/issues/531 | [
"duplicate",
"enhancement",
"help wanted"
] | hyxxsfwy | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 799 | Training Time Breakdown | In the paper, the author mentioned that "The majority (∼80%) of our training time is spent in Python code, since...". However, I did a runtime breakdown on training of 3DGS using torch.cuda.Event(), and it turns out that most of the time is spent on the backward propagation(which is implemented by CUDA), can anyone please explain if there is a misunderstanding? | open | 2024-05-09T13:48:00Z | 2024-05-09T13:48:00Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/799 | [] | AmeYatoro | 0 |
huggingface/datasets | pytorch | 6,637 | 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets | ### Describe the bug
If you:
1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset
2. Set the output format to torch tensors with .with_format('torch')
Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch formatting.
### Steps to reproduce the bug
```python
import datasets
import torch
from tqdm import tqdm
rand_a = torch.randn(3,224,224)
rand_b = torch.randn(3,224,224)
a = torch.stack([rand_a] * 1000)
b = torch.stack([rand_b] * 1000)
features = datasets.Features({"tensor": datasets.Array3D(shape=(3,224,224), dtype="float32")})
ds_a = datasets.Dataset.from_dict({"tensor": a}, features=features).to_iterable_dataset()
ds_b = datasets.Dataset.from_dict({"tensor": b}, features=features).to_iterable_dataset()
# Iterating through either dataset with torch formatting is really fast (2000it/s on my machine)
for example in tqdm(ds_a.with_format('torch')):
pass
# Iterating through either dataset shuffled is also pretty fast (100it/s on my machine)
for example in tqdm(ds_a.shuffle()):
pass
# Iterating through this interleaved dataset is pretty fast (200it/s on my machine)
ds_fast = datasets.interleave_datasets([ds_a, ds_b])
for example in tqdm(ds_fast):
pass
# Iterating through either dataset with torch formatting *after shuffling* is really slow... (<2it/s on my machine)
for example in tqdm(ds_a.shuffle().with_format('torch')):
pass
# Iterating through this torch formatted interleaved dataset is also really slow (<2it/s on my machine)...
ds_slow = datasets.interleave_datasets([ds_a, ds_b]).with_format('torch')
for example in tqdm(ds_slow):
pass
# Even doing this is way faster!! (70it/s on my machine)
for example in tqdm(ds_fast):
test = torch.tensor(example['tensor'])
```
### Expected behavior
Applying torch formatting to the interleaved dataset shouldn't increase the time taken to iterate through the dataset by very much, since even explicitly converting every example is over 70x faster than calling .with_format('torch').
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.38
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| open | 2024-02-01T17:16:54Z | 2024-02-05T10:43:47Z | https://github.com/huggingface/datasets/issues/6637 | [] | tobycrisford | 1 |
pytest-dev/pytest-xdist | pytest | 234 | Stop using master/slave terminology | Can xdist please stop using the master/slave terminology? Would you accept a PR changing "slave" to "worker"? The terminology is upsetting to some people (and I've already had complaints where I work) and there is no reason to use it when other words exist that do not offend people. Words do matter. | closed | 2017-09-27T15:44:58Z | 2018-02-06T09:28:00Z | https://github.com/pytest-dev/pytest-xdist/issues/234 | [
"enhancement",
"help wanted",
"easy"
] | timj | 9 |
amdegroot/ssd.pytorch | computer-vision | 332 | Randomly sampling a patch on data augmentation | Hi,
When randomly sampling a patch during data augmentation, we get input images of different sizes. In this case, do we need to resize those image patches?
Thanks. | open | 2019-05-02T08:12:40Z | 2019-05-02T08:12:40Z | https://github.com/amdegroot/ssd.pytorch/issues/332 | [] | ardianumam | 0 |
ijl/orjson | numpy | 193 | Missing wheel for CentOS 7 | OS: CentOS 7
In the 3.6.1 change to `manylinux_2_24`, CentOS 7 x86 wheel support was dropped. CentOS 7 supports `manylinux2014`/`manylinux_2_17` but not `manylinux_2_24`. `aarch64` support for `manylinux2014` still exists because the build lines for 3.9/8/7/6 were not updated with the container.
I've currently solved this issue by pinning orjson < 3.6.1. | closed | 2021-08-04T21:45:58Z | 2021-08-05T14:40:52Z | https://github.com/ijl/orjson/issues/193 | [] | rafmagns-skepa-dreag | 5 |
tensorflow/tensor2tensor | machine-learning | 1,083 | InvalidArgumentError in Transformer model | ### Description
I am trying to run the `Transformer` model in training mode. I took the [`asr_transformer` notebook](https://github.com/tensorflow/tensor2tensor/blob/v1.9.0/tensor2tensor/notebooks/asr_transformer.ipynb) as an example and built on top of it.
> **Note:** For `hparams` the input and target modality is just `'default'`.
### The error
The exception information:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError:
In[0].dim(0) and In[1].dim(0) must be the same:
[100,2,1,192] vs [1,2,229,192] [Op:BatchMatMul]
name: transformer/parallel_0/transformer/transformer/body/decoder/layer_0/encdec_attention/multihead_attention/dot_product_attention/MatMul/
```
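Before the inspection details: the two shapes in the error already disagree in their leading batch dimension (100 vs 1), which BatchMatMul forbids. A plain-Python sketch of that shape rule (my reconstruction, not TensorFlow code):

```python
def batch_matmul_shape(a_shape, b_shape):
    # (..., m, k) x (..., k, n) -> (..., m, n); all leading (batch)
    # dimensions must match exactly -- there is no broadcasting here.
    *a_batch, m, k1 = a_shape
    *b_batch, k2, n = b_shape
    if a_batch != b_batch:
        raise ValueError(f"In[0] and In[1] batch dims differ: {a_batch} vs {b_batch}")
    if k1 != k2:
        raise ValueError(f"inner dims differ: {k1} vs {k2}")
    return (*a_batch, m, n)


# The failing q x k^T from the traceback (k already transposed):
try:
    batch_matmul_shape([100, 2, 1, 192], [1, 2, 192, 229])
except ValueError as err:
    print(err)
```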
### Inspection
I was debugging into the `transformer` model right to where `dot_product_attention` is being called [common_attention.py#L3470](https://github.com/tensorflow/tensor2tensor/blob/v1.9.0/tensor2tensor/layers/common_attention.py#L3470). From the docs:
```python
"""Dot-product attention.
Args:
q: Tensor with shape [..., length_q, depth_k].
k: Tensor with shape [..., length_kv, depth_k]. Leading dimensions must
match with q.
v: Tensor with shape [..., length_kv, depth_v] Leading dimensions must
match with q.
```
### Environment information
```
OS: Linux everest11 4.15.0-34-generic #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ pip freeze | grep tensor
tensor2tensor==1.9.0
tensorboard==1.10.0
tensorflow==1.10.1
$ python -V
Python 3.5.6 :: Anaconda, Inc.
```
# Steps to reproduce:
```python
import os
import logging.config
import tensorflow as tf
from tensor2tensor import problems
from tensor2tensor import models
from tensor2tensor.utils import metrics
from tensor2tensor.utils import registry
from tensor2tensor.utils import trainer_lib
from asr.util import is_debug_mode
Modes = tf.estimator.ModeKeys
tfe = tf.contrib.eager
tfe.enable_eager_execution()
if __name__ == '__main__':
problem_name = 'librispeech_clean_small'
input_dir = os.path.join('datasets', 'input', 'problems', problem_name) #'input/ende_wmt_bpe32k'
data_dir = os.path.join(input_dir, 'data')
tmp_dir = os.path.join(input_dir, 'tmp')
tf.gfile.MakeDirs(data_dir)
tf.gfile.MakeDirs(tmp_dir)
problem = problems.problem(problem_name)
problem.generate_data(data_dir, tmp_dir)
encoders = problem.feature_encoders(None)
model_name = "transformer"
hparams_set = "transformer_librispeech_tpu"
hparams = trainer_lib.create_hparams(hparams_set, data_dir=data_dir, problem_name=problem_name)
model_class = registry.model(model_name)
model = model_class(hparams=hparams, mode=Modes.TRAIN)
# In Eager mode, opt.minimize must be passed a loss function wrapped with
# implicit_value_and_gradients
@tfe.implicit_value_and_gradients
def loss_fn(features):
_, losses = model(features)
return losses["training"]
# Setup the training data
train_data = problem.dataset(Modes.TRAIN, data_dir)
optimizer = tf.train.AdamOptimizer()
# Train
NUM_STEPS = 100
for count, example in enumerate(tfe.Iterator(train_data)):
example['inputs'] = tf.reshape(example['inputs'], (1,) + tuple([d.value for d in example['inputs'].shape]))
loss, gv = loss_fn(example)
optimizer.apply_gradients(gv)
```
| closed | 2018-09-20T10:45:00Z | 2018-09-20T12:20:46Z | https://github.com/tensorflow/tensor2tensor/issues/1083 | [] | stefan-falk | 1 |
ultralytics/ultralytics | deep-learning | 19,493 | How to handle cascade yolo models? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have three YOLO11 models running in series. The first stage uses track and the others use predict. What is the best way to handle the input and output data between the models?
I checked the docs and used a for loop over the results to get the boxes, but I think that is a time-consuming method. Is there a better way?
### Additional
_No response_ | open | 2025-03-02T16:04:40Z | 2025-03-07T04:35:46Z | https://github.com/ultralytics/ultralytics/issues/19493 | [
"question",
"track",
"detect"
] | erfansafaie | 4 |
fastapi-users/fastapi-users | asyncio | 168 | Use username instead of email? | Quick question: can I use a nick_name as the login name instead of an email?
And how can I make the email field optional rather than mandatory? | closed | 2020-04-27T12:59:10Z | 2023-11-07T01:27:29Z | https://github.com/fastapi-users/fastapi-users/issues/168 | [
"question"
] | galvakojis | 8 |
sepandhaghighi/samila | matplotlib | 19 | Alpha Value | Set default value of alpha to `DEFAULT_ALPHA=0.1`. | closed | 2021-09-28T05:51:08Z | 2021-09-28T10:35:46Z | https://github.com/sepandhaghighi/samila/issues/19 | [
"enhancement"
] | sadrasabouri | 0 |
adap/flower | scikit-learn | 4,345 | Baseline FedNova does not work with batchnorms | ### Describe the bug
I am currently running the Flower baselines with different models. I tried FedNova, and it seems to work well with the standard config. But once I activate batchnorms in the VGG model, an error gets thrown:
```
File "/home/korjakow/.cache/pypoetry/virtualenvs/fednova-0w39Djqe-py3.10/lib/python3.10/site-packages/flwr/simulation/ray_transport/ray_client_proxy.py", line 196, in fit
return maybe_call_fit(
File "/home/korjakow/.cache/pypoetry/virtualenvs/fednova-0w39Djqe-py3.10/lib/python3.10/site-packages/flwr/client/client.py", line 217, in maybe_call_fit
return client.fit(fit_ins)
File "/home/korjakow/.cache/pypoetry/virtualenvs/fednova-0w39Djqe-py3.10/lib/python3.10/site-packages/flwr/client/app.py", line 333, in _fit
results = self.numpy_client.fit(parameters, ins.config) # type: ignore
File "/home/korjakow/projects/flower-1/baselines/fednova/fednova/client.py", line 71, in fit
self.set_parameters(parameters)
File "/home/korjakow/projects/flower-1/baselines/fednova/fednova/client.py", line 65, in set_parameters
self.optimizer.set_model_params(parameters)
File "/home/korjakow/projects/flower-1/baselines/fednova/fednova/models.py", line 360, in set_model_params
p.data.copy_(param_tensor)
RuntimeError: The size of tensor a (3) must match the size of tensor b (64) at non-singleton dimension 3
```
I assume that the baseline wasn't executed with the batchnorm options and the current code does not handle the running stats of the batchnorm layer well.
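For comparison, the weight-loading pattern from the Flower quickstart examples zips the incoming arrays against `state_dict` keys, which includes the batchnorm buffers; judging by the traceback, fednova's `set_model_params` appears to iterate `parameters()` instead, which skips them. A sketch (not the fednova code):

```python
from collections import OrderedDict

import torch
import torch.nn as nn


def set_parameters(net: nn.Module, parameters) -> None:
    # Zip incoming arrays against state_dict keys rather than net.parameters(),
    # so batchnorm buffers (running_mean, running_var, num_batches_tracked)
    # are included and both sides stay aligned.
    params_dict = zip(net.state_dict().keys(), parameters)
    state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict})
    net.load_state_dict(state_dict, strict=True)


net = nn.Sequential(nn.Conv2d(3, 4, 3), nn.BatchNorm2d(4))
arrays = [v.cpu().numpy() for v in net.state_dict().values()]
set_parameters(net, arrays)
print(len(arrays), len(list(net.parameters())))  # 7 state_dict tensors vs 4 parameters
```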
### Steps/Code to Reproduce
1. Apply the fix from #4344 since otherwise the code errors beforehands.
2. Activate batchnorms in https://github.com/adap/flower/blob/cdc8c43d63e1dbb50d960662caedd205afee7429/baselines/fednova/fednova/models.py#L48
3. Run the baseline: `python -m fednova.main`
### Expected Results
The training should progress.
### Actual Results
An error gets thrown as shown above. | open | 2024-10-21T14:48:01Z | 2024-12-11T18:55:05Z | https://github.com/adap/flower/issues/4345 | [
"bug",
"stale",
"part: baselines"
] | wittenator | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 956 | I've got this problem | closed | 2023-11-08T10:22:45Z | 2023-11-08T12:15:40Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/956 | [] | ahmedabdurrahman | 0 | |
gtalarico/django-vue-template | rest-api | 24 | Error | ```
xxxx-Pro:django-vue-template zz$ python3 manage.py migrate
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 14, in <module>
) from exc
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
```
I use a Mac and something is wrong.
Could you help me, please? | closed | 2019-04-10T02:32:20Z | 2019-04-10T04:33:20Z | https://github.com/gtalarico/django-vue-template/issues/24 | [] | InVinCiblezz | 3 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 315 | user_data_dir() got an unexpected keyword argument 'ensure_exists' | When importing `webdriver` through either of these methods:
```py
from selenium_driverless import webdriver
```
```py
import selenium_driverless.webdriver
```
I get this error:
> Exception has occurred: TypeError
user_data_dir() got an unexpected keyword argument 'ensure_exists'
I have tried:
- Uninstalling and reinstalling selenium-driverless
- Running `python3 -m pip install --upgrade selenium-driverless`
- Restarting
- And both methods listed above.
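My guess (an assumption — I have not confirmed which library provides the failing `user_data_dir`) is that this is `platformdirs`' `user_data_dir`, whose `ensure_exists` keyword only exists in newer releases, so an outdated `platformdirs` in the environment would explain the `TypeError`. A quick stdlib check of what is installed:

```python
import importlib.metadata

# Report the installed platformdirs version, or None if it is absent.
try:
    version = importlib.metadata.version("platformdirs")
except importlib.metadata.PackageNotFoundError:
    version = None
print("platformdirs:", version)
```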
I am running Linux Mint XFCE if it helps. | closed | 2025-03-04T15:57:27Z | 2025-03-04T17:04:13Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/315 | [] | IMakeThingsWithCode | 1 |
strawberry-graphql/strawberry | fastapi | 3,785 | `@oneOf` inputs do not support nullability | When an input class is marked as [`@oneOf`](https://www.apollographql.com/blog/more-expressive-schemas-with-oneof) and has a nullable field, it is not possible to actually provide a null value in the query, and the query completely fails.
## Describe the Bug
This bug relates to the usage of input classes that implement the [`@oneOf`](https://www.apollographql.com/blog/more-expressive-schemas-with-oneof) directive using [`one_of=True`](https://strawberry.rocks/docs/types/input-types#one-of-input-types):
```python
@strawberry.input(one_of=True)
class FilterInput:
name: str | None
```
As shown, it should be possible to pass a `null` value in the query for the `name` field. However, when executing this query:
```graphql
query MyQuery {
test(filterInput: {name: null})
}
```
You get this response:
```json
{
"data": null,
"errors": [
{
"message": "Field 'FilterInput.name' must be non-null.",
"locations": [
{
"line": 2,
"column": 21
}
]
}
]
}
```
I would assume that the `one_of` parameter directs Strawberry to evaluate if any of the given fields are `None` and fail if so, even though this is a legitimate input value.
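From the error message, the enforcement appears to behave like the sketch below (my reconstruction of the semantics, not Strawberry's implementation); the open question is whether the explicit-null branch should be allowed when the field is declared nullable:

```python
def validate_one_of(provided: dict):
    # @oneOf rule: exactly one field may be provided...
    if len(provided) != 1:
        raise ValueError("exactly one field must be provided")
    ((name, value),) = provided.items()
    # ...and, as currently enforced, its value must be non-null.
    if value is None:
        raise ValueError(f"Field 'FilterInput.{name}' must be non-null.")
    return name, value


print(validate_one_of({"name": "abc"}))
try:
    validate_one_of({"name": None})  # the case from this report
except ValueError as err:
    print(err)
```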
---
I have provided this minimum working example, which runs using FastAPI. I have also included Poetry's `pyproject.toml` file so this issue can be replicated with ease:
```toml
[tool.poetry]
name = "strawberry-testing"
version = "0.1.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = ">=3.11,<3.12"
strawberry-graphql = "0.260.2"
fastapi = "0.115.8"
uvicorn = "0.34.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
```python
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter
import strawberry
@strawberry.input(one_of=True)
class FilterInput:
name: str | None
@strawberry.type
class Query:
@strawberry.field
async def test(self, filter_input: FilterInput) -> str | None:
return filter_input.name
app = FastAPI()
schema = strawberry.Schema(query=Query)
graphql_app = GraphQLRouter(schema=schema)
app.include_router(graphql_app, prefix="/graphql")
```
| open | 2025-02-17T16:29:05Z | 2025-02-17T16:29:05Z | https://github.com/strawberry-graphql/strawberry/issues/3785 | [
"bug"
] | DevJake | 0 |
FactoryBoy/factory_boy | django | 275 | Wire together `random` from FactoryBoy hooks to Faker's random seed | Per https://github.com/rbarrois/factory_boy/pull/247#issuecomment-189897331
| closed | 2016-02-29T10:44:42Z | 2017-04-06T13:53:32Z | https://github.com/FactoryBoy/factory_boy/issues/275 | [] | jeffwidman | 2 |
pytest-dev/pytest-html | pytest | 439 | I did not find a method available to insert a pie chart into the report, so I did it myself, and maybe someone else needs this too. | 
| open | 2021-01-21T10:11:25Z | 2024-10-04T15:02:11Z | https://github.com/pytest-dev/pytest-html/issues/439 | [] | xiaosanye886 | 2 |
AirtestProject/Airtest | automation | 348 | When calling another script to write a file, the file content | When calling another script to write a file, the file content
Main .air script:
```python
# -*- encoding=utf8 -*-
from airtest.core.api import *
import os
import subprocess
auto_setup(__file__)
str=('python /Users/ljh/Documents/test.py "11" ')
p=os.system(str)
print(p)
```
The script being called is named test.py, with the following content:
```python
#coding=utf-8
import time
import os
file = r'test.txt'
with open(file, 'w+') as f:
    f.write("222")
```
 | closed | 2019-04-08T12:51:43Z | 2019-04-09T01:42:57Z | https://github.com/AirtestProject/Airtest/issues/348 | [] | linjinghui01 | 4 |
apify/crawlee-python | web-scraping | 105 | System status reports "Total weight cannot be zero" | - Crawlee v0.0.2
- Script:
```python
import asyncio
import logging
from bs4 import BeautifulSoup
from crawlee.log_config import CrawleeLogFormatter
from crawlee.http_crawler import HttpCrawler
from crawlee.http_crawler.types import HttpCrawlingContext
from crawlee.storages import Dataset, RequestList
logger = logging.getLogger()
handler = logging.StreamHandler()
handler.setFormatter(fmt=CrawleeLogFormatter())
logger.addHandler(hdlr=handler)
logger.setLevel(logging.DEBUG)
async def main() -> None:
request_list = RequestList(['https://crawlee.dev'])
crawler = HttpCrawler(request_provider=request_list)
dataset = await Dataset.open()
@crawler.router.default_handler
async def handler(context: HttpCrawlingContext) -> None:
status_code = context.http_response.status_code
soup = BeautifulSoup(context.http_response.read(), 'lxml')
title = soup.find('title').text
result = {'url': context.http_response.url, 'title': title, 'status_code': status_code}
print(f'Got the result: {result}, gonna push it to the dataset.')
await dataset.push_data(result)
await crawler.run()
if __name__ == '__main__':
asyncio.run(main())
```
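As a side note on the repeated warning in the output below: "Total weight cannot be zero" reads like a weighted average being requested before any samples have been collected. A sketch of that degenerate case (a guess at the mechanism, not the actual crawlee code):

```python
def weighted_avg(values, weights):
    total = sum(weights)
    if total == 0:
        # Nothing sampled yet -- the degenerate case the warning guards against.
        return None
    return sum(v * w for v, w in zip(values, weights)) / total


print(weighted_avg([], []))               # None -> the warning path
print(weighted_avg([1.0, 3.0], [1, 1]))   # 2.0
```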
- Resulting in
```
[asyncio] DEBUG Using selector: EpollSelector
[httpx] DEBUG load_ssl_context verify=True cert=None trust_env=True http2=False
[httpx] DEBUG load_verify_locations cafile='/home/vdusek/Projects/crawlee-py/.venv/lib/python3.12/site-packages/certifi/cacert.pem'
[crawlee.autoscaling.snapshotter] INFO Setting max_memory_size of this run to 3.84 GB.
[crawlee._utils.recurring_task] DEBUG Calling RecurringTask.__init__(func=_snapshot_event_loop, delay=0:00:00.500000)...
[crawlee._utils.recurring_task] DEBUG Calling RecurringTask.__init__(func=_snapshot_client, delay=0:00:01)...
[crawlee._utils.recurring_task] DEBUG Calling RecurringTask.__init__(func=_log_system_status, delay=0:01:00)...
[crawlee._utils.recurring_task] DEBUG Calling RecurringTask.__init__(func=_autoscale, delay=0:00:10)...
[crawlee._utils.recurring_task] DEBUG Calling RecurringTask.__init__(func=_emit_system_info_event, delay=0:01:00)...
[crawlee.autoscaling.autoscaled_pool] DEBUG Starting the pool
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee._utils.system] DEBUG Calling get_cpu_info()...
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 2; cpu = 0; mem = 0; event_loop = 0; client_info = 0
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.autoscaled_pool] DEBUG Scheduling a new task
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.autoscaled_pool] DEBUG Scheduling a new task
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - already running at desired concurrency
[crawlee.autoscaling.autoscaled_pool] DEBUG Worker task finished
[httpcore.connection] DEBUG connect_tcp.started host='crawlee.dev' port=443 local_address=None timeout=5.0 socket_options=None
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.system_status] WARN Total weight cannot be zero
[crawlee.autoscaling.autoscaled_pool] DEBUG Not scheduling new task - no task is ready
[httpcore.connection] DEBUG connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f68ede856d0>
[httpcore.connection] DEBUG start_tls.started ssl_context=<ssl.SSLContext object at 0x7f68ede482d0> server_hostname='crawlee.dev' timeout=5.0
[httpcore.connection] DEBUG start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f68edd41550>
[httpcore.http11] DEBUG send_request_headers.started request=<Request [b'GET']>
[httpcore.http11] DEBUG send_request_headers.complete
[httpcore.http11] DEBUG send_request_body.started request=<Request [b'GET']>
[httpcore.http11] DEBUG send_request_body.complete
[httpcore.http11] DEBUG receive_response_headers.started request=<Request [b'GET']>
[httpcore.http11] DEBUG receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Connection', b'keep-alive'), (b'Content-Length', b'15939'), (b'Server', b'GitHub.com'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Last-Modified', b'Thu, 11 Apr 2024 09:11:37 GMT'), (b'Access-Control-Allow-Origin', b'*'), (b'Strict-Transport-Security', b'max-age=31556952'), (b'ETag', b'W/"6617a949-11d3c"'), (b'expires', b'Thu, 11 Apr 2024 09:29:14 GMT'), (b'Cache-Control', b'max-age=600'), (b'Content-Encoding', b'gzip'), (b'x-proxy-cache', b'MISS'), (b'X-GitHub-Request-Id', b'79F2:30F74F:7BBC900:7DB5193:6617AB12'), (b'Accept-Ranges', b'bytes'), (b'Date', b'Thu, 11 Apr 2024 12:52:21 GMT'), (b'Via', b'1.1 varnish'), (b'Age', b'149'), (b'X-Served-By', b'cache-fra-etou8220138-FRA'), (b'X-Cache', b'HIT'), (b'X-Cache-Hits', b'1'), (b'X-Timer', b'S1712839941.239169,VS0,VE1'), (b'Vary', b'Accept-Encoding'), (b'X-Fastly-Request-ID', b'cef204eb5ba20a84be8334407996f7874dd39c5a')])
[httpx] INFO HTTP Request: GET https://crawlee.dev "HTTP/1.1 200 OK"
[httpcore.http11] DEBUG receive_response_body.started request=<Request [b'GET']>
[httpcore.http11] DEBUG receive_response_body.complete
[httpcore.http11] DEBUG response_closed.started
[httpcore.http11] DEBUG response_closed.complete
Got the result: {'url': URL('https://crawlee.dev'), 'title': 'Crawlee · Build reliable crawlers. Fast. | Crawlee', 'status_code': 200}, gonna push it to the dataset.
[crawlee._utils.system] DEBUG Calling get_memory_info()...
[crawlee.autoscaling.autoscaled_pool] DEBUG Worker task finished
[crawlee.autoscaling.autoscaled_pool] DEBUG `is_finished_function` reports that we are finished
[crawlee.autoscaling.autoscaled_pool] DEBUG Terminating - no running tasks to wait for
[crawlee.autoscaling.autoscaled_pool] INFO Waiting for remaining tasks to finish
[crawlee.autoscaling.autoscaled_pool] DEBUG Pool cleanup finished
```
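For context on the warning flood: `Total weight cannot be zero` reads like a weighted average being computed before any snapshots carry weight. A dependency-free sketch of the kind of guard that could apply (the function name and fallback behaviour are my assumptions, not Crawlee's actual code):

```python
def weighted_average(values, weights):
    """Weighted mean that degrades gracefully when all weights are zero."""
    total = sum(weights)
    if total == 0:
        # Instead of warning "Total weight cannot be zero" on every tick,
        # fall back to an unweighted mean (or a sentinel for "no data").
        return sum(values) / len(values) if values else None
    return sum(v * w for v, w in zip(values, weights)) / total

print(weighted_average([1.0, 3.0], [0, 0]))  # 2.0 (fallback path)
print(weighted_average([1.0, 3.0], [1, 3]))  # 2.5
```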
- There is probably some issue in the `Snapshotter` / `SystemStatus` - investigate & fix it. | closed | 2024-04-11T12:56:24Z | 2024-05-31T15:50:37Z | https://github.com/apify/crawlee-python/issues/105 | [
"bug",
"t-tooling"
] | vdusek | 0 |
jupyter/nbviewer | jupyter | 218 | Design explorations for nbviewer | I have been playing with some design ideas for nbviewer while watching a movie tonight. This is based on the discussion from this week's dev meeting. Here are the goals:
- A top nav bar that provides clear and unambiguous branding for the website.
- A simplified frontpage design that places more emphasis on example notebook content rather than oversized "IPython Notebook Viewer" banner text.
- Removed redundant sub-banners.
I wanted to get feedback about the broad strokes of the idea before I code this up.
Here is a screenshot.

| closed | 2014-03-15T04:39:57Z | 2015-03-03T18:03:16Z | https://github.com/jupyter/nbviewer/issues/218 | [] | ellisonbg | 5 |
tqdm/tqdm | jupyter | 687 | Tqdm notebook / trange prints nothing in Google Colab local runtime | - [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
This is using the jupyter notebook support.
I have been trying to use the progress bar in Google Colab. It works fine when using the hosted runtime.
However, when I use a local runtime, nothing is shown at all.
To make things even weirder, it works just fine in JupyterLab on the local runtime, just not in Colab when using the local runtime.
For the hosted runtime:
Tqdm: 4.28.1
Python: 3.6.7
OS: [GCC 8.2.0] linux
For the local runtime:
Tqdm: 4.31.1
Python: 3.5.3
OS: [GCC 6.3.0 20170516] linux
Example:
```python
from tqdm import tnrange as trange
from time import sleep
for i in trange(10):
sleep(1)
```
Google Colab, Hosted Runtime: works
Google Colab, Local Runtime: fails, shows nothing
Jupyter Lab, Local Runtime: works
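If the widget frontend is the piece that fails on the local runtime, the plain console API may be a usable fallback (a workaround sketch; whether Colab's local runtime renders it is an assumption on my part):

```python
from time import sleep

from tqdm import tqdm  # console bar; no ipywidgets frontend involved

for i in tqdm(range(10)):
    sleep(0.1)
```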
| closed | 2019-03-02T08:35:00Z | 2020-11-18T13:03:57Z | https://github.com/tqdm/tqdm/issues/687 | [
"question/docs ‽",
"p5-wont-fix ☮",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] | rnett | 5 |
ydataai/ydata-profiling | jupyter | 1,336 | "Generate Report Structure" progress bar doesn't track progress | ### Current Behaviour
The "Generate Report Structure" progress bar never moves. Other progress bars work.
I can see (since I'm running with `html.inline = False`) that progress is happening as SVG files are generated in the assets directory; however, that progress is not reflected in the current bar.
### Expected Behaviour
Progress bar should track the progress of generating the assets.
### Data Description
N/A
### Code that reproduces the bug
```Python
N/A
```
### pandas-profiling version
v4.1.2
### Dependencies
```Text
N/A
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-05-19T15:14:17Z | 2023-05-23T23:39:18Z | https://github.com/ydataai/ydata-profiling/issues/1336 | [
"information requested ❔"
] | gdevenyi | 5 |
django-oscar/django-oscar | django | 3,949 | Image cropper | Sometimes images are uploaded with different aspect ratios. In the list view, product tiles then have different heights.
To reproduce: in the dashboard, upload images with different aspect ratios to two different products, then open the product list view.
Solution
I think the best solution is a cropper like https://github.com/aneesh2usman/django_cropper_image, where I can set up a fixed aspect ratio.
Please add a cropper to the dashboard product create/update views.
Thanks
| closed | 2022-07-12T20:35:35Z | 2023-06-23T13:44:36Z | https://github.com/django-oscar/django-oscar/issues/3949 | [] | Bastilla123 | 1 |
ageitgey/face_recognition | machine-learning | 727 | Face Detection Score | * face_recognition version: Latest
* Python version: 3.7
* Operating System: MacOSX
### Description
I am doing a face detection project and I found this lib pretty awesome. So far, I am able to use the high-level API, such as `face_locations`, to detect faces in a given image. However, I am wondering if it's possible to get the detection score as well: concretely, the probability that the detected area is a face. Is this possible?
### What I Did
So far, I am trying to understand the APIs from the [docs](https://face-recognition.readthedocs.io/en/latest/readme.html#face-detection)
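Not an official API, but for the HOG model `face_recognition` wraps dlib's frontal face detector, and dlib's `run()` method does return per-detection confidence scores, so one possible workaround (names and threshold below are illustrative assumptions) is to call dlib directly and filter yourself:

```python
def filter_detections(rects, scores, threshold=0.5):
    """Keep (rect, score) pairs whose confidence clears the threshold."""
    return [(r, s) for r, s in zip(rects, scores) if s >= threshold]

# Hypothetical use against dlib directly (untested here):
#   import dlib
#   detector = dlib.get_frontal_face_detector()
#   rects, scores, _ = detector.run(image, 1)  # 1 = upsample count
#   faces = filter_detections(rects, scores)

print(filter_detections(["face_a", "face_b"], [0.91, 0.12]))
# [('face_a', 0.91)]
```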
| open | 2019-01-29T06:52:17Z | 2019-01-31T02:32:37Z | https://github.com/ageitgey/face_recognition/issues/727 | [] | chrisbangun | 2 |
tqdm/tqdm | pandas | 1,284 | Bluetooth headphones issue | In the latest Telegram version (8.3.2), during a video 📹 chat with a large number of members (roughly 300+, like a meeting), the Bluetooth earphone connection automatically switches to the phone speaker 🔊. In other apps like WhatsApp, and in normal use, this doesn't happen; it only happens with Telegram. Is it a bug? I don't think the headphones are at fault, otherwise the same issue would occur in all apps, but it happens in Telegram only. If there is any solution, please let me know. Every time, I have to manually select the Bluetooth earphone after it automatically shifts to the phone speaker; then I select the Bluetooth earphones again from the corner menu, but it moves back to the phone speaker by itself... | closed | 2021-12-25T16:16:44Z | 2021-12-25T16:17:45Z | https://github.com/tqdm/tqdm/issues/1284 | [] | techknow7 | 0 |
pinry/pinry | django | 162 | Combine running Front-End and Back-End into single command | Concurrently is really good for this: https://www.npmjs.com/package/concurrently
Will need to improve makefile and development docs. | open | 2019-12-08T19:36:31Z | 2019-12-09T09:16:35Z | https://github.com/pinry/pinry/issues/162 | [
"enhancement",
"javascript",
"python"
] | overshard | 1 |
jpadilla/django-rest-framework-jwt | django | 411 | Explain better in documention the Logout Strategy | Initially, I thought the JWT token needed to be deleted (I didn't understand the stateless concept yet), so my logout solution was to delete the token after X minutes. Many people think this way.
Now, I think the best solution is to set:
```python
JWT_AUTH = {
    'JWT_EXPIRATION_DELTA': datetime.timedelta(seconds=15*60),
}
```
With it, I can set a reasonable timeout, and if the frontend receives a 401, it must redirect to the logout page.
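A dependency-free sketch of the client-side half of this strategy (function and constant names are mine, purely illustrative): nothing is ever deleted server-side; the client just mirrors `JWT_EXPIRATION_DELTA` and treats any 401 as the signal to log out.

```python
import datetime

EXPIRATION_DELTA = datetime.timedelta(seconds=15 * 60)  # mirrors JWT_AUTH

def should_logout(issued_at, now, status_code=None):
    """Log out if the token aged out locally OR the server returned 401."""
    if status_code == 401:
        return True
    return now - issued_at >= EXPIRATION_DELTA

issued = datetime.datetime(2017, 12, 19, 12, 0, 0)
print(should_logout(issued, issued + datetime.timedelta(minutes=5)))   # False
print(should_logout(issued, issued + datetime.timedelta(minutes=20)))  # True
print(should_logout(issued, issued, status_code=401))                  # True
```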
I think this strategy is not clear in the docs. | open | 2017-12-19T16:46:33Z | 2017-12-19T16:46:33Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/411 | [] | LucasAmorimSilva | 0 |
3b1b/manim | python | 1,427 | Something wrong with DashedVMobject | There may be something wrong with DashedVMobject. Could the problem result from `pointwise_become_partial`? The points of the geometry seem to coincide...

```python
# part of my code
square = DashedVMobject(Square(side_length=3)).shift(LEFT*3+UP*2)
circle = DashedVMobject(Circle(radius=1.5)).shift(RIGHT*3+DOWN*2)
self.add(square, circle)
``` | open | 2021-03-03T03:27:40Z | 2021-03-03T03:28:42Z | https://github.com/3b1b/manim/issues/1427 | [] | widcardw | 0 |
explosion/spaCy | machine-learning | 13,725 | Empty MorphAnalysis Hash differs from Token.morph.key | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
Hello,
I've trained a Morphologizer and I noticed that an empty MorphAnalysis (`""`) actually has the hash value of `"_"`. Is this by design? The documentation doesn't mention it.
> key `int` | The hash of the features string.
```python
for i in doc:
    print(nlp.vocab.strings[i.morph.key] == str(i.morph))

# Output:
# False
# False
# False
# True
# False
# True
# True
# False
```
Since I use the hash values in a lookup, this produced a `KeyError`.
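Until this is settled, a hedged workaround is to normalize on the Universal Dependencies convention that `_` means "no features", so the empty string and `_` collapse to the same lookup key (a pure-string sketch, no spaCy objects involved; the helper name is mine):

```python
def morph_lookup_key(feats: str) -> str:
    """Map a feature string to the form spaCy appears to hash ("_" if empty)."""
    return feats if feats else "_"

print(morph_lookup_key(""))             # _
print(morph_lookup_key("Number=Sing"))  # Number=Sing
```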
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Linux (Debian 12)
* Python Version Used: 3.11
* spaCy Version Used: 3.7.3 (I will train my Morphologizer on 3.8.3 soon to see if that changes anything)
* Environment Information:
| open | 2024-12-26T10:07:16Z | 2024-12-26T10:07:40Z | https://github.com/explosion/spaCy/issues/13725 | [] | thjbdvlt | 0 |
jupyter/docker-stacks | jupyter | 1,881 | Image with jupyterHub | ### What docker image(s) is this feature applicable to?
datascience-notebook
### What changes are you proposing?
I would like an image with JupyterHub instead of JupyterLab.
### How does this affect the user?
It would enable a multi-user data science environment.
### Anything else?
_No response_ | closed | 2023-02-27T02:20:51Z | 2023-03-05T09:16:30Z | https://github.com/jupyter/docker-stacks/issues/1881 | [
"type:Enhancement",
"status:Need Info"
] | cairoapcampos | 2 |
sqlalchemy/alembic | sqlalchemy | 898 | alembic autogenerate migration fails with database file not found error for sqlite | **Describe the bug**
Trying to auto-generate migrations fails with a "database file not found" error when the SQLite DB file does not exist.
**Expected behavior**
When the SQLite database file is not found, a new file should be created based on the URL.
**To Reproduce**
1. Initialize alembic with the async template `alembic init alembic --template async`
2. Set `sqlalchemy.url = sqlite+aiosqlite:///home/void/.local/share/redorg/redorg.db`
3. Edit `alembic/env.py` to set target_metadata correctly.
4. Generate migration using `alembic revision -m 'Create user items table' --autogenerate`
I have verified that the directory exists and has write permissions.
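One thing worth double-checking in step 2: by SQLAlchemy's URL convention, three slashes after the scheme make the SQLite path *relative* to the working directory, and four make it absolute, so `sqlite+aiosqlite:///home/void/...` points at `home/void/...` under the CWD rather than `/home/void/...`. A small parsing sketch (the helper is mine, for illustration only):

```python
def sqlite_database_path(url: str) -> str:
    """Return the filesystem path encoded after 'scheme:///' in a SQLite URL."""
    _, _, rest = url.partition(":///")
    return rest  # a leading '/' here means four slashes were used (absolute)

print(sqlite_database_path("sqlite+aiosqlite:///home/void/redorg.db"))
# home/void/redorg.db   <- relative to the CWD
print(sqlite_database_path("sqlite+aiosqlite:////home/void/redorg.db"))
# /home/void/redorg.db  <- absolute path
```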
Here is my simple model, directly lifted from the SQLAlchemy example
```py
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, Integer, String
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
fullname = Column(String)
nickname = Column(String)
def __repr__(self):
return "<User(name='%s', fullname='%s', nickname='%s')>" % (
self.name, self.fullname, self.nickname)
```
**Error**
```
Traceback (most recent call last):
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3212, in _wrap_pool_connect
return fn()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 307, in connect
return _ConnectionFairy._checkout(self)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 767, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 425, in checkout
rec = pool._do_get()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/impl.py", line 256, in _do_get
return self._create_connection()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 253, in _create_connection
return _ConnectionRecord(self)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 368, in __init__
self.__connect()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 611, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 605, in __connect
connection = pool._invoke_creator(self)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 584, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 291, in connect
await_only(connection),
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 69, in await_only
return current.driver.switch(awaitable)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 122, in greenlet_spawn
value = await result
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/aiosqlite/core.py", line 137, in _connect
self._connection = await future
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/aiosqlite/core.py", line 102, in run
result = function()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/aiosqlite/core.py", line 397, in connector
return sqlite3.connect(loc, **kwargs)
sqlite3.OperationalError: unable to open database file
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/void/Projects/Python/albug/.venv/bin/alembic", line 8, in <module>
sys.exit(main())
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/config.py", line 588, in main
CommandLine(prog=prog).main(argv=argv)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/config.py", line 582, in main
self.run_cmd(cfg, options)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/config.py", line 559, in run_cmd
fn(
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/command.py", line 227, in revision
script_directory.run_env()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/script/base.py", line 563, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 92, in load_python_file
module = load_module_py(module_id, path)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 108, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "alembic/env.py", line 84, in <module>
asyncio.run(run_migrations_online())
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "alembic/env.py", line 77, in run_migrations_online
async with connectable.connect() as connection:
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/ext/asyncio/base.py", line 57, in __aenter__
return await self.start(is_ctxmanager=True)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/ext/asyncio/engine.py", line 106, in start
await (greenlet_spawn(self.sync_engine.connect))
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 127, in greenlet_spawn
result = context.throw(*sys.exc_info())
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/future/engine.py", line 419, in connect
return super(Engine, self).connect()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3166, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__
else engine.raw_connection()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3245, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3215, in _wrap_pool_connect
Connection._handle_dbapi_exception_noconnection(
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2069, in _handle_dbapi_exception_noconnection
util.raise_(
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3212, in _wrap_pool_connect
return fn()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 307, in connect
return _ConnectionFairy._checkout(self)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 767, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 425, in checkout
rec = pool._do_get()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/impl.py", line 256, in _do_get
return self._create_connection()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 253, in _create_connection
return _ConnectionRecord(self)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 368, in __init__
self.__connect()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 611, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 605, in __connect
connection = pool._invoke_creator(self)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 584, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 291, in connect
await_only(connection),
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 69, in await_only
return current.driver.switch(awaitable)
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 122, in greenlet_spawn
value = await result
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/aiosqlite/core.py", line 137, in _connect
self._connection = await future
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/aiosqlite/core.py", line 102, in run
result = function()
File "/home/void/Projects/Python/albug/.venv/lib/python3.9/site-packages/aiosqlite/core.py", line 397, in connector
return sqlite3.connect(loc, **kwargs)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/14/e3q8)
```
**Versions.**
- OS: Linux x86_64
- Python: 3.9.7
- Alembic: 1.7.1
- SQLAlchemy: 1.4.23
- Database: sqlite
- DBAPI: aiosqlite
| closed | 2021-09-02T05:19:16Z | 2022-05-03T07:38:16Z | https://github.com/sqlalchemy/alembic/issues/898 | [] | vikigenius | 1 |
brightmart/text_classification | tensorflow | 14 | ValueError: Variable Embedding already exists | Traceback (most recent call last):
File "p5_fastTextB_train.py", line 163, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "p5_fastTextB_train.py", line 76, in main
fast_text=fastText(FLAGS.label_size, FLAGS.learning_rate, FLAGS.batch_size, FLAGS.decay_steps, FLAGS.decay_rate,FLAGS.num_sampled,FLAGS.sentence_len,vocab_size,FLAGS.embed_size,FLAGS.is_training)
File "/home/defy/text_classification-master/a01_FastText/p5_fastTextB_model.py", line 29, in __init__
self.instantiate_weights()
File "/home/defy/text_classification-master/a01_FastText/p5_fastTextB_model.py", line 42, in instantiate_weights
self.Embedding = tf.get_variable("Embedding", [self.vocab_size, self.embed_size])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1049, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 948, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 356, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 341, in _true_getter
use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 653, in _get_single_variable
name, "".join(traceback.format_list(tb))))
ValueError: Variable Embedding already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "/home/defy/text_classification-master/a01_FastText/p5_fastTextB_model.py", line 42, in instantiate_weights
self.Embedding = tf.get_variable("Embedding", [self.vocab_size, self.embed_size])
File "/home/defy/text_classification-master/a01_FastText/p5_fastTextB_model.py", line 29, in __init__
self.instantiate_weights()
File "/home/defy/text_classification-master/a01_FastText/p5_fastTextB_model.py", line 104, in test
fastText=fastTextB(num_classes, learning_rate, batch_size, decay_steps, decay_rate,5,sequence_length,vocab_size,embed_size,is_training)
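The final `ValueError` is TF1's variable-scope rule: constructing the model twice in the same default graph runs `tf.get_variable("Embedding")` twice, and the second call is rejected unless reuse is enabled (or each model is built in its own graph/scope). A dependency-free sketch of that rule (the registry is illustrative, not TensorFlow internals):

```python
_variables = {}

def get_variable(name, reuse=False):
    """Toy version of TF1's get_variable name-collision check."""
    if name in _variables and not reuse:
        raise ValueError(
            f"Variable {name} already exists, disallowed. "
            "Did you mean to set reuse=True in VarScope?"
        )
    return _variables.setdefault(name, object())

get_variable("Embedding")              # first model builds fine
get_variable("Embedding", reuse=True)  # second model must opt in to reuse
try:
    get_variable("Embedding")          # without reuse: the reported error
except ValueError as e:
    print(e)
```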
Python 2.7, TensorFlow 1.1; running p5_fastTextB_model I get this error. @brightmart | closed | 2017-08-20T05:24:38Z | 2017-08-20T11:13:07Z | https://github.com/brightmart/text_classification/issues/14 | [] | diaoxue | 1 |
dgtlmoon/changedetection.io | web-scraping | 2,377 | CSS weirdness on mobile | **Describe the bug**
It's hard to explain in words, but if you look at the screenshot there are some semi-transparent overlays that look pretty weird on mobile.
**Version**
*Exact version* in the top right area: 0.45.22
**To Reproduce**
Steps to reproduce the behavior:
1. Visit the app homepage
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Smartphone (please complete the following information):**
- Device: Pixel 7
- OS: Android 14
- Browser: Firefox
- Version: 125
**Additional context**
Obviously a small CSS issue that's low priority but maybe a good first issue for someone :) | closed | 2024-05-19T08:44:10Z | 2024-05-22T11:45:13Z | https://github.com/dgtlmoon/changedetection.io/issues/2377 | [
"triage"
] | RayBB | 1 |
nltk/nltk | nlp | 2,369 | for features, labels in training_data: 2 x.append(features) 3 y.append(label) 4 x = np.array(x).reshape(-1, IMG_SIZE, IMG_SIZE, 1) ValueError: not enough values to unpack (expected 2, got 1) | Does anyone know why I got this error while running that piece of code? | closed | 2019-08-19T01:29:30Z | 2020-03-13T16:21:12Z | https://github.com/nltk/nltk/issues/2369 | [
"inactive"
] | tommieezpark | 1 |
pmaji/crypto-whale-watching-app | plotly | 92 | *-BTC pairs should be 5 decimals not 4 | E.g. ETH-BTC: 0.07564 BTC not 0.0756. | closed | 2018-03-06T18:50:17Z | 2018-03-07T02:02:30Z | https://github.com/pmaji/crypto-whale-watching-app/issues/92 | [] | mifunetoshiro | 5 |
ultralytics/yolov5 | pytorch | 12,875 | YOLOV5 no longer works on Nvidia Jetson Nano | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Integrations
### Bug
YOLOv5 no longer works on the NVIDIA Jetson Nano; don't waste your time until the ultralytics dependency is fixed or removed.
I am encountering a module import error related to the ultralytics package when attempting to run YOLOv5 on an NVIDIA Jetson Nano. This issue arises when executing the detect.py script, specifically at the import statement for ultralytics modules within YOLOv5's code.
Environment:
Device: NVIDIA Jetson Nano
Python Version: 3.6.9
Operating System: Ubuntu 18.04 (compatible with Jetson Nano)
PyTorch Version: 1.10.0 (installed from NVIDIA's recommended wheels for Jetson)
Torchvision Version: 0.11.1 (installed from NVIDIA's recommended wheels for Jetson)
YOLOv5 Version: Latest from main branch as of [Date]
CUDA Version: Compatible with installed PyTorch/torchvision versions
Issue Description:
Upon running the detect.py script with a basic test (either images or video stream), the script fails with the following error message:
```
Traceback (most recent call last):
  File "detect.py", line 47, in <module>
    from utils.plots import Annotator, colors, save_one_box
  File "/home/[username]/yolov5/utils/plots.py", line 19, in <module>
    from ultralytics.utils.plotting import Annotator
ModuleNotFoundError: No module named 'ultralytics'
```
This suggests that the script is unable to import the ultralytics module, despite following all installation and setup instructions closely, including setting up the environment as recommended for the Jetson Nano platform.
Steps to Reproduce:
Set up the Jetson Nano with NVIDIA-recommended versions of PyTorch and torchvision.
Clone the YOLOv5 repository and install dependencies (excluding PyTorch and torchvision to avoid conflicts with NVIDIA's versions).
Attempt to run detect.py with any input source.
Deleting the line will not solve the problem, because YOLOv5 uses ultralytics modules deep under the hood.
So until this ticket gets a RESOLVED status, don't waste your time with YOLOv5 on Jetson Nano.
### Environment
Device: NVIDIA Jetson Nano
Python Version: 3.6.9
Operating System: Ubuntu 18.04 (compatible with Jetson Nano)
PyTorch Version: 1.10.0 (installed from NVIDIA's recommended wheels for Jetson)
Torchvision Version: 0.11.1 (installed from NVIDIA's recommended wheels for Jetson)
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-04-02T20:06:50Z | 2024-10-20T19:42:49Z | https://github.com/ultralytics/yolov5/issues/12875 | [
"bug",
"Stale"
] | gilmotta3 | 4 |
whitphx/streamlit-webrtc | streamlit | 1,682 | Video recording freezes using MediaRecorder | Hi!
Thank you for implementing this component; I found it really useful!
I am having some trouble recording the input video using the cookbooks/examples provided in the code.
My current code looks like this:
```python
from aiortc.contrib.media import MediaRecorder
from streamlit_webrtc import WebRtcMode, webrtc_streamer
def app():
def recorder_factory() -> MediaRecorder:
return MediaRecorder("test1.mp4")
webrtc_streamer(
key="loopback",
mode=WebRtcMode.SENDRECV,
rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]},
media_stream_constraints={
"video": True,
"audio": True,
},
in_recorder_factory=recorder_factory
)
if __name__ == "__main__":
app()
```
This code is similar to the example provided. However, after a few seconds, the video recording freezes and only the audio continues to record.
The error I am receiving is:
```
[libx264 @ 000001f7b1525180] non-strictly-monotonic PTS
[mp4 @ 000001f78f20b940] Application provided invalid, non monotonically increasing dts to muxer in stream 1: 22016 >= 22016
```
Should I use some other options in the MediaRecorder from the aiortc library?
I noticed that some people encounter the same error with aiortc MediaRecorder when using HLS, but that is not my case.
Thank you for your assistance! | open | 2024-06-20T09:30:37Z | 2024-06-20T11:45:03Z | https://github.com/whitphx/streamlit-webrtc/issues/1682 | [] | Spisor | 0 |
plotly/dash | jupyter | 2,688 | [Feature Request] Run dash in jupyter without additional server iframe | The problem with the iframe approach is that you need to forward extra ports from your dev environment, which can be difficult and causes trouble when, for example, you are trying to build an interactive chart using dash-cytoscape.
However, as I understand it, Jupyter has its own data pipe between server and client.
So maybe we can somehow avoid having app.run() create an iframe on 127.0.0.1?
| closed | 2023-11-11T05:25:21Z | 2024-07-25T13:33:48Z | https://github.com/plotly/dash/issues/2688 | [] | zba | 1 |
comfyanonymous/ComfyUI | pytorch | 7,100 | WanVideoSampler///Empty image embeds must be provided for T2V (Text to Video) | ### Expected Behavior
WanVideoSampler///Empty image embeds must be provided for T2V (Text to Video)
### Actual Behavior
WanVideoSampler///Empty image embeds must be provided for T2V (Text to Video)
### Steps to Reproduce
WanVideoSampler///Empty image embeds must be provided for T2V (Text to Video)
### Debug Logs
```powershell
WanVideoSampler///Empty image embeds must be provided for T2V (Text to Video)
```
### Other
_No response_ | closed | 2025-03-06T11:47:07Z | 2025-03-06T15:56:16Z | https://github.com/comfyanonymous/ComfyUI/issues/7100 | [
"Potential Bug",
"Custom Nodes Bug"
] | renkunlong | 1 |
aiortc/aiortc | asyncio | 245 | MediaPlayer with multiple formats/inputs and/or merging audio+video from two different sources | I'm trying to add an audio input to a MediaPlayer using the FFMPEG options like so:
```python
player = MediaPlayer(':99', format='x11grab', options={'-f': 'pulse', '-i': 'default'})
```
However, `player.audio` resolves to `None`.
I'm not sure whether this is something that needs to be exposed in PyAV (or aiortc), or whether it already is and I just can't find it, or whether I'm doing something incorrectly.
I've tried creating two different MediaPlayers, one for x11grab and another for pulse, and sending the `video` and `audio` to connected clients, but then the audio + video are out of sync/distorted.
There's a closed issue on PyAV that's very similar to this one. https://github.com/mikeboers/PyAV/issues/488
I apologize if I've misplaced this issue, but I'm really loving aiortc and I'd love to stay within it.
EDIT: If there's a better way to merge the audio of one player with the video from another please let me know. | closed | 2019-12-28T07:47:02Z | 2019-12-30T07:51:04Z | https://github.com/aiortc/aiortc/issues/245 | [] | murlyn | 2 |
newpanjing/simpleui | django | 514 | To display multiple fields on the same line, wrap those fields in their own tuple; this stopped working after switching to simpleui | To display multiple fields on the same line, you wrap those fields in their own tuple. This feature stopped working after switching to simpleui. | open | 2025-03-17T10:35:53Z | 2025-03-17T10:35:53Z | https://github.com/newpanjing/simpleui/issues/514 | [
"bug"
] | djzbj | 0 |
donnemartin/system-design-primer | python | 531 | Convert to epub broken | There are several issues in the epub conversion script using pandoc
1) It seems like `--metadata-file` might have been renamed to `--epub-metadata`, but I'm not entirely sure. It doesn't seem to pick up the YAML file even after I renamed it. I ended up using command-line arguments instead.
`--metadata=lang:en --metadata=title:'System Design Primer'`
2) `<img src='path'>` is valid HTML4/HTML5, but it is not valid XHTML, which is what I think EPUB requires. When I tried to read the epub on my iPad, the book broke.
After replacing all `<img ...>` with `<img ... />`, it renders fine.
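For anyone hitting the same issue, here is a minimal pure-Python sketch of the replacement I did. The function name is my own, and the regex only handles simple single-line tags (a `>` inside an attribute value would break it):

```python
import re

def self_close_img_tags(html: str) -> str:
    """Rewrite HTML-style <img ...> tags as XHTML self-closing <img ... />.

    Tags that are already self-closed come out unchanged (modulo spacing).
    Only simple tags are handled; this is a sketch, not a full HTML parser.
    """
    return re.sub(r"<img\b([^>]*?)\s*/?>", r"<img\1 />", html)
```

This is just the transformation on the markdown/HTML side; the pandoc invocation itself is unchanged.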
https://stackoverflow.com/questions/14860492/how-to-close-img-tag-properly | open | 2021-04-28T20:01:30Z | 2022-04-23T13:17:27Z | https://github.com/donnemartin/system-design-primer/issues/531 | [
"needs-review"
] | Peppershaker | 3 |
dask/dask | numpy | 11,020 | `set_index` returns the divisions instead of the dataframe with query planning enabled | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
Calling `set_index` with a sorted index and divisions returns a tuple of the divisions instead of the modified dataframe, if query planning is enabled.
**Minimal Complete Verifiable Example**:
Copied from the `set_index` docstring:
```python
import dask
import pandas as pd
ddf = dask.datasets.timeseries(start="2021-01-01", end="2021-01-07", freq="1h").reset_index()
divisions = pd.date_range(start="2021-01-01", end="2021-01-07", freq='1D')
ddf2 = ddf.set_index("timestamp", sorted=True, divisions=divisions.tolist())
```
```python
>>> type(ddf2)
tuple
>>> ddf2 == tuple(divisions)
True
```
**Anything else we need to know?**:
This only happens when query planning is enabled in the config, sorted is true, and divisions are given.
Likely related to #10974.
**Environment**:
- Dask version: 2024.3.1 (dask-expr: 1.0.3)
- Python version: 3.11
- Operating System: PopOS
- Install method (conda, pip, source): pip
| closed | 2024-03-24T17:04:17Z | 2024-03-25T17:26:54Z | https://github.com/dask/dask/issues/11020 | [
"needs triage"
] | aazuspan | 0 |
cvat-ai/cvat | pytorch | 8,818 | I got a “Could not create the task” error. | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
-
### Expected Behavior
-
### Possible Solution
-
### Context
Hi,
Could you help me?
I am having trouble creating a task as superuser in any project and organization.

CVAT is deployed on Ubuntu server with docker container. Here's the logs:
```
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.NotNullViolation: null value in column "max_validations_per_job" of relation "quality_control_qualitysettings" violates not-null constraint
DETAIL: Failing row contains (128, 0.4, 0.09, 0.01, 0.8, t, 0.1, t, 0.5, t, 0.05, t, t, 128, null, null, null).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
File "/opt/venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 40, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 522, in thread_handler
return func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/drf_spectacular/drainage.py", line 193, in wrapped_method
return method(self, request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/django/cvat/apps/engine/views.py", line 907, in perform_create
serializer.save(
File "/opt/venv/lib/python3.10/site-packages/rest_framework/serializers.py", line 212, in save
self.instance = self.create(validated_data)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/django/cvat/apps/engine/serializers.py", line 1162, in create
db_task = models.Task.objects.create(
File "/opt/venv/lib/python3.10/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/query.py", line 658, in create
obj.save(force_insert=True, using=self.db)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/base.py", line 814, in save
self.save_base(
File "/opt/venv/lib/python3.10/site-packages/django/db/models/base.py", line 892, in save_base
post_save.send(
File "/opt/venv/lib/python3.10/site-packages/django/dispatch/dispatcher.py", line 176, in send
return [
File "/opt/venv/lib/python3.10/site-packages/django/dispatch/dispatcher.py", line 177, in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
File "/home/django/cvat/apps/quality_control/signals.py", line 66, in __save_task__initialize_quality_settings
QualitySettings.objects.get_or_create(task=task)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/query.py", line 923, in get_or_create
return self.create(**params), True
File "/opt/venv/lib/python3.10/site-packages/django/db/models/query.py", line 658, in create
obj.save(force_insert=True, using=self.db)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/base.py", line 814, in save
self.save_base(
File "/opt/venv/lib/python3.10/site-packages/django/db/models/base.py", line 877, in save_base
updated = self._save_table(
File "/opt/venv/lib/python3.10/site-packages/django/db/models/base.py", line 1020, in _save_table
results = self._do_insert(
File "/opt/venv/lib/python3.10/site-packages/django/db/models/base.py", line 1061, in _do_insert
return manager._insert(
File "/opt/venv/lib/python3.10/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/query.py", line 1805, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/opt/venv/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1822, in execute_sql
cursor.execute(sql, params)
File "/opt/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/opt/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/opt/venv/lib/python3.10/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: null value in column "max_validations_per_job" of relation "quality_control_qualitysettings" violates not-null constraint
DETAIL: Failing row contains (128, 0.4, 0.09, 0.01, 0.8, t, 0.1, t, 0.5, t, 0.05, t, t, 128, null, null, null).
```
### Environment
_No response_ | closed | 2024-12-12T09:13:20Z | 2024-12-24T07:26:23Z | https://github.com/cvat-ai/cvat/issues/8818 | [
"bug",
"need info"
] | RainSmith21 | 1 |
lux-org/lux | pandas | 228 | Doc & Workflow | open | 2021-01-15T05:05:30Z | 2021-01-15T05:05:31Z | https://github.com/lux-org/lux/issues/228 | [
"Epic"
] | jinimukh | 0 | |
slackapi/bolt-python | fastapi | 422 | Reduce Spacing between two blocks | ### Is there a way to reduce the spacing between the **section type** and **context type**?
**Our Requirements:**
Text 1 is followed by Text 2, and Text 2 has to be in a smaller font size compared to Text 1 (that's why we are using type: "context" here). But when we do this, a noticeable gap appears between the two blocks, which we don't want. Is it possible to reduce it?
The payload is included below for reference.
```json
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*We want to hear from you, how was it? how can we improve ?*"
}
},
{
"type": "context",
"elements": [
{
"type": "plain_text",
"text": "*No need to sugar-coat it, we value your authentic thoughts ",
"emoji": true
}
]
},
{
"dispatch_action": true,
"type": "input",
"element": {
"type": "plain_text_input",
"multiline": true,
"action_id": "plain_text_input-action"
},
"label": {
"type": "plain_text",
"text": "\n",
"emoji": true
}
}
]
}
``` | closed | 2021-07-25T09:07:47Z | 2021-08-06T09:55:57Z | https://github.com/slackapi/bolt-python/issues/422 | [
"question"
] | Cyb-Nikh | 2 |
graphistry/pygraphistry | pandas | 302 | [FEA] Graphistry UI: Type in filter box directly without having to select a property first | First: this is related to the Graphistry UI, but this repo still seemed like the most active place to register the request. If there's a better place then do let me know.
**Is your feature request related to a problem? Please describe.**
For our users, 90% of the filters they want to add are `CONTAINS(prop, substring)`. The Graphistry UI forces users to select a property from the list of graph properties, and then autofills a basic `prop = "something"` filter. This is all unnecessary for our filtering use cases, is slower, and, more importantly to us, causes extra complications in explaining to our users how to do `CONTAINS` filtering (e.g. "first select a prop, then delete it all and rewrite ...", which is a lot worse than "copy `CONTAINS(your_prop, your_substring)` into the filter box").
**Describe the solution you'd like**
We would like to be able to type a filter directly into the filter box. I.e. without having to select a graph property first.
**Describe alternatives you've considered**
na
**Additional context**
na
| open | 2022-01-24T05:54:26Z | 2022-01-24T06:35:25Z | https://github.com/graphistry/pygraphistry/issues/302 | [
"enhancement"
] | DBCerigo | 1 |
jmcnamara/XlsxWriter | pandas | 286 | Append rows to table | I have an application that iterates through data from an external source. I would like to save that data to an Excel table as I'm walking through it, but **add_table** needs to know the number of rows in advance before I can write to it.
My workaround is to either walk the data twice (first time just to count the rows) or save the data as I'm reading it & write the complete table at the end in one step.
Is an "append to table" feature possible?
| closed | 2015-08-07T14:24:29Z | 2015-08-07T15:35:10Z | https://github.com/jmcnamara/XlsxWriter/issues/286 | [
"question",
"ready to close"
] | pmqs | 2 |
jeffknupp/sandman2 | sqlalchemy | 153 | Can Py2.7 be deprecated? | As everyone knows Python 2 is no longer supported. I think it makes sense for this project to drop the support for it as well. | open | 2020-04-28T01:59:36Z | 2020-04-28T02:20:18Z | https://github.com/jeffknupp/sandman2/issues/153 | [] | Qu4tro | 1 |
jina-ai/clip-as-service | pytorch | 317 | Can I use the C++ to connect the bert-serving-server? | **Prerequisites**
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**Question**
I have finished training the text classification model using Python and exported the model's symbols and parameters.
Then I want to use C++ to predict the classification, but I find there is no C++ API to connect to the `bert-serving-server`. The approach I can think of is to use `bert-as-service` to serve HTTP requests in JSON and then use C++ to call the server via HTTP POST requests. Is there a better idea?
| open | 2019-04-12T10:43:00Z | 2019-04-15T07:49:11Z | https://github.com/jina-ai/clip-as-service/issues/317 | [] | shengfeng | 1 |
modelscope/data-juicer | data-visualization | 573 | Are the (e)/(f) subplot captions in Figure 1 of the DAAR paper mislabeled? | ### Before Asking
- [x] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully.
- [x] I have pulled the latest code of the main branch and run it again, and the problem still exists.
### Search before asking
- [x] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions.
### Question
In Figure 1 of the DAAR paper, are the subplot captions for (e) and (f) mixed up? They don't match the description below the figure either.

### Additional
_No response_ | closed | 2025-02-11T06:02:14Z | 2025-02-11T08:24:40Z | https://github.com/modelscope/data-juicer/issues/573 | [
"question"
] | xiafeng-nb | 3 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 629 | 1 | closed | 2022-08-22T12:53:43Z | 2022-09-30T06:35:56Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/629 | [] | XJWang628 | 0 | |
huggingface/datasets | pandas | 6,548 | Skip if a dataset has issues | ### Describe the bug
Hello everyone,
I'm using **load_dataset** from **huggingface** to download the datasets, and I'm facing an issue: the download starts, reaches some point, and then fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet
Failed to resolve \'huggingface.co\' ([Errno -3] Temporary failure in name resolution)"))')))

So I was wondering: is there a parameter that can be passed to load_dataset() to skip files that can't be downloaded?
### Steps to reproduce the bug
Parameter to be passed to load_dataset() of huggingface to skip files that can't be downloaded??
### Expected behavior
load_dataset() finishes without error
### Environment info
None | open | 2023-12-31T12:41:26Z | 2024-01-02T10:33:17Z | https://github.com/huggingface/datasets/issues/6548 | [] | hadianasliwa | 1 |
pytorch/vision | computer-vision | 8,745 | The link of `ImageNet()` doesn't have `ILSVRC2012_img_train.tar` and `ILSVRC2012_img_val.tar`. | ### 📚 The doc issue
[The link](https://image-net.org/challenges/LSVRC/2012/2012-downloads.php) of [ImageNet()](https://pytorch.org/vision/stable/generated/torchvision.datasets.ImageNet.html) doesn't have `ILSVRC2012_img_train.tar` and `ILSVRC2012_img_val.tar`.

### Suggest a potential alternative/fix
_No response_ | closed | 2024-11-24T12:47:18Z | 2024-11-27T17:15:21Z | https://github.com/pytorch/vision/issues/8745 | [] | hyperkai | 1 |
Esri/arcgis-python-api | jupyter | 1,731 | Swap the source of a hosted layer view using the ArcGIS API for Python | **Swap the source of a hosted layer view using the ArcGIS API for Python**
Do you know whether it is possible to swap the source of a hosted layer view using the ArcGIS API for Python? This can be done using AGOL - https://www.esri.com/arcgis-blog/products/arcgis-online/data-management/swapping-layers-a-great-way-to-build-and-maintain-your-feature-layer/. But I was wondering if the same could be done using the ArcGIS API for Python.
**Additional context**
Currently, I have a Python script that appends around 250,000 features to an ArcGIS Online feature layer. This script runs every day, but the append process can take anywhere from a couple of seconds to 10 minutes.
However, during the appending process, the dashboard becomes unusable. If I could perform a layer swap, it would solve my issue 🤗
I'm just wondering whether you could achieve this using the ArcGIS API for Python 🤔
 | closed | 2023-12-18T22:39:35Z | 2024-10-29T17:37:19Z | https://github.com/Esri/arcgis-python-api/issues/1731 | [
"enhancement"
] | GeeFernando | 4 |
openapi-generators/openapi-python-client | rest-api | 279 | Can't generate for a valid Swagger JSON file | **Describe the bug**
Nothing is generated, no types or methods.
**To Reproduce**
Steps to reproduce the behavior:
1. Generate for the link https://demo.defectdojo.org/api/v2/doc/?format=openapi
**Expected behavior**
Some types and methods in the client.
**OpenAPI Spec File**
[https://demo.defectdojo.org/api/v2/doc/?format=openapi](https://demo.defectdojo.org/api/v2/doc/?format=openapi)
**Desktop (please complete the following information):**
- OS: Win10
- Python Version: 3.6.7
- openapi-python-client version: 0.7.3
**Additional context**
Add any other context about the problem here.
[https://demo.defectdojo.org/api/v2/doc/?format=openapi](https://demo.defectdojo.org/api/v2/doc/?format=openapi)
Logs:
```
$ openapi-python-client update --path openapi.json
Updating defect-dojo-api-client
Warning(s) encountered while generating. Client was generated, but some pieces may be missing
ERROR parsing POST /api-token-auth/ within api-token-auth. Endpoint will not be generated.
cannot parse parameter of endpoint api-token-auth_create
Reference(ref='#/definitions/AuthToken')
ERROR parsing POST /development_environments/ within development_environments. Endpoint will not be generated.
cannot parse parameter of endpoint development_environments_create
Reference(ref='#/definitions/DevelopmentEnvironment')
ERROR parsing PUT /development_environments/{id}/ within development_environments. Endpoint will not be generated.
cannot parse parameter of endpoint development_environments_update
Reference(ref='#/definitions/DevelopmentEnvironment')
ERROR parsing PATCH /development_environments/{id}/ within development_environments. Endpoint will not be
generated.
cannot parse parameter of endpoint development_environments_partial_update
Reference(ref='#/definitions/DevelopmentEnvironment')
ERROR parsing POST /endpoint_status/ within endpoint_status. Endpoint will not be generated.
cannot parse parameter of endpoint endpoint_status_create
Reference(ref='#/definitions/EndpointStatus')
ERROR parsing PUT /endpoint_status/{id}/ within endpoint_status. Endpoint will not be generated.
cannot parse parameter of endpoint endpoint_status_update
Reference(ref='#/definitions/EndpointStatus')
ERROR parsing PATCH /endpoint_status/{id}/ within endpoint_status. Endpoint will not be generated.
cannot parse parameter of endpoint endpoint_status_partial_update
Reference(ref='#/definitions/EndpointStatus')
ERROR parsing POST /endpoints/ within endpoints. Endpoint will not be generated.
cannot parse parameter of endpoint endpoints_create
Reference(ref='#/definitions/Endpoint')
ERROR parsing PUT /endpoints/{id}/ within endpoints. Endpoint will not be generated.
cannot parse parameter of endpoint endpoints_update
Reference(ref='#/definitions/Endpoint')
ERROR parsing PATCH /endpoints/{id}/ within endpoints. Endpoint will not be generated.
cannot parse parameter of endpoint endpoints_partial_update
Reference(ref='#/definitions/Endpoint')
ERROR parsing POST /endpoints/{id}/generate_report/ within endpoints. Endpoint will not be generated.
cannot parse parameter of endpoint endpoints_generate_report
Reference(ref='#/definitions/ReportGenerateOption')
ERROR parsing POST /engagements/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_create
Reference(ref='#/definitions/Engagement')
ERROR parsing PUT /engagements/{id}/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_update
Reference(ref='#/definitions/Engagement')
ERROR parsing PATCH /engagements/{id}/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_partial_update
Reference(ref='#/definitions/Engagement')
ERROR parsing POST /engagements/{id}/accept_risks/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_accept_risks
Reference(ref='#/definitions/AcceptedRisk')
ERROR parsing POST /engagements/{id}/generate_report/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_generate_report
Reference(ref='#/definitions/ReportGenerateOption')
ERROR parsing POST /engagements/{id}/notes/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_notes_create
Reference(ref='#/definitions/Engagement')
ERROR parsing PATCH /engagements/{id}/notes/ within engagements. Endpoint will not be generated.
cannot parse parameter of endpoint engagements_notes_partial_update
Reference(ref='#/definitions/Engagement')
ERROR parsing POST /finding_templates/ within finding_templates. Endpoint will not be generated.
cannot parse parameter of endpoint finding_templates_create
Reference(ref='#/definitions/FindingTemplate')
ERROR parsing PUT /finding_templates/{id}/ within finding_templates. Endpoint will not be generated.
cannot parse parameter of endpoint finding_templates_update
Reference(ref='#/definitions/FindingTemplate')
ERROR parsing PATCH /finding_templates/{id}/ within finding_templates. Endpoint will not be generated.
cannot parse parameter of endpoint finding_templates_partial_update
Reference(ref='#/definitions/FindingTemplate')
ERROR parsing POST /findings/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_create
Reference(ref='#/definitions/FindingCreate')
ERROR parsing POST /findings/accept_risks/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_accept_risks
Reference(ref='#/definitions/AcceptedRisk')
ERROR parsing POST /findings/generate_report/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_generate_report
Reference(ref='#/definitions/ReportGenerateOption')
ERROR parsing PUT /findings/{id}/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_update
Reference(ref='#/definitions/Finding')
ERROR parsing PATCH /findings/{id}/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_partial_update
Reference(ref='#/definitions/Finding')
ERROR parsing PUT /findings/{id}/metadata/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_metadata_update
Reference(ref='#/definitions/FindingMeta')
ERROR parsing POST /findings/{id}/metadata/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_metadata_create
Reference(ref='#/definitions/FindingMeta')
ERROR parsing POST /findings/{id}/notes/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_notes_create
Reference(ref='#/definitions/AddNewNoteOption')
ERROR parsing PATCH /findings/{id}/notes/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_notes_partial_update
Reference(ref='#/definitions/AddNewNoteOption')
ERROR parsing PATCH /findings/{id}/remove_note/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_remove_note
Reference(ref='#/definitions/FindingNote')
ERROR parsing PUT /findings/{id}/remove_tags/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_remove_tags_update
Reference(ref='#/definitions/Tag')
ERROR parsing PATCH /findings/{id}/remove_tags/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_remove_tags_partial_update
Reference(ref='#/definitions/Tag')
ERROR parsing POST /findings/{id}/request_response/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_request_response_create
Reference(ref='#/definitions/BurpRawRequestResponse')
ERROR parsing POST /findings/{id}/tags/ within findings. Endpoint will not be generated.
cannot parse parameter of endpoint findings_tags_create
Reference(ref='#/definitions/Tag')
ERROR parsing POST /jira_configurations/ within jira_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint jira_configurations_create
Reference(ref='#/definitions/JIRAInstance')
ERROR parsing PUT /jira_configurations/{id}/ within jira_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint jira_configurations_update
Reference(ref='#/definitions/JIRAInstance')
ERROR parsing PATCH /jira_configurations/{id}/ within jira_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint jira_configurations_partial_update
Reference(ref='#/definitions/JIRAInstance')
ERROR parsing POST /jira_finding_mappings/ within jira_finding_mappings. Endpoint will not be generated.
cannot parse parameter of endpoint jira_finding_mappings_create
Reference(ref='#/definitions/JIRAIssue')
ERROR parsing PUT /jira_finding_mappings/{id}/ within jira_finding_mappings. Endpoint will not be generated.
cannot parse parameter of endpoint jira_finding_mappings_update
Reference(ref='#/definitions/JIRAIssue')
ERROR parsing PATCH /jira_finding_mappings/{id}/ within jira_finding_mappings. Endpoint will not be generated.
cannot parse parameter of endpoint jira_finding_mappings_partial_update
Reference(ref='#/definitions/JIRAIssue')
ERROR parsing POST /jira_instances/ within jira_instances. Endpoint will not be generated.
cannot parse parameter of endpoint jira_instances_create
Reference(ref='#/definitions/JIRAInstance')
ERROR parsing PUT /jira_instances/{id}/ within jira_instances. Endpoint will not be generated.
cannot parse parameter of endpoint jira_instances_update
Reference(ref='#/definitions/JIRAInstance')
ERROR parsing PATCH /jira_instances/{id}/ within jira_instances. Endpoint will not be generated.
cannot parse parameter of endpoint jira_instances_partial_update
Reference(ref='#/definitions/JIRAInstance')
ERROR parsing POST /jira_product_configurations/ within jira_product_configurations. Endpoint will not be
generated.
cannot parse parameter of endpoint jira_product_configurations_create
Reference(ref='#/definitions/JIRAProject')
ERROR parsing PUT /jira_product_configurations/{id}/ within jira_product_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint jira_product_configurations_update
Reference(ref='#/definitions/JIRAProject')
ERROR parsing PATCH /jira_product_configurations/{id}/ within jira_product_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint jira_product_configurations_partial_update
Reference(ref='#/definitions/JIRAProject')
ERROR parsing POST /jira_projects/ within jira_projects. Endpoint will not be generated.
cannot parse parameter of endpoint jira_projects_create
Reference(ref='#/definitions/JIRAProject')
ERROR parsing PUT /jira_projects/{id}/ within jira_projects. Endpoint will not be generated.
cannot parse parameter of endpoint jira_projects_update
Reference(ref='#/definitions/JIRAProject')
ERROR parsing PATCH /jira_projects/{id}/ within jira_projects. Endpoint will not be generated.
cannot parse parameter of endpoint jira_projects_partial_update
Reference(ref='#/definitions/JIRAProject')
ERROR parsing POST /metadata/ within metadata. Endpoint will not be generated.
cannot parse parameter of endpoint metadata_create
Reference(ref='#/definitions/Meta')
ERROR parsing PUT /metadata/{id}/ within metadata. Endpoint will not be generated.
cannot parse parameter of endpoint metadata_update
Reference(ref='#/definitions/Meta')
ERROR parsing PATCH /metadata/{id}/ within metadata. Endpoint will not be generated.
cannot parse parameter of endpoint metadata_partial_update
Reference(ref='#/definitions/Meta')
ERROR parsing POST /note_type/ within note_type. Endpoint will not be generated.
cannot parse parameter of endpoint note_type_create
Reference(ref='#/definitions/NoteType')
ERROR parsing PUT /note_type/{id}/ within note_type. Endpoint will not be generated.
cannot parse parameter of endpoint note_type_update
Reference(ref='#/definitions/NoteType')
ERROR parsing PATCH /note_type/{id}/ within note_type. Endpoint will not be generated.
cannot parse parameter of endpoint note_type_partial_update
Reference(ref='#/definitions/NoteType')
ERROR parsing PUT /notes/{id}/ within notes. Endpoint will not be generated.
cannot parse parameter of endpoint notes_update
Reference(ref='#/definitions/Note')
ERROR parsing PATCH /notes/{id}/ within notes. Endpoint will not be generated.
cannot parse parameter of endpoint notes_partial_update
Reference(ref='#/definitions/Note')
ERROR parsing POST /product_types/ within product_types. Endpoint will not be generated.
cannot parse parameter of endpoint product_types_create
Reference(ref='#/definitions/ProductType')
ERROR parsing PUT /product_types/{id}/ within product_types. Endpoint will not be generated.
cannot parse parameter of endpoint product_types_update
Reference(ref='#/definitions/ProductType')
ERROR parsing PATCH /product_types/{id}/ within product_types. Endpoint will not be generated.
cannot parse parameter of endpoint product_types_partial_update
Reference(ref='#/definitions/ProductType')
ERROR parsing POST /product_types/{id}/generate_report/ within product_types. Endpoint will not be generated.
cannot parse parameter of endpoint product_types_generate_report
Reference(ref='#/definitions/ReportGenerateOption')
ERROR parsing POST /products/ within products. Endpoint will not be generated.
cannot parse parameter of endpoint products_create
Reference(ref='#/definitions/Product')
ERROR parsing PUT /products/{id}/ within products. Endpoint will not be generated.
cannot parse parameter of endpoint products_update
Reference(ref='#/definitions/Product')
ERROR parsing PATCH /products/{id}/ within products. Endpoint will not be generated.
cannot parse parameter of endpoint products_partial_update
Reference(ref='#/definitions/Product')
ERROR parsing POST /products/{id}/generate_report/ within products. Endpoint will not be generated.
cannot parse parameter of endpoint products_generate_report
Reference(ref='#/definitions/ReportGenerateOption')
ERROR parsing POST /regulations/ within regulations. Endpoint will not be generated.
cannot parse parameter of endpoint regulations_create
Reference(ref='#/definitions/Regulation')
ERROR parsing PUT /regulations/{id}/ within regulations. Endpoint will not be generated.
cannot parse parameter of endpoint regulations_update
Reference(ref='#/definitions/Regulation')
ERROR parsing PATCH /regulations/{id}/ within regulations. Endpoint will not be generated.
cannot parse parameter of endpoint regulations_partial_update
Reference(ref='#/definitions/Regulation')
ERROR parsing POST /scan_settings/ within scan_settings. Endpoint will not be generated.
cannot parse parameter of endpoint scan_settings_create
Reference(ref='#/definitions/ScanSettingsCreate')
ERROR parsing PUT /scan_settings/{id}/ within scan_settings. Endpoint will not be generated.
cannot parse parameter of endpoint scan_settings_update
Reference(ref='#/definitions/ScanSettings')
ERROR parsing PATCH /scan_settings/{id}/ within scan_settings. Endpoint will not be generated.
cannot parse parameter of endpoint scan_settings_partial_update
Reference(ref='#/definitions/ScanSettings')
ERROR parsing POST /sonarqube_issues/ within sonarqube_issues. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_issues_create
Reference(ref='#/definitions/SonarqubeIssue')
ERROR parsing PUT /sonarqube_issues/{id}/ within sonarqube_issues. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_issues_update
Reference(ref='#/definitions/SonarqubeIssue')
ERROR parsing PATCH /sonarqube_issues/{id}/ within sonarqube_issues. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_issues_partial_update
Reference(ref='#/definitions/SonarqubeIssue')
ERROR parsing POST /sonarqube_product_configurations/ within sonarqube_product_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_product_configurations_create
Reference(ref='#/definitions/SonarqubeProduct')
ERROR parsing PUT /sonarqube_product_configurations/{id}/ within sonarqube_product_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_product_configurations_update
Reference(ref='#/definitions/SonarqubeProduct')
ERROR parsing PATCH /sonarqube_product_configurations/{id}/ within sonarqube_product_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_product_configurations_partial_update
Reference(ref='#/definitions/SonarqubeProduct')
ERROR parsing POST /sonarqube_transitions/ within sonarqube_transitions. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_transitions_create
Reference(ref='#/definitions/SonarqubeIssueTransition')
ERROR parsing PUT /sonarqube_transitions/{id}/ within sonarqube_transitions. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_transitions_update
Reference(ref='#/definitions/SonarqubeIssueTransition')
ERROR parsing PATCH /sonarqube_transitions/{id}/ within sonarqube_transitions. Endpoint will not be generated.
cannot parse parameter of endpoint sonarqube_transitions_partial_update
Reference(ref='#/definitions/SonarqubeIssueTransition')
ERROR parsing POST /stub_findings/ within stub_findings. Endpoint will not be generated.
cannot parse parameter of endpoint stub_findings_create
Reference(ref='#/definitions/StubFindingCreate')
ERROR parsing PUT /stub_findings/{id}/ within stub_findings. Endpoint will not be generated.
cannot parse parameter of endpoint stub_findings_update
Reference(ref='#/definitions/StubFinding')
ERROR parsing PATCH /stub_findings/{id}/ within stub_findings. Endpoint will not be generated.
cannot parse parameter of endpoint stub_findings_partial_update
Reference(ref='#/definitions/StubFinding')
ERROR parsing PUT /system_settings/{id}/ within system_settings. Endpoint will not be generated.
cannot parse parameter of endpoint system_settings_update
Reference(ref='#/definitions/SystemSettings')
ERROR parsing PATCH /system_settings/{id}/ within system_settings. Endpoint will not be generated.
cannot parse parameter of endpoint system_settings_partial_update
Reference(ref='#/definitions/SystemSettings')
ERROR parsing POST /technologies/ within technologies. Endpoint will not be generated.
cannot parse parameter of endpoint technologies_create
Reference(ref='#/definitions/AppAnalysis')
ERROR parsing PUT /technologies/{id}/ within technologies. Endpoint will not be generated.
cannot parse parameter of endpoint technologies_update
Reference(ref='#/definitions/AppAnalysis')
ERROR parsing PATCH /technologies/{id}/ within technologies. Endpoint will not be generated.
cannot parse parameter of endpoint technologies_partial_update
Reference(ref='#/definitions/AppAnalysis')
ERROR parsing POST /test_types/ within test_types. Endpoint will not be generated.
cannot parse parameter of endpoint test_types_create
Reference(ref='#/definitions/TestType')
ERROR parsing PUT /test_types/{id}/ within test_types. Endpoint will not be generated.
cannot parse parameter of endpoint test_types_update
Reference(ref='#/definitions/TestType')
ERROR parsing PATCH /test_types/{id}/ within test_types. Endpoint will not be generated.
cannot parse parameter of endpoint test_types_partial_update
Reference(ref='#/definitions/TestType')
ERROR parsing POST /tests/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_create
Reference(ref='#/definitions/TestCreate')
ERROR parsing PUT /tests/{id}/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_update
Reference(ref='#/definitions/Test')
ERROR parsing PATCH /tests/{id}/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_partial_update
Reference(ref='#/definitions/Test')
ERROR parsing POST /tests/{id}/accept_risks/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_accept_risks
Reference(ref='#/definitions/AcceptedRisk')
ERROR parsing POST /tests/{id}/generate_report/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_generate_report
Reference(ref='#/definitions/ReportGenerateOption')
ERROR parsing POST /tests/{id}/notes/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_notes_create
Reference(ref='#/definitions/TestCreate')
ERROR parsing PATCH /tests/{id}/notes/ within tests. Endpoint will not be generated.
cannot parse parameter of endpoint tests_notes_partial_update
Reference(ref='#/definitions/Test')
ERROR parsing POST /tool_configurations/ within tool_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint tool_configurations_create
Reference(ref='#/definitions/ToolConfiguration')
ERROR parsing PUT /tool_configurations/{id}/ within tool_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint tool_configurations_update
Reference(ref='#/definitions/ToolConfiguration')
ERROR parsing PATCH /tool_configurations/{id}/ within tool_configurations. Endpoint will not be generated.
cannot parse parameter of endpoint tool_configurations_partial_update
Reference(ref='#/definitions/ToolConfiguration')
ERROR parsing POST /tool_product_settings/ within tool_product_settings. Endpoint will not be generated.
cannot parse parameter of endpoint tool_product_settings_create
Reference(ref='#/definitions/ToolProductSettings')
ERROR parsing PUT /tool_product_settings/{id}/ within tool_product_settings. Endpoint will not be generated.
cannot parse parameter of endpoint tool_product_settings_update
Reference(ref='#/definitions/ToolProductSettings')
ERROR parsing PATCH /tool_product_settings/{id}/ within tool_product_settings. Endpoint will not be generated.
cannot parse parameter of endpoint tool_product_settings_partial_update
Reference(ref='#/definitions/ToolProductSettings')
ERROR parsing POST /tool_types/ within tool_types. Endpoint will not be generated.
cannot parse parameter of endpoint tool_types_create
Reference(ref='#/definitions/ToolType')
ERROR parsing PUT /tool_types/{id}/ within tool_types. Endpoint will not be generated.
cannot parse parameter of endpoint tool_types_update
Reference(ref='#/definitions/ToolType')
ERROR parsing PATCH /tool_types/{id}/ within tool_types. Endpoint will not be generated.
cannot parse parameter of endpoint tool_types_partial_update
Reference(ref='#/definitions/ToolType')
ERROR parsing POST /users/ within users. Endpoint will not be generated.
cannot parse parameter of endpoint users_create
Reference(ref='#/definitions/User')
ERROR parsing PUT /users/{id}/ within users. Endpoint will not be generated.
cannot parse parameter of endpoint users_update
Reference(ref='#/definitions/User')
ERROR parsing PATCH /users/{id}/ within users. Endpoint will not be generated.
cannot parse parameter of endpoint users_partial_update
Reference(ref='#/definitions/User')
If you believe this was a mistake or this tool is missing a feature you need, please open an issue at https://github.com/triaxtec/openapi-python-client/issues/new/choose
```
| closed | 2020-12-29T11:11:16Z | 2020-12-30T21:14:43Z | https://github.com/openapi-generators/openapi-python-client/issues/279 | [
"🐞bug"
] | damiencarol | 3 |
piskvorky/gensim | nlp | 3,324 | provenance, copyright holders and licensing of `gensim/test/test_data/`? | On behalf of my employer, I have packaged gensim for Debian:
https://tracker.debian.org/pkg/gensim
In the process of auditing the gensim git repository for inclusion in Debian I noticed by using web search engines that some of the files in the [gensim/test/test_data/](https://github.com/RaRe-Technologies/gensim/tree/develop/gensim/test/test_data) directory seem to have been copied from the user comments on various websites such as IMDB. Presumably these comments were not owned by RaRe Technologies (or other gensim contributors) and were not licensed under the LGPL like the rest of gensim.
Other files seemed to indicate they were copied from Wikipedia, which definitely isn't LGPL. Others seemed to be statistics computed from some data and others seem to be generated files.
So I then wondered about all the files in the test data directory: where they came from, who owns them, what license they are under, and, since many of them are binary files, how they were generated, from what source data, with what tools, and what the copyright/licensing of those tools is.
Without any answers to these questions I wasn't confident that I could get gensim into Debian quickly, so consequently I removed this directory from the Debian source package and added some [patches](https://github.com/pabs3/gensim/compare/skip-datapath).
I don't know if it will be feasible to reconcile this difference between the gensim git repository and the Debian source package, but I wanted to bring this to your attention and start a discussion about it.
It was mentioned in another issue that gensim tests in some cases generate files at test time instead of relying on pre-generated binary files. Perhaps some of the other tests could be changed to do that too.
For the cases where data is needed at test time, perhaps each data set could be in a separate directory and have a README alongside it detailing the provenance, copyright holders and licensing of each data set.
Some of the test data might no longer be needed and thus could be removed. | open | 2022-04-13T07:31:14Z | 2022-04-13T22:39:25Z | https://github.com/piskvorky/gensim/issues/3324 | [
"housekeeping"
] | pabs3 | 4 |
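The suggestion in that issue, generating test files at test time instead of shipping pre-generated binaries, can be sketched with stdlib-only test code (the corpus contents and filename here are made up for illustration):

```python
import tempfile
import unittest
from pathlib import Path

def make_corpus(dirpath: Path) -> Path:
    """Write a tiny synthetic corpus and return its path.

    The sentences are invented on the spot, so no third-party
    (and possibly non-LGPL) text has to be shipped in the repo.
    """
    path = dirpath / "corpus.txt"
    path.write_text(
        "the quick brown fox jumps over the lazy dog\n"
        "a synthetic sentence used only for testing\n"
    )
    return path

class TestCorpus(unittest.TestCase):
    def test_corpus_loads(self):
        # Data lives only in a per-test temporary directory.
        with tempfile.TemporaryDirectory() as tmp:
            corpus = make_corpus(Path(tmp))
            self.assertEqual(len(corpus.read_text().splitlines()), 2)
```

Each generated data set could still carry a short README describing its provenance, as the issue proposes, even when the data is synthetic.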
dynaconf/dynaconf | django | 447 | Multi env settings | In dynaconf version 2.2.3 I have:
.secrets.toml
```
[default]
a = 3
[development]
a = 3
```
main.py
```python
from dynaconf import settings
print(settings.A)
```
And it works fine.
In version 3.1.2 I started using the new config.py mechanism:
config.py (generated by the dynaconf CLI)
```
from dynaconf import Dynaconf
settings = Dynaconf(
envvar_prefix="DYNACONF",
settings_files=['settings.toml', '.secrets.toml'],
)
```
main.py
```python
from config import settings
print(settings.A)
```
I get an error (the previous main.py, using `from dynaconf import settings`, still works fine):
```
> poetry run python main.py
Traceback (most recent call last):
File "main.py", line 4, in <module>
print(settings.A)
File "/home/dyens/.cache/pypoetry/virtualenvs/dyntest-bSb0oaYP-py3.7/lib64/python3.7/site-packages/dynaconf/base.py", line 164, in __getattr__
value = getattr(self._wrapped, name)
AttributeError: 'Settings' object has no attribute 'A'
```
If I try to print:
```python
print(settings.to_dict())
```
I have:
```
{'DEFAULT': {'a': 3}, 'DEVELOPMENT': {'a': 3}}
```
What did I miss in my new configuration?
Thank you!
| closed | 2020-10-12T14:06:12Z | 2020-10-12T19:15:48Z | https://github.com/dynaconf/dynaconf/issues/447 | [
"question"
] | dyens | 2 |
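A likely explanation for the question above, stated as an assumption rather than the thread's confirmed resolution: in dynaconf 3.x the layered `[default]`/`[development]` environments are opt-in, so the generated `config.py` would need `environments=True`, roughly:

```python
from dynaconf import Dynaconf

settings = Dynaconf(
    envvar_prefix="DYNACONF",
    settings_files=["settings.toml", ".secrets.toml"],
    environments=True,  # opt in to [default]/[development] layering in 3.x
)
```

Without that flag, 3.x reads the files as flat key/value settings, which matches the `{'DEFAULT': {'a': 3}, 'DEVELOPMENT': {'a': 3}}` dump shown above.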
httpie/cli | python | 1,408 | backslashes that precede a number are not preserved | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
```sh
$ http --offline --print=B : \\0=
```
## Current result
```json
{
"0": ""
}
```
## Expected result
```json
{
"\\0": ""
}
```
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
$ http --debug --offline --print=B : \\0=
HTTPie 3.2.1
Requests 2.22.0
Pygments 2.7.2
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0]
/usr/bin/python3
Linux 5.10.102.1-microsoft-standard-WSL2
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x7f494214a5e0>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x7f494214a4c0>,
'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/home/ducaale/.httpie'),
'devnull': <property object at 0x7f494212a310>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x7f494214a550>,
'program_name': 'http',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x7f49421c37f0>,
'rich_error_console': <functools.cached_property object at 0x7f4942122910>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': b'{"0": ""}',
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.2.1', 'Accept': b'application/json, */*;q=0.5', 'Content-Type': b'application/json')>,
'method': 'post',
'params': <generator object MultiValueOrderedDict.items at 0x7f4941d267b0>,
'url': 'http://localhost'})
{
"0": ""
}
```
## Additional information, screenshots, or code examples
N/A
| open | 2022-06-06T21:31:41Z | 2022-06-07T11:57:53Z | https://github.com/httpie/cli/issues/1408 | [
"bug"
] | ducaale | 6 |
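For reference, httpie's documented escaping rule for request items is that a backslash escapes the key/value separator characters; a backslash before anything else should stay literal. A minimal sketch of that rule (a hypothetical helper for illustration, not httpie's actual parser):

```python
# Illustrative subset of httpie's request-item separators.
SEPARATORS = {":", "=", "@"}

def unescape_key(key: str) -> str:
    """Drop a backslash only when it escapes a separator; keep it otherwise."""
    out = []
    i = 0
    while i < len(key):
        if key[i] == "\\" and i + 1 < len(key) and key[i + 1] in SEPARATORS:
            out.append(key[i + 1])  # "\=" -> "="
            i += 2
        else:
            out.append(key[i])      # "\0" -> "\0" (backslash preserved)
            i += 1
    return "".join(out)
```

Under this rule the key `\0` keeps its backslash, matching the expected result in the report.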
slackapi/bolt-python | fastapi | 699 | How can I respond to an OPTIONS request with bolt-python? | I'm using bolt-python 1.14.3.
I'm using the OAuthFlow and changing `handle_installation`.
I want to respond to `handle_installation` with:
```python
return BoltResponse(
status=200,
body=json.dumps({"install_url": url}),
headers=flow.append_set_cookie_headers(
{
"Content-Type": "text/html; charset=utf-8",
"access-control-allow-origin": "*",
"access-control-allow-methods": "*",
"access-control-allow-headers": "*"
},
state_cookie,
),
)
```
But I need the application to also respond with CORS on the preflight check in the browser. Is this possible to do? Basically need a way to return a response to an OPTIONS request. | closed | 2022-08-14T02:18:10Z | 2022-08-15T01:18:37Z | https://github.com/slackapi/bolt-python/issues/699 | [
"question"
] | Apollorion | 2 |
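One common workaround for the question above (an assumption, not the thread's confirmed answer): when Bolt is mounted inside a web framework such as Flask, register a plain route for the preflight and answer it before Bolt sees the request. A minimal Flask-only sketch (the `/slack/install` path is illustrative):

```python
from flask import Flask, make_response

app = Flask(__name__)

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "*",
    "Access-Control-Allow-Headers": "*",
}

@app.route("/slack/install", methods=["OPTIONS"])
def install_preflight():
    # Answer the browser's CORS preflight directly; the GET for the
    # actual install URL is still handled by Bolt's OAuth flow.
    resp = make_response("", 204)
    for name, value in CORS_HEADERS.items():
        resp.headers[name] = value
    return resp
```

Listing OPTIONS explicitly in `methods` makes Flask run this view instead of its automatic OPTIONS handling for that rule.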
coqui-ai/TTS | python | 3,431 | VITS multi-speaker does not work [Bug] | ### Describe the bug
After 130,000 steps of multi-speaker VITS training on AISHELL-3, the generated wav is always in the same speaker's voice, regardless of `speaker_idx`. I have debugged the inference process, and it seems correct.
Can anyone give me some insights?
### To Reproduce
{
"output_path": "/home/nfs02/wangzj/checkpoints/aishell3-new",
"logger_uri": null,
"run_name": "aishell3_new",
"project_name": null,
"run_description": "\ud83d\udc38Coqui trainer run.",
"print_step": 1000,
"plot_step": 100,
"model_param_stats": false,
"wandb_entity": null,
"dashboard_logger": "tensorboard",
"save_on_interrupt": true,
"log_model_step": 10000,
"save_step": 10000,
"save_n_checkpoints": 5,
"save_checkpoints": true,
"save_all_best": false,
"save_best_after": 10000,
"target_loss": null,
"print_eval": false,
"test_delay_epochs": -1,
"run_eval": true,
"run_eval_steps": null,
"distributed_backend": "nccl",
"distributed_url": "tcp://localhost:54321",
"mixed_precision": true,
"precision": "fp16",
"epochs": 1000,
"batch_size": 64,
"eval_batch_size": 16,
"grad_clip": [
1000,
1000
],
"scheduler_after_epoch": true,
"lr": 0.001,
"optimizer": "AdamW",
"optimizer_params": {
"betas": [
0.8,
0.99
],
"eps": 1e-09,
"weight_decay": 0.01
},
"lr_scheduler": null,
"lr_scheduler_params": {},
"use_grad_scaler": false,
"allow_tf32": false,
"cudnn_enable": true,
"cudnn_deterministic": false,
"cudnn_benchmark": false,
"training_seed": 54321,
"model": "vits",
"num_loader_workers": 4,
"num_eval_loader_workers": 4,
"use_noise_augment": false,
"audio": {
"fft_size": 1024,
"sample_rate": 22050,
"win_length": 1024,
"hop_length": 256,
"num_mels": 80,
"mel_fmin": 0,
"mel_fmax": null
},
"use_phonemes": true,
"phonemizer": "chinese_phonemzier",
"phoneme_language": null,
"compute_input_seq_cache": true,
"text_cleaner": null,
"enable_eos_bos_chars": false,
"test_sentences_file": "",
"phoneme_cache_path": "/home/nfs02/wangzj/checkpoints/aishell3-new/phoneme_cache",
"characters": {
"characters_class": "TTS.tts.utils.text.characters.PinyinPhonemes",
"vocab_dict": null,
"pad": "<PAD>",
"eos": "<EOS>",
"bos": "<BOS>",
"blank": "<BLNK>",
"characters": "o3 uo3 er sh ou2 iou1 m ong3 ao2 vn1 ang2 e4 uei3 ian4 vn4 ia4 ve2 uai4 t #3 uang1 uan4 iao1 ang4 ve1 ai4 uang2 uang3 iang1 ong1 p iii ing4 ang1 ou1 ao1 an3 ii en4 v3 ao3 o4 #2 o2 uei4 ve3 uen3 i2 uan1 van4 x ie3 a ie1 iou4 an2 ou4 iao4 eos u4 ua2 z iong4 eng3 uang4 van1 er2 iao3 vn3 in3 a1 e1 en2 e3 i3 d i iii1 v4 o1 i1 ii4 n ia1 uei1 uo1 iong3 e2 van2 ueng1 ai2 ii2 ia ii3 iou2 c f l ian2 zh uan3 o uen4 ai1 vn2 ei2 ong2 ua ou3 e ing2 iang2 uo4 uan2 ie4 ei3 en3 in1 ia2 j er3 ang3 #0 u1 uo in2 ou eng2 s ei4 iou3 eng1 iang3 ao4 eng4 iii2 uai1 #1 ong4 iang4 sil ^ sp ii1 er4 uen1 ei b iao2 ia3 iii3 an1 uai3 ai3 a2 i4 ua4 en g uai2 uen2 iii4 an4 k ua3 a4 v2 ch ian3 u2 van3 ie2 v1 uo2 u3 ing1 ua1 in4 ei1 r en1 a3 ing3 ian1 uei2 h iong2 q iong1 ve4",
"punctuations": "!'(),-.:;? ",
"phonemes": null,
"is_unique": false,
"is_sorted": true
},
"add_blank": false,
"batch_group_size": 5,
"loss_masking": null,
"min_audio_len": 1,
"max_audio_len": Infinity,
"min_text_len": 1,
"max_text_len": 512,
"compute_f0": false,
"compute_energy": false,
"compute_linear_spec": true,
"precompute_num_workers": 10,
"start_by_longest": false,
"shuffle": false,
"drop_last": false,
"datasets": [
{
"formatter": "aishell3",
"dataset_name": "",
"path": "/home/nfs02/wangzj/dataset/aishell3",
"meta_file_train": "",
"ignored_speakers": null,
"language": "",
"phonemizer": "",
"meta_file_val": "",
"meta_file_attn_mask": ""
}
],
"test_sentences": [
"\u4eca\u5929\u5929\u6c14\u4e0d\u9519\u5440",
"\u6211\u6bcf\u5468\u8fdb\u884c\u4e09\u6b21\u5065\u8eab",
"\u4eca\u591c\u7684\u6c5f\u6ee9\u6ca1\u6709\u70df\u82b1",
"\u8fd9\u4e2a\u6708\u6211\u53d1\u4e86\u4e5d\u5343\u516b\u767e\u4e03\u5341\u516d\u5757\u94b1\u7684\u5de5\u8d44"
],
"eval_split_max_size": null,
"eval_split_size": 0.01,
"use_speaker_weighted_sampler": false,
"speaker_weighted_sampler_alpha": 1.0,
"use_language_weighted_sampler": false,
"language_weighted_sampler_alpha": 1.0,
"use_length_weighted_sampler": false,
"length_weighted_sampler_alpha": 1.0,
"model_args": {
"num_chars": 205,
"out_channels": 513,
"spec_segment_size": 32,
"hidden_channels": 192,
"hidden_channels_ffn_text_encoder": 768,
"num_heads_text_encoder": 2,
"num_layers_text_encoder": 6,
"kernel_size_text_encoder": 3,
"dropout_p_text_encoder": 0.1,
"dropout_p_duration_predictor": 0.5,
"kernel_size_posterior_encoder": 5,
"dilation_rate_posterior_encoder": 1,
"num_layers_posterior_encoder": 16,
"kernel_size_flow": 5,
"dilation_rate_flow": 1,
"num_layers_flow": 4,
"resblock_type_decoder": "1",
"resblock_kernel_sizes_decoder": [
3,
7,
11
],
"resblock_dilation_sizes_decoder": [
[
1,
3,
5
],
[
1,
3,
5
],
[
1,
3,
5
]
],
"upsample_rates_decoder": [
8,
8,
2,
2
],
"upsample_initial_channel_decoder": 512,
"upsample_kernel_sizes_decoder": [
16,
16,
4,
4
],
"periods_multi_period_discriminator": [
2,
3,
5,
7,
11
],
"use_sdp": true,
"noise_scale": 1.0,
"inference_noise_scale": 0.667,
"length_scale": 1,
"noise_scale_dp": 1.0,
"inference_noise_scale_dp": 1.0,
"max_inference_len": null,
"init_discriminator": true,
"use_spectral_norm_disriminator": false,
"use_speaker_embedding": true,
"num_speakers": 174,
"speakers_file": "/home/nfs02/wangzj/checkpoints/aishell3-new/aishell3_new-December-12-2023_10+57PM-0000000/speakers.pth",
"d_vector_file": null,
"speaker_embedding_channels": 256,
"use_d_vector_file": false,
"d_vector_dim": 0,
"detach_dp_input": true,
"use_language_embedding": false,
"embedded_language_dim": 4,
"num_languages": 0,
"language_ids_file": null,
"use_speaker_encoder_as_loss": false,
"speaker_encoder_config_path": "",
"speaker_encoder_model_path": "",
"condition_dp_on_speaker": true,
"freeze_encoder": false,
"freeze_DP": false,
"freeze_PE": false,
"freeze_flow_decoder": false,
"freeze_waveform_decoder": false,
"encoder_sample_rate": null,
"interpolate_z": true,
"reinit_DP": false,
"reinit_text_encoder": false
},
"lr_gen": 0.0002,
"lr_disc": 0.0002,
"lr_scheduler_gen": "ExponentialLR",
"lr_scheduler_gen_params": {
"gamma": 0.999875,
"last_epoch": -1
},
"lr_scheduler_disc": "ExponentialLR",
"lr_scheduler_disc_params": {
"gamma": 0.999875,
"last_epoch": -1
},
"kl_loss_alpha": 1.0,
"disc_loss_alpha": 1.0,
"gen_loss_alpha": 1.0,
"feat_loss_alpha": 1.0,
"mel_loss_alpha": 45.0,
"dur_loss_alpha": 1.0,
"speaker_encoder_loss_alpha": 1.0,
"return_wav": true,
"use_weighted_sampler": false,
"weighted_sampler_attrs": {},
"weighted_sampler_multipliers": {},
"r": 1,
"num_speakers": 174,
"use_speaker_embedding": true,
"speakers_file": "/home/nfs02/wangzj/checkpoints/aishell3-new/aishell3_new-December-12-2023_10+57PM-0000000/speakers.pth",
"speaker_embedding_channels": 256,
"language_ids_file": null,
"use_language_embedding": false,
"use_d_vector_file": false,
"d_vector_file": null,
"d_vector_dim": 0
}
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
> Training Environment:
| > Backend: Torch
| > Mixed precision: True
| > Precision: fp16
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 20
| > Num. of Torch Threads: 20
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
1 v100-32G GPU
```
### Additional context
_No response_ | closed | 2023-12-14T13:36:59Z | 2024-03-08T08:58:21Z | https://github.com/coqui-ai/TTS/issues/3431 | [
"bug",
"wontfix"
] | zjwang21 | 2 |
rthalley/dnspython | asyncio | 1,137 | DDR Test Anomalies | **Describe the bug**
As discussed in #1136, I've been having issues with the DDR tests. I've experimented further and have one finding. To gather more information, I modified tests/test_ddr.py::test_basic_ddr_sync to print out information instead of passing or failing based on the asserts and then raising an error at the end to get the stdout buffer. I get different results depending on which connection I'm on, but they almost always appear to fail. Generally one of the name servers will succeed ('1.1.1.1' or '8.8.8.8') but at least on these connections, it's rare for both to succeed.
Before the `res.try_ddr` line in the test, `res.nameservers` is always a list of strings (e.g. `['1.1.1.1']`). After `try_ddr`, it can be one of three things (depending on which connection I'm using):

- A list of strings (unchanged)
- A list of `dns.nameserver.DoTNameserver` objects
- A list of `dns.nameserver.DoHNameserver` objects
Since both strings and dns.nameserver objects are allowed as nameservers, this is fine, but the post-try_ddr string causes the test to fail since it doesn't match the asserts. Looking at the try_ddr code, it looks like this is an expected possibility. Does a test failure here make sense? It seems like a flaky test since DDR may or may not produce an updated list.
Would it make more sense to only fail the test if both '1.1.1.1' and '8.8.8.8' fail?
**To Reproduce**
Not sure if that's possible.
**Context (please complete the following information):**
- dnspython version 2.7.0 rc1
- Python version 3.12
- OS: Debian unstable
| open | 2024-09-30T02:40:38Z | 2024-10-05T16:51:23Z | https://github.com/rthalley/dnspython/issues/1137 | [
"Enhancement Request",
"Future"
] | kitterma | 1 |
tensorflow/tensor2tensor | machine-learning | 1,460 | How to implement caching mechanism in universal transformer decoder? | Hi,
It seems to me that 'cache' is quite an important part @MostafaDehghani. It greatly reduces the time complexity. The only difficulty here may be that we don't know in advance how many recurrence steps we will take. Any idea how to handle this?
https://github.com/tensorflow/tensor2tensor/blob/ac74489b6aa1e9c5a9abd12d919ab66d9a99c1e8/tensor2tensor/models/research/universal_transformer.py#L131 | open | 2019-02-21T09:18:46Z | 2019-02-21T09:18:46Z | https://github.com/tensorflow/tensor2tensor/issues/1460 | [] | PANXiao1994 | 0 |
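The cache that question asks about can be sketched as follows: keep one key/value cache per recurrence step, created lazily, so the unknown step count is handled by simply growing a dict. This is an illustrative numpy sketch, not the tensor2tensor implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class StepCache:
    """One K/V cache per recurrence step, allocated lazily."""

    def __init__(self, d_model):
        self.d = d_model
        self.kv = {}  # step -> (K, V), each of shape (t, d_model)

    def attend(self, step, q, k_new, v_new):
        # Append this position's K/V to the cache for `step`,
        # then attend over everything cached so far.
        K, V = self.kv.get(step, (np.zeros((0, self.d)), np.zeros((0, self.d))))
        K = np.vstack([K, k_new])
        V = np.vstack([V, v_new])
        self.kv[step] = (K, V)
        w = softmax(q @ K.T / np.sqrt(self.d))  # (1, t) attention weights
        return w @ V                            # (1, d_model)
```

Because entries are created on first use, the cache works no matter how many recurrence steps the halting mechanism ends up taking.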