repo_name (string, len 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, len 1-976) | body (string, len 0-254k) | state (string, 2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38-105) | labels (list, len 0-9) | user_login (string, len 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
pandas-dev/pandas | pandas | 60,204 | BUG: Incorrect logical operation between pandas dataframe and series | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# Here is an example:
import pandas as pd
df = pd.DataFrame({
'A': [5, 15, 10, 8],
'B': [20, 3, 7, 12]
})
result = (df >= 10) | (df['A'] >= 10)
result
```
The output:
```
       A      B      0      1      2      3
0  False   True  False  False  False  False
1   True  False  False  False  False  False
2   True  False  False  False  False  False
3  False   True  False  False  False  False
```
### Issue Description
1. I would expect the results in column `1` and column `2` to be `True` since it's an `|` operation between dataframe and series.
2. Could you please direct me to the appropriate user manual? I couldn't locate the one that explains the logical operations between a pandas DataFrame and a Series.
Thanks a lot!
### Expected Behavior
I would expect the results in column `1` and column `2` to be `True` since it's an `|` operation between dataframe and series.
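For what it's worth, the extra `0 1 2 3` columns come from index alignment: the boolean Series is aligned against the DataFrame's *columns*, not broadcast down the rows. A sketch of a workaround that ORs the row mask into every column (an illustration of the alignment behavior, not a confirmed fix for the reported bug):

```python
import pandas as pd

df = pd.DataFrame({
    'A': [5, 15, 10, 8],
    'B': [20, 3, 7, 12]
})

row_mask = df['A'] >= 10                               # one boolean per row
result = (df >= 10).apply(lambda col: col | row_mask)  # OR the row mask into each column
print(result)
```

Here each column is ORed with the mask as a Series-vs-Series operation aligned on the row index, so no phantom columns appear.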
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.15
python-bits : 64
OS : Linux
OS-release : 6.9.10-1rodete5-amd64
Version : #1 SMP PREEMPT_DYNAMIC Debian 6.9.10-1rodete5 (2024-09-04)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : 8.28.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.9.0
html5lib : None
hypothesis : None
gcsfs : 2024.9.0post1
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : 0.24.0
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| open | 2024-11-05T23:24:05Z | 2025-02-12T15:34:37Z | https://github.com/pandas-dev/pandas/issues/60204 | [
"Bug",
"Numeric Operations",
"Needs Discussion"
] | jialuoo | 7 |
feature-engine/feature_engine | scikit-learn | 84 | created new features by all categorical features combinations | **Is your feature request related to a problem? Please describe.**
If we have categorical features, how can we create new features from all combinatoric combinations of those features?
In real life categorical features are NOT independent; many of them depend on each other.
Even scikit-learn cannot do this, but maybe you will?
related to
https://github.com/PacktPublishing/Python-Feature-Engineering-Cookbook/issues/1
**Describe the solution you'd like**
for example, a maximum number of combined features would be given: 2, 4, or 5.
For a pandas DataFrame you can use concatenation:
https://stackoverflow.com/questions/19377969/combine-two-columns-of-text-in-dataframe-in-pandas-python
```python
columns = ['whatever', 'columns', 'you', 'choose']
df['period'] = df[columns].astype(str).sum(axis=1)
```
So, for three-feature combinations from 11 features,
three nested loops do not seem like a good approach:
```python
for i in range(0, 11):
    for j in range(i + 1, 11):
        for k in range(j + 1, 11):
            ...
```
You need to get 165 new features from all combinations (not permutations).
Then you get many new features.
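Nested loops of this kind can be replaced by `itertools.combinations`, which handles any combination size directly. A sketch (column names and data are invented for illustration):

```python
from itertools import combinations

import pandas as pd

df = pd.DataFrame({f"cat{i}": list("abc") for i in range(11)})  # 11 toy categorical columns

max_combo = 3
new_cols = {}
for r in range(2, max_combo + 1):
    for cols in combinations(df.columns, r):
        # join the category values of the selected columns, row-wise
        new_cols["_".join(cols)] = df[list(cols)].astype(str).agg("-".join, axis=1)

combined = pd.concat([df, pd.DataFrame(new_cols)], axis=1)
# C(11, 2) + C(11, 3) = 55 + 165 = 220 new columns
```

Building all columns in a dict and concatenating once avoids fragmenting the DataFrame, which also helps with the RAM concern raised below.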
"
Another alternative that I've seen from some Kaggle masters is to join the categories in 2 different variables, into a new categorical variable, so for example, if you have the variable gender, with the values female and male, for observations 1 and 2, and the variable colour with the value blue and green for observations 1 and 2 respectively, you could create a 3rd categorical variable called gender-colour, with the values female-blue for observation 1 and male-green for observation 2. Then you would have to apply the encoding methods from section 3 to this new variable
."
--------------------------------------------------------------------------------------
yes, please do this,
but it should not necessarily require pandas.
Also you need to think about RAM use, since there will be a lot of new features.
Before creating the new features,
consider converting the categorical features to small integer ("int") dtypes from numpy.
---------------------------------------------------------------------------------------
| open | 2020-07-29T15:05:25Z | 2021-11-09T15:31:18Z | https://github.com/feature-engine/feature_engine/issues/84 | [
"new transformer"
] | Sandy4321 | 18 |
graphql-python/graphene | graphql | 805 | Cannot Use Top-Level Fragment in Mutation |
I'm unable to use fragments at the top level of a mutation. I could be mistaken, but this seems like a bug - I'm porting a working GraphQL schema from node.js into graphene (without relay). Does anyone know if this is correct/desired behavior?
For example, the following mutation works:
```
fragment CommentInfo on Comment {
id
content
}
mutation addPost($input: AddPostInput!) {
addPost(input: $input) {
title
content
comments {
...CommentInfo
}
}
}
```
However, the following mutation doesn't:
```
fragment PostInfo on Post {
id
title
content
}
mutation addPost($input: AddPostInput!) {
addPost(input: $input) {
...PostInfo
}
}
```
The failed mutation prints the following:
```
{
"errors": [
{
"message": "Fragment PostInfo cannot be spread here as objects of type AddPost can never be of type Post",
"locations": [
{
"line": 9,
"column": 5
}
]
}
]
}
```
| closed | 2018-07-24T01:00:07Z | 2018-12-30T11:19:09Z | https://github.com/graphql-python/graphene/issues/805 | [
"👀 more info needed"
] | lgants | 2 |
inventree/InvenTree | django | 9,280 | Header char field for webhook messages too short | Hello,
I use inventree and want to leverage the webhook functionality to connect external services.
The service I want to connect (Lexoffice) uses a signature-based message authentication mechanism. Unfortunately the signature is carried in a header field of the webhook message, so it exceeds the given limit of 256 characters.
https://github.com/inventree/InvenTree/blob/f7536a9f897df6484a2077950ff7b21144c9c385/src/backend/InvenTree/common/models.py#L1464
How do I expand the CharField in my current production environment?
It would be great if this field were larger in future versions of InvenTree.
Kind regards
Dennis | open | 2025-03-11T10:57:51Z | 2025-03-11T11:01:58Z | https://github.com/inventree/InvenTree/issues/9280 | [] | SeriousD | 1 |
JoeanAmier/TikTokDownloader | api | 39 | May I ask, how do I download the original images in a gallery post? | open | 2023-07-30T06:09:58Z | 2023-07-30T08:14:33Z | https://github.com/JoeanAmier/TikTokDownloader/issues/39 | [] | huikunzhou | 1 | |
FlareSolverr/FlareSolverr | api | 832 | ModuleNotFoundError: No module named 'ipaddress' from linux binary | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.2.2
- Last working FlareSolverr version: -
- Operating system: Linux (PopOS)
- Are you using Docker: [yes/no] -
- FlareSolverr User-Agent (see log traces or / endpoint): -
- Are you using a VPN: [yes/no] -
- Are you using a Proxy: [yes/no] -
- Are you using Captcha Solver: [yes/no] -
- If using captcha solver, which one: -
- URL to test this issue: -
```
### Description
I downloaded the binary from the releases section and extracted it; when I try to start it with `./flaresolverr` it gives me this error.
I tried again after installing the `ipaddress` package from pip, but no luck.
I am using Python 3.10.6. Do I need 3.11 even for the binary? It is not mentioned for that.
### Logged Error Messages
```text
[291123] Module object for pyimod02_importers is NULL!
Traceback (most recent call last):
File "PyInstaller/loader/pyimod02_importers.py", line 22, in <module>
File "pathlib.py", line 14, in <module>
File "urllib/parse.py", line 40, in <module>
ModuleNotFoundError: No module named 'ipaddress'
Traceback (most recent call last):
File "PyInstaller/loader/pyiboot01_bootstrap.py", line 17, in <module>
ModuleNotFoundError: No module named 'pyimod02_importers'
[291123] Failed to execute script 'pyiboot01_bootstrap' due to unhandled exception!
```
### Screenshots
_No response_ | closed | 2023-07-26T13:42:24Z | 2023-07-27T00:19:51Z | https://github.com/FlareSolverr/FlareSolverr/issues/832 | [] | bipinkrish | 2 |
GibbsConsulting/django-plotly-dash | plotly | 265 | DjangoDash constructor or dash.Dash | **Q1)** As per documentation of django-plotly-dash, we have to set the app as
`app = DjangoDash('SimpleExample')`
However, in the Dash documentations, we set app as
`app = dash.Dash(__name__, external_stylesheets=external_stylesheets)`
How are we able to reconcile these two different statements?
EDIT: It seems like DjangoDash takes in external_stylesheets as an argument as well
**Q2)** I have already registered `"django_plotly_dash.apps.DjangoPlotlyDashConfig",` as an installed app under my settings.py file.
However, I'm still getting an error for an invalid block tag: `Invalid block tag on line 5: 'plotly_app', expected 'endblock'. Did you forget to register or load this tag?` How would I be able to rectify this?
SOLUTION: Add `{% load plotly_dash %}` at the top before `{% plotly_app name="SimpleExample" %}`
**Q3** I have managed to incorporate the Dash application into my django app, but I have some challenges in having the CSS reflected in the Dash application.
1) I have added "django_plotly_dash.middleware.BaseMiddleware" to the bottom of my MIDDLEWARE list
2) I have added in plotly_header / plotly_footer in my template
3) Under the Network tab in Chrome, I don't see the `external_stylesheets` that I have added to my DjangoDash object.
```
external_stylesheets = [
"https://codepen.io/chriddyp/pen/bWLwgP.css",
"https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css",
]
app = DjangoDash("SimpleExample", external_stylesheets=external_stylesheets)
```
What can I do to allow these to appear?
SOLUTION: Resolved by enabling bootstrap throughout the application, and doing away with the external .css codepen
| closed | 2020-07-16T14:59:00Z | 2020-07-17T01:35:42Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/265 | [] | etjkai | 0 |
koaning/scikit-lego | scikit-learn | 43 | feature request: Column Selector | Selects columns based on a name. Accepts `Iterable(str)` or `str` (which is converted to an iterable of length 1). | closed | 2019-03-20T09:08:36Z | 2019-03-20T13:19:10Z | https://github.com/koaning/scikit-lego/issues/43 | [] | sandervandorsten | 0 |
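A minimal sketch of the selector described above (pandas-only; the real implementation would subclass sklearn's `BaseEstimator`/`TransformerMixin` to fit into pipelines):

```python
import pandas as pd

class ColumnSelector:
    """Select DataFrame columns by name; a single str becomes a length-1 list."""
    def __init__(self, columns):
        self.columns = [columns] if isinstance(columns, str) else list(columns)

    def fit(self, X, y=None):
        return self  # stateless: nothing to learn

    def transform(self, X):
        return X[self.columns]

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
selected = ColumnSelector("a").fit(df).transform(df)
```

Indexing with a list keeps the result a DataFrame even for a single column, which downstream transformers usually expect.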
RomelTorres/alpha_vantage | pandas | 268 | Simple Query not working as expected | Using the example:
```python
from alpha_vantage.timeseries import TimeSeries
from pprint import pprint

ts = TimeSeries(key='XXXX', output_format='pandas')
data, meta_data = ts.get_intraday(symbol='MSFT', interval='5min', outputsize='full')
pprint(data.head(2))
```
This works as shown in the docs, but what I'm trying to see is *all* data. I'm getting a truncated version
`pprint(data.head(3504))`
or just
`print(data)`
```
$ python3 av.py
                     1. open  2. high  3. low  4. close  5. volume
date
2020-11-06 20:00:00   223.30   223.40  223.30    223.40     3809.0
2020-11-06 19:55:00   223.25   223.30  223.25    223.30     1074.0
2020-11-06 19:50:00   223.40   223.41  223.29    223.30     3903.0
2020-11-06 19:45:00   223.40   223.40  223.40    223.40      150.0
2020-11-06 19:40:00   223.41   223.41  223.35    223.35      400.0
...                      ...      ...     ...       ...        ...
2020-10-12 04:35:00   217.53   217.86  217.53    217.86     2004.0
2020-10-12 04:30:00   217.50   217.50  217.50    217.50      248.0
2020-10-12 04:25:00   216.84   216.88  216.84    216.88     1465.0
2020-10-12 04:20:00   216.66   216.84  216.66    216.84     1086.0
2020-10-12 04:05:00   216.21   216.40  216.21    216.40     1349.0

[3504 rows x 5 columns]
```
I.e. - What's with the row: ... ... ... ... ... ...
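The `...` row is pandas' display truncation, not missing data; all 3504 rows are present in the DataFrame. A sketch of how to print everything (this only changes display options, not the data):

```python
import pandas as pd

# None removes the row limit; the default elides the middle with "..."
pd.set_option("display.max_rows", None)

df = pd.DataFrame({"x": range(100)})
print(df)  # now prints all 100 rows instead of eliding the middle
```

Alternatively, `data.to_string()` or `data.to_csv(...)` always emits every row regardless of display options.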
| closed | 2020-11-09T01:11:47Z | 2020-12-21T02:39:33Z | https://github.com/RomelTorres/alpha_vantage/issues/268 | [] | TheCrockett | 3 |
babysor/MockingBird | pytorch | 132 | Training fails at step 7344/20414 with: _pickle.PicklingError: Can't pickle <class 'MemoryError'>: it's not the same object as builtins.MemoryError | File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\reductions.py", line 319, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: Couldn't open shared file mapping: <0000029FFD7D77B2>, error code: <1455>
{| Epoch: 1/1 (7341/20414) | Loss: 0.7548 | 0.86 steps/s | Step: 7k | }Traceback (most recent call last):
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\reductions.py", line 319, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: Couldn't open shared file mapping: <000001EBF5A60222>, error code: <1455>
{| Epoch: 1/1 (7342/20414) | Loss: 0.7547 | 0.86 steps/s | Step: 7k | }Traceback (most recent call last):
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'MemoryError'>: it's not the same object as builtins.MemoryError
{| Epoch: 1/1 (7343/20414) | Loss: 0.7564 | 0.87 steps/s | Step: 7k | }Traceback (most recent call last):
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\86158\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
**_pickle.PicklingError: Can't pickle <class 'MemoryError'>: it's not the same object as builtins.MemoryError**
{| Epoch: 1/1 (7344/20414) | Loss: 0.7568 | 0.87 steps/s | Step: 7k | }Traceback (most recent call last):
Experts, please help me solve this /(ㄒoㄒ)/~~ I have been training for a whole day, from morning until night.
| open | 2021-10-09T11:22:27Z | 2022-03-09T01:54:26Z | https://github.com/babysor/MockingBird/issues/132 | [] | yinjia823 | 3 |
HumanSignal/labelImg | deep-learning | 209 | which function of this code will be called when I click the button `OK` ? | hello, I am interested in your this code. after I draw a `RectBox` on an image, when I click the button `OK`, which function of your code will be called?
thank you very much~~
look forward your reply. | closed | 2017-12-04T10:24:00Z | 2017-12-07T03:06:55Z | https://github.com/HumanSignal/labelImg/issues/209 | [] | PapaMadeleine2022 | 0 |
xonsh/xonsh | data-science | 4,806 | AttributeError: 'NoneType' object has no attribute 'flush' | ## xonfig
<details>
```
+------------------+----------------------+
| xonsh | 0.12.4 |
| Git SHA | c3fc7edb |
| Commit Date | May 8 17:26:23 2022 |
| Python | 3.10.4 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.29 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.12.0 |
| on posix | True |
| on linux | True |
| distro | ubuntu |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file 1 | /home/johny/.xonshrc |
+------------------+----------------------+
```
</details>
## Expected Behavior
To exit xonsh correctly.
## Current Behavior
When I exit xonsh it throws an exception. This does not happen every time, only under certain conditions.
### Traceback (if applicable)
<details>
```
Exception ignored in atexit callback: <function shutdown at 0x7fe84db6d900>
Traceback (most recent call last):
File "/usr/lib/python3.10/logging/__init__.py", line 2182, in shutdown
h.flush()
File "/usr/lib/python3.10/logging/__init__.py", line 1084, in flush
self.stream.flush()
File "/home/johny/.local/pipx/venvs/xonsh/lib/python3.10/site-packages/xonsh/__amalgam__.py", line 16263, in flush
self.std.flush()
AttributeError: 'NoneType' object has no attribute 'flush'
```
</details>
## Steps to Reproduce
I can reproduce this issue in this way:
- Start xonsh.
- `$ xonfig web`
- Press Ctrl+C to stop server.
- Press Ctrl+D to exit xonsh. The mentioned exception is thrown.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2022-05-10T21:50:49Z | 2022-05-20T17:15:15Z | https://github.com/xonsh/xonsh/issues/4806 | [] | johny65 | 1 |
numba/numba | numpy | 9,748 | Enable converting dict() to build_map and set() to build_set for simple cases | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Feature request
<!--
Please include details of the feature you would like to see, why you would
like to see it/the use case.
-->
In Python users can initialize dictionaries in two different ways:
- Using `{}`
- Using the `dict()` function
While it seems that modern Python style guidelines would strongly encourage using `{}` for dictionary literals, including empty dictionaries, some people may still use `dict()` for readability. In Python, this results in differing bytecode as can be seen with the following simple example:
```python
In [1]: def f():
   ...:     return {}
   ...:

In [2]: def g():
   ...:     return dict()
   ...:

In [3]: import dis

In [4]: dis.dis(f)
  1           0 RESUME                   0

  2           2 BUILD_MAP                0
              4 RETURN_VALUE

In [5]: dis.dis(g)
  1           0 RESUME                   0

  2           2 LOAD_GLOBAL              1 (NULL + dict)
             12 CALL                     0
             20 RETURN_VALUE
```
Here `dict()` is kept as a function call and not a build_map, which also occurs in the corresponding Numba IR. This is potentially problematic because it means that one cannot reliably look at just `build_map` to determine if a value is a literal dictionary and would need to also support checking `dict()`. Similarly if any user prefers stylistic to use `dict()` in a situation where the literal syntax is feasible, they could miss out on any literal optimizations that only work for `build_map`.
My understanding is that Python **must** do this because Python allows builtin names to be replaced. For example, a user can technically replace `dict` with any other function, such as extending it for their own type support.
In contrast I don't think it's feasible or possible for the `dict()` definition in Numba to be fully replaced. As a result, we could convert any call to `dict()` that can be written as a literal `build_map` in the early stages of the IR to ensure consistency. A similar change should be possible for sets as well. | open | 2024-10-09T18:58:19Z | 2024-10-11T09:26:22Z | https://github.com/numba/numba/issues/9748 | [
"feature_request"
] | njriasan | 0 |
tensorpack/tensorpack | tensorflow | 1372 | During augmentation, I cannot understand some functions | During augmentation, I cannot understand the functions box_to_point8 and point8_to_box. Could you explain them to me? Thank you for your help. | closed | 2019-12-19T02:20:42Z | 2019-12-20T07:52:19Z | https://github.com/tensorpack/tensorpack/issues/1372 | [] | tieguanyin803 | 1 |
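For anyone else wondering about those helpers: a sketch of what they conceptually do, reconstructed for illustration (treat the details as an assumption and check tensorpack's FasterRCNN example code for the authoritative version). Boxes are expanded into their four corner points so that geometric augmentors can transform them, then the transformed points are collapsed back into axis-aligned enclosing boxes:

```python
import numpy as np

def box_to_point8(boxes):
    """(n, 4) xyxy boxes -> (n*4, 2) corner points: (x1,y1), (x2,y2), (x1,y2), (x2,y1)."""
    b = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]]
    return b.reshape((-1, 2))

def point8_to_box(points):
    """(n*4, 2) transformed corners -> (n, 4) axis-aligned enclosing boxes."""
    p = points.reshape((-1, 4, 2))
    minxy = p.min(axis=1)
    maxxy = p.max(axis=1)
    return np.concatenate((minxy, maxxy), axis=1)

boxes = np.array([[0.0, 0.0, 2.0, 3.0]])
assert np.allclose(point8_to_box(box_to_point8(boxes)), boxes)  # round-trip for axis-aligned boxes
```

The round trip is exact only for axis-aligned transforms; for rotations the result is the tightest axis-aligned box around the rotated corners.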
JaidedAI/EasyOCR | machine-learning | 1,384 | Add method to unload models and free RAM | Current implementation of OCR have no method to free RAM, as result server sometimes down due to RAM out, especially when server spawn multiple workers.
# Use case
I use EasyOCR + FastAPI + Gunicorn with multiple workers.
The server creates one instance of EasyOCR for every language direction and keeps it in RAM for fast access.
When one worker takes requests for ~12 different languages, it spawns N instances of EasyOCR and eventually fails with a "not enough RAM" error.
It would be nice to have some method like `close`/`stop`/`dispose` to stop an instance and free its RAM.
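Until such an API exists, one server-side mitigation is to bound how many reader instances live at once with an LRU cache, evicting the oldest so Python can garbage-collect its models. A sketch (here `factory` stands in for `easyocr.Reader` construction; the names are invented):

```python
from collections import OrderedDict

class ReaderCache:
    """Keep at most max_size reader instances, evicting the least-recently-used."""
    def __init__(self, max_size=3):
        self.max_size = max_size
        self._cache = OrderedDict()

    def get(self, langs, factory):
        key = tuple(sorted(langs))
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as recently used
            return self._cache[key]
        reader = self._cache[key] = factory(key)
        if len(self._cache) > self.max_size:
            self._cache.popitem(last=False)  # drop the oldest; gc can then free its RAM
        return reader
```

Note that eviction only helps if no other reference to the reader is kept alive elsewhere in the worker.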
Also this problem occurs on preloading stage when server creates EasyOCR instances one by one with every supported language, to ensure all models are downloaded and will be available when server will be started.
We have a lot of initialized instances of EasyOCR with different models, and it keeps in RAM forever until preloading script in run | open | 2025-03-09T14:50:10Z | 2025-03-09T14:50:10Z | https://github.com/JaidedAI/EasyOCR/issues/1384 | [] | vitonsky | 0 |
AirtestProject/Airtest | automation | 696 | Is there a parameter to adjust the screenshot quality in reports? | (Please fill in the sections below as completely as possible; it helps us locate and solve the problem quickly. Thanks for your cooperation; otherwise the issue will be closed directly.)
**(Important! Issue classification)**
* AirtestIDE (test development environment) usage issues -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree structure, poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control issues -> follow the steps below
**Describe the bug**
(Briefly and clearly summarize the problem, or paste the error traceback.)
I need to save high-quality screenshots, but the current screenshot quality is too low. Is there a parameter to adjust it?
```
(paste the traceback or other error messages here)
```
**Relevant screenshots**
(Attach screenshots taken when the problem occurred, if you have any.)
(For image- and device-related problems occurring in AirtestIDE, please paste the related error messages from the AirtestIDE console window.)
**Steps to reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
(What you expected to get or see.)
**python version:** `python3.5`
**airtest version:** `1.0.69`
> The airtest version can be found with the `pip freeze` command.
**Device:**
- Model: [e.g. google pixel 2]
- OS: [e.g. Android 8.1]
- (other info)
**Other environment information**
(Other runtime environments, e.g. runs abnormally on linux ubuntu16.04 but normally on windows.)
| closed | 2020-02-26T07:48:38Z | 2020-02-26T08:10:00Z | https://github.com/AirtestProject/Airtest/issues/696 | [] | neuzou | 1 |
pywinauto/pywinauto | automation | 1,182 | pywinauto hook application closed | Is there a detection / hook when the application is manually closed?
**Case**: An app is controlled by pywinauto. During this process, the user manually quits the app. How to handle this case and stop the script?
## Expected Behavior
script stops.
## Actual Behavior
script continues and pywinauto runs without errors. | open | 2022-02-22T10:34:41Z | 2022-02-22T10:34:41Z | https://github.com/pywinauto/pywinauto/issues/1182 | [] | malmr | 0 |
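For the case described above, one pattern that may help (a sketch, not a built-in pywinauto hook: it assumes pywinauto's `Application.is_process_running()` and polls before each scripted step):

```python
def run_with_watchdog(app, steps):
    """Run scripted steps, aborting if the target app is closed in between."""
    for step in steps:
        if not app.is_process_running():
            raise SystemExit("target application was closed manually")
        step()
```

Exceptions raised mid-step (because a window vanished during the action itself) would still need a try/except around `step()`.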
amidaware/tacticalrmm | django | 2,005 | Enhance Automation Policy view on dashboard | **Is your feature request related to a problem? Please describe.**
Automation policies can be applied at multiple levels: Client, Site, and Agent. There is also a clear distinction between Workstation policies and Server policies. These are all good things. Currently, the Checks/Tasks view where Automation Policies are visible on the dashboard only shows a Shield/Magnifying icon to indicate that something is applied because of a policy. It's unclear at what level the policy is applied at a glance.
**Describe the solution you'd like**
Modify the shield icon to display where the policy is applied and show the information in detail when you hover over the shield icon, not just the generic text "This [check/task] is managed by a policy." Include a small "C" in the corner of the icon for "Client", "S" for "Site", and "A" for "Agent".
**Describe alternatives you've considered**
There is no alternative functionality for my suggested enhancement.
**Additional context**
N/A.
| open | 2024-09-19T20:22:15Z | 2024-09-19T20:22:15Z | https://github.com/amidaware/tacticalrmm/issues/2005 | [] | btrfs-d | 0 |
marcomusy/vedo | numpy | 902 | Matching 3D rays with color to image pixels | Hi @marcomusy,
I have an interesting problem which I have encountered and I would be interested to hear your opinion and whether I could address it with vedo.
Briefly, I have a point cloud, and for each of the points I have a predicted color value depending on the viewing direction. The viewing direction could span 360 degrees, but in my case I have limited it to 180 degrees due to the normal. From each point I am casting a bunch of rays (viewing directions/angles), and for each ray I have a different RGB value depending on the ray direction. Now I have an input image, and I would like to best match the RGB values of its pixels with the best corresponding RGB values from the rays and get the indices of those rays. So imagine something like this:

My first idea was to compute the distance between the RGB values of the rays and the RGB values of the image pixels and keep the first 200-300 with the lowest color distance. This didn't work that well though, as you can see below (the green rays are more or less the ones that should have been found, while the blue ones are the ones I am extracting):

So I am trying to figure out any other approach which could give me some better results. Thus, in principle I want to filter good rays based only on rgb values if that makes any sense.
```
import os
import numpy as np
import vedo as vd
import json

def main():
    # Opening JSON file
    with open('./data.json') as json_file:
        data = json.load(json_file)
    pcd = vd.Points(data['pcd'])
    rays = vd.Arrows(data['rays_start'], data['rays_end'], c='b', s=0.5)
    cam = vd.Line(data['cam_pnts'], c='green')
    pic = vd.Picture(np.asarray(data['pic']) * 255)
    dim_pic = pic.dimensions()
    pic = pic.scale(2 / np.max(dim_pic) * 0.3).apply_transform(data['cam_transformation']).shift(dx=-0.30, dy=-0.3, dz=0.6)
    vd.show(pcd, pic, rays, cam, axes=1, interactive=True).close()
    return 0

if __name__ == "__main__":
    main()
```
[data.zip](https://github.com/marcomusy/vedo/files/12109760/data.zip)
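One variation on the colour-distance idea, which may behave better than a global threshold, is to invert the match direction: let every image pixel vote for its nearest ray in colour space, then keep the most-voted rays. A numpy sketch (assumes RGB values in [0, 1]; matching in a perceptual space such as CIELAB might work better still):

```python
import numpy as np

def rays_voted_by_pixels(ray_rgb, pixel_rgb, k=300):
    # (n_pixels, n_rays) matrix of squared RGB distances
    d2 = ((pixel_rgb[:, None, :] - ray_rgb[None, :, :]) ** 2).sum(axis=-1)
    nearest_ray = d2.argmin(axis=1)                      # each pixel's closest ray
    votes = np.bincount(nearest_ray, minlength=len(ray_rgb))
    ranked = np.argsort(votes)[::-1]                     # rays sorted by vote count
    return ranked[:k]
```

Unlike thresholding ray-to-pixel distances, this guarantees every pixel contributes exactly one vote, so rays matching large uniform image regions rank highly.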
| open | 2023-07-20T13:52:53Z | 2023-07-26T16:35:30Z | https://github.com/marcomusy/vedo/issues/902 | [
"help wanted"
] | ttsesm | 8 |
graphdeco-inria/gaussian-splatting | computer-vision | 657 | Question on the use of GT poses directly into 3DGS | Hi GS-ers,
I'm trying to get a gaussian splat of the [ICL NUIM](http://redwood-data.org/indoor/dataset.html) dataset.
I want to use the `ground truth path` (so without COLMAP) and the existent `.ply` point-cloud that comes with the dataset scenes.
From what I understood, there is an extra transformation to apply to my ground-truth path for it to comply with the COLMAP format.
I found one in the `scene/dataset_reader.py` in `readCamerasFromTransforms()` :
```python
# NeRF 'transform_matrix' is a camera-to-world transform
c2w = np.array(frame["transform_matrix"])
# change from OpenGL/Blender camera axes (Y up, Z back) to COLMAP (Y down, Z forward)
c2w[:3, 1:3] *= -1
# get the world-to-camera transform and set R, T
w2c = np.linalg.inv(c2w)
R = np.transpose(w2c[:3,:3]) # R is stored transposed due to 'glm' in CUDA code
T = w2c[:3, 3]
```
However the path still doesn't seem right after the transformation ...
Is there something I am missing ?
Thanks for your help,
Best regards :)
| closed | 2024-02-15T11:59:24Z | 2024-11-27T08:44:50Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/657 | [] | leblond14u | 3 |
fastapi/sqlmodel | pydantic | 310 | Before SQLModel I used Pydantic models as input and output schemas. Now that I have switched to SQLModel, how can fields that I don't want to give to the user (like password) be restricted from the output schema? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class Book(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
title: str
description: str
```
### Description
Before SQLModel I used Pydantic models as input and output schemas. Now that I have switched to SQLModel I have an issue: some fields (like password) should not be given to the user; how can they be restricted from the output schema?
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.5
### Additional Context
_No response_ | closed | 2022-04-22T06:56:35Z | 2024-10-29T08:06:10Z | https://github.com/fastapi/sqlmodel/issues/310 | [
"question"
] | israr96418 | 6 |
drivendataorg/erdantic | pydantic | 90 | Modality `zero` should only be determined by Optional typing | I believe that the current way of defining the modality is not fully correct. If the cardinality is many, then the modality will become zero. But this means you will never get one-to-many. I propose that the modality be determined only by checking whether a field is nullable, and not also by whether the cardinality is many.
https://github.com/drivendataorg/erdantic/blob/b618ab54593d3b89853c2ce22f0b47f8bec41255/erdantic/erd.py#L56C1-L58C10 | closed | 2023-08-27T16:08:55Z | 2024-03-31T01:06:04Z | https://github.com/drivendataorg/erdantic/issues/90 | [
"question"
] | ion-elgreco | 2 |
HIT-SCIR/ltp | nlp | 60 | When input is given in XML format, the server crashes if some attrs are missing | As the title says.
| closed | 2014-04-16T06:40:03Z | 2014-04-17T15:58:30Z | https://github.com/HIT-SCIR/ltp/issues/60 | [
"bug"
] | Oneplus | 0 |
MagicStack/asyncpg | asyncio | 582 | asyncpg error: “no pg_hba.conf entry for host” in Heroku | I'm using asyncpg to connect my database in Heroku postgresql, using python:
```
import asyncpg
async def create_db_pool():
    bot.pg_con = await asyncpg.create_pool(dsn="postgres://....", host="....amazonaws.com", user="xxx", database="yyy", port="5432", password="12345")
```
it was working perfectly until I received an email from heroku advising me of a maintenance:
`Maintenance (DATABASE_URL on myappname) is starting now. We will update you when it has completed.`
then this error appeared:
`asyncpg.exceptions.InvalidAuthorizationSpecificationError: no pg_hba.conf entry for host "123.456.789.10", user "xxx", database "yyy", SSL off`
I tried to follow some help, like putting ssl=True but this error appeared:
`ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1108)`
same as putting ssl="allow"
`asyncpg.exceptions.InvalidPasswordError: password authentication failed for user "xxx"`
what can I do to fix this?
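A common workaround for Heroku's self-signed certificates is to pass an `SSLContext` that encrypts the connection but skips certificate verification. A sketch (weigh the security trade-off: the connection is encrypted but the server's identity is not verified):

```python
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False          # must be disabled before dropping verification
ctx.verify_mode = ssl.CERT_NONE     # accept Heroku's self-signed certificate

# Hypothetical usage with the pool from the snippet above:
# bot.pg_con = await asyncpg.create_pool(dsn="postgres://....", ssl=ctx)
```

Note the order: setting `verify_mode = ssl.CERT_NONE` while `check_hostname` is still `True` raises a `ValueError`.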
https://stackoverflow.com/questions/62053185/asyncpg-error-no-pg-hba-conf-entry-for-host-in-heroku
| open | 2020-05-31T20:13:10Z | 2024-09-07T19:22:01Z | https://github.com/MagicStack/asyncpg/issues/582 | [] | Kami-Power | 1 |
google-research/bert | tensorflow | 975 | Compared with CBOW, skip-gram and GloVe, what is the effect of embedding words with BERT? | Compared with CBOW, skip-gram and GloVe, what is the effect of embedding words with BERT? I think it's a very interesting question. | open | 2019-12-28T14:43:33Z | 2019-12-30T08:14:34Z | https://github.com/google-research/bert/issues/975 | [] | WHQ1111 | 1 |
liangliangyy/DjangoBlog | django | 567 | I don't understand | cannot import name 'smart_text' from 'django.utils.encoding'
| closed | 2022-03-31T03:21:16Z | 2022-04-11T07:43:25Z | https://github.com/liangliangyy/DjangoBlog/issues/567 | [] | curry011 | 1 |
opengeos/streamlit-geospatial | streamlit | 139 | https://geospatial.streamlitapp.com can not be accessed. | https://geospatial.streamlitapp.com can not be accessed.

| closed | 2024-08-02T09:05:11Z | 2024-08-20T17:46:26Z | https://github.com/opengeos/streamlit-geospatial/issues/139 | [] | lllllrrrr | 1 |
PaddlePaddle/PaddleHub | nlp | 1,445 | Error when running the GPU demo | https://www.paddlepaddle.org.cn/hubdetail?name=chinese_ocr_db_crnn_server&en_category=TextRecognition
The code was written following this tutorial. I changed it to run on the GPU.
import paddlehub as hub
import cv2
ocr = hub.Module(name="chinese_ocr_db_crnn_mobile")
result = ocr.recognize_text(images=[cv2.imread(r'C:\Users\bin\Desktop\temp\jpg')],use_gpu=True)
print(result)
The environment is as follows:

The exception is as follows.

| open | 2021-06-04T02:38:11Z | 2021-06-07T01:42:41Z | https://github.com/PaddlePaddle/PaddleHub/issues/1445 | [] | bbhxwl | 2 |
marcomusy/vedo | numpy | 140 | [info] VTK 9.0.0 released | Just wanted to share the good news that VTK 9.0.0 has been released: https://discourse.vtk.org/t/vtk-9-0-0/3205 | open | 2020-05-05T13:41:47Z | 2020-06-16T12:57:16Z | https://github.com/marcomusy/vedo/issues/140 | [
"bug"
] | RubendeBruin | 3 |
mirumee/ariadne | api | 660 | snake_case_fallback_resolvers not calling obj.get(attr_name) | **Ariadne version:** 0.13.0
**Python version:** 3.8.11
Hello. I am using the [databases](https://www.encode.io/databases/) package with an [asyncpg](https://magicstack.github.io/asyncpg/current/) backend to interact with a PostgreSQL database. The objects returned from my queries are of the type `databases.backends.postgres.Record`. The desired attributes can only be accessed via the get method. However, when I use `snake_case_fallback_resolvers`, Ariadne has trouble resolving the requested fields and I receive the following error: `Cannot return null for non-nullable field`
If I instead use the regular `fallback_resolvers` (adjusting my schema's naming conventions), Ariadne is able to resolve the requested fields.
Is this a bug or am I doing something wrong? Thank you for your time.
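A possible direction (an editor's sketch, not Ariadne's actual API) is a fallback resolver that tries attribute access and then falls back to a Mapping-style `.get()`, which is how `Record` values are read:

```python
import re

def to_snake_case(name: str) -> str:
    # "firstName" -> "first_name"
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def mapping_aware_resolver(obj, info, **kwargs):
    # Try plain attribute access first, then a .get() lookup for
    # Mapping-like rows such as databases' Record objects.
    name = to_snake_case(info.field_name)
    value = getattr(obj, name, None)
    if value is None and hasattr(obj, "get"):
        value = obj.get(name)
    return value(info, **kwargs) if callable(value) else value
```

A resolver like this could be registered in place of `snake_case_fallback_resolvers`; the exact registration hook depends on the Ariadne version in use.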
| closed | 2021-08-31T22:54:18Z | 2021-09-03T22:52:35Z | https://github.com/mirumee/ariadne/issues/660 | [
"enhancement",
"roadmap"
] | RodrigoTMOLima | 1 |
pinry/pinry | django | 359 | Proxy authentication by http header value | When self-hosting multiple applications, you really want to have a single point for user management and authentication. It is annoying to log in to each and every app separately.
A pretty simple way to centralize authentication is to deploy apps behind a reverse proxy and use proxy auth. The proxy handles authentication in some way and sets http headers containing the username that was successfully logged in. The apps read the headers and associate incoming requests with that user.
The perfect proxy auth feature for me would work like this:
1. Start the app with additional environment variables:
* containing the name of the initial admin user (e.g. admin=admin_user)
* enabling proxy auth (e.g. proxy_auth=true)
* setting the key of the http header that contains the username (e.g. auth_header=X-Authenticated-User)
2. Configure the reverse proxy to authenticate incoming requests in any way you like.
3. Let the reverse proxy set X-Authenticated-User to the authenticated username on every request.
4. The app treats the requests as if they belong to the appropriate user session.
5. Bonus: if the app does not know the username, it creates a new user with that name.
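The flow in steps 1-5 can be sketched as follows (editor's illustration, framework-agnostic WSGI; the header and environ key names are made up):

```python
def proxy_auth_middleware(app, header="HTTP_X_AUTHENTICATED_USER"):
    # Trust the username the reverse proxy injected and expose it to the
    # wrapped app. Only safe when the proxy strips this header from
    # untrusted client requests before adding its own value.
    def wrapper(environ, start_response):
        environ["proxy.user"] = environ.get(header)
        return app(environ, start_response)
    return wrapper
```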
Other SSO methods like OIDC still require the user to log in with each app, even if no credentials are required. It is still an additional step that is unneeded and hurts the user experience.
Additional context:
I am using the app for [this product](https://getportal.org/). Since this is a single-user platform, users really should see no login screen at all, not even for SSO. | open | 2022-11-02T17:23:47Z | 2022-11-02T17:23:47Z | https://github.com/pinry/pinry/issues/359 | [] | max-tet | 0 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 209 | [BUG] Brief and clear description of the problem | I installed the API on my server. When I send a link to a TikTok video, it downloads the same video every time. Can you help me solve the problem? | closed | 2023-06-03T10:02:36Z | 2024-04-23T05:04:03Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/209 | [
"BUG",
"enhancement"
] | artemmarkov050 | 2 |
pandas-dev/pandas | data-science | 60,370 | ENH: Improve Code Quality in pandas/core/reshape Module | ## Summary
Refactor the pandas/core/reshape module to improve code quality by reducing duplication, replacing hard-coded values, and simplifying complex conditionals.
## Problem Description
The pandas/core/reshape module implements key reshaping functions (pivot, melt, and unstack) used in data manipulation workflows. A review of pivot.py and melt.py reveals a couple of areas where code quality could be improved:
**Nested Conditionals:**
* In melt.py, nested conditionals add complexity, making the code harder to read and maintain.
* Suggestion: Refactor these conditionals into smaller, more modular functions.
**Hard-Coded Values:**
* In pivot.py, hard-coded strings (e.g., "All" for margins) reduce flexibility.
* Suggestion: Replace hard-coded values with constants for maintainability.
## Relevant File
* **melt.py**
* **pivot.py**
## Proposed Solution
**Refactor Nested Conditionals in melt.py**
* Nested Conditional in `ensure_list_vars()`
* Before:
```python
def ensure_list_vars(arg_vars, variable: str, columns) -> list:
    if arg_vars is not None:
        if not is_list_like(arg_vars):
            return [arg_vars]
        elif isinstance(columns, MultiIndex) and not isinstance(arg_vars, list):
            raise ValueError(
                f"{variable} must be a list of tuples when columns are a MultiIndex"
            )
        else:
            return list(arg_vars)
    else:
        return []
```
* After:
```python
def ensure_list_vars(arg_vars, variable: str, columns) -> list:
    if arg_vars is None:
        return []
    if not is_list_like(arg_vars):
        return [arg_vars]
    if isinstance(columns, MultiIndex) and not isinstance(arg_vars, list):
        raise ValueError(
            f"{variable} must be a list of tuples when columns are a MultiIndex"
        )
    return list(arg_vars)
```
* Nested Conditional in `melt()` for `id_vars`:
* Before:
```python
if id_vars or value_vars:
    if col_level is not None:
        level = frame.columns.get_level_values(col_level)
    else:
        level = frame.columns
    labels = id_vars + value_vars
    idx = level.get_indexer_for(labels)
    missing = idx == -1
    if missing.any():
        missing_labels = [
            lab for lab, not_found in zip(labels, missing) if not_found
        ]
        raise KeyError(
            "The following id_vars or value_vars are not present in "
            f"the DataFrame: {missing_labels}"
        )
    if value_vars_was_not_none:
        frame = frame.iloc[:, algos.unique(idx)]
    else:
        frame = frame.copy(deep=False)
else:
    frame = frame.copy(deep=False)
```
* After:
```python
def validate_and_get_level(frame, id_vars, value_vars, col_level):
    level = frame.columns.get_level_values(col_level) if col_level is not None else frame.columns
    labels = id_vars + value_vars
    idx = level.get_indexer_for(labels)
    missing = idx == -1
    if missing.any():
        missing_labels = [lab for lab, not_found in zip(labels, missing) if not_found]
        raise KeyError(
            "The following id_vars or value_vars are not present in "
            f"the DataFrame: {missing_labels}"
        )
    return idx

if id_vars or value_vars:
    idx = validate_and_get_level(frame, id_vars, value_vars, col_level)
    if value_vars_was_not_none:
        frame = frame.iloc[:, algos.unique(idx)]
    else:
        frame = frame.copy(deep=False)
else:
    frame = frame.copy(deep=False)
```
* Nested Conditionals for Setting `var_name` in `melt()`:
* Before:
```python
if var_name is None:
    if isinstance(frame.columns, MultiIndex):
        if len(frame.columns.names) == len(set(frame.columns.names)):
            var_name = frame.columns.names
        else:
            var_name = [f"variable_{i}" for i in range(len(frame.columns.names))]
    else:
        var_name = [
            frame.columns.name if frame.columns.name is not None else "variable"
        ]
elif is_list_like(var_name):
    if isinstance(frame.columns, MultiIndex):
        if is_iterator(var_name):
            var_name = list(var_name)
        if len(var_name) > len(frame.columns):
            raise ValueError(
                f"{var_name=} has {len(var_name)} items, "
                f"but the dataframe columns only have {len(frame.columns)} levels."
            )
    else:
        raise ValueError(f"{var_name=} must be a scalar.")
else:
    var_name = [var_name]
```
* After:
```python
def determine_var_name(frame, var_name):
    if var_name is None:
        return _default_var_name(frame)
    if is_list_like(var_name):
        if is_iterator(var_name):
            # Materialize once here so validation does not consume the iterator.
            var_name = list(var_name)
        _validate_list_var_name(var_name, frame)
        return list(var_name)
    return [var_name]

def _default_var_name(frame):
    if isinstance(frame.columns, MultiIndex):
        if len(frame.columns.names) == len(set(frame.columns.names)):
            return frame.columns.names
        return [f"variable_{i}" for i in range(len(frame.columns.names))]
    return [frame.columns.name if frame.columns.name is not None else "variable"]

def _validate_list_var_name(var_name, frame):
    if isinstance(frame.columns, MultiIndex):
        if len(var_name) > len(frame.columns):
            raise ValueError(
                f"{var_name=} has {len(var_name)} items, "
                f"but the dataframe columns only have {len(frame.columns)} levels."
            )
    else:
        raise ValueError(f"{var_name=} must be a scalar.")

var_name = determine_var_name(frame, var_name)
```
* Benefits:
* Improves readability:
Simplifies the main function, making the logic clearer and easier to follow.
* Makes the logic easier to test and maintain:
Enables independent testing of each helper function, ensuring robust behavior.
* Separation of concerns:
Each helper function is now responsible for a single, well-defined task, aligning with the principle of single responsibility.
**Replace Hard-Coded Values in pivot.py**
* Before:
```python
# Hard-coded string for margins
margins_name: Hashable = "All"
```
* After:
```python
# Define a constant for the hard-coded value
MARGIN_NAME = "All"
# Use the constant in the code
margins_name: Hashable = MARGIN_NAME
```
* Benefits:
* Makes the code more readable and maintainable.
* Centralizes the value so it can be reused or modified easily.
## Testing
**Unit Testing Helper Functions:**
Write focused tests for each new helper function to validate their behavior under expected, edge, and erroneous inputs. For example:
* Ensure validate_and_get_level() correctly identifies missing variables and raises KeyError.
* Test determine_var_name() with var_name=None, scalar inputs, and multi-level columns.
**Regression Testing Parent Functions:**
Run all pre-existing tests for the parent functions (e.g., melt()) to confirm they maintain their functionality after the refactor.
**Edge Cases:**
Include additional tests for edge scenarios, such as:
* Empty id_vars or value_vars.
* DataFrames with unusual column configurations like MultiIndex or missing names.
## Labels
* `ENH`
* `Code Quality`
## Compliance with Contributing Guide
* **Focus:** The issue is specific and addresses code quality improvements without scope creep.
* **Clarity:** Includes actionable suggestions and a clear implementation path.
### Please provide feedback and let me know if you would like further refinements! | closed | 2024-11-20T07:21:32Z | 2024-12-03T01:37:45Z | https://github.com/pandas-dev/pandas/issues/60370 | [] | Koookadooo | 2 |
Baiyuetribe/kamiFaka | flask | 145 | Error on Docker startup | PytzUsageWarning: The localize method is no longer necessary, as this time zone supports the fold attribute (PEP 495). For more details on migrating to a PEP 495-compliant implementation, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
return self.timezone.localize(datetime(**values))
Command: `docker run -d -it -p 80:8080 --name pay baiyuetribe/kamifaka`
The server is hosted overseas.
| open | 2023-02-17T09:57:22Z | 2023-02-17T09:59:29Z | https://github.com/Baiyuetribe/kamiFaka/issues/145 | [
"bug",
"good first issue",
"question"
] | oneoy | 0 |
supabase/supabase-py | flask | 1,064 | Frequent httpx.RemoteProtocolError: Server disconnected |
# Bug report
- [x] I confirm this is a bug with Supabase, not with my own application.
- [x] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
When making API requests to Supabase using PostgREST client, the server unexpectedly disconnects, resulting in a `httpx.RemoteProtocolError: Server disconnected` error. This happens intermittently when trying to retrieve data from a specific table or batch insert.
## To Reproduce
Steps to reproduce the behavior:
1. Set up a connection to Supabase using client
2. Attempt to execute a query to retrieve data from a table
3. The server disconnects during the request, throwing a `RemoteProtocolError`
Code snippet demonstrating the issue:
```python
# Using postgrest client to query a table
result = client.table("my_table").select("*").eq("key", "value").execute()
# This results in server disconnection
```
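A common mitigation for stale kept-alive connections (an editor's sketch, not part of this report) is a small retry wrapper; in practice `retry_on` would be `(httpx.RemoteProtocolError,)`:

```python
import time

def with_retries(fn, attempts=3, delay=0.5, retry_on=(Exception,)):
    # Re-run fn when the server drops a pooled connection; the retried
    # call goes out on a fresh connection. Exponential backoff between tries.
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except retry_on as exc:
            last_exc = exc
            time.sleep(delay * (2 ** i))
    raise last_exc

# result = with_retries(lambda: client.table("my_table").select("*").execute())
```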
## Expected behavior
The query should complete successfully and return the requested data without any server disconnection.
## System information
- OS: Linux
- Version of postgrest-py: [latest]
- Version of httpx: [latest]
- Python version: 3.11
## Additional context
For now, we have added a retry mechanism around certain calls when the httpx disconnect exception occurs. The stack trace follows:
```bash
  File "/usr/local/lib/python3.11/site-packages/postgrest/_sync/request_builder.py", line 58, in execute
    r = self.session.request(
        ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 825, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/dd_tracer/python/ddtrace/contrib/internal/httpx/patch.py", line 166, in _wrapped_sync_send
    resp = wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1014, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in handle_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: Server disconnected
```
| open | 2025-02-26T03:28:05Z | 2025-03-21T16:03:38Z | https://github.com/supabase/supabase-py/issues/1064 | [
"bug"
] | immortal3 | 10 |
deeppavlov/DeepPavlov | tensorflow | 885 | [question] How to reproduce training of KBQA component? | Hi! Is it possible to train KBQA component?
http://docs.deeppavlov.ai/en/master/components/kbqa.html provides only a guide on how to use the pre-trained model.
```
from deeppavlov import configs
from deeppavlov import train_model
train_model(configs.kbqa.kbqa_rus)
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-e8633f37c93c> in <module>
----> 1 train_model(configs.kbqa.kbqa_rus)
F:\conda\envs\dp_kbqa\lib\site-packages\deeppavlov-0.3.1-py3.6.egg\deeppavlov\__init__.py in train_model(config, download, recursive)
29 # TODO: make better
30 def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:
---> 31 train_evaluate_model_from_config(config, download=download, recursive=recursive)
32 return build_model(config, load_trained=True)
33
F:\conda\envs\dp_kbqa\lib\site-packages\deeppavlov-0.3.1-py3.6.egg\deeppavlov\core\commands\train.py in train_evaluate_model_from_config(config, iterator, to_train, evaluation_targets, to_validate, download, start_epoch_num, recursive)
119
120 if to_train:
--> 121 trainer.train(iterator)
122
123 res = {}
F:\conda\envs\dp_kbqa\lib\site-packages\deeppavlov-0.3.1-py3.6.egg\deeppavlov\core\trainers\nn_trainer.py in train(self, iterator)
289 def train(self, iterator: DataLearningIterator) -> None:
290 """Call :meth:`~fit_chainer` and then :meth:`~train_on_batches` with provided data iterator as an argument"""
--> 291 self.fit_chainer(iterator)
292 if callable(getattr(self._chainer, 'train_on_batch', None)):
293 try:
F:\conda\envs\dp_kbqa\lib\site-packages\deeppavlov-0.3.1-py3.6.egg\deeppavlov\core\trainers\fit_trainer.py in fit_chainer(self, iterator)
127 writer.flush()
128 else:
--> 129 preprocessed = self._chainer.compute(*iterator.get_instances(), targets=targets)
130 if len(targets) == 1:
131 preprocessed = [preprocessed]
TypeError: compute() missing 1 required positional argument: 'x'
``` | closed | 2019-06-18T16:49:18Z | 2020-05-18T21:44:20Z | https://github.com/deeppavlov/DeepPavlov/issues/885 | [] | StrikerRUS | 4 |
huggingface/transformers | machine-learning | 35,978 | HPD-Transformer: A Hybrid Parsing-Density Transformer for Efficient Structured & Probabilistic Reasoning | ### Model description
**Overview**
HPD‑Transformer is a hybrid AI model combining structured parsing (syntax/semantic analysis) and probabilistic density estimation (uncertainty-aware reasoning) within a single, energy-efficient framework. Developed under the brand name **OpenSeek**, HPD‑Transformer outperforms several general-purpose LLMs (e.g., ChatGPT‑4, Qwen 2.5 Max, DeepSeek) on specialized tasks while reducing computational costs by up to 60–70%.
### Key Features
- **Hybrid Architecture**: Integrates parsing and density modules.
- **Sparse Mixture of Experts (MoE)**: Domain‑specific experts reduce compute cost.
- **Energy Efficiency**: Uses quantization, pruning, and Performer attention for ~60% lower FLOPs.
- **Multi‑Modal & Multilingual**: Handles text, tables, and 50+ languages.
- **Real‑Time UI**: Interactive visualization for parsing, uncertainty estimates, and more.
### Methodology Highlights
1. **Hybrid Parsing-Density**:
- Parsing Module: Lightweight transformer blocks (Performer) for syntactic/semantic analysis.
- Density Module: Monte Carlo dropout & Sparse Gaussian Processes for uncertainty modeling.
2. **Sparse MoE**:
- 32 experts (small feed-forward networks), each specialized in a domain (medical, legal, finance, etc.).
- Top-2 routing activates only the most relevant experts per token.
3. **Training**:
- **Knowledge Distillation** from teacher models (ChatGPT‑4, Qwen 2.5 Max, etc.).
- **RLHF**: Reinforcement Learning from Human Feedback for correctness and clarity.
- **Curriculum Learning**: General pretraining → domain-specific → task-specific.
- **Online Meta-Learning**: Real-time adaptation without full retraining.
4. **Efficiency**:
- 8-bit Quantization, structured pruning, and mixed-precision training.
- Performer (FAVOR+) attention for O(n) complexity.
5. **Evaluation & Benchmarks**:
- Targets >80% accuracy on MMLU, surpassing ChatGPT‑4 (~78%).
- Achieves lower inference cost ($0.001/query) vs. ChatGPT‑4’s ($0.005/query).
6. **Use Cases**:
- High-stakes fields (healthcare, legal, finance) needing interpretable outputs.
- Edge deployments where compute/energy are limited.
7. **Limitations**:
- Context window limited to ~8k tokens (less than some mega-LLMs).
- May require additional domain experts for niche tasks.
**Reference Implementation**
We provide a reference PyTorch implementation (see code snippets below) that includes:
- Shared Embedding Layer
- Parsing Module (Performer-based)
- Density Module (Bayesian Neural Network + MC dropout)
- Sparse Mixture of Experts (Top-2 gating)
- Simple training loop for demonstration
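As an illustration only (editor's sketch, not the authors' code), the top-2 gating mentioned above can be expressed as:

```python
import math

def top2_route(gate_logits):
    # Standard top-k MoE gating: keep the two highest-scoring experts and
    # renormalize their softmax weights so they sum to 1.
    top2 = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:2]
    exps = [math.exp(gate_logits[i]) for i in top2]
    total = sum(exps)
    return [(i, w / total) for i, w in zip(top2, exps)]

# Example: with 4 experts, a token with these gate scores is routed to
# experts 2 and 0 only, so the other experts' FFNs are never evaluated.
routed = top2_route([1.0, -0.5, 2.0, 0.3])
```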
**UI/Deployment**
- FastAPI backend with Docker support for cloud or on-prem deployment.
- Optional Streamlit/React UI to visualize dependency parsing and uncertainty in real-time.
- Supports edge deployments via ONNX or TensorFlow Lite.
**License**
- Core modules are open-sourced under Apache 2.0.
- Extended enterprise features available for commercial use.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
[HPD.docx](https://github.com/user-attachments/files/18615085/HPD.docx) | open | 2025-01-31T08:27:11Z | 2025-01-31T08:27:11Z | https://github.com/huggingface/transformers/issues/35978 | [
"New model"
] | infodevlovable | 0 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 315 | Question: How to access info in building relay.Node and connection | It is convenient to just add two lines of code to do get by id and get all queries:
```python
class EmployeeQuery(graphene.ObjectType):
    employee = relay.Node.Field(Employee)
    all_employees = SQLAlchemyConnectionField(Employee.connection, sort=Employee.sort_argument())
```
My question is, how do I enhance/customize the query so I can access the `info` for authorization purpose?
Currently I have to implement my own resolver, but for `all_employees` I lost the relay edges:
```python
class EmployeeQuery(graphene.ObjectType):
    employee = graphene.Field(Employee, id=graphene.ID(required=True))

    def resolve_employee(parent, info, **args):
        id = args.get('id')
        print(f"resolve_employee: {id}, =========== user = {info.context.user}")
        return relay.Node.get_node_from_global_id(info, id, only_type=Employee)

    all_employees = graphene.List(Employee)

    def resolve_all_employees(parent, info, **args):
        print(f"resolve_all_employee: =========== user = {info.context.user}")
        return Employee.get_query(info).all()
```
Is there a better, more "graphene SQLAlchemy" way?
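One pattern worth trying (an editor's sketch; the stand-in base class below only mimics graphene-sqlalchemy's `SQLAlchemyConnectionField.get_query` hook) is to subclass the connection field so `info` is available while relay pagination is kept:

```python
class SQLAlchemyConnectionField:
    # Stand-in so the sketch runs; the real class comes from
    # graphene_sqlalchemy.fields and builds the actual SQLAlchemy query.
    @classmethod
    def get_query(cls, model, info, sort=None, **args):
        return f"query({model})"

class AuthorizedConnectionField(SQLAlchemyConnectionField):
    @classmethod
    def get_query(cls, model, info, sort=None, **args):
        # info is in scope here, so every connection query can be gated
        # (or filtered) based on info.context.user before it runs.
        if getattr(info.context, "user", None) is None:
            raise PermissionError("login required")
        return super().get_query(model, info, sort=sort, **args)
```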
| closed | 2021-08-16T20:14:03Z | 2023-02-24T14:56:08Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/315 | [] | shaozi | 4 |
mljar/mljar-supervised | scikit-learn | 654 | Problem with computing importance plots for sklearn algorithms | I got an error message when training:
```
'DecisionTreeAlgorithm' object has no attribute 'classes_'
Problem during computing permutation importance. Skipping ...
``` | closed | 2023-09-20T12:50:25Z | 2023-09-20T14:37:46Z | https://github.com/mljar/mljar-supervised/issues/654 | [] | pplonski | 1 |
amisadmin/fastapi-amis-admin | sqlalchemy | 169 | The filter on input-group does not work | I have the following field configuration in a model:
```python
store_min_cost: float = Field(
    default=0.0, nullable=True, title="КЛ ₽(м2) min",
    amis_table_column={'type': "number", 'kilobitSeparator': True, 'sortable': True},
    amis_filter_item=
    {
        "type": "input-group",
        "description": "по умолчанию указан предельный диапазон",
        "validationConfig": {"errorMode": "partial"},
        "body": [
            {
                "type": "input-text",
                "size": "sm",
                "source": "/get_filter_range/?mark=min&model=ComplexBase&field=store_min_cost",
                "name": "s_min",
                "autoComplete": False,
                "validations": {"isNumeric": True, "maximum": "${s_max}"},
                "validationErrors": {
                    "isNumeric": "Допустимо только числовое значение",
                    "maximum": "Не может превышать правое значение",
                },
            },
            {
                "type": "input-text",
                "size": "sm",
                "source": "/get_filter_range/?mark=max&model=ComplexBase&field=store_min_cost",
                "name": "s_max",
                "autoComplete": False,
                "validations": {"isNumeric": True, "minimum": "${s_min}"},
                "validationErrors": {
                    "isNumeric": "Допустимо только числовое значение",
                    "minimum": "Не может быть ниже левого значения",
                },
            },
        ]
    }
)
```
This grouping is necessary to set min/max values. I take the data for the limits from the API (**that’s why I can’t use input-range** - there is no way to dynamically specify the limit based on data from the database).
I see everything in the filter,

but the search does not react in any way to changing values. And when debugging, I see that the value from the form is not forwarded.

Any advice is welcome!
| open | 2024-04-29T18:41:56Z | 2024-04-29T18:41:56Z | https://github.com/amisadmin/fastapi-amis-admin/issues/169 | [] | SergShulga | 0 |
pytest-dev/pytest-xdist | pytest | 255 | Make load scheduler configurable | I have several projects where the distribution of tests runtime is quite scattered, eg:
- 1000 tests of 10ms
- 100 tests of 1 minute
The current load scheduler comes short in this case, as it often ends up sending a batch of slow tests to the same worker.
As a workaround, I use a forked LoadScheduler that uses a fixed queue size (which I use with the minimum value of 2 -> each worker only has one test in its queue at any time):
```
class FixedLocalQueueLoadScheduling(LoadScheduling):  # no cover
    """
    A fork of pytest-xdist default load scheduler that uses a fixed size for workers local queue size.
    """

    def __init__(self, config, log=None, queue_size=2):
        super().__init__(config, log)
        if queue_size < 2:
            raise ValueError('Queue size must be at least 2')
        self.queue_size = queue_size

    def check_schedule(self, node, duration=0):
        if node.shutting_down:
            return
        if self.pending:
            node_pending = self.node2pending[node]
            if len(node_pending) < self.queue_size:
                num_send = self.queue_size - len(node_pending)
                self._send_tests(node, num_send)
        self.log("num items waiting for node:", len(self.pending))

    def schedule(self):
        assert self.collection_is_completed
        # Initial distribution already happened, reschedule on all nodes
        if self.collection is not None:
            for node in self.nodes:
                self.check_schedule(node)
            return
        # allow nodes to have different collections
        if not self._check_nodes_have_same_collection():
            self.log('**Different tests collected, aborting run**')
            return
        # Collections are identical, create the index of pending items.
        self.collection = list(self.node2collection.values())[0]
        self.pending[:] = range(len(self.collection))
        if not self.collection:
            return
        # Send a batch of tests to run. If we don't have at least two
        # tests per node, we have to send them all so that we can send
        # shutdown signals and get all nodes working.
        initial_batch = min(len(self.pending), self.queue_size * len(self.nodes))
        # distribute tests round-robin up to the batch size
        # (or until we run out)
        nodes = cycle(self.nodes)
        for i in range(initial_batch):
            self._send_tests(next(nodes), 1)
        if not self.pending:
            # initial distribution sent all tests, start node shutdown
            for node in self.nodes:
                node.shutdown()
```
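A scheduler like the one above can be installed from a plugin or `conftest.py` via xdist's scheduler hook (sketch; check that your pytest-xdist version exposes `pytest_xdist_make_scheduler`, and note the module path below is hypothetical):

```python
# conftest.py
def pytest_xdist_make_scheduler(config, log):
    # Returning a scheduler object here overrides xdist's default choice.
    from myplugin.scheduler import FixedLocalQueueLoadScheduling  # hypothetical module
    return FixedLocalQueueLoadScheduling(config, log, queue_size=2)
```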
It would be nice to have at least one of these propositions implemented in xdist:
1. Integrate this scheduler (or an even simpler version where queue_size=2)
2. Make LoadScheduler configurable, so that users can provide initial_batch_size / items_per_node_min / items_per_node_max
3. When sending a batch of jobs to a node, shuffle like for the initial batch
4. Maybe improve/reduce a bit the defaults settings for initial_batch_size / items_per_node_min / items_per_node_max | closed | 2017-12-06T14:29:13Z | 2022-12-23T11:21:22Z | https://github.com/pytest-dev/pytest-xdist/issues/255 | [
"enhancement"
] | nicoulaj | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 317 | KeyError: "Registering two gradient with name 'BlockLSTM'! Getting this error | KeyError: "Registering two gradient with name 'BlockLSTM'! (Previous registration was in register C:\\Users\\pks89\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\registry.py:66)" | closed | 2020-04-11T15:55:25Z | 2020-07-05T08:58:18Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/317 | [] | pks889 | 2 |
katanaml/sparrow | computer-vision | 61 | Validation Error | ValidationError(
    model='DynamicModel',
    errors=[
        {
            'loc': ('__root__',),
            'msg': 'Expecting value: line 1 column 1 (char 0)',
            'type': 'value_error.jsondecode',
            'ctx': {
                'msg': 'Expecting value',
                'doc': 'Empty Response',
                'pos': 0,
                'lineno': 1,
                'colno': 1
            }
        }
    ]
)
Using llama-index with this LLM ('adrienbrault/nous-hermes2theta-llama3-8b:q5_K_M'), I keep getting this error.
| closed | 2024-08-04T13:44:30Z | 2024-08-05T07:14:30Z | https://github.com/katanaml/sparrow/issues/61 | [] | Sumeet213 | 1 |
python-restx/flask-restx | flask | 141 | How do I programmatically access the sample requests from the generated swagger UI | **Ask a question**
For a given restx application, I can see a rich set of details contained in the generated Swagger UI, for example for each endpoint, I can see sample requests populated with default values from the restx `fields` I created to serve as the components when defining the endpoints. These show up as example `curl` commands that I can copy/paste into a shell (as well as being executed from the 'Try it out' button).
However, I want to access this data programmatically from the app client itself. Suppose I load and run the app in a standalone Python program and have a handle to the Flask `app` object. I can see attributes such as `api.application.blueprints['restx_doc']` to get a handle to the `Apidoc` object.
But I cannot find out where this object stores all the information I need to programmatically reconstruct valid requests to the service's endpoint.
| open | 2020-05-23T19:46:12Z | 2020-05-23T19:46:12Z | https://github.com/python-restx/flask-restx/issues/141 | [
"question"
] | espears1 | 0 |
holoviz/panel | matplotlib | 7,458 | ButtonIcon improvements | Using the `ButtonIcon` with `panel-graphic-walker`, I find there are a few issues.
## Does not trigger when clicking the text
The button only triggers when clicking the icon, not when clicking the `name`/text.

This is unexpected for users and makes it hard to hit.
I would suggest improving the widget by also triggering when the `name`/ text is hit.
Right now I would recommend users to use `Button` if they want to use a `name`. But then they can't use the awesome `active_icon` feature.
## The name ButtonIcon is in reverse order
In Panel we call it `Button`, `MenuButton`, `CheckButtonGroup`, `RadioButtonGroup`. I.e. first the name of the feature. Then `Button` or `ButtonGroup`.
`ButtonIcon` is reversed making it hard to remember and the framework less logical.
I would suggest deprecating the name in favor of `IconButton`.
| open | 2024-11-03T07:39:42Z | 2024-11-03T07:42:49Z | https://github.com/holoviz/panel/issues/7458 | [
"type: feature"
] | MarcSkovMadsen | 0 |
django-import-export/django-import-export | django | 1,160 | How do you deal with server timeouts? | I found an issue that was created a few years back but the answer is not valid anymore: https://github.com/django-import-export/django-import-export/issues/301
I believe this is a common problem if you're importing a large dataset. We should document how to get around server timeouts. | closed | 2020-07-01T00:06:17Z | 2020-07-12T14:22:30Z | https://github.com/django-import-export/django-import-export/issues/1160 | [
"question"
] | charleshan | 2 |
pbugnion/gmaps | jupyter | 352 | API change for collections in Python 3.3+ breaks is_atomic in options.py | Running in Python 3.10 and calling `gmaps.symbol_layer()` with `info_boxes` set to a list of strings:
```
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gmaps\options.py:40, in is_atomic(elem)
     34 def is_atomic(elem):
     35     """
     36     True if an element is a single atom and false if it's a collection
     37     """
     38     return (
     39         isinstance(elem, string_types) or
---> 40         not isinstance(elem, collections.Iterable)
     41     )

AttributeError: module 'collections' has no attribute 'Iterable'
```
[In python 3.3+ these are moved into the collections.abc (abstract base classes) module.](https://docs.python.org/3/library/collections.abc.html) For some reason I have to access them like this when testing in my code:
```python
import _collections_abc
_collections_abc.Iterable
``` | open | 2022-04-13T15:55:21Z | 2023-06-27T14:13:39Z | https://github.com/pbugnion/gmaps/issues/352 | [] | whudson | 5 |
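A possible fix on the gmaps side is a version-agnostic import. The sketch below is only an illustration, not the library's actual code; it also substitutes a plain `str` check for `six.string_types`:

```python
# The ABCs moved to collections.abc in Python 3.3, and the old
# top-level aliases were deleted entirely in Python 3.10.
try:
    from collections.abc import Iterable
except ImportError:  # very old Pythons only
    from collections import Iterable

def is_atomic(elem):
    """True if an element is a single atom, False if it's a collection."""
    return isinstance(elem, str) or not isinstance(elem, Iterable)

print(is_atomic("hello"))  # → True (strings are treated as atoms)
print(is_atomic([1, 2]))   # → False
print(is_atomic(5))        # → True
```

Until a patched release is out, the same `try`/`except ImportError` works as a local shim, without reaching into the private `_collections_abc` module.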
hankcs/HanLP | nlp | 1,244 | Unable to install the Python version | The exact steps and error output are as follows:
Last login: Mon Jul 15 20:16:21 on ttys001
MacBook-Pro-de-Chen:~ noah$ pip install pyhanlp
Collecting pyhanlp
Collecting jpype1>=0.7.0 (from pyhanlp)
Using cached https://files.pythonhosted.org/packages/28/63/784834e8a24ec2e1ad7f703c3dc6c6fb372a77cc68a2fdff916e18a4449e/JPype1-0.7.0.tar.gz
Building wheels for collected packages: jpype1
Building wheel for jpype1 (setup.py) ... error
ERROR: Complete output from command /Users/noah/anaconda3/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-install-ynmh4yg5/jpype1/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-wheel-1dve7lyv --python-tag cp37:
ERROR: /Users/noah/anaconda3/lib/python3.7/distutils/dist.py:274: UserWarning: Unknown distribution option: 'use_scm_version'
warnings.warn(msg)
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.7-x86_64-3.7
creating build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jcollection.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jcomparable.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_classpath.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jio.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jtypes.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_pykeywords.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jproxy.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_gui.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_darwin.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/nio.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jstring.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_cygwin.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/__init__.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jboxed.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/types.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/beans.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jvmfinder.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/imports.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jcustomizer.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_core.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jinit.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_linux.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jarray.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jobject.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jclass.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_windows.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jexception.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/reflect.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jpackage.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
running build_ext
running build_java
Using Jar cache
creating build/lib
creating build/lib/org
creating build/lib/org/jpype
creating build/lib/org/jpype/classloader
copying native/jars/org/jpype/classloader/JPypeClassLoader.class -> build/lib/org/jpype/classloader
copying native/jars/org.jpype.jar -> build/lib
running build_thunk
Building thunks
including thunk build/lib/org/jpype/classloader/JPypeClassLoader.class
including thunk build/lib/org.jpype.jar
/private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-install-ynmh4yg5/jpype1/setupext/build_ext.py:85: FeatureNotice: Turned ON Numpy support for fast Java array access
FeatureNotice)
building '_jpype' extension
creating build/temp.macosx-10.7-x86_64-3.7
creating build/temp.macosx-10.7-x86_64-3.7/build
creating build/temp.macosx-10.7-x86_64-3.7/build/src
creating build/temp.macosx-10.7-x86_64-3.7/native
creating build/temp.macosx-10.7-x86_64-3.7/native/python
creating build/temp.macosx-10.7-x86_64-3.7/native/common
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Users/noah/anaconda3/include -arch x86_64 -I/Users/noah/anaconda3/include -arch x86_64 -DMACOSX=1 -DHAVE_NUMPY=1 -Inative/common/include -Inative/python/include -Ibuild/src -Inative/jni_include -I/Users/noah/anaconda3/lib/python3.7/site-packages/numpy/core/include -I/Users/noah/anaconda3/include/python3.7m -c build/src/jp_thunk.cpp -o build/temp.macosx-10.7-x86_64-3.7/build/src/jp_thunk.o -ggdb
warning: include path for stdlibc++ headers not found; pass '-stdlib=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found]
In file included from build/src/jp_thunk.cpp:1:
In file included from build/src/jp_thunk.h:3:
native/common/include/jpype.h:82:10: fatal error: 'map' file not found
#include <map>
^~~~~
1 warning and 1 error generated.
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for jpype1
Running setup.py clean for jpype1
Failed to build jpype1
Installing collected packages: jpype1, pyhanlp
Running setup.py install for jpype1 ... error
ERROR: Complete output from command /Users/noah/anaconda3/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-install-ynmh4yg5/jpype1/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-record-l51xr0fq/install-record.txt --single-version-externally-managed --compile:
ERROR: /Users/noah/anaconda3/lib/python3.7/distutils/dist.py:274: UserWarning: Unknown distribution option: 'use_scm_version'
warnings.warn(msg)
running install
running build
running build_py
creating build/lib.macosx-10.7-x86_64-3.7
creating build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jcollection.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jcomparable.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_classpath.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jio.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jtypes.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_pykeywords.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jproxy.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_gui.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_darwin.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/nio.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jstring.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_cygwin.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/__init__.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jboxed.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/types.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/beans.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jvmfinder.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/imports.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jcustomizer.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_core.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jinit.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_linux.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jarray.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jobject.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jclass.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_windows.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jexception.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/reflect.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
copying jpype/_jpackage.py -> build/lib.macosx-10.7-x86_64-3.7/jpype
running build_ext
running build_java
Using Jar cache
copying native/jars/org/jpype/classloader/JPypeClassLoader.class -> build/lib/org/jpype/classloader
copying native/jars/org.jpype.jar -> build/lib
running build_thunk
Building thunks
including thunk build/lib/org/jpype/classloader/JPypeClassLoader.class
including thunk build/lib/org.jpype.jar
/private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-install-ynmh4yg5/jpype1/setupext/build_ext.py:85: FeatureNotice: Turned ON Numpy support for fast Java array access
FeatureNotice)
building '_jpype' extension
creating build/temp.macosx-10.7-x86_64-3.7
creating build/temp.macosx-10.7-x86_64-3.7/build
creating build/temp.macosx-10.7-x86_64-3.7/build/src
creating build/temp.macosx-10.7-x86_64-3.7/native
creating build/temp.macosx-10.7-x86_64-3.7/native/python
creating build/temp.macosx-10.7-x86_64-3.7/native/common
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Users/noah/anaconda3/include -arch x86_64 -I/Users/noah/anaconda3/include -arch x86_64 -DMACOSX=1 -DHAVE_NUMPY=1 -Inative/common/include -Inative/python/include -Ibuild/src -Inative/jni_include -I/Users/noah/anaconda3/lib/python3.7/site-packages/numpy/core/include -I/Users/noah/anaconda3/include/python3.7m -c build/src/jp_thunk.cpp -o build/temp.macosx-10.7-x86_64-3.7/build/src/jp_thunk.o -ggdb
warning: include path for stdlibc++ headers not found; pass '-stdlib=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found]
In file included from build/src/jp_thunk.cpp:1:
In file included from build/src/jp_thunk.h:3:
native/common/include/jpype.h:82:10: fatal error: 'map' file not found
#include <map>
^~~~~
1 warning and 1 error generated.
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Command "/Users/noah/anaconda3/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-install-ynmh4yg5/jpype1/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-record-l51xr0fq/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/bb/yzzgnhrj70q9s996rsfz6txw0000gn/T/pip-install-ynmh4yg5/jpype1/
MacBook-Pro-de-Chen:~ noah$
Could someone please advise how to resolve this? | closed | 2019-07-15T18:26:39Z | 2022-03-08T12:26:16Z | https://github.com/hankcs/HanLP/issues/1244 | [
"ignored"
] | sunc33 | 6 |
tiangolo/uvicorn-gunicorn-fastapi-docker | fastapi | 46 | [1] [CRITICAL] WORKER TIMEOUT (pid:45) | When I post many requests to the server, the gunicorn worker raises the error "[CRITICAL] WORKER TIMEOUT (pid:45)" and cannot finish handling the last request before it restarts. As a result, the last request that the failing worker received before the restart never gets a response. Please help me figure out how to solve this error @tiangolo. Thanks
my gunicorn config is:
```python
bind = "0.0.0.0:7075"
workers = 13  # gunicorn's setting is `workers` (plural); a bare `worker` key is ignored
worker_connections = 1000
keepalive = 20
daemon = False
timeout = 120
preload_app = True
max_requests_jitter = 1024
worker_class = "uvicorn.workers.UvicornWorker"
max_requests = 2048
graceful_timeout = 120
errorlog = "/logs/gunicorn_error.log"
```
| closed | 2020-05-27T01:46:53Z | 2022-02-19T20:26:03Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/46 | [
"answered"
] | dtMndas | 3 |
sanic-org/sanic | asyncio | 2,684 | Sanic doesn't shutdown cleanly on Mac | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
When running a simple server on macOS 13.1, after using Ctrl-C to shut down the app, a socket exception is thrown instead of a graceful shutdown
```sh
python3 helloworld.py
[2023-02-14 12:23:23 -0700] [6169] [DEBUG] Creating multiprocessing context using 'spawn'
[2023-02-14 12:23:23][DEBUG] Creating multiprocessing context using 'spawn'
[2023-02-14 12:23:23 -0700] [6169] [DEBUG] Starting a process: Sanic-Server-0-0
[2023-02-14 12:23:23][DEBUG] Starting a process: Sanic-Server-0-0
[2023-02-14 12:23:24 -0700] [6175] [DEBUG] Process ack: Sanic-Server-0-0 [6175]
[2023-02-14 12:23:24][DEBUG] Process ack: Sanic-Server-0-0 [6175]
[2023-02-14 12:23:24 -0700] [6175] [INFO] Starting worker [6175]
[2023-02-14 12:23:24][INFO] Starting worker [6175]
^C[2023-02-14 12:23:26 -0700] [6169] [INFO] Received signal SIGINT. Shutting down.
[2023-02-14 12:23:26][INFO] Received signal SIGINT. Shutting down.
[2023-02-14 12:23:26 -0700] [6169] [DEBUG] Terminating a process: Sanic-Server-0-0 [6175]
[2023-02-14 12:23:26][DEBUG] Terminating a process: Sanic-Server-0-0 [6175]
[2023-02-14 12:23:26 -0700] [6169] [INFO] Server Stopped
[2023-02-14 12:23:26][INFO] Server Stopped
Traceback (most recent call last):
File "/Users/tylerprete/sandbox/asana/asana2/asana/server/kube_app/apps/helloworld/helloworld.py", line 22, in <module>
app.run(host="127.0.0.1", port=8086, debug=True)
File "/usr/local/lib/python3.9/site-packages/sanic/mixins/startup.py", line 209, in run
serve(primary=self) # type: ignore
File "/usr/local/lib/python3.9/site-packages/sanic/mixins/startup.py", line 880, in serve
sock.shutdown(SHUT_RDWR)
OSError: [Errno 57] Socket is not connected
[2023-02-14 12:23:26 -0700] [6175] [INFO] Stopping worker [6175]
[2023-02-14 12:23:26][INFO] Stopping worker [6175]
```
### Code snippet
```python3
from sanic import Sanic
from sanic.response import html, text
app = Sanic("helloworld")
@app.get("/")
def hello_world(request):
print("Serving /")
return html("<p>Hello, World!</p>")
if __name__ == "__main__":
app.run(host="127.0.0.1", port=8086, debug=True)
```
### Expected Behavior
On linux I run this and get the following (removing the sanic banners for brevity):
```sh
python3 helloworld.py
[2023-02-14 19:17:43 +0000] [23570] [DEBUG] Creating multiprocessing context using 'spawn'
[2023-02-14 19:17:43][DEBUG] Creating multiprocessing context using 'spawn'
[2023-02-14 19:17:43 +0000] [23570] [DEBUG] Starting a process: Sanic-Server-0-0
[2023-02-14 19:17:43][DEBUG] Starting a process: Sanic-Server-0-0
[2023-02-14 19:17:43 +0000] [23579] [DEBUG] Process ack: Sanic-Server-0-0 [23579]
[2023-02-14 19:17:43][DEBUG] Process ack: Sanic-Server-0-0 [23579]
[2023-02-14 19:17:43 +0000] [23579] [INFO] Starting worker [23579]
[2023-02-14 19:17:43][INFO] Starting worker [23579]
^C[2023-02-14 19:17:45 +0000] [23570] [INFO] Received signal SIGINT. Shutting down.
[2023-02-14 19:17:45][INFO] Received signal SIGINT. Shutting down.
[2023-02-14 19:17:45 +0000] [23570] [DEBUG] Terminating a process: Sanic-Server-0-0 [23579]
[2023-02-14 19:17:45][DEBUG] Terminating a process: Sanic-Server-0-0 [23579]
[2023-02-14 19:17:45 +0000] [23570] [INFO] Server Stopped
[2023-02-14 19:17:45][INFO] Server Stopped
[2023-02-14 19:17:45 +0000] [23579] [INFO] Stopping worker [23579]
[2023-02-14 19:17:45][INFO] Stopping worker [23579]
```
### How do you run Sanic?
As a script (`app.run` or `Sanic.serve`)
### Operating System
macOS Ventura 13.1
### Sanic Version
22.12.0
### Additional context
_No response_ | closed | 2023-02-14T19:27:43Z | 2023-02-14T20:59:43Z | https://github.com/sanic-org/sanic/issues/2684 | [
"bug"
] | tylerprete | 1 |
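The failing call in the traceback above is `sock.shutdown(SHUT_RDWR)` on a listening socket that never had a peer. BSD-derived systems such as macOS raise `ENOTCONN` in that case, while Linux allows it, which is why the crash only shows up on Mac. A generic defensive pattern (a sketch, not Sanic's actual fix) looks like this:

```python
import errno
import socket

def safe_shutdown(sock: socket.socket) -> None:
    """Shut down and close a socket, tolerating ENOTCONN.

    macOS raises OSError(ENOTCONN) when shutdown() is called on a
    socket that never had a connected peer; the error can be safely
    ignored because the socket is being closed anyway.
    """
    try:
        sock.shutdown(socket.SHUT_RDWR)
    except OSError as exc:
        if exc.errno != errno.ENOTCONN:
            raise
    finally:
        sock.close()

s = socket.socket()   # never connected
safe_shutdown(s)      # swallows ENOTCONN instead of crashing
print(s.fileno())     # → -1 (socket is closed)
```

The same pattern works for any cross-platform teardown path where `shutdown()` may legitimately fail.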
ghtmtt/DataPlotly | plotly | 32 | Facet plots | From version 2.0.12, facet plotting is available. A third variable can be used to plot just by category:
https://plot.ly/python/facet-trellis/
It seems very easy to implement, but be aware of the installed plotly version. | closed | 2017-07-04T07:14:36Z | 2018-05-15T12:49:34Z | https://github.com/ghtmtt/DataPlotly/issues/32 | [
"enhancement"
] | ghtmtt | 2 |
kensho-technologies/graphql-compiler | graphql | 604 | Lack of a precise definition of meta fields | Currently the best definition of what meta fields are is "fields that do not represent a property/column in the underlying vertex type". Since the word "meta" means "self-referential", it would make sense that meta fields return information about the schema. However, `_x_count` returns information about the data in the underlying database. Therefore, because of `_x_count`, it is quite hard to come up with a definition better than the one above. | open | 2019-10-23T21:44:55Z | 2019-10-23T21:44:55Z | https://github.com/kensho-technologies/graphql-compiler/issues/604 | [
"documentation"
] | pmantica1 | 0 |
firerpa/lamda | automation | 42 | [ISSUE] Failed to spawn: unable to determine ClassLinker field offsets | As the title says: I ran frida -H 192.168.0.114:65000 -f uni.UNIB6233DD
and it reported:
Failed to spawn: unable to determine ClassLinker field offsets
I don't know what's causing this. What I need is for frida to run automatically when the phone boots, so that I can then use it directly from my computer.
The phone I'm using is a Pixel 5. | closed | 2023-04-15T03:21:24Z | 2023-09-09T06:58:41Z | https://github.com/firerpa/lamda/issues/42 | [] | sunpx3 | 1 |
marcomusy/vedo | numpy | 972 | 'Box' object has no attribute 'origin' | I assume this is a bug in 2023.5.0 as it's not mentioned in the release changes.
Origin seems to be removed from the documentation completely as well. Is there a replacement variable or is it expected to run `mesh.box().vertices.mean(axis=0)` each time instead? | closed | 2023-11-16T00:20:04Z | 2023-11-16T21:33:01Z | https://github.com/marcomusy/vedo/issues/972 | [] | JeffreyWardman | 2 |
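For reference, the suggested replacement is just the centroid of the bounding-box corner vertices. In plain NumPy (the `verts` array below is a stand-in for `mesh.box().vertices`, not vedo's API):

```python
import numpy as np

# Eight corners of a unit cube, standing in for mesh.box().vertices
verts = np.array(
    [[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
    dtype=float,
)

origin = verts.mean(axis=0)  # centroid of the box
print(origin)  # → [0.5 0.5 0.5]
```

A small helper wrapping this expression would avoid repeating the call at every use site.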
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 273 | [BUG] endpoint closed | {
"status": "endpoint closed",
"message": "此端点已关闭请在配置文件中开启/This endpoint is closed, please enable it in the configuration file"
} | closed | 2023-09-16T07:28:35Z | 2023-09-16T07:29:39Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/273 | [
"BUG",
"enhancement"
] | diowcnx | 1 |
stanford-oval/storm | nlp | 70 | Good Repository | closed | 2024-07-13T02:34:54Z | 2024-07-13T10:28:00Z | https://github.com/stanford-oval/storm/issues/70 | [] | MAFLIXD | 0 | |
pydantic/pydantic-ai | pydantic | 238 | Function tool calling on OllamaModel returns ModelTextResponse instead of ModelStructuredResponse | I'm running this example
https://github.com/pydantic/pydantic-ai/blob/main/pydantic_ai_examples/bank_support.py
when using an Ollama model like `'ollama:qwen2.5:0.5b'` here:
https://github.com/pydantic/pydantic-ai/blob/84c1190880219595903df2cea96e5e7146bd715b/pydantic_ai_examples/bank_support.py#L48
The response from the agent looks like this:
```python
ModelStructuredResponse(
calls=[
ToolCall(
tool_name="customer_balance",
args=ArgsJson(args_json='{"include_pending":false}'),
tool_call_id="call_vz43blys",
)
],
timestamp=datetime.datetime(2024, 12, 13, 10, 34, 13, tzinfo=datetime.timezone.utc),
role="model-structured-response",
)
ModelTextResponse(
content="Your current account balance is $123.45. Thank you for using our bank. Feel free to call us if you have any questions anytime.",
timestamp=datetime.datetime(2024, 12, 13, 10, 34, 13, tzinfo=datetime.timezone.utc),
role="model-text-response",
)
ModelTextResponse(
content="Sure thing! I've fixed it for you. Next time please ask a specific query so we can provide more personalized assistance.",
timestamp=datetime.datetime(2024, 12, 13, 10, 34, 14, tzinfo=datetime.timezone.utc),
role="model-text-response",
)
```
which is a plain text response instead of a structured response.
The response from a model like Gemini, by contrast, looks like this:
```python
ModelStructuredResponse(
calls=[
ToolCall(
tool_name="customer_balance",
args=ArgsDict(args_dict={"include_pending": False}),
tool_call_id=None,
)
],
timestamp=datetime.datetime(
2024, 12, 13, 10, 33, 19, 184502, tzinfo=datetime.timezone.utc
),
role="model-structured-response",
)
ModelStructuredResponse(
calls=[
ToolCall(
tool_name="final_result",
args=ArgsDict(
args_dict={
"risk": 1,
"block_card": False,
"support_advice": "Your current balance is 123.45. \\n Have a great day!",
}
),
tool_call_id=None,
)
],
timestamp=datetime.datetime(
2024, 12, 13, 10, 33, 21, 322077, tzinfo=datetime.timezone.utc
),
role="model-structured-response",
)
ModelStructuredResponse(
calls=[
ToolCall(
tool_name="customer_balance",
args=ArgsDict(args_dict={"include_pending": False}),
tool_call_id=None,
),
ToolCall(
tool_name="final_result",
args=ArgsDict(
args_dict={
"block_card": True,
"risk": 2,
"support_advice": "We have blocked your card. Please contact us to request a new one.",
}
),
tool_call_id=None,
),
],
timestamp=datetime.datetime(
2024, 12, 13, 10, 33, 22, 498653, tzinfo=datetime.timezone.utc
),
role="model-structured-response",
)
```
and all of them are structured responses. | closed | 2024-12-13T10:47:11Z | 2024-12-14T11:51:30Z | https://github.com/pydantic/pydantic-ai/issues/238 | [
"model-limitation"
] | metaboulie | 4 |
plotly/dash-table | dash | 154 | Rename `Table`, rename `dash_table`? | Component options:
- `Table` - Clean, but it conflicts with `html.Table`. Not necessarily a blocker
- `DataTable` - Seems fine. Matches the `data=` property too.
- `InteractiveTable` - too long
- Any other options?
Library options:
- `import dash_table as dt`
- `import dash_interactive_table as dit` :x:
- `import dash_spreadsheet as ds`
I think I prefer either:
```
import dash_table as dt
dt.DataTable
```
or what we currently have:
```
import dash_table as dt
dt.Table
``` | closed | 2018-10-22T18:52:29Z | 2018-10-31T18:59:25Z | https://github.com/plotly/dash-table/issues/154 | [] | chriddyp | 3 |
httpie/cli | python | 824 | Cookies from original request cannot be combined with response cookies in session file | Consider the following request:
```bash
https --session=/tmp/c-session "https://localhost:8721/customer/business/1" Cookie:sAuth=foo6
```
If the server sets cookies `XSRF-TOKEN` and `JSESSIONID`, the session file will look like this:
```json
{
"__meta__": {
"about": "HTTPie session file",
"help": "https://httpie.org/doc#sessions",
"httpie": "1.0.3"
},
"auth": {
"password": null,
"type": null,
"username": null
},
"cookies": {
"JSESSIONID": {
"expires": null,
"path": "/",
"secure": true,
"value": "091642DF767443D96E72C6FDEE561428"
},
"XSRF-TOKEN": {
"expires": null,
"path": "/",
"secure": true,
"value": "af6eb371-ce07-4583-bdce-efbfa09728f9"
}
},
"headers": {
"Cookie": "sAuth=foo6"
}
}
```
When the request is repeated with the same session file (but without `sAuth` given on the command line), the result is that only the `sAuth` cookie is sent, not the cookies `JSESSIONID` and `XSRF-TOKEN`:
```bash
https --verbose --session=/tmp/c-session "https://localhost:8721/apis/customer/business/1"
```
Request:
```http
GET /apis/customer/business/1 HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Cookie: sAuth=foo6
Host: localhost:8721
User-Agent: HTTPie/1.0.3
```
I would have expected to have all three cookies set in the request. | closed | 2019-12-08T08:13:03Z | 2021-12-28T12:15:00Z | https://github.com/httpie/cli/issues/824 | [
"bug",
"help wanted",
"sessions"
] | strindberg | 3 |
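The expectation in the report above is a merge of the header-supplied cookie with the cookies stored from responses. A small illustration of that expectation in plain Python (this sketches the desired behaviour, not httpie's internal code; the sample values are made up):

```python
def merge_cookie_header(header_cookie: str, session_cookies: dict) -> str:
    """Combine a stored Cookie header with response-set session cookies.

    Header-supplied values win on a name clash; the result is one
    canonical Cookie header value with all cookies present.
    """
    pairs = {}
    for item in header_cookie.split(";"):
        if "=" in item:
            name, _, value = item.strip().partition("=")
            pairs[name] = value
    for name, value in session_cookies.items():
        pairs.setdefault(name, value)  # response cookies added alongside
    return "; ".join(f"{k}={v}" for k, v in sorted(pairs.items()))

merged = merge_cookie_header(
    "sAuth=foo6",
    {"JSESSIONID": "091642DF", "XSRF-TOKEN": "af6eb371"},
)
print(merged)  # → JSESSIONID=091642DF; XSRF-TOKEN=af6eb371; sAuth=foo6
```

With a merge like this, the repeated request would carry all three cookies instead of only the one stored in the `headers` section of the session file.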
home-assistant/core | python | 140,871 | BMW Connected drive giving a wrong charging end time (past) | ### The problem
The charging end time is just reporting the polling time, not the actual end of charging. I'm not sure when this started, but my car was recently updated, so it could be linked to that. Either way, the time is wrong in HA but correct in the BMW app.
### What version of Home Assistant Core has the issue?
core-14.2
### What was the last working version of Home Assistant Core?
core-14.2
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
BMW connected drive
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/bmw_connected_drive
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
End charging time is taking the timestamp of the last poll | open | 2025-03-18T12:30:52Z | 2025-03-23T17:08:49Z | https://github.com/home-assistant/core/issues/140871 | [
"integration: bmw_connected_drive"
] | GeeGee-be | 10 |
MagicStack/asyncpg | asyncio | 220 | Connection not being returned to the pool after connection loss | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.13.0
* **PostgreSQL version**: 9.4
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: using docker image aidanlister/postgres-hstore
* **Python version**: 3.6.3
* **Platform**: Fedora 27
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: didn't try uvloop
While my application is running some queries, I interrupt the connection by unplugging the ethernet cable from the computer. After doing so, some connections are never returned to the pool, even though a timeout is set on both the `acquire()` and `fetch()` calls. I know they are never returned to the pool because I print the queue size every time a query finishes.
I can't send the whole code because it's quite extensive, but the database operations are concentrated in a single file:
```python
import src.controllers.configs as configs_controller
import asyncio
import logging
import asyncpg
import traceback
import decimal
QUERY_TRIES = 2
POOL_MAX_SIZE = 3
_databases = dict()
_logger = logging.getLogger("DatabaseController")
async def _create_pool(access_information):
return await asyncpg.create_pool(
**access_information,
min_size=0,
max_size=POOL_MAX_SIZE,
max_queries=30,
timeout=5,
command_timeout=10,
max_inactive_connection_lifetime=180
)
async def connect():
# Create a connection pool for each database defined in the configuration
global _databases
_databases = {
database_name: await _create_pool(
configs_controller.database_access[database_name])
for database_name in configs_controller.database_access
}
async def close_connections():
for database_name, database_pool in _databases.items():
await database_pool.close()
def check_database(database):
if database not in _databases:
error = f"Database '{database}' not initialized"
_logger.error(error)
raise Exception(error)
async def execute(database, query, *args):
# Acquire a connection
check_database(database)
async with _databases[database].acquire() as connection:
await connection.execute(query, *args)
async def executemany(database, query, *args):
# Acquire a connection
check_database(database)
async with _databases[database].acquire() as connection:
await connection.executemany(query, *args)
def _decimal_to_float(data):
for row in data:
for key, value in row.items():
if isinstance(value, decimal.Decimal):
row[key] = float(value)
async def _fetch_data(database, query, *args):
# Acquire a connection
async with _databases[database].acquire(timeout=20) as connection:
try:
result = await connection.fetch(query, *args)
result = [dict(row) for row in result]
_decimal_to_float(result)
return result
# Any exception while fetching the data shouldn't trigger a retry, so
# they are caught here
except asyncio.TimeoutError:
_logger.error(f"Query timed out\n{query}{args}")
async def print_counts():
for database_name, database in _databases.items():
print(database_name, database._queue.qsize(), POOL_MAX_SIZE)
async def fetch(database, query, *args):
check_database(database)
# Try to run the query a number of times
count = 0
while count != QUERY_TRIES:
count += 1
try:
return await _fetch_data(database, query, *args)
# The following exceptions may retry to fetch the data
# If caught SerializationError
except asyncpg.exceptions.SerializationError:
_logger.info("Conflict with recovery, retrying")
# If caught TimeoutError (a connection timeout, not a query timeout)
except asyncio.TimeoutError:
_logger.info("Connection timed out, retrying")
# Return None if caught any other exception
except:
_logger.error(f"{traceback.format_exc()}\n{query} {args}")
return None
# Delay before retrying
await asyncio.sleep(1)
```
After removing the ethernet cable, I wait for some time so that an external timeout is triggered (`await asyncio.wait(futures, timeout=30)`). When this happens, the application should have finished all the tasks (if everything went well) and I would be able to finish it safely. Before letting the loop close, there's a delay and I interrupt the execution using Ctrl+C. It works fine when there are no pending tasks, but when the previous event happens, some of the "lost" tasks are interrupted, generating a stack trace like the following one.
```
[2017-11-01 00:09:25,800] (ERROR) asyncio: Task was destroyed but it is pending!
task: <Task pending coro=<Pool.release.<locals>._release_impl() running at /usr/local/lib/python3.6/site-packages/asyncpg/pool.py:465> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f830f109d68>()]> cb=[shield.<locals>._done_callback() at /usr/local/lib/python3.6/asyncio/tasks.py:672]>
[2017-11-01 00:09:25,804] (ERROR) asyncio: Task was destroyed but it is pending!
task: <Task pending coro=<Pool.release.<locals>._release_impl() running at /usr/local/lib/python3.6/site-packages/asyncpg/pool.py:465> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f830f0891f8>()]> cb=[shield.<locals>._done_callback() at /usr/local/lib/python3.6/asyncio/tasks.py:672]>
[2017-11-01 00:09:25,808] (ERROR) asyncio: Fatal write error on socket transport
protocol: <asyncpg.protocol.protocol.Protocol object at 0x7f830f6bb588>
transport: <_SelectorSocketTransport fd=9>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 192, in release
await self._con.reset()
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 986, in reset
await self.execute(reset_query)
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 238, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 296, in query
AttributeError: 'weakref' object has no attribute 'cline_in_traceback'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/asyncio/selector_events.py", line 762, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
Exception ignored in: <coroutine object Pool.release.<locals>._release_impl at 0x7f830f197678>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 465, in _release_impl
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 203, in release
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 192, in release
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 986, in reset
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 238, in execute
File "asyncpg/protocol/protocol.pyx", line 296, in query
AttributeError: 'weakref' object has no attribute 'cline_in_traceback'
[2017-11-01 00:09:25,813] (ERROR) asyncio: Fatal write error on socket transport
protocol: <asyncpg.protocol.protocol.Protocol object at 0x7f830f6bb6d8>
transport: <_SelectorSocketTransport fd=10>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 192, in release
await self._con.reset()
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 986, in reset
await self.execute(reset_query)
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 238, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 296, in query
AttributeError: 'weakref' object has no attribute 'cline_in_traceback'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/asyncio/selector_events.py", line 762, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
Exception ignored in: <coroutine object Pool.release.<locals>._release_impl at 0x7f830f197990>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 465, in _release_impl
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 203, in release
File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 192, in release
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 986, in reset
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 238, in execute
File "asyncpg/protocol/protocol.pyx", line 296, in query
AttributeError: 'weakref' object has no attribute 'cline_in_traceback'
[2017-11-01 00:09:25,817] (ERROR) asyncio: Task was destroyed but it is pending!
task: <Task pending coro=<DefaultModule.run() running at ./src/models/module.py:52> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f830f1093a8>()]>>
[2017-11-01 00:09:25,821] (ERROR) asyncio: Task was destroyed but it is pending!
task: <Task pending coro=<DefaultModule.run() running at ./src/models/module.py:52> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f830f089198>()]>>
[2017-11-01 00:09:25,825] (ERROR) DatabaseController: Traceback (most recent call last):
File "./src/controllers/database.py", line 102, in fetch
_logger.info("Conflict with recovery, retrying")
GeneratorExit
```
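The `GeneratorExit` at the end of the `DatabaseController` traceback is not asyncpg-specific: it is what Python raises into a coroutine that gets closed while still suspended, which is exactly what happens to the tasks reported as "Task was destroyed but it is pending!". A minimal stdlib-only sketch of that mechanism (`Pending` and this `fetch` body are illustrative stand-ins, not the real code):

```python
witness = []

class Pending:
    """Awaitable that suspends and never resumes on its own."""
    def __await__(self):
        yield

async def fetch():
    try:
        await Pending()  # stands in for the never-completing database query
    except GeneratorExit:
        witness.append("GeneratorExit")  # what the fetch() log line reports
        raise

coro = fetch()
coro.send(None)  # advance to the first suspension point
coro.close()     # what happens to an abandoned, still-pending coroutine
print(witness)   # ['GeneratorExit']
```

Explicitly cancelling (and then awaiting) the futures still pending after `asyncio.wait(..., timeout=30)` avoids both the destroyed-task warning and this exception path.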
I've tried adding some timeouts in other places, but there's nothing I can do to make it go back to the pool. I even tried to add some logs trying to track where it's happening, but couldn't find it.
A simple version of the application is:
```python
async def run():
queries = []
futures = [database_controller.fetch(query) for query in queries]
await asyncio.wait(futures, timeout=30) # Connection drops while executing this line
await database_controller.print_counts() # Prints a queue size smaller than the pool max size when the connection was lost
await asyncio.sleep(1000) # Interrupting the execution here after waiting a lot more than every timeout set in the code
``` | closed | 2017-11-01T00:31:02Z | 2017-11-15T20:05:01Z | https://github.com/MagicStack/asyncpg/issues/220 | [
"bug"
] | GabrielSalla | 17 |
deepspeedai/DeepSpeed | deep-learning | 5,655 | [BUG] Model gets stuck at trainer.train() and never starts training | **Describe the bug**
Dataset loading works without any problems, but the model stays stuck at trainer.train() in finetune.py.
Package environment:
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 2.1.0 pypi_0 pypi
accelerate 0.30.1 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
aiofiles 23.2.1 pypi_0 pypi
altair 5.3.0 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
anyio 4.4.0 pypi_0 pypi
attrs 23.2.0 pypi_0 pypi
binutils_impl_linux-64 2.36.1 h193b22a_2 conda-forge
binutils_linux-64 2.36 hf3e587d_10 conda-forge
bitsandbytes-cuda114 0.26.0.post2 pypi_0 pypi
blessed 1.20.0 pypi_0 pypi
blinker 1.8.2 pypi_0 pypi
blis 0.7.11 pypi_0 pypi
bzip2 1.0.8 h5eee18b_6 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2024.6.2 hbcca054_0 conda-forge
cachetools 5.3.3 pypi_0 pypi
catalogue 2.0.10 pypi_0 pypi
certifi 2024.2.2 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
click 8.1.7 pypi_0 pypi
cloudpathlib 0.16.0 pypi_0 pypi
cmake 3.25.0 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
confection 0.1.5 pypi_0 pypi
contourpy 1.2.1 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
cymem 2.0.8 pypi_0 pypi
deepspeed 0.14.4+eda5075 pypi_0 pypi
editdistance 0.6.2 pypi_0 pypi
einops 0.7.0 pypi_0 pypi
et-xmlfile 1.1.0 pypi_0 pypi
exceptiongroup 1.2.1 pypi_0 pypi
fairscale 0.4.0 pypi_0 pypi
fastapi 0.110.3 pypi_0 pypi
ffmpy 0.3.2 pypi_0 pypi
filelock 3.14.0 pypi_0 pypi
flask 3.0.3 pypi_0 pypi
fonttools 4.53.0 pypi_0 pypi
fsspec 2024.5.0 pypi_0 pypi
gcc_impl_linux-64 11.2.0 h82a94d6_16 conda-forge
gcc_linux-64 11.2.0 h39a9532_10 conda-forge
gpustat 1.1.1 pypi_0 pypi
gradio 4.26.0 pypi_0 pypi
gradio-client 0.15.1 pypi_0 pypi
grpcio 1.64.1 pypi_0 pypi
gxx_impl_linux-64 11.2.0 h82a94d6_16 conda-forge
gxx_linux-64 11.2.0 hacbe6df_10 conda-forge
h11 0.14.0 pypi_0 pypi
hjson 3.1.0 pypi_0 pypi
httpcore 1.0.5 pypi_0 pypi
httpx 0.27.0 pypi_0 pypi
huggingface-hub 0.23.2 pypi_0 pypi
idna 3.7 pypi_0 pypi
importlib-resources 6.4.0 pypi_0 pypi
install 1.3.5 pypi_0 pypi
itsdangerous 2.2.0 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
joblib 1.4.2 pypi_0 pypi
jsonlines 4.0.0 pypi_0 pypi
jsonschema 4.22.0 pypi_0 pypi
jsonschema-specifications 2023.12.1 pypi_0 pypi
kernel-headers_linux-64 2.6.32 he073ed8_17 conda-forge
kiwisolver 1.4.5 pypi_0 pypi
langcodes 3.4.0 pypi_0 pypi
language-data 1.2.0 pypi_0 pypi
ld_impl_linux-64 2.36.1 hea4e1c9_2 conda-forge
libaio 0.9.3 pypi_0 pypi
libffi 3.4.4 h6a678d5_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgcc-devel_linux-64 11.2.0 h0952999_16 conda-forge
libgcc-ng 13.2.0 h77fa898_7 conda-forge
libgomp 13.2.0 h77fa898_7 conda-forge
libsanitizer 11.2.0 he4da1e4_16 conda-forge
libstdcxx-devel_linux-64 11.2.0 h0952999_16 conda-forge
libstdcxx-ng 13.2.0 hc0a3c3a_7 conda-forge
libuuid 1.41.5 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lit 15.0.7 pypi_0 pypi
lxml 5.2.2 pypi_0 pypi
marisa-trie 1.1.1 pypi_0 pypi
markdown 3.6 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markdown2 2.4.10 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
matplotlib 3.7.4 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
more-itertools 10.1.0 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
murmurhash 1.0.10 pypi_0 pypi
ncurses 6.4 h6a678d5_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
networkx 3.3 pypi_0 pypi
ninja 1.10.0 pypi_0 pypi
ninja-base 1.10.2 hd09550d_5 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nltk 3.8.1 pypi_0 pypi
numpy 1.24.4 pypi_0 pypi
nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
nvidia-ml-py 12.535.161 pypi_0 pypi
nvidia-nccl-cu12 2.18.1 pypi_0 pypi
nvidia-nvjitlink-cu12 12.5.40 pypi_0 pypi
nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
nvitop 1.3.2 pypi_0 pypi
opencv-python-headless 4.5.5.64 pypi_0 pypi
openpyxl 3.1.2 pypi_0 pypi
openssl 3.3.1 h4ab18f5_0 conda-forge
orjson 3.10.3 pypi_0 pypi
packaging 23.2 pypi_0 pypi
pandas 2.2.2 pypi_0 pypi
peft 0.11.1 pypi_0 pypi
pillow 10.1.0 pypi_0 pypi
pip 24.0 py310h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
portalocker 2.8.2 pypi_0 pypi
preshed 3.0.9 pypi_0 pypi
protobuf 4.25.0 pypi_0 pypi
psutil 5.9.8 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pydantic 2.7.2 pypi_0 pypi
pydantic-core 2.18.3 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pynvml 11.5.0 pypi_0 pypi
pyparsing 3.1.2 pypi_0 pypi
pyproject 1.3.1 pypi_0 pypi
python 3.10.14 h955ad1f_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python-dateutil 2.9.0.post0 pypi_0 pypi
python-multipart 0.0.9 pypi_0 pypi
pytz 2024.1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
readline 8.2 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
referencing 0.35.1 pypi_0 pypi
regex 2024.5.15 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
rich 13.7.1 pypi_0 pypi
rpds-py 0.18.1 pypi_0 pypi
ruff 0.4.7 pypi_0 pypi
sacrebleu 2.3.2 pypi_0 pypi
safetensors 0.4.3 pypi_0 pypi
seaborn 0.13.0 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
sentencepiece 0.1.99 pypi_0 pypi
setuptools 69.5.1 py310h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
shellingham 1.5.4 pypi_0 pypi
shortuuid 1.0.11 pypi_0 pypi
six 1.16.0 pypi_0 pypi
smart-open 6.4.0 pypi_0 pypi
sniffio 1.3.1 pypi_0 pypi
socksio 1.0.0 pypi_0 pypi
spacy 3.7.2 pypi_0 pypi
spacy-legacy 3.0.12 pypi_0 pypi
spacy-loggers 1.0.5 pypi_0 pypi
sqlite 3.45.3 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
srsly 2.4.8 pypi_0 pypi
starlette 0.37.2 pypi_0 pypi
sympy 1.12.1 pypi_0 pypi
sysroot_linux-64 2.12 he073ed8_17 conda-forge
tabulate 0.9.0 pypi_0 pypi
tensorboard 2.16.2 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tensorboardx 1.8 pypi_0 pypi
termcolor 2.4.0 pypi_0 pypi
thinc 8.2.3 pypi_0 pypi
timm 0.9.10 pypi_0 pypi
tk 8.6.14 h39e8969_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tokenizers 0.19.1 pypi_0 pypi
tomlkit 0.12.0 pypi_0 pypi
toolz 0.12.1 pypi_0 pypi
torch 2.1.2+cu118 pypi_0 pypi
torchaudio 2.1.2+cu118 pypi_0 pypi
torchvision 0.16.2+cu118 pypi_0 pypi
tqdm 4.66.1 pypi_0 pypi
transformers 4.40.0 pypi_0 pypi
triton 2.1.0 pypi_0 pypi
typer 0.9.4 pypi_0 pypi
typing-extensions 4.8.0 pypi_0 pypi
tzdata 2024.1 pypi_0 pypi
urllib3 2.2.1 pypi_0 pypi
uvicorn 0.24.0.post1 pypi_0 pypi
wasabi 1.1.3 pypi_0 pypi
wcwidth 0.2.13 pypi_0 pypi
weasel 0.3.4 pypi_0 pypi
websockets 11.0.3 pypi_0 pypi
werkzeug 3.0.3 pypi_0 pypi
wheel 0.43.0 py310h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xz 5.4.6 h5eee18b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zlib 1.2.13 h5eee18b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ds_report:
[2024-06-13 11:43:07,921] [WARNING] [real_accelerator.py:162:get_accelerator] Setting accelerator to CPU. If you have GPU or other accelerator, we were unable to detect it.
[2024-06-13 11:43:07,982] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cpu (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
deepspeed_not_implemented [NO] ....... [OKAY]
deepspeed_ccl_comm ..... [NO] ....... [OKAY]
deepspeed_shm_comm ..... [NO] ....... [OKAY]
cpu_adam ............... [YES] ...... [OKAY]
fused_adam ............. [YES] ...... [OKAY]
Output:
prepare trainer
<class 'trainer.CPMTrainer'>
trainer ok
Errors/warnings:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
max_steps is given, it will override any value given in num_train_epochs
max_steps is given, it will override any value given in num_train_epochs
max_steps is given, it will override any value given in num_train_epochs
max_steps is given, it will override any value given in num_train_epochs
Using /public/home/lzu2/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Using /public/home/lzu2/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Using /public/home/lzu2/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...Using /public/home/lzu2/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
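One line in the log deserves attention here: "Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang", since a hang inside trainer.train() is exactly the failure mode that warning describes. A stdlib-only sketch of what such a version check does (the real check lives inside the Trainer; this parsing is illustrative, and `platform.release()` is what would supply the running kernel's release string):

```python
def kernel_below(release: str, minimum=(5, 5)) -> bool:
    """Return True if a Linux kernel release string is older than `minimum`."""
    major, minor = (int(part) for part in release.split("-")[0].split(".")[:2])
    return (major, minor) < minimum

print(kernel_below("3.10.0"))     # True  -> the warning fires on this kernel
print(kernel_below("5.15.0-91"))  # False -> recent enough, no warning
```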
Code section:
```python
print("prepare trainer")
trainer = CPMTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    **data_module,
)
print(type(trainer))
print("trainer ok")
trainer.train()
trainer.save_state()
print("trainer success")
``` | open | 2024-06-13T03:47:35Z | 2024-06-13T08:48:22Z | https://github.com/deepspeedai/DeepSpeed/issues/5655 | [
"bug",
"training"
] | limllzu | 0 |
suitenumerique/docs | django | 661 | Numchild not maintained when a document is soft deleted | ## Bug Report
**Problematic behavior**
When a document is soft deleted and this document is a child, its parent's numchild field is left unchanged.
**Expected behavior/code**
When a document is soft deleted, its parent's numchild field should be decremented.
**Steps to Reproduce**
```
def test_models_documents_numchild():
document = factories.DocumentFactory()
assert document.numchild == 0
factories.DocumentFactory(parent=document)
assert document.numchild == 1
to_delete = factories.DocumentFactory(parent=document)
assert document.numchild == 2
factories.DocumentFactory()
assert document.numchild == 2
to_delete.soft_delete()
document.refresh_from_db()
assert document.numchild == 1
```
| closed | 2025-02-24T15:08:12Z | 2025-03-19T09:23:03Z | https://github.com/suitenumerique/docs/issues/661 | [
"bug",
"backend"
] | lunika | 1 |
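The invariant exercised by the test above can be sketched with a minimal in-memory model. This is plain Python for illustration only: the real project uses Django tree models, and the `soft_delete` bookkeeping below is the expected (currently missing) behavior, not the actual implementation:

```python
class Document:
    """Toy stand-in for the real model: tracks numchild like the test expects."""
    def __init__(self, parent=None):
        self.parent = parent
        self.numchild = 0
        self.deleted = False
        if parent is not None:
            parent.numchild += 1  # mirrors what tree insertion maintains

    def soft_delete(self):
        if self.deleted:
            return
        self.deleted = True
        if self.parent is not None:
            self.parent.numchild -= 1  # the missing bookkeeping from the report

document = Document()
Document(parent=document)
to_delete = Document(parent=document)
assert document.numchild == 2
to_delete.soft_delete()
assert document.numchild == 1  # the expected (currently failing) behavior
```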
lux-org/lux | jupyter | 142 | Improve error message when values specified as attributes | The warning message shown when values are specified without attributes is not very interpretable.

| closed | 2020-11-17T12:18:19Z | 2020-11-19T01:11:46Z | https://github.com/lux-org/lux/issues/142 | [] | dorisjlee | 1 |
voila-dashboards/voila | jupyter | 685 | CI timeout on many_iopub_messages | Still seeing this failing on CI.
E.g. from https://travis-ci.org/github/voila-dashboards/voila/jobs/715151469 we see:
```
WARNING traitlets:manager.py:510 Notebook many_iopub_messages.ipynb is not trusted
WARNING traitlets:client.py:612 Timeout waiting for IOPub output
``` | open | 2020-08-31T14:06:16Z | 2020-08-31T14:06:16Z | https://github.com/voila-dashboards/voila/issues/685 | [] | maartenbreddels | 0 |
cvat-ai/cvat | computer-vision | 8,686 | Issues related to tasks with honeypots | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
- ~~Check if `GET /jobs` can be optimized for tasks with gt_pool validation mode (e.g. in the case of 500 jobs it takes 17s)~~


- When updating `disabled_frames` in task validation layout, outdated data is returned in the response
- Optimize `PATCH /tasks/id/validation_layout`
For instance, when disabling one validation frame and shuffling honeypots:
Request duration: 114254 ms

### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
- git commit: 1e7ff33
```
| closed | 2024-11-12T13:44:31Z | 2024-12-19T16:52:11Z | https://github.com/cvat-ai/cvat/issues/8686 | [
"bug"
] | Marishka17 | 1 |
tensorpack/tensorpack | tensorflow | 1,523 | Issue when using automatic mixed precision in training with evaluation callback | ### 1. What you did:
I tried to use automatic mixed precision when training a MaskRCNN model via a graph rewrite. As described here: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite, I added the following line at the end of `GeneralizedRCNN.optimizer()` in generalized_rcnn.py: `opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)`
### 2. What you observed:
When I train the model without evaluation callback, there is no issue at all. Once it is trained, if I load the model with OfflinePredictor, it also works well. However, if I train the model with evaluation callback, I get the following error during the first evaluation:
```
InternalError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
1364 try:
-> 1365 return fn(*args)
1366 except errors.OpError as e:
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
1349 return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1350 target_list, run_metadata)
1351
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
1442 fetch_list, target_list,
-> 1443 run_metadata)
1444
InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(12032000, 1), b.shape=(1, 4), m=12032000, n=4, k=1
[[{{node tower-pred-0/fpn/upsample_lat4/Tensordot/MatMul}}]]
(1) Internal: Blas GEMM launch failed : a.shape=(12032000, 1), b.shape=(1, 4), m=12032000, n=4, k=1
[[{{node tower-pred-0/fpn/upsample_lat4/Tensordot/MatMul}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
InternalError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/tensorpack/train/interface.py in launch_train_with_config(config, trainer)
97 starting_epoch=config.starting_epoch,
98 max_epoch=config.max_epoch,
---> 99 extra_callbacks=config.extra_callbacks)
100
101
/opt/conda/lib/python3.7/site-packages/tensorpack/train/base.py in train_with_defaults(self, _sentinel, callbacks, monitors, session_creator, session_init, steps_per_epoch, starting_epoch, max_epoch, extra_callbacks)
340 self.train(callbacks, monitors,
341 session_creator, session_init,
--> 342 steps_per_epoch, starting_epoch, max_epoch)
343
344 def __new__(cls, *args, **kwargs):
/opt/conda/lib/python3.7/site-packages/tensorpack/train/base.py in train(self, callbacks, monitors, session_creator, session_init, steps_per_epoch, starting_epoch, max_epoch)
312 self.setup_callbacks(callbacks, monitors)
313 self.initialize(session_creator, session_init)
--> 314 self.main_loop(steps_per_epoch, starting_epoch, max_epoch)
315
316 def train_with_defaults(
/opt/conda/lib/python3.7/site-packages/tensorpack/utils/argtools.py in wrapper(*args, **kwargs)
166 cache.add(func)
167
--> 168 return func(*args, **kwargs)
169
170 return wrapper
/opt/conda/lib/python3.7/site-packages/tensorpack/train/base.py in main_loop(self, steps_per_epoch, starting_epoch, max_epoch)
284
285 # trigger epoch outside the timing region.
--> 286 self._callbacks.trigger_epoch()
287 logger.info("Training has finished!")
288 except (StopTraining, tf.errors.OutOfRangeError) as e:
/opt/conda/lib/python3.7/site-packages/tensorpack/callbacks/base.py in trigger_epoch(self)
154
155 def trigger_epoch(self):
--> 156 self._trigger_epoch()
157
158 def _trigger_epoch(self):
/opt/conda/lib/python3.7/site-packages/tensorpack/callbacks/group.py in _trigger_epoch(self)
93 display_name = str(cb)
94 with tm.timed_callback(display_name):
---> 95 cb.trigger_epoch()
96 tm.log()
97
/opt/conda/lib/python3.7/site-packages/tensorpack/callbacks/base.py in trigger_epoch(self)
154
155 def trigger_epoch(self):
--> 156 self._trigger_epoch()
157
158 def _trigger_epoch(self):
/opt/conda/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
433 raise CancelledError()
434 elif self._state == FINISHED:
--> 435 return self.__get_result()
436 else:
437 raise TimeoutError()
/opt/conda/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
/opt/conda/lib/python3.7/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
/home/jovyan/eval.py in predict_dataflow()
--> 157 outputs = predict_image(img, model_func)
/home/jovyan/eval.py in predict_image(img, model_func)
---> 46 outputs = model_func(img)
/opt/conda/lib/python3.7/site-packages/tensorpack/predict/base.py in __call__(self, *dp)
39 list[array]: list of outputs
40 """
---> 41 output = self._do_call(dp)
42 if self.return_input:
43 return (dp, output)
/opt/conda/lib/python3.7/site-packages/tensorpack/predict/base.py in _do_call(self, dp)
134 # run_metadata = tf.RunMetadata()
135 # options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
--> 136 return self._callable(*dp)
137
138
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _generic_run(*feed_args, **kwargs)
1230 feed: feed_val for feed, feed_val in zip(feed_list, feed_args)
1231 }
-> 1232 return self.run(fetches, feed_dict=feed_dict, **kwargs)
1233
1234 return _generic_run
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
954 try:
955 result = self._run(None, fetches, feed_dict, options_ptr,
--> 956 run_metadata_ptr)
957 if run_metadata:
958 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1178 if final_fetches or final_targets or (handle and feed_dict_tensor):
1179 results = self._do_run(handle, final_targets, final_fetches,
-> 1180 feed_dict_tensor, options, run_metadata)
1181 else:
1182 results = []
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1357 if handle is None:
1358 return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1359 run_metadata)
1360 else:
1361 return self._do_call(_prun_fn, handle, feeds, fetches)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
1382 '\nsession_config.graph_options.rewrite_options.'
1383 'disable_meta_optimizer = True')
-> 1384 raise type(e)(node_def, op, message)
1385
1386 def _extend_graph(self):
InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(12032000, 1), b.shape=(1, 4), m=12032000, n=4, k=1
[[node tower-pred-0/fpn/upsample_lat4/Tensordot/MatMul (defined at /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
(1) Internal: Blas GEMM launch failed : a.shape=(12032000, 1), b.shape=(1, 4), m=12032000, n=4, k=1
[[node tower-pred-0/fpn/upsample_lat4/Tensordot/MatMul (defined at /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.
Original stack trace for 'tower-pred-0/fpn/upsample_lat4/Tensordot/MatMul':
File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/opt/conda/lib/python3.7/site-packages/traitlets/config/application.py", line 845, in launch_instance
app.start()
File "/opt/conda/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 612, in start
self.io_loop.start()
File "/opt/conda/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 199, in start
self.asyncio_loop.run_forever()
File "/opt/conda/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/opt/conda/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/opt/conda/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/opt/conda/lib/python3.7/site-packages/tornado/ioloop.py", line 688, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/opt/conda/lib/python3.7/site-packages/tornado/ioloop.py", line 741, in _run_callback
ret = callback()
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 814, in inner
self.ctx_run(self.run)
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 775, in run
yielded = self.gen.send(value)
File "/opt/conda/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 374, in dispatch_queue
yield self.process_one()
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 250, in wrapper
runner = Runner(ctx_run, result, future, yielded)
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 741, in __init__
self.ctx_run(self.run)
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 775, in run
yielded = self.gen.send(value)
File "/opt/conda/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 358, in process_one
yield gen.maybe_future(dispatch(*args))
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/opt/conda/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/opt/conda/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 538, in execute_request
user_expressions, allow_stdin,
File "/opt/conda/lib/python3.7/site-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/opt/conda/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 302, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/conda/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 539, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2895, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
return runner(coro)
File "/opt/conda/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3166, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3357, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-f9d37edbca59>", line 23, in <module>
commit_hash = "unknown",
File "/home/jovyan/train.py", line 315, in train_mask_rcnn
launch_train_with_config(traincfg, trainer)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/train/interface.py", line 99, in launch_train_with_config
extra_callbacks=config.extra_callbacks)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/train/base.py", line 342, in train_with_defaults
steps_per_epoch, starting_epoch, max_epoch)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/train/base.py", line 312, in train
self.setup_callbacks(callbacks, monitors)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/utils/argtools.py", line 168, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/train/base.py", line 209, in setup_callbacks
self._callbacks.setup_graph(weakref.proxy(self))
File "/opt/conda/lib/python3.7/site-packages/tensorpack/callbacks/base.py", line 59, in setup_graph
self._setup_graph()
File "/opt/conda/lib/python3.7/site-packages/tensorpack/callbacks/group.py", line 68, in _setup_graph
cb.setup_graph(self.trainer)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/callbacks/base.py", line 59, in setup_graph
self._setup_graph()
File "/home/jovyan/eval.py", line 305, in _setup_graph
self.predictors = [self._build_predictor(k % num_gpu) for k in range(self.num_predictor)]
File "/home/jovyan/eval.py", line 305, in <listcomp>
self.predictors = [self._build_predictor(k % num_gpu) for k in range(self.num_predictor)]
File "/home/jovyan/eval.py", line 319, in _build_predictor
return self.trainer.get_predictor(self._in_names, self._out_names, device=idx)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/train/tower.py", line 136, in get_predictor
self.tower_func(*input.get_input_tensors())
File "/opt/conda/lib/python3.7/site-packages/tensorpack/tfutils/tower.py", line 291, in __call__
output = self._tower_fn(*args)
File "/home/jovyan/modeling/generalized_rcnn.py", line 129, in build_graph
features = self.backbone(image)
File "/home/jovyan/modeling/generalized_rcnn.py", line 307, in backbone
p23456 = fpn_model('fpn', c2345)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/models/registry.py", line 173, in wrapped_func
outputs = func(*args, **actual_args)
File "/home/jovyan/modeling/model_fpn.py", line 65, in fpn_model
lat = lat + upsample2x('upsample_lat{}'.format(6 - idx), lat_sum_5432[-1])
File "/home/jovyan/modeling/model_fpn.py", line 51, in upsample2x
data_format='channels_first')
File "/opt/conda/lib/python3.7/site-packages/tensorpack/models/registry.py", line 173, in wrapped_func
outputs = func(*args, **actual_args)
File "/opt/conda/lib/python3.7/site-packages/tensorpack/models/pool.py", line 127, in FixedUnPooling
ret = tf.tensordot(x, mat, axes=1) # bxcxhxwxshxsw
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 4071, in tensordot
ab_matmul = matmul(a_reshape, b_reshape)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 2754, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 6136, in mat_mul
name=name)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
```
### 4. Your environment:
```
sys.platform linux
Python 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0]
Tensorpack v0.10.1-0-g8f831349
Numpy 1.19.5
TensorFlow 1.15.5/v1.15.5-1-g7d0c58b5326
TF Compiler Version 7.3.1 20180303
TF CUDA support True
TF MKL support False
TF XLA support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.450.51.06
CUDA /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudart.so.11.0.221
CUDNN /usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
NCCL /usr/lib/x86_64-linux-gnu/libnccl.so.2.7.8
CUDA_VISIBLE_DEVICES Unspecified
GPU 0 Tesla T4
Free RAM 21.86/29.45 GB
CPU Count 8
Horovod 0.21.3
cv2 4.4.0
msgpack 1.0.2
python-prctl False
```
**Question**: is it possible to run the evaluation callback while training with automatic mixed precision (given that it already works for inference outside of training), or are there changes needed to make it work?
| open | 2021-04-26T10:55:59Z | 2021-05-04T12:27:38Z | https://github.com/tensorpack/tensorpack/issues/1523 | [] | martinjammes | 0 |
pytorch/pytorch | numpy | 149,177 | [Dist] Async op isend and irecv bug | ### 🐛 Describe the bug

I wrote a pipeline-parallel (PP) framework for inference (for some reason, I can't post the code in the issue), and I found that the resulting timeline is not correct: the isend/irecv behavior is a bit odd, as the picture shows.
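For readers unfamiliar with the async-op contract being questioned: each call returns a handle immediately, and `wait()` blocks until the transfer completes. A stdlib-only model of that contract using threads and a queue in place of torch.distributed (`Work`, `isend`, and `irecv` here are illustrative stand-ins for `dist.Work`, `dist.isend`, and `dist.irecv`; the real NCCL scheduling shown in the trace may differ, which is the point of the report):

```python
import queue
import threading

class Work:
    """Handle returned by an async op; wait() blocks until completion."""
    def __init__(self, fn):
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(fn,))
        self._thread.start()

    def _run(self, fn):
        self._result = fn()

    def wait(self):
        self._thread.join()
        return self._result

channel = queue.Queue()  # stands in for the p2p link between two ranks

def isend(tensor):
    return Work(lambda: channel.put(tensor))  # returns immediately

def irecv():
    return Work(lambda: channel.get())        # returns immediately

send_req = isend([1.0, 2.0])
recv_req = irecv()
send_req.wait()
received = recv_req.wait()
print(received)  # [1.0, 2.0]
```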
### Versions
cuda version: 12.2
torch version: 2.4.1
nccl version: 2.20.5 (from torch.cuda.nccl.version())
OS: Linux g340-cd51-2800-18c3-adff-a69e-f1f5 5.4.143.bsk.8-amd64 #5.4.143.bsk.8 SMP Debian 5.4.143.bsk.8 Wed Jul 20 08:43:36 UTC x86_64 GNU/Linux
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | open | 2025-03-14T04:05:16Z | 2025-03-20T09:26:25Z | https://github.com/pytorch/pytorch/issues/149177 | [
"oncall: distributed",
"module: c10d"
] | feifei-111 | 2 |
holoviz/panel | plotly | 7,667 | Plotly Maps: Copyright Notice Field overlaps with other Panel Elements |
#### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
panel==1.5.2
plotly==5.24.1
```
</details>
#### Description of expected behavior and the observed behavior
When rendering a Plotly geographic map (such as the [`scatter_map`](https://plotly.com/python-api-reference/generated/plotly.express.scatter_map.html)), the copyright notice for the map tile data is rendered _below_ the map - and overlaps with other Panel elements (shown for example here is a markdown header `Some Caption`):
<img width="600" alt="Image" src="https://github.com/user-attachments/assets/8cc84b43-8d4b-4e85-a738-a831c2d2f1aa" />
The same map, when rendered outside of a Panel application is neatly shown _inside_ the map:
<img width="600" alt="Image" src="https://github.com/user-attachments/assets/0bfe6b97-7244-48f3-959d-8e3bd288df0e" />
The relevant element of a Plotly map is named the `maplibregl-ctrl-attrib-button`. While the function that draws the text and box [is defined here](https://github.com/plotly/plotly.js/blob/4097d1c54a291c5b2df0eb9b9f9a3b65eace04f1/src/plots/map/index.js#L109), I could not find why the position is different if the map is rendered via Panel.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import pandas as pd
import plotly.express as px
pn.extension("plotly")
#
fig = px.scatter_map(
lat=[],
lon=[],
zoom=0,
height=300
)
fig.update_layout(map_style="open-street-map")
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
plotly_pane = pn.Column(
pn.pane.Plotly(fig),
'# Some Caption',
)
plotly_pane.servable()
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
- [ ] I may be interested in making a pull request to address this
| closed | 2025-01-25T14:45:20Z | 2025-02-14T08:40:33Z | https://github.com/holoviz/panel/issues/7667 | [] | michaelweinold | 3 |
graphistry/pygraphistry | pandas | 18 | Cannot bind nodes/edges in Plotter | `pygraphistry.bind(...).edges(..)` fails because there's both a field `edges` and method `edges`.
- Suggestion 1: make the fields `pygraphistry.bindings.edges`.
- Suggestion 2: make the methods return `self` when a value is set, and return the binding when no value is passed in.
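Suggestion 2 is the classic combined getter/setter (jQuery-style) pattern; a minimal plain-Python sketch (a hypothetical `Plotter` class, not the real pygraphistry API) of how it avoids the field/method name collision:

```python
class Plotter:
    """Hypothetical sketch of the combined getter/setter pattern."""

    def __init__(self):
        self._edges = None  # store the binding under a private name

    def edges(self, value=None):
        # No argument: act as a getter and return the current binding.
        if value is None:
            return self._edges
        # With an argument: act as a setter and return self for chaining.
        self._edges = value
        return self


p = Plotter().edges([("a", "b"), ("b", "c")])
print(p.edges())  # [('a', 'b'), ('b', 'c')]
```

With this pattern, `edges` no longer needs to exist as both a plain attribute and a method under the same name, so `bind(...).edges(...)` chains cleanly.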
| closed | 2015-08-08T18:08:59Z | 2015-08-10T21:52:50Z | https://github.com/graphistry/pygraphistry/issues/18 | [
"bug"
] | lmeyerov | 0 |
Lightning-AI/pytorch-lightning | pytorch | 19,751 | Validation does not produce any output in PyTorch Lightning using my UNetTestModel | ### Bug description
I'm trying to validate my model using PyTorch Lightning, but no output or logs are generated during the validation process, despite setting up everything correctly.

And this is my model part:
```python
class UNetTestModel(pl.LightningModule, HyperparametersMixin):
    def __init__(
        self,
        encoder_name='resnet50',
        encoder_weights='imagenet',
        in_channels=1,
        classes=14,
        loss_fn=DiceCELossWithKL(softmax=True, lambda_dice=0.85, lambda_ce=0.15, lambda_kl=2.0, to_onehot_y=True, include_background=True),
        loss_function='DiceCELossWithKL',
        learning_rate=3e-3,
    ):
        super().__init__()
        self.save_hyperparameters()
        self.model = smp.Unet(
            encoder_name=encoder_name,
            encoder_weights=encoder_weights,
            in_channels=in_channels,
            classes=classes,
        )
        self.loss_fn = loss_fn
        self.val_accuracy = torchmetrics.classification.Accuracy(task="multiclass", num_classes=14, average='macro', ignore_index=0)
        self.val_accuracy_classwise = torchmetrics.classification.Accuracy(task="multiclass", num_classes=14, average='none', ignore_index=0)
        self.Dice = torchmetrics.classification.Dice(multiclass=True, num_classes=14, average='macro', ignore_index=0)
        self.F1 = torchmetrics.classification.MulticlassF1Score(num_classes=14, average="macro", ignore_index=0)
        self.Jaccard = torchmetrics.classification.MulticlassJaccardIndex(num_classes=14, average="macro", ignore_index=0)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        images, labels = batch
        outputs = self.forward(images)
        loss = self.loss_fn(outputs, labels.unsqueeze(1))
        self.log('train_loss', loss, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        images, labels = batch
        outputs = self.forward(images)
        loss = self.loss_fn(outputs, labels.unsqueeze(1))
        accuracy = self.val_accuracy(outputs, labels)
        Dice = self.Dice(outputs, labels)
        F1 = self.F1(outputs, labels)
        Jaccard = self.Jaccard(outputs, labels)
        acc = self.val_accuracy_classwise(outputs, labels)
        self.log('val_loss', loss, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_accuracy', accuracy, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_F1', F1, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_Dice', Dice, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_Jaccard', Jaccard, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_4', acc[4], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_5', acc[5], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_10', acc[10], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_12', acc[12], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_13', acc[13], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        return {"loss": loss, "accuracy": accuracy}

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure, **kwargs):
        if self.trainer.global_step < 50:
            lr_scale = min(1.0, float(self.trainer.global_step + 1) / 50)
            for pg in optimizer.param_groups:
                pg["lr"] = lr_scale * self.hparams.learning_rate
        optimizer.step(closure=optimizer_closure)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5, eta_min=0.000001, last_epoch=-1)
        return {
            'optimizer': optimizer,
            'lr_scheduler': {
                'scheduler': scheduler,
                'interval': 'epoch',
                'frequency': 1,
            }
        }
```
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```
To view the bug, run the Colab notebook cells (except those marked Opt). The bug can be reproduced in the CheckMetrics cell; it is reproducible on both Kaggle and Colab, which makes it really annoying 😡. I would be much obliged if anyone could help me with this.
https://colab.research.google.com/#fileId=https%3A//storage.googleapis.com/kaggle-colab-exported-notebooks/smu-dataset-dl-update-with-new-dataset-5df7b4b9-0565-494d-b22a-c0306ec0418e.ipynb%3FX-Goog-Algorithm%3DGOOG4-RSA-SHA256%26X-Goog-Credential%3Dgcp-kaggle-com%2540kaggle-161607.iam.gserviceaccount.com/20240410/auto/storage/goog4_request%26X-Goog-Date%3D20240410T014637Z%26X-Goog-Expires%3D259200%26X-Goog-SignedHeaders%3Dhost%26X-Goog-Signature%3D0e9d4ad91ecc6e2fae0a51622b97399160be483c4737424d1584ebdcba2a80b870f32feacc256675774b85db2c72329819040ffa6c923e20835b331d995cfea132418460df7cfba6d261e6e3381354d8ca92188ddba7e502fa71fee33c63ed5d5246df0964c3766b7a26c92b559e3e359f4bc4e78b78edf3114d0d52ab54244f7c28b560f6a31a14389b27cb86837fcfb0579c6784958ab181af41a2a915a57eaa6e0e80bc9acc55bca97cbc0311caa0e870004659e568e2acae6de0af29ff8f08bbc9ebea6118b8b9d48aea9d20593a1e3516763105e0c296679a649968501b481f722936008f893bacf0856e288c202e3124902da7cdf635d174169c05b27a&scrollTo=HbUeiVEzr21d&line=4&uniqifier=1
```
### Error messages and logs

### Environment
basic Colab env with pip-qqq-accessible lightning
### More info
_No response_ | closed | 2024-04-10T07:10:13Z | 2024-09-30T12:44:30Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19751 | [
"bug",
"needs triage"
] | lgy112112 | 0 |
matterport/Mask_RCNN | tensorflow | 2,990 | ModuleNotFoundError: No module named 'parallel_model' | I am experimenting with Mask R-CNN on COCO-format data and getting errors. Here is the code:

```python
import warnings
warnings.filterwarnings('ignore')
import os
import sys
import json
import datetime
import numpy as np
import skimage.draw
import cv2
import random
import math
import re
import time
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
from mrcnn.visualize import display_instances
import mrcnn.model as modellib
from mrcnn.model import log
from mrcnn.config import Config
from mrcnn import model as modellib, utils
# Root directory of the project
#ROOT_DIR = "D:\MRCNN_tensorflow2.7_env\Mask-RCNN"
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
# Path to trained weights file
COCO_WEIGHTS_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Directory to save logs and model checkpoints, if not provided
# through the command line argument --logs
DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs")
```
And here is the error log:

```
ModuleNotFoundError Traceback (most recent call last)
Input In [6], in <cell line: 23>()
21 from mrcnn.visualize import display_images
22 from mrcnn.visualize import display_instances
---> 23 import mrcnn.model as modellib
24 from mrcnn.model import log
25 from mrcnn.config import Config
File /workspace/Zahoor/ResWo/QaBiReIn/MASK_RCNN/MASK_RCNN/Practical/Pract 2/Mask-R-CNN-using-Tensorflow2-main/mrcnn/model.py:32, in <module>
30 from mrcnn import parallel_model
31 import sys
---> 32 from parallel_model import ParallelModel
34 # Requires TensorFlow 2.0+
35 from distutils.version import LooseVersion
ModuleNotFoundError: No module named 'parallel_model'
```
| open | 2023-09-25T08:32:17Z | 2023-09-25T08:50:10Z | https://github.com/matterport/Mask_RCNN/issues/2990 | [] | Zahoor-Ahmad | 1
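For context, the failing line mixes a bare `from parallel_model import ...` with package-qualified `mrcnn` imports. A self-contained sketch (building a throwaway `mrcnn` package in a temp directory — hypothetical, not the real repo) showing why the bare form fails while the package-qualified form works:

```python
import os
import sys
import tempfile

# Build a minimal stand-in package on disk: mrcnn/parallel_model.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mrcnn")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "parallel_model.py"), "w") as f:
    f.write("class ParallelModel:\n    pass\n")

sys.path.insert(0, root)

# Bare import: 'parallel_model' is not a top-level module on sys.path.
try:
    import parallel_model  # noqa: F401
    bare_import_failed = False
except ModuleNotFoundError:
    bare_import_failed = True
print("bare import failed:", bare_import_failed)

# Package-qualified import: resolved through the mrcnn package.
from mrcnn.parallel_model import ParallelModel
print(ParallelModel.__name__)
```

In `mrcnn/model.py` the practical fix would be to drop the redundant bare import on line 32 and rely on the package-qualified form already present on line 30 (`from mrcnn import parallel_model`, or `from mrcnn.parallel_model import ParallelModel`).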
quantmind/pulsar | asyncio | 314 | https://docs.pulsarweb.org/ ERROR 1014 | * **pulsar version**: N/A
* **python version**: N/A
* **platform**: N/A
## Description
It seems that your documentation hosted at https://docs.pulsarweb.org/ is unavailable.
When I try to access it I get the following message:
```
Error 1014 Ray ID: 4598be0f691559cc • 2018-09-13 07:01:25 UTC
CNAME Cross-User Banned
What happened?
You've requested a page on a website that is part of the Cloudflare network. The host is configured as a CNAME across accounts on Cloudflare, which is prohibited by security policy.
```
## Expected behaviour
Documentation page is available.
## Actual behaviour
See description.
## Steps to reproduce
See description.
| open | 2018-09-13T07:12:36Z | 2018-11-08T09:31:21Z | https://github.com/quantmind/pulsar/issues/314 | [] | pierec | 2 |
joouha/euporie | jupyter | 20 | Open in external editor fails | When pressing `e` I'm getting the following warning, and the process never completes.
```python
/1ib/python3.9/site-packages/euporie/commands/base.py:165: RuntimeWarning: coroutine 'edit_in_external_editor' was never awaited
self.handler()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
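This warning is the standard symptom of calling an `async def` function without awaiting or scheduling it — here `self.handler()` invokes the coroutine function `edit_in_external_editor` directly. A minimal stdlib sketch of the bug pattern and the fix (hypothetical names, not euporie's actual code):

```python
import asyncio


async def edit_in_external_editor():
    return "edited"


# Bug pattern: calling the coroutine function only creates a coroutine
# object; nothing runs, and Python later warns it "was never awaited".
coro = edit_in_external_editor()
print(type(coro).__name__)  # coroutine
coro.close()  # close it explicitly so this demo itself doesn't warn

# Fix pattern: await the coroutine, or schedule it on the running loop
# (e.g. with loop.create_task / asyncio.ensure_future inside an app).
result = asyncio.run(edit_in_external_editor())
print(result)  # edited
```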
euporie 1.3.2, OSX, export EDITOR=nvim | closed | 2022-03-26T01:44:52Z | 2022-03-26T14:49:34Z | https://github.com/joouha/euporie/issues/20 | [] | yingzhu146 | 2 |
httpie/http-prompt | api | 162 | --json: error: InvalidSchema: Missing dependencies for SOCKS support. | Happens on my Macbook (Catalina 10.15.2). I couldn't find any documentation on what dependencies I apparently need.
```
$ http-prompt http://httpbin.org
Version: 1.0.0
http://httpbin.org> --proxy http:socks5://localhost:9050
http://httpbin.org> --json
http://httpbin.org> get
--json: error: InvalidSchema: Missing dependencies for SOCKS support.
``` | closed | 2020-01-16T13:15:47Z | 2020-01-16T13:18:05Z | https://github.com/httpie/http-prompt/issues/162 | [] | TheLastProject | 1 |
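The missing dependency is requests' optional SOCKS support, provided by the `pysocks` package (installable as `pip install 'requests[socks]'`). A small check to see whether it is present (an assumption-level sketch; pysocks installs under the import name `socks`):

```python
import importlib.util

# requests raises InvalidSchema("Missing dependencies for SOCKS support")
# when a socks5:// proxy is used but the optional pysocks package is absent.
has_socks = importlib.util.find_spec("socks") is not None
message = ("pysocks is installed" if has_socks
           else "missing -> pip install 'requests[socks]'")
print(message)
```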
iperov/DeepFaceLab | deep-learning | 5,205 | Problem with XSeg training | I opened the XSeg trainer and initially it gave me these exceptions:
https://pastebin.com/EAuGBVnz
After updating the Nvidia driver and rebooting the PC, I tried again and the error changed:

It's an error regarding the saving of model files.
In the end, I solved it by creating a paging file on the HDD where DFL is located.
The problem is that neither my RAM nor my VRAM was full,
so I don't understand why I needed to do that. | closed | 2020-12-21T17:46:19Z | 2023-06-21T20:31:32Z | https://github.com/iperov/DeepFaceLab/issues/5205 | [] | Cioscos | 3
kennethreitz/responder | flask | 145 | API.run(..., debug=True) no use | API._dispatch or API._dispatch_request catches all exceptions, which makes uvicorn's _DebugResponder useless.
All errors only return "Application Error". | closed | 2018-10-24T08:53:06Z | 2018-10-25T22:12:44Z | https://github.com/kennethreitz/responder/issues/145 | [] | sandro-qiang | 9
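A plain-Python sketch of the pattern being described (hypothetical functions, not responder's real code): when an inner dispatcher catches every exception, an outer debug responder never sees the traceback:

```python
def dispatch(handler):
    # Inner layer catches everything, as API._dispatch is described to do,
    # so the error detail never propagates outward.
    try:
        return handler()
    except Exception:
        return "Application Error"


def debug_responder(handler):
    # Outer debug layer: its except branch is unreachable for errors raised
    # inside handler, because dispatch already swallowed them.
    try:
        return dispatch(handler)
    except Exception as exc:
        return f"Traceback: {exc!r}"


def broken():
    raise ValueError("boom")


print(debug_responder(broken))  # Application Error
```

Letting the original exception propagate (or re-raising when `debug=True`) would allow the debug responder to render the traceback instead.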
ydataai/ydata-profiling | data-science | 1,456 | Support numpy 1.24 | ### Missing functionality
Support numpy 1.24
| open | 2023-09-22T18:13:29Z | 2023-12-29T01:08:19Z | https://github.com/ydataai/ydata-profiling/issues/1456 | [
"needs-triage"
] | elgalu | 1 |
LibreTranslate/LibreTranslate | api | 747 | Translate to Chinese is fine but Chinese (Traditional) has serious issues | Chinese and Chinese (Traditional) should be the same language, differing only in character set. It is hard to understand why the translation results differ so much. | open | 2025-02-23T04:16:19Z | 2025-02-23T04:16:30Z | https://github.com/LibreTranslate/LibreTranslate/issues/747 | [
"model improvement"
] | samuel-lau-hk | 0 |
widgetti/solara | fastapi | 915 | Feature Request: Vue 3 support (via component_vue at least) | ## Feature Request
- [ ] The requested feature would break current behaviour
- [ ] I would be interested in opening a PR for this feature
### What problem would this feature solve? Please describe.
Solara currently uses vuetifyjs for frontend components, but that is tied to vue 2. That is getting a bit outdated, and vue 3 offers more features/flexibility and more concise syntax.
I understand that migrating entirely to vue 3 is a bit of work, since I guess you need to migrate ipyvuetify, maybe reacton, and all sorts of other libraries before migrating solara.
### Describe the solution you'd like
But I am wondering.. if anyone is interested in writing their own vuetify templates via the `@solara.component_vue()` decorator..
Would it be possible to either:
- make a new decorator, example `@solara.component_vue3()` that would be vue 3 compatible
- update the existing decorator, to have a kwarg that says this is vue 3 file. e.g. @solara.component_vue('./mytemplate.vue', vue3=True)
I wonder if such a thing is possible.. or are we stuck with vue 2 until solara 2.0 or so.. which I assume is still quite far in the future?
Maybe what I am asking is not possible/practical, but voicing it just in case
### Documentation, Migration Strategy
Easy to document. If one day solara goes full on vue 3, (which would be a breaking change), then simply remove the "new" decorator or keep it with a deprecation warning.
| open | 2024-12-08T14:15:47Z | 2024-12-12T14:35:53Z | https://github.com/widgetti/solara/issues/915 | [
"enhancement"
] | JovanVeljanoski | 1 |
sczhou/CodeFormer | pytorch | 215 | inference_inpainting.py | I used inference_inpainting.py, but I am confused about the result.
I used a white brush to modify the picture, but it did not work, and the original image was fixed | open | 2023-04-24T03:14:07Z | 2023-06-12T23:30:35Z | https://github.com/sczhou/CodeFormer/issues/215 | [] | fallbernana123456 | 5 |
waditu/tushare | pandas | 1,756 | Missing historical adjustment-factor data for 600372.SH and 600732.SH in certain periods | Data issue: the pro_bar() interface raises an error when fetching 1-minute K-line data with adjustment factors for 600372.SH and 600732.SH over a historical period (2009-2016). Judging from the logs and the code, the corresponding adjustment-factor data is not found, and the following code segment returns None. 1-minute data after 2016 is fine.
if adj is not None:
    fcts = api.adj_factor(ts_code=ts_code, start_date=start_date, end_date=end_date)[['trade_date', 'adj_factor']]
    if fcts.shape[0] == 0:
        return None
tushare id: 697269
| open | 2024-11-27T02:54:02Z | 2024-11-27T02:54:02Z | https://github.com/waditu/tushare/issues/1756 | [] | vvmbit | 0 |
Gerapy/Gerapy | django | 5 | English Language Support Feature | Hi @Germey ,
Hope you are doing great. I am deeply happy to see you continuously working so hard to improve the performance and add new features to Gerapy.
I know that this is probably not an ideal question to ask you here on the GitHub issue section, but I was wondering if you would mind letting me know when you expect to have English support for such an excellent framework as Gerapy.
"In our earlier conversation", you said that "I'm Chinese from Beijing, China. 😁 If you feel any inconvenience I'm glad to convert it in the next version.".
I am patiently and enthusiastically looking forward to seeing support for English.
Thank you so much for your dedication, time, and effort in building such an amazing framework.
Thank you.
| closed | 2017-10-22T04:13:56Z | 2018-01-19T05:54:04Z | https://github.com/Gerapy/Gerapy/issues/5 | [] | mtaziz | 5 |
ranaroussi/yfinance | pandas | 1,801 | "DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated" in the yfinance.download function | ### Describe bug
When executing the download function for a list of tickers, the following warning is shown:
[c:\....\venv1\Lib\site-packages\yfinance\base.py:279]
(file:///C:/..../venv1/Lib/site-packages/yfinance/base.py:279):
DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
endDt = pd.to_datetime(_datetime.datetime.utcfromtimestamp(end))
Many thanks! :)
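For reference, the replacement the warning asks for is a one-line change — a stdlib-only sketch (not the actual yfinance patch):

```python
from datetime import datetime, timezone

end = 1_700_000_000  # example Unix timestamp

# Deprecated since Python 3.12 (what base.py currently does):
#   datetime.utcfromtimestamp(end)   -> naive datetime, no tzinfo

# Timezone-aware replacement suggested by the warning:
end_dt = datetime.fromtimestamp(end, tz=timezone.utc)
print(end_dt.isoformat())  # 2023-11-14T22:13:20+00:00
```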
### Simple code that reproduces your problem
stockdata = yf.download(ticker_list_r3000,'2021-1-1', interval="1d", group_by="ticker",auto_adjust=True, threads=True)
where ticker_list_r3000 is simply a long list of tickers included in the Russell3000
### Debug log
-
### Bad data proof
-
### `yfinance` version
0.2.33
### Python version
3.12.0
### Operating system
Windows | open | 2023-12-27T11:52:39Z | 2024-01-01T12:27:42Z | https://github.com/ranaroussi/yfinance/issues/1801 | [] | PeterSchober005 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,284 | Remove GAN loss in pix2pix | Hello, I found that my model performs better after removing the GAN loss, since the noise is reduced. I am wondering whether, after removing the GAN loss from the generator, the model is still a GAN or just a UNet/ResNet model? Thanks! | closed | 2021-05-24T22:16:25Z | 2021-12-08T21:21:45Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1284 | [] | zzhan127 | 1
dask/dask | numpy | 11,412 | arg.divisions == dependencies[0].divisions AssertionError when processing time series data in 1 day divisions | **Describe the issue**:
I'm getting an assert error:
```
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 530, in _divisions
assert arg.divisions == dependencies[0].divisions
AssertionError
```
This happens when trying to process time series data in 1-day divisions.
**Minimal Complete Verifiable Example**:
```python
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client
# Sample data setup
date_range = pd.date_range(start='2023-01-01', end='2023-01-10', freq='1min')
data = {
'timestamp': date_range,
'value': range(len(date_range)),
'upper_bound_enter': [None] * len(date_range),
'vwap': [None] * len(date_range),
'close': [None] * len(date_range),
'low': [None] * len(date_range),
'valid_timestamp': [True] * len(date_range)
}
df = pd.DataFrame(data)
df.set_index('timestamp', inplace=True)
# Convert to Dask DataFrame
bars_1s_trading_hours = dd.from_pandas(df[['close', 'valid_timestamp']], npartitions=8)
sigma_bounds_df = dd.from_pandas(df[['upper_bound_enter', 'vwap']], npartitions=8)
daily_volatility = dd.from_pandas(df[['value']], npartitions=8)
# Repartition by day
bars_1s_trading_hours = bars_1s_trading_hours.repartition(freq='1D')
sigma_bounds_df = sigma_bounds_df.repartition(freq='1D')
daily_volatility = daily_volatility.repartition(freq='1D')
# Function to be applied
def process_by_day_group(sigma_bounds_df_group, bars_1s_trading_hours, daily_volatility):
sigma_bounds_df_group = sigma_bounds_df_group.compute()
bars_1s_trading_hours = bars_1s_trading_hours.compute()
daily_volatility = daily_volatility.compute()
return pd.DataFrame({
'enter_trade': [False] * len(sigma_bounds_df_group),
'exit_trade': [False] * len(sigma_bounds_df_group),
'entry_size_percent': [0.0] * len(sigma_bounds_df_group)
}, index=sigma_bounds_df_group.index)
# Group by date and process each group in parallel
grouped_by_date = sigma_bounds_df.groupby(sigma_bounds_df.index.dt.date)
meta = pd.DataFrame({
'enter_trade': pd.Series(dtype=bool),
'exit_trade': pd.Series(dtype=bool),
'entry_size_percent': pd.Series(dtype=float)
})
results = grouped_by_date.apply(process_by_day_group, bars_1s_trading_hours, daily_volatility, meta=meta)
# Compute results
results = results.compute()
print(results)
```
I'm getting the following:
```
Traceback (most recent call last):
File "/tmp/dask_bug.py", line 50, in <module>
results = results.compute()
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_collection.py", line 480, in compute
out = out.optimize(fuse=fuse)
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_collection.py", line 595, in optimize
return new_collection(self.expr.optimize(fuse=fuse))
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 94, in optimize
return optimize(self, **kwargs)
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 3070, in optimize
return optimize_until(expr, stage)
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 3031, in optimize_until
expr = expr.lower_completely()
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_core.py", line 447, in lower_completely
new = expr.lower_once(lowered)
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_core.py", line 413, in lower_once
new = operand.lower_once(lowered)
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_core.py", line 402, in lower_once
out = expr._lower()
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_groupby.py", line 962, in _lower
df.npartitions,
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 398, in npartitions
return len(self.divisions) - 1
File "/usr/lib/python3.10/functools.py", line 981, in __get__
val = self.func(instance)
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 382, in divisions
return tuple(self._divisions())
File "/home/akos/.local/lib/python3.10/site-packages/dask_expr/_expr.py", line 530, in _divisions
assert arg.divisions == dependencies[0].divisions
AssertionError
```
**Anything else we need to know?**:
I'm not that familiar with Dask, this may be a naive error.
**Environment**:
- Dask version: 2024.9.1
- Python version: Python 3.10.12
- Operating System: Ubuntu 22.04.5 LTS
- Install method (conda, pip, source): pip
| closed | 2024-10-03T11:10:39Z | 2024-10-08T10:41:41Z | https://github.com/dask/dask/issues/11412 | [
"dask-expr"
] | akosmaroy | 0 |
biolab/orange3 | pandas | 6,880 | TypeError: can't compare offset-naive and offset-aware datetimes |
**What's wrong?**
When I want to download a plug-in and click Add-ons to enter the plug-in loading page, I encounter the following problem:
```
Traceback (most recent call last):
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\orangecanvas\application\addons.py", line 510, in <lambda>
lambda config=config: (config, list_available_versions(config)),
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\orangecanvas\application\utils\addons.py", line 377, in list_available_versions
response = session.get(PYPI_API_JSON.format(name=p))
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\session.py", line 102, in get
return self.request('GET', url, params=params, **kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\session.py", line 158, in request
return super().request(method, url, *args, headers=headers, **kwargs) # type: ignore
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\session.py", line 194, in send
actions.update_from_cached_response(cached_response, self.cache.create_key, **kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\policy\actions.py", line 184, in update_from_cached_response
usable_response = self.is_usable(cached_response)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\policy\actions.py", line 152, in is_usable
or (cached_response.is_expired and self._stale_while_revalidate is True)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\models\response.py", line 149, in is_expired
return self.expires is not None and datetime.utcnow() >= self.expires
TypeError: can't compare offset-naive and offset-aware datetimes
```
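The failing comparison can be reproduced in isolation; a minimal stdlib sketch (the requests-cache line compares a naive `datetime.utcnow()` against an aware `expires` value):

```python
from datetime import datetime, timezone

naive = datetime(2024, 1, 1)                       # like datetime.utcnow(): no tzinfo
aware = datetime(2024, 1, 1, tzinfo=timezone.utc)  # like the cached 'expires' value

try:
    naive >= aware
    error = None
except TypeError as exc:
    error = str(exc)
print(error)  # can't compare offset-naive and offset-aware datetimes

# Comparing like with like works; attaching UTC to the naive value is one
# common fix (it assumes the naive value really is UTC):
print(naive.replace(tzinfo=timezone.utc) >= aware)  # True
```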

**How can we reproduce the problem?**
After clicking Add-ons in the settings of version 3.36.2, the pop-up window shows this error while loading the plug-in list, and plug-ins that have not been downloaded cannot be loaded.
**What's your environment?**
- Operating system:windows11 22631.4037
- Orange version: 3.36.2
- How you installed Orange: Download the version 3.36.2 zip package from orange3 official website and extract it locally.
| closed | 2024-08-23T03:11:57Z | 2025-01-17T09:29:13Z | https://github.com/biolab/orange3/issues/6880 | [
"bug report"
] | TonyEinstein | 3 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,385 | Mass update/inheritance of SMTP configurations | ### Proposal
We have faced a challenge with updating SMTP configurations. We run a large number of clients on our server and use our own SMTP server to get a quicker response. The SMTP server is connected to an email address that requires regular password updates. Updating the password then affects all the running sites. It would be great if it were possible to update this piece of information from a single entry. | open | 2025-01-30T14:17:11Z | 2025-01-30T15:32:02Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4385 | [
"C: Client",
"T: Feature"
] | schris-dk | 1 |
axnsan12/drf-yasg | rest-api | 839 | i want to hide default 201 response. pls suggest. | # Feature Request
## Description
## Describe the solution you'd like
## Describe alternatives you've considered
| closed | 2023-02-22T17:08:52Z | 2023-02-23T03:52:09Z | https://github.com/axnsan12/drf-yasg/issues/839 | [] | rexbti | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 664 | DJ / Producer in Miami | Hi, I love UVR5, but recently I wanted to separate just the drums, as this could be most useful. I got this error:
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Error opening '/Users/paulhimmel/Downloads/Ensembled_Outputs_1689433876/3_Tayllor,.Marasi.-.Verano.(Original.Mix)_htdemucs_(Drums).wav': System error."
Traceback Error: "
File "UVR.py", line 4719, in process_start
File "separate.py", line 537, in seperate
File "separate.py", line 237, in write_audio
File "soundfile.py", line 430, in write
File "soundfile.py", line 740, in __init__
File "soundfile.py", line 1264, in _open
File "soundfile.py", line 1455, in _error_check
"
Error Time Stamp [2023-07-15 12:17:05]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: kuielab_a_drums
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-07-15T16:22:25Z | 2023-07-15T16:22:25Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/664 | [] | arkitekt330 | 0 |
jonaswinkler/paperless-ng | django | 380 | Feature Request: Expiry dates | For (warranty 1-2 years) invoices, bills or any other documents which are valid for a certain period, it would be nice to be able to set an "expiry date".
Maybe even a rule so that these can be deleted automatically after an XXX period past expiry, or receive an "expired/archive" tag, but at least a filter/notification to see all documents that are expired. | open | 2021-01-18T11:33:41Z | 2021-02-01T12:23:28Z | https://github.com/jonaswinkler/paperless-ng/issues/380 | [
"feature request"
] | Flight777 | 3 |
dmlc/gluon-cv | computer-vision | 1,074 | Cannot export when using train_psp.py | Since the script train.py does not work for me #1071, I am using the demo train_psp.py script from the GluonCV website to perform training. I have tried two export methods but none succeeded. I am using Windows 10, Python 3.8, CPU only.
First method:
`model.module.hybridize()`
`model.module.export('psp')`

Second method:
`model.module.hybridize()`
`export_block('psp', model.module, layout='HWC', preprocess=None)`

I am training the pspnet model from scratch with only 1 iteration for demo purposes. Please advise how I can proceed to export my trained model successfully. | closed | 2019-12-03T06:24:17Z | 2020-02-04T22:35:00Z | https://github.com/dmlc/gluon-cv/issues/1074 | [] | NamTran838P | 2 |
QuivrHQ/quivr | api | 3,119 | [Feature]: i18n support | ### The Feature
Any plan to support i18n, for example Chinese, Japanese, etc.?
### Motivation, pitch
more i18n users
### Twitter / LinkedIn details
_No response_ | closed | 2024-08-31T08:33:53Z | 2024-12-04T12:10:19Z | https://github.com/QuivrHQ/quivr/issues/3119 | [
"enhancement",
"Stale"
] | thinker007 | 2 |
Yorko/mlcourse.ai | data-science | 738 | Proofread topic 2 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-02-04T13:53:11Z | 2024-08-25T07:43:16Z | https://github.com/Yorko/mlcourse.ai/issues/738 | [
"enhancement",
"articles"
] | Yorko | 0 |
gee-community/geemap | jupyter | 374 | Unsupervised Classification using GEE Javascript | I tried to compute the unsupervised classification from April to May 1999, but an error occurs: image.sample is not a function
in <global>, line 17
in <global>, line 28
I don't know why this code is not working.
Here this link of my code
https://code.earthengine.google.com/c683ee8967767d67b1557a59885a6a7d
Please help me to compute this | closed | 2021-03-21T23:14:35Z | 2021-03-22T06:27:38Z | https://github.com/gee-community/geemap/issues/374 | [] | anita-gif | 2 |
flasgger/flasgger | api | 52 | Why docstring in head method is not working in flasgger? |
> Docstring in triple quotes is not working in the head method. I am using the package flasgger. I am not able to use a docstring in the head method for the Swagger UI. However, it works in the patch, post, put, and get methods.
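A hedged aside before the example: Flask registers a HEAD handler automatically for every GET view, and flasgger's spec scanner is believed to skip the HEAD and OPTIONS verbs by default (older flasgger releases exposed this as an `ignore_verbs` set on the `Swagger` class). The toy scanner below only illustrates how such a skip list silently drops HEAD docstrings; `IGNORE_VERBS` and `collect_docs` are made-up names, not flasgger's API.

```python
# Toy spec scanner: routes whose verb is in the skip set never reach
# the generated spec, even when they carry a valid docstring.
IGNORE_VERBS = {"HEAD", "OPTIONS"}  # assumption: mirrors flasgger's default

def collect_docs(routes):
    """routes: iterable of (verb, path, docstring) tuples."""
    return {
        (verb, path): doc
        for verb, path, doc in routes
        if verb not in IGNORE_VERBS and doc
    }

routes = [
    ("GET", "/flight/<flight_no>", "show Flight"),
    ("HEAD", "/flight/<flight_no>", "show Flight Existence"),
]
spec = collect_docs(routes)
print(sorted(spec))  # only the GET entry survives
```

If your installed flasgger still exposes `ignore_verbs`, subclassing `Swagger` and overriding that set may be worth testing; treat that as a lead to verify against your version's source, not a confirmed fix.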
```
@app.route('/flight/<flight_no>', methods=['HEAD'])
def get_flight_exist(flight_no):
    """
    show Flight Existence
    This resource returns flight exist response
    ---
    tags:
      - hello
    parameters:
      - name: flight_no
        in: path
        type: string
        description: Flight_no
        required: true
    responses:
      '200':
        description: Flight data response
        schema:
          description: Flight object
          properties:
            flight_name:
              type: string
              description: name of the flight
            flight_no:
              type: string
              description: flight number
            total_seat:
              type: integer
          required:
            - flight_name
            - flight_no
            - total_seat
      '404':
        description: Flight not found
    """
    flight_data = mongo.db.flight_details
    info = flight_data.find_one({'flight_no': flight_no})
    if info:
        if request.headers['Accept'] == 'application/json':
            flight_exist_response = make_response()
            flight_exist_response.status_code = 200
            flight_exist_response.mimetype = 'application/json'
            return flight_exist_response
    else:
        flight_not_exist_response = make_response()
        flight_not_exist_response.status_code = 404
        flight_not_exist_response.mimetype = 'application/json'
        return flight_not_exist_response
``` | closed | 2017-03-17T07:57:22Z | 2017-03-22T16:57:43Z | https://github.com/flasgger/flasgger/issues/52 | [
"bug"
] | ravibhushan29 | 8 |
pytorch/pytorch | deep-learning | 149,061 | Best way to disable "fx graph cache hit for key"? | I have a possibly niche use case:
* I might rerun the same run a few times
* So I will run into "fx graph cache hit for key"
* I want to see precompilation and autotuning in the logs
* So I want to bypass fx graph cache
* Want to avoid having to C++ compile the kernel again (codecache does that), since C++ compile is long
* So I can't force disable caches
If I run the same run twice, I will see “fx graph cache hit for key” starting the second time. I tried disabling all the cache in configs (autotune_local_cache, autotune_remote_cache, bundled_autotune_remote_cache), but that didn't work.
I can get around it with something like
```
torch._inductor.config.cuda.cutlass_op_denylist_regex = uuid.uuid4().hex
```
since I know that config doesn’t take effect on my run.
Question:
Is there a better way to do it?
Is there any point in adding a knob to control it? Or am I better off sticking to my hack?
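One hedged possibility, not a confirmed recommendation: recent Inductor builds expose an FX-graph-cache toggle both as a config (`torch._inductor.config.fx_graph_cache`) and as an environment knob, and since the C++ code cache is keyed separately, flipping just this one off may force FX graph recompilation without redoing the long C++ compile. Verify both names against your installed PyTorch version.

```python
import os

# Hypothetical helper around the env knob; TORCHINDUCTOR_FX_GRAPH_CACHE
# exists on recent PyTorch builds, but confirm it on your version.
def bypass_fx_graph_cache() -> None:
    # Must be set before the first torch.compile in the process.
    os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "0"

bypass_fx_graph_cache()
print(os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"])  # -> 0
```

Compared to salting an unused config with a UUID, this states the intent directly, but both approaches should behave the same if the knob is honored on your build.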
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | open | 2025-03-12T17:39:14Z | 2025-03-13T15:18:15Z | https://github.com/pytorch/pytorch/issues/149061 | [
"triaged",
"module: fx.passes",
"module: inductor"
] | henrylhtsang | 0 |
twopirllc/pandas-ta | pandas | 615 | Understanding ta.vp indicator (Volume Profile). bug? | For testing & understanding purposes I've created a `DataFrame` and run `vp` on the first 10 rows which is the minimum width required by the indicator.
I understand that in [vp.py](https://github.com/twopirllc/pandas-ta/blob/main/pandas_ta/volume/vp.py) the close series is evaluated: through `series.diff(1)` in the `signed_series` function, each close is compared to the previous close and assigned a positive or negative value, which then results in either `pos_volume` or `neg_volume` for the `vp` itself.
As I understand it, the total volume in a given price range should be the same as the total volume that `vp` returns. Is this correct?
I have checked Issues #74 and #185 looking for an already answered similar question.
Here is the `df.head(10)` :

And here is `df.ta.vp()` where you can see Volume data loss on 2nd and 3rd rows:

As you can see, when close prices are equal on consecutive rows, issues arise as
`df.ta.vp()['total_Volume'].sum() == df.Volume.sum()` evaluates to `False`
What am I missing or maybe not understanding about vp?
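A pure-Python sketch (no pandas) of what the `signed_series` logic appears to amount to: the sign of `close.diff(1)` routes each bar's volume to `pos_volume` or `neg_volume`, so a bar whose close equals the prior close gets sign 0 and lands in neither bucket, which would explain the missing volume. The names below are illustrative, not pandas-ta API.

```python
# Route each bar's volume by the sign of the close-to-close change;
# sign 0 (equal consecutive closes) is counted in neither bucket.
def split_volume(closes, volumes, initial=1):
    pos, neg = [], []
    prev = None
    for c, v in zip(closes, volumes):
        sign = initial if prev is None else (c > prev) - (c < prev)
        pos.append(v if sign > 0 else 0)
        neg.append(v if sign < 0 else 0)  # sign == 0: volume dropped
        prev = c
    return pos, neg

closes = [10, 10, 11, 9]          # one repeated close
volumes = [100, 200, 300, 400]
pos, neg = split_volume(closes, volumes)
print(sum(pos) + sum(neg), sum(volumes))  # 800 1000 -> 200 lost
```

Under this reading, `total_Volume` falling short of `df.Volume.sum()` is expected whenever consecutive closes are equal, matching the screenshots above.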
| closed | 2022-11-08T18:43:29Z | 2023-05-09T20:45:29Z | https://github.com/twopirllc/pandas-ta/issues/615 | [
"help wanted",
"info",
"feedback"
] | argcast | 2 |
desec-io/desec-stack | rest-api | 99 | Add curl examples to docs | Currently, the docs are split into curl and httpie examples. We should find a way to support both. | closed | 2018-05-03T09:23:24Z | 2019-07-18T19:28:20Z | https://github.com/desec-io/desec-stack/issues/99 | [
"prio: low",
"docs"
] | nils-wisiol | 1 |
OpenBB-finance/OpenBB | machine-learning | 7,005 | [Bug] ProcessLookupError on CLI command from quickstart | **Describe the bug**
Python traceback after CLI command from [quickstart](https://docs.openbb.co/cli/quickstart).
**To Reproduce**
1. openbb
2. equity RET
3. price RET
4. historical --symbol SPY --start_date 2024-01-01 --provider yfinance
**Screenshots**
```
2025 Jan 18, 06:41 (🦋) /equity/price/ $ historical --symbol SPY --start_date 2024-01-01 --provider yfinance
2025 Jan 18, 06:41 (🦋) /equity/price/ $ Exception in callback Process.terminate()
handle: <Handle Process.terminate()>
Traceback (most recent call last):
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/subprocess.py", line 140, in terminate
self._transport.terminate()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/base_subprocess.py", line 149, in terminate
self._check_proc()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/base_subprocess.py", line 142, in _check_proc
raise ProcessLookupError()
ProcessLookupError
Exception in callback Process.kill()
handle: <Handle Process.kill()>
Traceback (most recent call last):
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/subprocess.py", line 143, in kill
self._transport.kill()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/base_subprocess.py", line 153, in kill
self._check_proc()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/base_subprocess.py", line 142, in _check_proc
raise ProcessLookupError()
ProcessLookupError
Exception in thread Thread-3 (run):
Traceback (most recent call last):
File "/home/pbz/micromamba/envs/obb/lib/python3.10/site-packages/pywry/core.py", line 349, in run_backend
await self.runner.stdin.drain()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/streams.py", line 371, in drain
await self._protocol._drain_helper()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/streams.py", line 167, in _drain_helper
raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pbz/micromamba/envs/obb/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/pbz/micromamba/envs/obb/lib/python3.10/site-packages/pywry/core.py", line 384, in run
asyncio.run(self.run_backend())
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/site-packages/pywry/core.py", line 358, in run_backend
await self.run_backend()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/site-packages/pywry/core.py", line 339, in run_backend
await self.runner.stdin.drain()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/streams.py", line 371, in drain
await self._protocol._drain_helper()
File "/home/pbz/micromamba/envs/obb/lib/python3.10/asyncio/streams.py", line 167, in _drain_helper
raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost
```
**Desktop (please complete the following information):**
- OS: NixOS 24.11
- Python version: 3.10
**Additional context**
Installed from source using `micromamba` for env creation and installed via `python dev_install.py -e --cli`.
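An illustrative stdlib reproduction of the pattern in the traceback, not OpenBB/pywry code: sending a signal to a process that has already exited and been reaped raises ProcessLookupError, which the `terminate()`/`kill()` cleanup callbacks above do not guard against.

```python
import os
import signal
import subprocess
import sys

# Spawn a short-lived child and let it finish before signalling it.
proc = subprocess.Popen([sys.executable, "-c", "pass"])
proc.wait()  # child has exited and been reaped

caught = False
try:
    os.kill(proc.pid, signal.SIGTERM)  # POSIX: ESRCH for a reaped PID
except ProcessLookupError:
    caught = True

print("already gone:", caught)
```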
| open | 2025-01-18T11:43:42Z | 2025-02-21T17:03:29Z | https://github.com/OpenBB-finance/OpenBB/issues/7005 | [] | fleimgruber | 12 |