| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
saulpw/visidata | pandas | 1,549 | Does there exist a Slack community? | Hi - thanks for creating this awesome tool. I was wondering if you'd consider starting a Slack community where VisiData users could share tips and learn together.
Thanks,
Roman
| closed | 2022-10-03T06:15:30Z | 2022-10-03T15:52:00Z | https://github.com/saulpw/visidata/issues/1549 | [
"question"
] | rbratslaver | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,046 | proposing accessibility changes for the gui | hi, please use wxpython instead of qt, as wxpython has the highest degree of accessibility to screen readers. Thanks. I would pull request it if I knew enough python, but I'm still in the basics. | open | 2022-03-31T10:35:40Z | 2022-04-10T17:48:15Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1046 | [] | king-dahmanus | 8 |
RobertCraigie/prisma-client-py | asyncio | 799 | Support for most recent prisma version | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
If you use node prisma > 4.15 you get the following error:
```
✔ Generated Prisma Client (4.16.2 | library) to .\node_modules\@prisma\client in 154ms
1 validation error for PythonData
__root__
Prisma Client Python expected Prisma version: 8fbc245156db7124f997f4cecdd8d1219e360944 but got: 4bc8b6e1b66cb932731fb1bdbbc550d1e010de81
If this is intentional, set the PRISMA_PY_DEBUG_GENERATOR environment variable to 1 and try again.
If you are using the Node CLI then you must switch to v4.15.0, e.g. npx prisma@4.15.0 generate
```
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
No error
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Windows
- Database: PostgreSQL
- Python version: py 3.11
- Prisma version: 4.16.2
```
```
| closed | 2023-07-28T07:50:39Z | 2023-10-22T23:01:43Z | https://github.com/RobertCraigie/prisma-client-py/issues/799 | [] | tobiasdiez | 1 |
ultralytics/ultralytics | machine-learning | 19,696 | May I ask how to mark the key corners of the box? There is semantic ambiguity | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
May I ask how to mark the key corners of the box? There is semantic ambiguity
### Additional
_No response_ | open | 2025-03-14T09:45:40Z | 2025-03-16T04:33:07Z | https://github.com/ultralytics/ultralytics/issues/19696 | [
"question"
] | missTL | 4 |
ultralytics/ultralytics | python | 18,705 | yolov8obb realtime detect and save data | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I am using yolov8obb to detect angles. There are 33 small angles on a disc, and the angle of each notch is different. When I used obb for detection, I tried many experiments, but I didn't achieve the results I wanted. This is the code I am using now, combining yolov8obb with a USB camera for detection.
```python
import cv2
from ultralytics import YOLO

model = YOLO('yolov8n-obb.pt')  # load the YOLOv8 OBB model
cap = cv2.VideoCapture(0)       # open the USB camera

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # predict() returns a list of Results objects, one per input image
    results = model.predict(frame)
    # only a single frame was passed in, so take element 0
    result = results[0]
    # result.plot() draws the predicted boxes and labels and returns the image
    annotated_frame = result.plot()
    # display the annotated frame with OpenCV
    cv2.imshow("YOLOv8 OBB Inference", annotated_frame)
    # exit the loop when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release the camera and close all windows
cap.release()
cv2.destroyAllWindows()
```
The effect I want is to save the detection time, confidence, and detection name for each detection. For example, `20250116101213450` means 10:12:13 on January 16, 2025, plus 450 milliseconds, followed by the confidence and the name. Please help me modify the code so that each detection is saved as detection time + confidence + name.
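As a starting point for the requested change, the per-detection record can be built with the standard library alone; note that the `result.obb`, `box.cls`, and `box.conf` accesses in the commented-out loop are assumptions about the OBB results API and should be checked against the installed ultralytics version:

```python
from datetime import datetime

def detection_record(name, conf, ts=None):
    """Build a record like 20250116101213450_0.87_notch.

    ts is a datetime (defaults to now); the timestamp is
    YYYYMMDDHHMMSS plus a 3-digit millisecond suffix, and the
    confidence is rounded to two decimals.
    """
    ts = ts or datetime.now()
    stamp = ts.strftime("%Y%m%d%H%M%S") + f"{ts.microsecond // 1000:03d}"
    return f"{stamp}_{conf:.2f}_{name}"

# Inside the detection loop, something like this could append one line
# per detection (attribute names here are hypothetical):
# for box in result.obb:
#     name = model.names[int(box.cls)]
#     with open("detections.txt", "a") as f:
#         f.write(detection_record(name, float(box.conf)) + "\n")
```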
### Additional
_No response_ | open | 2025-01-16T03:39:20Z | 2025-01-20T20:39:16Z | https://github.com/ultralytics/ultralytics/issues/18705 | [
"question",
"OBB",
"detect"
] | zhang990327 | 9 |
dynaconf/dynaconf | fastapi | 251 | [bug] Exception when getting non existing dotted keys with default value | **Describe the bug**
Using settings.get(key, default) with a non-existing key - depending on the default value - produces an exception.
Am I missing something obvious?
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
- app.py
- config.py
2. Having the following config files:
- config.py
3. Having the following app code:
<details>
<summary> Code </summary>
**app.py**
```python
from dynaconf import settings as s
s.load_file("config.py")
print("As dict:")
print(s.as_dict())
print("Existing key:", s.get("foo.bar"))
print("With 0 as default:", s.get("non.existing.key", 0))
print("With list as default:", s.get("non.existing.key", [1, 2, 3]))
```
**config.py**
```python
FOO = {
"bar": "hello"
}
```
</details>
4. Executing under the following environment
```bash
$ python
Python 3.7.3 (default, Mar 27 2019, 16:54:48)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
$ pip freeze
certifi==2019.9.11
chardet==3.0.4
Click==7.0
configobj==5.0.6
dynaconf==2.2.0
entrypoints==0.3
flake8==3.7.8
hvac==0.9.5
idna==2.8
mccabe==0.6.1
pycodestyle==2.5.0
pyflakes==2.1.1
python-box==3.4.5
python-dotenv==0.10.3
PyYAML==5.1.2
redis==3.3.11
requests==2.22.0
six==1.12.0
toml==0.10.0
urllib3==1.25.6
$ python app.py
As dict:
{'FOO': {'bar': 'hello'}}
Existing key: hello
With 0 as default: 0
Traceback (most recent call last):
File "app.py", line 12, in <module>
print("With list as default:", s.get("non.existing.key", [1, 2, 3]))
File "/Users/me/Stuff/dynaconf/pyenv/lib/python3.7/site-packages/dynaconf/base.py", line 296, in get
dotted_key=key, default=default, cast=cast, fresh=fresh
File "/Users/me/Stuff/dynaconf/pyenv/lib/python3.7/site-packages/dynaconf/base.py", line 275, in _dotted_get
return self._dotted_get(".".join(keys), default=default, **kwargs)
File "/Users/me/Stuff/dynaconf/pyenv/lib/python3.7/site-packages/dynaconf/base.py", line 265, in _dotted_get
result = self.get(name, default=default, **kwargs)
File "/Users/me/Stuff/dynaconf/pyenv/lib/python3.7/site-packages/dynaconf/base.py", line 312, in get
data = store.get(key, default)
AttributeError: 'BoxList' object has no attribute 'get'
```
**Expected behavior**
No exception and no dependency on the default value
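The semantics expected here — a dotted get that returns the default whenever any path segment is missing, regardless of the default's type — can be sketched with a plain nested-dict helper (illustrative only; this is not dynaconf's actual implementation):

```python
def dotted_get(data, dotted_key, default=None):
    """Walk a nested mapping by dotted key; return default on any miss."""
    current = data
    for part in dotted_key.split("."):
        if not isinstance(current, dict) or part not in current:
            return default  # never raises, whatever type default has
        current = current[part]
    return current

settings = {"FOO": {"bar": "hello"}}
print(dotted_get(settings, "FOO.bar"))                      # hello
print(dotted_get(settings, "non.existing.key", 0))          # 0
print(dotted_get(settings, "non.existing.key", [1, 2, 3]))  # [1, 2, 3]
```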
**Environment (please complete the following information):**
- OS: Linux/Ubuntu18.04 and MacOS 10.14.6
- Dynaconf Version: 2.2.0/wheel
| closed | 2019-10-25T19:30:31Z | 2019-11-13T20:54:03Z | https://github.com/dynaconf/dynaconf/issues/251 | [
"bug",
"in progress",
"Pending Release"
] | sebbegg | 2 |
man-group/arctic | pandas | 817 | Support for mongo transactions | Hi,
Is there any plan to introduce the ability to modify the mongo db atomically (i.e. making use of mongo transactions)?
I see in the code the `ArcticTransaction`, which is context-based rather than mongo-based, and it works for the VersionStore only.
Thanks! | closed | 2019-09-21T11:15:29Z | 2019-10-09T06:10:51Z | https://github.com/man-group/arctic/issues/817 | [] | scoriiu | 0 |
tensorpack/tensorpack | tensorflow | 614 | imgaug.Saturation | 1. What I did: Use `fbresnet_augmentor` in `tensorpack/examples/Inception/imagenet_utils.py`.
2. What I observed: when using the `fbresnet_augmentor` in the code, some augmented images contain overly brightly colored artifacts:

This is unexpected to me, since the augmented data should resemble natural images.
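For reference, a typical saturation jitter blends each pixel with its grayscale (luma) value; if the blend factor is allowed to exceed 1 (extrapolation) and the result is not clipped to the valid range, exactly this kind of oversaturated artifact can appear. A minimal pure-Python sketch of the clipped version (an assumption about the intended behavior, not tensorpack's actual code):

```python
import random

def saturation_jitter(pixel, alpha):
    """Blend an (r, g, b) pixel with its grayscale (luma) value.

    alpha == 1 keeps the original, alpha < 1 desaturates, alpha > 1
    oversaturates -- which is why clipping to [0, 255] matters.
    """
    r, g, b = pixel
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 luma weights

    def blend(c):
        return min(255.0, max(0.0, alpha * c + (1 - alpha) * gray))

    return tuple(blend(c) for c in (r, g, b))

# A Saturation(0.4)-style augmentor would sample alpha from [0.6, 1.4]:
alpha = random.uniform(0.6, 1.4)
print(saturation_jitter((200, 30, 30), alpha))
```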
When I comment out the line `imgaug.Saturation(0.4, rgb=False),` in `fbresnet_augmentor`, then there are no such artifacts:

There might be a problem with the `imgaug.Saturation()` function. | closed | 2018-01-27T19:04:22Z | 2018-05-30T20:59:33Z | https://github.com/tensorpack/tensorpack/issues/614 | [
"bug"
] | dpkingma | 4 |
localstack/localstack | python | 11,870 | bug: Deadlock when attempting to list topics | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Getting a deadlock error when running list-topics:
```
ⵙ awslocal sns list-topics
An error occurred (InternalError) when calling the ListTopics operation (reached max retries: 2): exception while calling sns.ListTopics: Traceback (most recent call last):
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle
handler(self, self.context, response)
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 36, in __call__
return self.require_service(chain, context, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/aws/handlers/service_plugin.py", line 56, in require_service
service_plugin: Service = self.service_manager.require(service_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 284, in require
container = self.get_service_container(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 555, in get_service_container
plugin = self._load_service_plugin(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/bootstrap.py", line 138, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 604, in _load_service_plugin
plugin = self.plugin_manager.load(plugin_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 234, in load
raise container.load_error
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/plux/runtime/manager.py", line 348, in _load_plugin
result = plugin.load(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 347, in load
self.service = self.create_service()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/plugins.py", line 369, in create_service
return self._create_service()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/providers.py", line 320, in sns
from localstack.services.sns.provider import SnsProvider
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/services/sns/provider.py", line 9, in <module>
from moto.sns import sns_backends
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/moto/sns/__init__.py", line 1, in <module>
from .models import sns_backends # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/moto/sns/models.py", line 26, in <module>
from moto.sqs import sqs_backends
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/moto/sqs/__init__.py", line 1, in <module>
from .models import sqs_backends # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1173, in _find_and_load
File "<frozen importlib._bootstrap>", line 171, in __enter__
File "<frozen importlib._bootstrap>", line 116, in acquire
_frozen_importlib._DeadlockError: deadlock detected by _ModuleLock('moto.sqs.models') at 139796743621008
```
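The tail of the traceback shows the mechanism: the thread loading SNS imports `moto.sns.models`, which needs `moto.sqs.models`, whose per-module import lock is already held (presumably by a second thread loading SQS that in turn waits on something this thread holds). A minimal stand-in for that crossed-lock pattern, using plain locks with a timeout instead of real imports so it terminates rather than hanging:

```python
import threading

# Two "module locks" standing in for moto.sns.models and moto.sqs.models.
sns_lock, sqs_lock = threading.Lock(), threading.Lock()
start = threading.Barrier(2)   # both "imports" in flight before either recurses
done = threading.Barrier(2)    # keep the first lock held until both attempts end
results = {}

def importer(name, own_lock, other_lock):
    with own_lock:                      # importing `name`: its module lock is held
        start.wait()
        # The import recurses into the other module, whose lock is taken...
        results[name] = other_lock.acquire(timeout=0.2)
        if results[name]:
            other_lock.release()
        done.wait()

t1 = threading.Thread(target=importer, args=("moto.sns.models", sns_lock, sqs_lock))
t2 = threading.Thread(target=importer, args=("moto.sqs.models", sqs_lock, sns_lock))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # both False: each side waited on a lock the other held
```

Real imports have no timeout, which is why CPython's import machinery detects the cycle among module locks and raises `_DeadlockError` instead of blocking forever.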
### Expected Behavior
Expect to get a list of topics.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
kubernetes helm
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal sns list-topics
### Environment
```markdown
OS: Linux
LocalStack version: 3.8.2.dev8
LocalStack build date: 2024-10-09
LocalStack build git hash: 639f22d7
```
### Anything else?
_No response_ | closed | 2024-11-19T01:09:42Z | 2024-12-10T01:30:24Z | https://github.com/localstack/localstack/issues/11870 | [
"type: bug",
"status: response required",
"aws:sns",
"status: resolved/stale"
] | stoutlinden | 3 |
cupy/cupy | numpy | 9,028 | `cupy.add` behaves differently from `numpy.add` on `uint8` | ### Description
`cupy.add` behaves differently from `numpy.add` on `uint8`
### To Reproduce
```py
import numpy as np
import cupy as cp
print(np.add(np.uint8(255),1))
print(cp.add(cp.uint8(255),1))
```
```bash
0
256
```
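For reference, NumPy's `0` is uint8 wraparound: under NumPy 2.x promotion rules (NEP 50), a `uint8` scalar plus a Python `int` stays `uint8`, so the addition happens modulo 2**8, while CuPy's `256` suggests it promoted to a wider type instead. A plain-Python sketch of the wraparound rule (an illustration, not NumPy or CuPy internals):

```python
def add_uint8(a: int, b: int) -> int:
    """Add under uint8 semantics: the result wraps modulo 256."""
    return (a + b) % 2**8

print(add_uint8(255, 1))  # 0, matching the NumPy line of the output above
```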
### Installation
Wheel (`pip install cupy-***`)
### Environment
```
OS : Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python Version : 3.10.12
CuPy Version : 13.4.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 2.2.3
SciPy Version : 1.15.2
Cython Build Version : 3.0.12
Cython Runtime Version : 3.0.12
CUDA Root : /usr/local/cuda
nvcc PATH : /usr/local/cuda/bin/nvcc
CUDA Build Version : 12080
CUDA Driver Version : 12040
CUDA Runtime Version : 12080 (linked to CuPy) / 12020 (locally installed)
CUDA Extra Include Dirs : []
cuBLAS Version : 120201
cuFFT Version : 11008
cuRAND Version : 10303
cuSOLVER Version : (11, 5, 2)
cuSPARSE Version : 12101
NVRTC Version : (12, 2)
Thrust Version : 200800
CUB Build Version : 200800
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce GTX 1650
Device 0 Compute Capability : 75
Device 0 PCI Bus ID : 0000:01:00.0
```
### Additional Information
_No response_ | closed | 2025-03-12T15:00:57Z | 2025-03-21T05:59:35Z | https://github.com/cupy/cupy/issues/9028 | [
"issue-checked"
] | apiqwe | 2 |
MilesCranmer/PySR | scikit-learn | 293 | [Feature] Can I clone SymbolicRegression.jl manually while running pysr.install()? | I always run into difficulties cloning git repos while running pysr.install(), but I can clone those repos manually. So can I put repos like SymbolicRegression.jl somewhere so that I can skip the cloning step while running install()? | closed | 2023-04-18T08:07:09Z | 2023-04-18T11:28:42Z | https://github.com/MilesCranmer/PySR/issues/293 | [
"enhancement"
] | WhiteGL | 1 |
jschneier/django-storages | django | 1,439 | File not found error on Azure | After updating django-storages to 1.14, when trying to upload an image through the admin panel I'm getting a file not found error with the following trace:
```
Internal Server Error: /admin/images/multiple/add/
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sentry_sdk/integrations/django/views.py", line 90, in sentry_wrapped_callback
return callback(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/views/decorators/cache.py", line 80, in _view_wrapper
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/admin/urls/__init__.py", line 178, in wrapper
return view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/admin/auth.py", line 151, in decorated_view
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/views/generic/base.py", line 104, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/utils/decorators.py", line 48, in _wrapper
return bound_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/views/decorators/vary.py", line 31, in _view_wrapper
response = func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/admin/views/generic/multiple_upload.py", line 45, in dispatch
return super().dispatch(request)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/admin/views/generic/permissions.py", line 31, in dispatch
return super().dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/views/generic/base.py", line 143, in dispatch
return handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/admin/views/generic/multiple_upload.py", line 147, in post
self.object = self.save_object(form)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/images/views/multiple.py", line 81, in save_object
image.save()
File "/wagtail/common/models.py", line 779, in save
super().save(*args, **kwargs)
File "/usr/local/lib/python3.12/site-packages/django/db/models/base.py", line 822, in save
self.save_base(
File "/usr/local/lib/python3.12/site-packages/django/db/models/base.py", line 909, in save_base
updated = self._save_table(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/base.py", line 1071, in _save_table
results = self._do_insert(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/base.py", line 1112, in _do_insert
return manager._insert(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/query.py", line 1847, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/sql/compiler.py", line 1822, in execute_sql
for sql, params in self.as_sql():
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/sql/compiler.py", line 1747, in as_sql
self.prepare_value(field, self.pre_save_val(field, obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/sql/compiler.py", line 1695, in pre_save_val
return field.pre_save(obj, add=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/db/models/fields/files.py", line 317, in pre_save
file.save(file.name, file.file, save=False)
File "/usr/local/lib/python3.12/site-packages/django/db/models/fields/files.py", line 94, in save
setattr(self.instance, self.field.attname, self.name)
File "/usr/local/lib/python3.12/site-packages/django/db/models/fields/files.py", line 379, in __set__
self.field.update_dimension_fields(instance, force=True)
File "/usr/local/lib/python3.12/site-packages/django/db/models/fields/files.py", line 492, in update_dimension_fields
width = file.width
^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/core/files/images.py", line 21, in width
return self._get_image_dimensions()[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/images/models.py", line 225, in _get_image_dimensions
self._dimensions_cache = self.get_image_dimensions()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/wagtail/images/models.py", line 235, in get_image_dimensions
self.open()
File "/usr/local/lib/python3.12/site-packages/django/db/models/fields/files.py", line 79, in open
self.file = self.storage.open(self.name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/core/files/storage/base.py", line 22, in open
return self._open(name, mode)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django_storage_url/backends/az.py", line 76, in _open
return AzureStorageFile(name, mode, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django_storage_url/backends/az.py", line 14, in __init__
raise IOError("File does not exist: %s" % name)
OSError: File does not exist: original_images/test_image.jpg
```
Checking on Azure I see that the file was uploaded successfully, but for some reason Wagtail (Django) was not able to retrieve it.
This looks very similar to issue #802 from 2019, but I don't see a solution for it.
versions used:
Wagtail 6.1.3 (Django 5.0.7)
django-storages 1.14.1
azure-core: 1.30.2
azure-storage-blob: 12.21.0 | closed | 2024-07-25T17:02:15Z | 2024-08-31T22:15:13Z | https://github.com/jschneier/django-storages/issues/1439 | [] | Morefra | 4 |
NVIDIA/pix2pixHD | computer-vision | 183 | Does the program preprocess the input image? | When I input an image with very low contrast, the output results increase the contrast a lot.
What is the reason for this? | open | 2020-03-06T03:27:14Z | 2020-04-22T11:36:02Z | https://github.com/NVIDIA/pix2pixHD/issues/183 | [] | Kodeyx | 1 |
viewflow/viewflow | django | 266 | [Question] How to automatically assign a user to a task? | If I were creating a workflow that has a step that selects a team member to be assigned to a task based on certain conditions, how do I do that? | closed | 2020-03-26T20:44:18Z | 2020-05-12T03:08:28Z | https://github.com/viewflow/viewflow/issues/266 | [
"request/question",
"dev/flow"
] | variable | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,491 | catch_pokemon: catch filter does not work | I set the bot to catch above 400 CP or above 0.85 IV,
but it seems the bot is catching everything.
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->
```
"catch": {
    "any": {"catch_above_cp": 400, "catch_above_iv": 0.85, "logic": "or"},
    ...
},
```
### Output when issue occurred
<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->
```
[2016-09-16 15:10:45] [PokemonCatchWorker] [INFO] _A wild Pidgeotto appeared!_ (CP: 397) (NCP: 0.33) (Potential 0.51) (A/D/S 1/13/9)
[2016-09-16 15:10:59] [PokemonCatchWorker] [INFO] Captured Pidgeotto! [CP 397] [NCP 0.33] [Potential 0.51] [1/13/9](367/700) [+110 exp] [+100 stardust]
```
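Restating the configured filter as code (a hypothetical re-implementation whose names mirror the config above, not the bot's actual source): with `"logic": "or"`, a pokemon passes only if at least one threshold is exceeded, and the logged Pidgeotto (CP 397, potential 0.51) exceeds neither:

```python
def should_catch(cp, iv, catch_above_cp=400, catch_above_iv=0.85, logic="or"):
    """Return True if the pokemon passes the configured catch filter."""
    checks = [cp > catch_above_cp, iv > catch_above_iv]
    return any(checks) if logic == "or" else all(checks)

print(should_catch(397, 0.51))  # False: the bot should have skipped it
```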
Branch:
master
Git Commit:
a8ee31256d412413b107cce81b62059634e8c802
| closed | 2016-09-16T20:17:54Z | 2016-09-17T03:10:27Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5491 | [] | liejuntao001 | 2 |
comfyanonymous/ComfyUI | pytorch | 6,302 | 'ASGD is already registered in optimizer at torch.optim.asgd' is:issue | ### Your question
An error occurs: 'ASGD is already registered in optimizer at torch.optim.asgd'
### Logs
```powershell
File "E:\ComfyUI-aki\ComfyUI-aki-v1.5\python\lib\site-packages\mmengine\optim\optimizer\builder.py", line 28, in register_torch_optimizers
OPTIMIZERS.register_module(module=_optim)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.5\python\lib\site-packages\mmengine\registry\registry.py", line 661, in register_module
self._register_module(module=module, module_name=name, force=force)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.5\python\lib\site-packages\mmengine\registry\registry.py", line 611, in _register_module
raise KeyError(f'{name} is already registered in {self.name} '
KeyError: 'ASGD is already registered in optimizer at torch.optim.asgd'
```
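For context, the traceback shows mmengine's `Registry.register_module` rejecting a duplicate: `register_torch_optimizers()` runs on import, so if it is triggered twice (for example by two custom nodes each pulling in mmengine's registration code), the second registration of ASGD raises. A stripped-down sketch of that mechanism (an illustration, not mmengine's actual code; the `force` parameter visible in the traceback allows overwriting instead):

```python
class Registry:
    """Tiny stand-in for mmengine's Registry."""
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, module, force=False):
        key = module.__name__
        if key in self._modules and not force:
            raise KeyError(f"{key} is already registered in {self.name}")
        self._modules[key] = module

OPTIMIZERS = Registry("optimizer")

class ASGD:  # stand-in for torch.optim.ASGD
    pass

OPTIMIZERS.register_module(ASGD)               # first registration: fine
try:
    OPTIMIZERS.register_module(ASGD)           # second registration: the error above
except KeyError as exc:
    print(exc)                                 # prints the duplicate-registration message
OPTIMIZERS.register_module(ASGD, force=True)   # force=True overwrites instead of raising
```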
### Other
_No response_ | closed | 2025-01-01T04:03:33Z | 2025-01-02T00:38:01Z | https://github.com/comfyanonymous/ComfyUI/issues/6302 | [
"User Support",
"Custom Nodes Bug"
] | alexlooks | 1 |
PrefectHQ/prefect | automation | 17,251 | Consider committing the uv lockfile | ### Describe the current behavior
Hi, thank you for making Prefect!
I'm considering packaging this for NixOS (a Linux distribution).
Having the lockfile committed to the project would make this much easier.
I've seen the TODO in the Dockerfile saying to consider committing it.
Just saying here that if you have time, this would help.
Thank you.
### Describe the proposed behavior
Having a lockfile committed to the repo would make it easier for Linux distro maintainers to package this.
### Example Use
_No response_
### Additional context
_No response_ | closed | 2025-02-23T09:42:21Z | 2025-03-07T22:48:20Z | https://github.com/PrefectHQ/prefect/issues/17251 | [
"enhancement"
] | happysalada | 3 |
microsoft/hummingbird | scikit-learn | 172 | Support for LGBMRanker? | Hello there,
I am wondering if the current implementation supports conversion of the LGBMRanker, which is part of the LightGBM package? Is there anything special about the LGBMRanker, and will the converter for LGBMRegressor work in this case? Here is the error I got when I tried to convert the LGBMRanker:
```
MissingConverter: Unable to find converter for model type <class 'lightgbm.sklearn.LGBMRanker'>.
It usually means the pipeline being converted contains a
transformer or a predictor with no corresponding converter implemented.
Please fill an issue at https://github.com/microsoft/hummingbird.
```
This is the doc to LGBMRanker https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRanker.html
Thanks in advance! | closed | 2020-06-26T21:18:37Z | 2020-07-21T00:17:16Z | https://github.com/microsoft/hummingbird/issues/172 | [] | go2ready | 5 |
explosion/spaCy | nlp | 13,769 | Bug in Span.sents | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
When a `Doc`'s entity is in the second to the last sentence, and the last sentence consists only of one token, `entity.sents` includes that last 1-token sentence (even though the entity is fully contained by the previous sentence.
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
```
text = "This is a sentence. This is another sentence. Third"
doc = nlp.tokenizer(text)
doc[0].is_sent_start = True
doc[5].is_sent_start = True
doc[10].is_sent_start = True
doc.ents = [('ENTITY', 7, 9)] # "another sentence" phrase in the second sentence
entity = doc.ents[0]
print(f"Entity: {entity}. Sentence: {entity.sent} Sentences: {list(entity.sents)}")
```
Output:
```
Entity: another sentence. Sentence: This is another sentence. Sentences: [This is another sentence., Third]
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
| open | 2025-03-12T18:03:13Z | 2025-03-12T18:03:13Z | https://github.com/explosion/spaCy/issues/13769 | [] | nrodnova | 0 |
exaloop/codon | numpy | 308 | Importing codon code from python | Is it possible to build a codon script to a .dylib or .so file to be imported into python for execution? I didn't see it in the documentation. I did see how to build object files but you cannot import those into python. | closed | 2023-03-30T05:17:48Z | 2025-02-27T18:07:13Z | https://github.com/exaloop/codon/issues/308 | [] | NickDatLe | 7 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 630 | Can't load dataset |

datasets_root: None
Warning: you did not pass a root directory for datasets as argument.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48 | closed | 2021-01-18T13:40:04Z | 2021-01-18T19:13:48Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/630 | [] | notluke27 | 2 |
christabor/flask_jsondash | plotly | 104 | Make chart specific documentation show up in UI | What likely needs to happen:
1. Docs moved inside of package (OR linked via setuptools)
2. Docs read and imported via python
3. Docs then parsed and available on a per-widget basis.
The ultimate goal of the above is so that there is never any disconnect between docs and UI. It should always stay in sync.
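Steps 1-3 might look roughly like the following sketch (the section format, function name, and `WIDGET_DOCS` shape are all assumptions, not an actual design): docs ship inside the package and get parsed into a per-widget mapping that the UI can look up.

```python
def parse_widget_docs(text):
    """Split '## <widget-name>' sections into a {widget: docs} dict."""
    docs, current = {}, None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            docs[current] = []
        elif current is not None:
            docs[current].append(line)
    return {name: "\n".join(lines).strip() for name, lines in docs.items()}

WIDGET_DOCS = parse_widget_docs("## line\nLine chart options.\n## heatmap\nHeatmap options.")
print(WIDGET_DOCS["heatmap"])  # Heatmap options.
```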
| open | 2017-05-10T22:26:23Z | 2017-05-14T20:31:45Z | https://github.com/christabor/flask_jsondash/issues/104 | [
"enhancement",
"docs"
] | christabor | 0 |
scrapy/scrapy | web-scraping | 6,451 | SSL error - dh key too small | > https://data.airbreizh.asso.fr/
Fails in Scrapy (default settings) but works fine in a web-browser (new or old).
```
[scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://data.airbreizh.asso.fr> (failed 1 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', '', 'dh key too small')]>]
[scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://data.airbreizh.asso.fr> (failed 2 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', '', 'dh key too small')]>]
[scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://data.airbreizh.asso.fr> (failed 3 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', '', 'dh key too small')]>]
```
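A commonly suggested workaround for this class of error is to relax the client cipher string via Scrapy's `DOWNLOADER_CLIENT_TLS_CIPHERS` setting (hedged: this lowers OpenSSL's security level so small DH keys are accepted; whether that trade-off is acceptable depends on the crawl):

```python
# settings.py: accept the server's small DH key by dropping OpenSSL's
# security level for this crawler only.
DOWNLOADER_CLIENT_TLS_CIPHERS = "DEFAULT@SECLEVEL=1"
```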
> [scrapy.utils.log] INFO: Versions: lxml 5.2.1.0, libxml2 2.11.7, cssselect 1.2.0, parsel 1.9.1, w3lib 2.1.2, Twisted 24.3.0, Python 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 24.1.0 (OpenSSL 3.2.1 30 Jan 2024), cryptography 42.0.5 | closed | 2024-08-01T13:02:38Z | 2024-08-01T14:27:30Z | https://github.com/scrapy/scrapy/issues/6451 | [] | mohmad-null | 1 |
pydata/xarray | numpy | 9,839 | Advanced interpolation returning unexpected shape | ### What happened?
Hi !
I have been using Xarray for quite some time, and recently I started using fresh NC files from https://cds.climate.copernicus.eu/ (up to now I had been using NC4 files that I have had for quite some time). I have an issue regarding Xarray's interpolation with advanced indexing: the shape returned by interp() is not as expected.
Basically, I am just doing advanced interpolation as described here: https://docs.xarray.dev/en/stable/user-guide/interpolation.html#advanced-interpolation.
It used to work perfectly with my older datasets.
thanks
Vianney
### What did you expect to happen?
I would expect `res.shape` to be `(100,)` or similar (and I actually get this result with older datasets), but I don't understand why I get `(705, 715, 2)`! This does not correspond to anything, as we can see from the description of the dataset below:
### Minimal Complete Verifiable Example
[dataset.zip](https://github.com/user-attachments/files/17960240/dataset.zip)
```Python
# -*- coding: utf-8 -*-
import dask
import numpy as np
import xarray as xa
import datetime as dt
# open dataset (unzip dataset.zip which is attached to bug report)
dataset = xa.open_mfdataset("dataset.nc", decode_times=True, parallel=True)
start_date = dt.datetime(2023, 1, 1)
np.random.seed(0)
# points generation
N = 100
access_date = np.linspace(0, 5000, N) # seconds since start_date
longitudes = -180 + np.random.rand(N)*360
latitudes = -90 + np.random.rand(N)*180
dates_folded = np.datetime64(start_date) + \
access_date.astype("timedelta64[s]").astype("timedelta64[ns]")
arguments = {
"valid_time": xa.DataArray(dates_folded, dims="z"),
"longitude": xa.DataArray(longitudes, dims="z"),
"latitude": xa.DataArray(latitudes, dims="z"),
"method": "linear",
"kwargs": {"bounds_error": False},
}
with dask.config.set(**{"array.slicing.split_large_chunks": True}):
data = dataset['tcc'].interp(**arguments)
res = np.array(data).T
print(res.shape)
```
Output:
```
(705, 715, 2)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
(705, 715, 2)
```
### Anything else we need to know?
The expected output is `(100,)`.
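For reference, the shape rule the example relies on, sketched without xarray: when every indexer is a DataArray on the same dimension `z` of length N, interpolation is pointwise (one value per (time, lon, lat) triple), so the result should have shape `(N,)` rather than an outer product of the coordinate axes. The coordinates below are made up for the sketch:

```python
N = 100
times = range(N)
lons = [float(i) for i in range(N)]
lats = [float(-i) for i in range(N)]

points = list(zip(times, lons, lats))  # one interpolation point per z index
print(len(points))  # 100 -> the expected shape is (100,)
```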
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:27:10) [MSC v.1938 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: fr
LOCALE: ('fr_FR', 'cp1252')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.11.0
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.14.0
netCDF4: 1.6.5
pydap: None
h5netcdf: None
h5py: 3.11.0
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.7.0
distributed: 2024.7.0
matplotlib: 3.9.1
cartopy: 0.23.0
seaborn: None
numbagg: None
fsspec: 2024.6.1
cupy: 13.2.0
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 70.3.0
pip: 24.0
conda: 24.5.0
pytest: 8.2.2
mypy: None
IPython: 8.26.0
sphinx: 7.4.0
</details>
| closed | 2024-11-29T15:52:43Z | 2024-12-14T08:33:12Z | https://github.com/pydata/xarray/issues/9839 | [
"topic-interpolation"
] | vianneylan | 7 |
robotframework/robotframework | automation | 4,787 | How can I add a column to the robot test report? | closed | 2023-06-08T07:53:56Z | 2023-06-09T23:06:09Z | https://github.com/robotframework/robotframework/issues/4787 | [] | liulangdexiaoxin | 1 |
trevismd/statannotations | seaborn | 54 | Add support for LSD & Tukey multiple comparisons | In ecology studies, we often apply LSD or Tukey multiple comparisons, which are vital when labeling the significance of a batch of experimental results. Most multiple comparison methods have already been integrated in statsmodels.stats.multicomp. You can find more information on multiple comparison methods at https://zhuanlan.zhihu.com/p/44880434. | open | 2022-04-18T15:06:15Z | 2022-06-04T23:19:53Z | https://github.com/trevismd/statannotations/issues/54 | [] | lxw748 | 2 |
FactoryBoy/factory_boy | django | 421 | Use django test database when running test | I have my database defined as followed in my django settings:
```python
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'USER': 'mydatabaseuser',
'NAME': 'mydatabase',
'TEST': {
'NAME': 'mytestdatabase',
},
},
}
```
When I run my tests (using `python manage.py test`) I would expect factory boy to create my fixtures using `mytestdatabase` but `mydatabase` is used instead.
This is problematic because classic django ORM operations are using the test database (`mytestdatabase`) and therefore the following test would fail:
```python
def test_factories(self)
UserFactory.create() # User is created in mydatabase
self.assertEqual(User.objects.count(), 1) # this assertion fails because django can't find any user in mytestdatabase
```
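For reference, the aliasing the settings are supposed to provide can be sketched in plain Python (an illustration of Django's TEST name substitution, not Django internals):

```python
DATABASES = {
    "default": {"NAME": "mydatabase", "TEST": {"NAME": "mytestdatabase"}},
}

def active_db(alias, under_test):
    """Resolve an alias to the database name in effect."""
    cfg = DATABASES[alias]
    return cfg["TEST"]["NAME"] if under_test else cfg["NAME"]

print(active_db("default", under_test=True))   # mytestdatabase
print(active_db("default", under_test=False))  # mydatabase
```

Under `manage.py test`, both the factory and the ORM assertion are expected to go through the same "default" alias, and therefore the same `mytestdatabase`.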
I have had a look at the `database` option on the Meta class of the model factory, but it doesn't seem to solve my problem: I would have to create a second database alias in my DATABASES dict, and Django ORM queries would still be executed in the test database of that new alias.
Is there a solution to my problem? | closed | 2017-09-29T13:02:18Z | 2017-10-02T15:06:14Z | https://github.com/FactoryBoy/factory_boy/issues/421 | [] | BrunoGodefroy | 1 |
healthchecks/healthchecks | django | 152 | Twitter DM integration | Twitter DM integration would be awesome to have: send alerts to users as DMs from a standard healthchecks account. | closed | 2018-02-01T06:50:29Z | 2018-08-20T15:23:40Z | https://github.com/healthchecks/healthchecks/issues/152 | [] | thejeshgn | 2 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 398 | provide support for sqlalchemy.ext.automap | SQLAlchemy's automapper provides reflection of an existing database and its relations.
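For context, here is a stdlib sketch of the reflection idea behind automap, using `sqlite3` as a stand-in (deliberately simplistic: the real `sqlalchemy.ext.automap` also maps relationships and produces full ORM classes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def automap(conn):
    """Discover tables/columns and generate one class per table."""
    classes = {}
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        classes[table] = type(table.capitalize(), (), {"__columns__": columns})
    return classes

classes = automap(conn)
print(classes["users"].__columns__)  # ['id', 'name']
```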
| closed | 2016-05-28T08:18:58Z | 2020-12-05T20:55:43Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/398 | [] | nexero | 6 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 115 | [BUG] The values in `official_api` in the API response need to be modified | ***On which platform did the error occur?***
e.g., Douyin/TikTok
***Which endpoint produced the error?***
e.g., API-V1/API-V2/Web APP
***What input value was submitted?***
e.g., a short-video link
***Did you retry?***
e.g., Yes; X amount of time after the error occurred, it still persisted.
***Have you read this project's README or API documentation?***
e.g., Yes, and I am quite sure this problem is caused by the program.
| closed | 2022-12-02T11:25:21Z | 2022-12-02T23:01:45Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/115 | [
"BUG",
"Fixed"
] | Evil0ctal | 1 |
miguelgrinberg/python-socketio | asyncio | 691 | AsyncAioPikaManager connect | https://github.com/miguelgrinberg/python-socketio/blob/2f0d8bbd8c4de43fe26e0f2edcd05aef3c8c71f9/socketio/asyncio_aiopika_manager.py#L68-L76
A connection is built every time a message is published; can this be improved? | closed | 2021-05-25T08:19:24Z | 2021-06-27T19:45:29Z | https://github.com/miguelgrinberg/python-socketio/issues/691 | [
"question"
] | wangjianweiwei | 2 |
AirtestProject/Airtest | automation | 986 | Cannot install pocoui by using poetry on macOS |
**Describe the bug**
Cannot install pocoui by using poetry
```
➜ poetry-demo poetry add pocoui
Creating virtualenv poetry-demo-ZKg_bqkG-py3.8 in /Users/trevorwang/Library/Caches/pypoetry/virtualenvs
Using version ^1.0.84 for pocoui
Updating dependencies
Resolving dependencies... (13.5s)
Writing lock file
Package operations: 26 installs, 0 updates, 0 removals
• Installing certifi (2021.10.8)
• Installing charset-normalizer (2.0.7)
• Installing decorator (5.1.0)
• Installing idna (3.3)
• Installing py (1.11.0)
• Installing urllib3 (1.26.7)
• Installing wrapt (1.13.3)
• Installing cached-property (1.5.2)
• Installing comtypes (1.1.10)
• Installing deprecated (1.2.13)
• Installing markupsafe (2.0.1)
• Installing numpy (1.19.3)
• Installing pillow (8.4.0)
• Installing requests (2.26.0)
• Installing retry (0.9.2)
• Installing six (1.16.0)
• Installing facebook-wda (1.4.3)
• Installing jinja2 (3.0.3)
• Installing mss (4.0.3)
• Installing opencv-contrib-python (4.5.2.54)
• Installing pywin32 (302): Failed
RuntimeError
Unable to find installation candidates for pywin32 (302)
at ~/.pyenv/versions/3.8-dev/lib/python3.8/site-packages/poetry/installation/chooser.py:72 in choose_for
68│
69│ links.append(link)
70│
71│ if not links:
→ 72│ raise RuntimeError(
73│ "Unable to find installation candidates for {}".format(package)
74│ )
75│
76│ # Get the best link
• Installing pywinauto (0.6.3)
Failed to add packages, reverting the pyproject.toml file to its original content.
```
**To Reproduce**
Steps to reproduce the behavior:
Run `poetry add pocoui` in a terminal on macOS
**Expected behavior**
pocoui & airtest can be installed.
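For context: `pywin32` only publishes Windows wheels, which is why Poetry finds no installation candidates on macOS. A conventional fix (a sketch of how `pocoui` could declare the dependency, not its actual metadata) is an environment marker so the package is only resolved on Windows:

```toml
# Hypothetical pyproject.toml fragment restricting pywin32 to Windows
[tool.poetry.dependencies]
pywin32 = { version = "^302", markers = "sys_platform == 'win32'" }
```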
**Screenshots**
If applicable, add screenshots to help explain your problem.
**python version:** `python3.8`
**pocoui version:** `1.0.84` | open | 2021-11-22T02:23:05Z | 2021-11-22T02:23:05Z | https://github.com/AirtestProject/Airtest/issues/986 | [] | trevorwang | 0 |
psf/requests | python | 6,707 | Requests 2.32.0 Not supported URL scheme http+docker | <!-- Summary. -->
The newest version of requests (2.32.0) has an incompatibility with the Python `docker` library:
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/requests/adapters.py", line 532, in send
INTERNALERROR> conn = self._get_connection(request, verify, proxies=proxies, cert=cert)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/requests/adapters.py", line 400, in _get_connection
INTERNALERROR> conn = self.poolmanager.connection_from_host(
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/urllib3/poolmanager.py", line 304, in connection_from_host
INTERNALERROR> return self.connection_from_context(request_context)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/urllib3/poolmanager.py", line 326, in connection_from_context
INTERNALERROR> raise URLSchemeUnknown(scheme)
INTERNALERROR> urllib3.exceptions.URLSchemeUnknown: Not supported URL scheme http+docker
INTERNALERROR>
INTERNALERROR> During handling of the above exception, another exception occurred:
INTERNALERROR>
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/docker/api/client.py", line 213, in _retrieve_server_version
INTERNALERROR> return self.version(api_version=False)["ApiVersion"]
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/docker/api/daemon.py", line 181, in version
INTERNALERROR> return self._result(self._get(url), json=True)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/docker/utils/decorators.py", line 44, in inner
INTERNALERROR> return f(self, *args, **kwargs)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/docker/api/client.py", line 236, in _get
INTERNALERROR> return self.get(url, **self._set_request_timeout(kwargs))
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
INTERNALERROR> return self.request("GET", url, **kwargs)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
INTERNALERROR> resp = self.send(prep, **send_kwargs)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
INTERNALERROR> r = adapter.send(request, **kwargs)
INTERNALERROR> File "/var/lib/jenkins/workspace/Development_sm_master/gravity/.nox/lib/python3.10/site-packages/requests/adapters.py", line 534, in send
INTERNALERROR> raise InvalidURL(e, request=request)
INTERNALERROR> requests.exceptions.InvalidURL: Not supported URL scheme http+docker
```
## Expected Result
Normal initialization of the Docker client
## Actual Result
<!-- What happened instead. -->
Stack trace posted above
## Reproduction Steps
```sh
mkvirtualenv debug_issue
pip install docker
pip install 'requests>=2.32.0'
python
```
```python
import docker
docker.from_env()
```
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.7"
},
"implementation": {
"name": "CPython",
"version": "3.10.12"
},
"platform": {
"release": "6.5.0-28-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "30000020"
},
"urllib3": {
"version": "2.2.1"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
| closed | 2024-05-20T19:36:07Z | 2024-06-04T16:21:25Z | https://github.com/psf/requests/issues/6707 | [] | joshzcold | 16 |
mars-project/mars | numpy | 2,463 | [BUG] Failed to execute query when there are multiple arguments | **Describe the bug**
When executing query with more than three joint arguments, the query failed with SyntaxError.
**To Reproduce**
```python
import numpy as np
import mars.dataframe as md
df = md.DataFrame({'a': np.random.rand(100),
'b': np.random.rand(100),
'c c': np.random.rand(100)})
df.query('a < 0.5 and a != 0.1 and b != 0.2').execute()
```
```
Traceback (most recent call last):
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 131, in visit
visitor = getattr(self, method)
AttributeError: 'CollectionVisitor' object has no attribute 'visit_Series'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/wenjun.swj/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-f0b7eeac5829>", line 1, in <module>
df.query('a < 0.5 and a != 0.1 and b != 0.2').execute()
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 773, in df_query
predicate = mars_eval(expr, resolvers=(df,), level=level + 1, **kwargs)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 507, in mars_eval
result = visitor.eval(expr)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 112, in eval
return self.visit(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 141, in visit_Module
result = self.visit(expr)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 145, in visit_Expr
return self.visit(node.value)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 178, in visit_BoolOp
return reduce(func, node.values)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 177, in func
return self.visit(binop)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 148, in visit_BinOp
left = self.visit(node.left)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 133, in visit
raise SyntaxError('Query string contains unsupported syntax: {}'.format(node_name))
SyntaxError: Query string contains unsupported syntax: Series
``` | closed | 2021-09-17T07:14:18Z | 2021-09-29T16:00:55Z | https://github.com/mars-project/mars/issues/2463 | [
"type: bug",
"good first issue",
"mod: dataframe"
] | wjsi | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 788 | Pre-trained model cannot be downloaded | Can anyone share | open | 2019-10-11T09:00:31Z | 2022-10-23T13:23:08Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/788 | [] | jiaying96 | 4 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,309 | Hid21.1 | open | 2024-07-29T20:24:21Z | 2024-07-29T20:24:21Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1309 | [] | Hid21 | 0 | |
capitalone/DataProfiler | pandas | 414 | Meaning of quantile names | When profiling a numerical column, I get these quantiles:
```json
"quantiles": {
"0": 657.04,
"1": 999.18,
"2": 1335.71
},
```
Based on the data, I guess these are the 25th, 50th, and 75th percentiles. I feel like 0, 1, and 2 are not meaningful to the human reader, and also not very extensible to other quantiles in the future.
Shouldn't this field be called quartiles (with an *r*)?
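For comparison, the three quartile cut points can be computed with the standard library (a sketch of the terminology, not DataProfiler's implementation):

```python
import statistics

data = list(range(1, 101))
q1, q2, q3 = statistics.quantiles(data, n=4)  # the 25th/50th/75th percentiles
print(q1, q2, q3)  # 25.25 50.5 75.75
```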
PS. Thanks for the quick feedback to my other questions! | open | 2021-09-14T08:26:18Z | 2021-09-14T15:20:18Z | https://github.com/capitalone/DataProfiler/issues/414 | [] | ian-contiamo | 1 |
ansible/awx | django | 15,611 | Support for AAP 2.5 and greater? | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
Enhancement to Existing Feature
### Feature Summary
CLI support for AAP 2.5? With the [deprecation](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/release_notes/aap-2.5-deprecated-features#deprecated_api_endpoints) of the current API paths for AAP, is there going to be an update to the AWX CLI to support AAP 2.5 and greater?
### Select the relevant components
- [ ] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [X] CLI
- [ ] Other
### Steps to reproduce
Use the AWX cli on AAP 2.5 and attempt inventory or launch actions
### Current results
We received errors since the API path has changed and is inaccessible to the CLI.
### Suggested feature result
CLI should detect the version used and use the corresponding API path to match, e.g. `/api/v2/inventories` vs `/api/controller/v2/inventories`
### Additional information
_No response_ | closed | 2024-11-04T18:00:21Z | 2025-02-12T15:42:56Z | https://github.com/ansible/awx/issues/15611 | [
"type:enhancement",
"community"
] | ryancbutler | 4 |
encode/databases | asyncio | 333 | unable to iterate over multiple rows from database using python in celery | I am using the `databases` python package (https://pypi.org/project/databases/) to manage connection to my postgresql database
from the documentation (https://www.encode.io/databases/database_queries/#queries)
it says i can either use
```
# Fetch multiple rows without loading them all into memory at once
query = notes.select()
async for row in database.iterate(query=query):
...
```
or
```
# Fetch multiple rows
query = notes.select()
rows = await database.fetch_all(query=query)
```
**Here is what I have tried:**
```
def check_all_orders():
query = "SELECT * FROM orders WHERE shipped=True"
return database.fetch_all(query)
...
...
...
@app.task
async def check_orders():
query = await check_all_orders()
today = datetime.utcnow()
for q in query:
if q.last_notification is not None:
if (today - q.last_notification).total_seconds() < q.cooldown:
continue
```
and
```
@app.task
async def check_orders():
query = "SELECT * FROM orders WHERE shipped=True"
today = datetime.utcnow()
async for q in database.iterate(query=query):
if q.last_notification is not None:
if (today - q.last_notification).total_seconds() < q.cooldown:
continue
```
**I have used both**, but I get the following error:
>raise TypeError(f'Object of type {o.__class__.__name__} '
>kombu.exceptions.EncodeError: Object of type coroutine is not JSON serializable
full error below
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 472, in trace_task
mark_as_done(
File "/usr/local/lib/python3.9/site-packages/celery/backends/base.py", line 154, in mark_as_done
self.store_result(task_id, result, state, request=request)
File "/usr/local/lib/python3.9/site-packages/celery/backends/base.py", line 434, in store_result
self._store_result(task_id, result, state, traceback,
File "/usr/local/lib/python3.9/site-packages/celery/backends/base.py", line 856, in _store_result
self._set_with_state(self.get_key_for_task(task_id), self.encode(meta), state)
File "/usr/local/lib/python3.9/site-packages/celery/backends/base.py", line 324, in encode
_, _, payload = self._encode(data)
File "/usr/local/lib/python3.9/site-packages/celery/backends/base.py", line 328, in _encode
return dumps(data, serializer=self.serializer)
File "/usr/local/lib/python3.9/site-packages/kombu/serialization.py", line 220, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.9/contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.9/site-packages/kombu/serialization.py", line 53, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/usr/local/lib/python3.9/site-packages/kombu/exceptions.py", line 21, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.9/site-packages/kombu/serialization.py", line 49, in _reraise_errors
yield
File "/usr/local/lib/python3.9/site-packages/kombu/serialization.py", line 220, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.9/site-packages/kombu/utils/json.py", line 65, in dumps
return _dumps(s, cls=cls or _default_encoder,
File "/usr/local/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "/usr/local/lib/python3.9/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.9/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.9/site-packages/kombu/utils/json.py", line 55, in default
return super().default(o)
File "/usr/local/lib/python3.9/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
kombu.exceptions.EncodeError: Object of type coroutine is not JSON serializable
```
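The `EncodeError` suggests Celery called the `async def` task like a plain function and then tried to JSON-encode the coroutine object it returned. A minimal standard-library reproduction of that failure mode (illustrative, not Celery's actual internals):

```python
import asyncio
import json

async def fetch_orders():
    return [{"id": 1, "shipped": True}]

coro = fetch_orders()       # calling an async def returns a coroutine object
try:
    json.dumps(coro)        # what a JSON result backend would try to do
    message = ""
except TypeError as err:
    message = str(err)      # "Object of type coroutine is not JSON serializable"

rows = asyncio.run(coro)    # the coroutine must be awaited/driven first
print(message)
print(rows)
```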
| closed | 2021-05-12T18:45:49Z | 2021-05-13T12:43:58Z | https://github.com/encode/databases/issues/333 | [] | encryptblockr | 0 |
koxudaxi/fastapi-code-generator | pydantic | 350 | Allow generation of subdirs when using '--template-dir' | Hi.
I'd love to be able to generate subdirectories when using the `--template-dir` option. Currently this is not possible, as [fastapi_code_generator/__main__](https://github.com/koxudaxi/fastapi-code-generator/blob/master/fastapi_code_generator/__main__.py#L179) uses `for target in template_dir.rglob("*"):` instead of `for target in template_dir.rglob("*.jinja2"):`, and passing a directory path to the code generator throws an error.
Furthermore, it would be nice to automatically create sub-paths if they do not exist.
## Example:
```
templates:
generated:
router.jinja2
main.jinja2
```
should result in
`main.py`, `generated/router.py`
This would help me structure the generated code further.
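A minimal standard-library sketch of the proposed behavior, filtering on `*.jinja2` and creating output sub-directories on demand (illustrative; the names here are my own, not the code generator's):

```python
from pathlib import Path
import tempfile

def render_templates(template_dir, output_dir):
    """Render only *.jinja2 templates, mirroring sub-directories in the output."""
    rendered = []
    for tpl in sorted(template_dir.rglob("*.jinja2")):  # directories never match
        rel = tpl.relative_to(template_dir).with_suffix(".py")
        out = output_dir / rel
        out.parent.mkdir(parents=True, exist_ok=True)   # create missing subdirs
        out.write_text("# rendered from " + tpl.name)
        rendered.append(rel)
    return rendered

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "templates" / "generated").mkdir(parents=True)
    (root / "templates" / "main.jinja2").write_text("...")
    (root / "templates" / "generated" / "router.jinja2").write_text("...")
    result = render_templates(root / "templates", root / "out")

print([p.as_posix() for p in result])  # ['generated/router.py', 'main.py']
```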
I'd like to make a pull request for this, but would love some input first :) | open | 2023-05-17T11:17:55Z | 2023-05-17T11:23:59Z | https://github.com/koxudaxi/fastapi-code-generator/issues/350 | [] | ThisIsANiceName | 0 |
jmcnamara/XlsxWriter | pandas | 545 | Feature request: Chart legend border and legend styling. | Hi,
There are only a few options for `chart.set_legend()`.
I think it would be great if there were styling options such as a legend border.
Thank you. | closed | 2018-08-03T07:27:20Z | 2018-08-23T23:18:31Z | https://github.com/jmcnamara/XlsxWriter/issues/545 | [
"feature request",
"short term"
] | Pusnow | 3 |
biolab/orange3 | scikit-learn | 6,323 | add RangeSlider gui component | <!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
<!-- In other words, what's your pain point? -->
Orange3 does not currently support RangeSlider gui components [see here](https://doc.qt.io/qt-6/qml-qtquick-controls2-rangeslider.html)
<!-- Is your request related to a problem, or perhaps a frustration? -->
Not a problem. More a frustration.
<!-- Tell us the story that led you to write this request. -->
I am building an Orange addon for text analysis that calculates scores for words based on certain metrics. I want to be able to filter words which have scores within a certain range. I am aware that there are other ways to specify numerical ranges using two [QSpinBox](https://www.tutorialspoint.com/pyqt/pyqt_qspinbox_widget.htm) components for example for the min and max values. However, I think it is more user-friendly and intuitive to use a range slider because then the user can more quickly and easily slide through different ranges to dynamically update the filtered results. Rather than manually typing arbitrary numbers for the min and max values for the QSpinBox components.
**What's your proposed solution?**
<!-- Be specific, clear, and concise. -->
Port Range Slider widget to PyQt like in [this thread](https://stackoverflow.com/questions/47342158/porting-range-slider-widget-to-pyqt5).
**Are there any alternative solutions?**
I don't know.
| closed | 2023-01-31T11:01:26Z | 2023-02-17T07:51:32Z | https://github.com/biolab/orange3/issues/6323 | [] | kodymoodley | 3 |
widgetti/solara | flask | 14 | Support for route in solara.ListItem() | Often you will need to point a list item to a Solara route. We could add a `path_or_route` param to `ListItem()` to support Vue routes (similar to `solara.Link()`).
DistrictDataLabs/yellowbrick | scikit-learn | 660 | Update default ranking algorithm in Rank2D documentation | In the documentation for the `Rank2D` visualizer, it says `covariance` is the default ranking algorithm when it is actually `pearson`.

@DistrictDataLabs/team-oz-maintainers
| closed | 2018-11-30T15:08:15Z | 2018-12-03T16:38:53Z | https://github.com/DistrictDataLabs/yellowbrick/issues/660 | [
"type: documentation"
] | Kautumn06 | 0 |
modin-project/modin | data-science | 6,766 | read_pickle: AttributeError: 'Series' object has no attribute 'columns' | Modin version: b8323b5f46e25d257c08f55745c018197f7530f5
Reproducer:
```python
import pandas
import modin.pandas as pd
pandas.Series([1,2,3,4]).to_pickle("test_series.pkl")
pd.read_pickle("test_series.pkl") # <- failed
```
Output:
```python
UserWarning: Ray execution environment not yet initialized. Initializing...
To remove this warning, run the following python code before doing dataframe operations:
import ray
ray.init()
2023-11-23 16:30:13,597 INFO worker.py:1642 -- Started a local Ray instance.
UserWarning: `read_pickle` is not currently supported by PandasOnRay, defaulting to pandas implementation.
Please refer to https://modin.readthedocs.io/en/stable/supported_apis/defaulting_to_pandas.html for explanation.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "...\modin\modin\utils.py", line 485, in wrapped
return func(*params.args, **params.kwargs)
File "...\modin\modin\logging\logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "...\modin\modin\pandas\io.py", line 586, in read_pickle
return DataFrame(query_compiler=FactoryDispatcher.read_pickle(**kwargs))
File "...\modin\modin\core\execution\dispatching\factories\dispatcher.py", line 262, in read_pickle
return cls.get_factory()._read_pickle(**kwargs)
File "...\modin\modin\core\execution\dispatching\factories\factories.py", line 322, in _read_pickle
return cls.io_cls.read_pickle(**kwargs)
File "...\modin\modin\core\io\io.py", line 420, in read_pickle
return cls.from_pandas(
File "...\modin\modin\core\io\io.py", line 84, in from_pandas
return cls.query_compiler_cls.from_pandas(df, cls.frame_cls)
File "...\modin\modin\logging\logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "...\modin\modin\core\storage_formats\pandas\query_compiler.py", line 296, in from_pandas
return cls(data_cls.from_pandas(df))
File "...\modin\modin\logging\logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "...\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 4003, in from_pandas
new_columns = df.columns
File "...\Miniconda3\envs\modin\lib\site-packages\pandas\core\generic.py", line 6204, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'columns'
``` | open | 2023-11-23T15:33:54Z | 2023-11-23T15:33:54Z | https://github.com/modin-project/modin/issues/6766 | [
"bug 🦗",
"pandas.io",
"P1"
] | anmyachev | 0 |
pytorch/vision | machine-learning | 8,118 | missing labels in FER2013 test data | ### 🐛 Describe the bug
The file **test.csv** has no label column, so the labels in the test split all have value None:
```
from torchvision.datasets import FER2013
dat = FER2013(root='./', split='test')
print(dat[0][1])
```
Adding labels to the file raises a RuntimeError, presumably because of a resulting different md5 hash. The code above assumes the data has been downloaded from kaggle, as described in the [source code](https://github.com/pytorch/vision/blob/main/torchvision/datasets/fer2013.py).
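To illustrate the missing column (a sketch of the file shape, not torchvision code): the Kaggle `test.csv` ships with only a `pixels` column, so a parsed row has no `emotion` value to return.

```python
import csv
import io

# Shape of the distributed test.csv (train.csv has "emotion,pixels" instead).
test_csv = "pixels\n70 80 82 72 58\n"
row = next(csv.DictReader(io.StringIO(test_csv)))
label = row.get("emotion")
print(label)  # None, matching dat[0][1]
```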
### Versions
PyTorch version: 2.1.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.0-26-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 Ti
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
Stepping: 2
CPU MHz: 1944.273
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.1.1
[pip3] torchaudio==2.1.1
[pip3] torchvision==0.16.1
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.0 py311h08b1b3b_0
[conda] numpy-base 1.26.0 py311hf175353_0
[conda] pytorch 2.1.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.1.1 py311_cu118 pytorch
[conda] torchtriton 2.1.0 py311 pytorch
[conda] torchvision 0.16.1 py311_cu118 pytorch
cc @pmeier | closed | 2023-11-15T09:01:24Z | 2024-06-04T10:21:51Z | https://github.com/pytorch/vision/issues/8118 | [
"enhancement",
"help wanted",
"module: datasets"
] | dtafler | 8 |
aimhubio/aim | tensorflow | 2,347 | Bring all history w/o sampling by cli convert wandb | ## Proposed refactoring or deprecation
(Not sure if this is the right category; maybe a feature suggestion instead?)
In `aim/cli/convert/processors/wandb.py`, https://github.com/aimhubio/aim/blob/8441a58ac1e39b8cd5a5b802794c9c944effdfd4/aim/cli/convert/processors/wandb.py#L45
we have been fetching the **SAMPLED** history of existing wandb metric logs.
I would like to carefully propose fetching **ALL** of the history, without sampling.
It seems easy to implement with a small modification: use `wandb_run.scan_history()` rather than `wandb_run.history()`.
The latter has a default argument `samples=500`, so roughly 500 records are returned. ([ref](https://github.com/wandb/wandb/blob/e85ba97972308091d22f34e275cd58fea4b774dc/wandb/apis/public.py#L2027))
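To make the difference concrete, a toy sketch (the wandb method behavior modeled here is an assumption based on its public docs):

```python
# 1200 logged rows standing in for a run's metric history.
rows = [{"step": i} for i in range(1200)]

def history(samples=500):        # ~ wandb_run.history(): down-sampled
    stride = max(1, len(rows) // samples)
    return rows[::stride]

def scan_history():              # ~ wandb_run.scan_history(): every row
    yield from rows

full = list(scan_history())
sampled = history()
print(len(full), len(sampled))   # 1200 600
print({"step": 1} in sampled)    # False: sampling drops rows entirely
```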
### Motivation
In #2067, an awesome feature had been added: wandb to aim log converter.
But I feel lossless log transfers would be much preferable to the implicit sampling.
And it seems pretty easy to implement by replacing a single line of code.
### Pitch
While [`wandb_run.history()`](https://docs.wandb.ai/ref/python/public-api/run#history) gives *simpler and faster* results, it is sampled, so some information is eventually lost.
I propose a lossless fetch, which is easy to implement as well.
### Additional context
I already have a remote branch with fix commits, but after checking the contributing guidelines, I decided to come here first.
---
By the way, I have a remaining question:
- I will have to add `import pandas` in `aim/cli/convert/processors/wandb.py`, but there is no `pandas` requirement in this project yet.
  - I found that it requires the `pandas` package anyway when I simply run the commands below:
```shell
pip install aim wandb
aim convert wandb --entity ... --project ...
# Error: "Unable to load pandas, ..." (from wandb api, not from aim, though)
pip install pandas
# Now it's working
```
- By `wandb` itself: https://github.com/wandb/wandb/blob/e85ba97972308091d22f34e275cd58fea4b774dc/wandb/apis/public.py#L2058-L2063
  - It seems `wandb` itself also does not list `pandas` in its requirements
- So, should `pandas` be gracefully added to this project's requirements? | closed | 2022-11-15T06:40:13Z | 2022-12-06T19:47:06Z | https://github.com/aimhubio/aim/issues/2347 | [
"area / integrations",
"type / code-health",
"phase / shipped"
] | hjoonjang | 3 |
home-assistant/core | asyncio | 140,888 | 100% CPU usage after updating | ### The problem
After updating my HA instance, I noticed the fans were cranked up to max. I SSH-ed in, looked at `top`, and saw the following:
9735 root 20 0 3929.6m 225.3m 200.0 1.4 0:11.90 S /usr/bin/java -Xmx1g -jar /app/api/app.jar
9765 root 20 0 3915.5m 172.9m 194.7 1.1 0:04.72 S /usr/bin/java -Xmx1g -jar /app/admin/app.jar
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
###
```yaml
```
### Anything in the logs that might be useful for us?
```txt
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
9735 root 20 0 3929.6m 225.3m 200.0 1.4 0:11.90 S /usr/bin/java -Xmx1g -jar /app/api/app.jar
9765 root 20 0 3915.5m 172.9m 194.7 1.1 0:04.72 S /usr/bin/java -Xmx1g -jar /app/admin/app.jar
```
### Additional information
_No response_ | closed | 2025-03-18T18:04:07Z | 2025-03-18T19:41:56Z | https://github.com/home-assistant/core/issues/140888 | [] | fearlesschicken | 1 |
keras-team/keras | tensorflow | 20,748 | Inconsistent validation data handling in Keras 3 for Language Model fine-tuning | ## Issue Description
When fine-tuning language models in Keras 3, there are inconsistencies in how validation data should be provided. The documentation suggests validation_data should be in (x, y) format, but the actual requirements are unclear and the behavior differs between training and validation phases.
## Current Behavior & Problems
### Issue 1: Raw text arrays are not accepted for validation
```python
train_texts = ["text1", "text2", ...]
val_texts = ["val1", "val2", ...]
# This fails with ValueError:
model.fit(
train_texts,
validation_data=val_texts
)
# Error:
ValueError: Data is expected to be in format `x`, `(x,)`, `(x, y)`, or `(x, y, sample_weight)`, found: ("text1", "text2", ...)
```
### Issue 2: Pre-tokenized validation fails
```python
# Trying to provide tokenized data:
val_tokenized = [tokenizer(text) for text in val_texts]
val_padded = np.array([pad_sequence(seq, max_len) for seq in val_tokenized])
val_input = val_padded[:, :-1]
val_target = val_padded[:, 1:]
model.fit(
train_texts,
validation_data=(val_input, val_target)
)
# Error:
TypeError: Input 'input' of 'SentencepieceTokenizeOp' Op has type int64 that does not match expected type of string.
```
The error suggests the tokenizer is being applied again to already-tokenized data. I understand there is a `preprocessor=None` parameter, but I don't want to preprocess the training data manually.
### Working Solution (But Needs Documentation)
The working approach is to provide prompt-completion pairs:
```python
# Prepare validation data as prompts and expected outputs
val_inputs = [format_prompt(text) for text in val_input_texts]
val_outputs = [format_output(text) for text in val_output_texts]
val_inputs = np.array(val_inputs)
val_outputs = np.array(val_outputs)
model.fit(
train_texts,
validation_data=(val_inputs, val_outputs)
)
```
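For reference, the input/target shift attempted in Issue 2 is the standard next-token language-modelling setup; a plain-Python sketch with made-up token ids:

```python
padded = [101, 7, 42, 9, 102, 0]   # a tokenized, padded sequence (made-up ids)

inputs = padded[:-1]    # each position sees tokens up to step t
targets = padded[1:]    # ...and must predict the token at step t + 1

assert len(inputs) == len(targets)
print(inputs)   # [101, 7, 42, 9, 102]
print(targets)  # [7, 42, 9, 102, 0]
```

The open question in this issue is whether Keras performs this shift internally for validation data or expects it to be done manually, as above.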
### Expected Behavior
1. The documentation should clearly state that validation data for language models should be provided as prompt-completion pairs
2. The validation data handling should be consistent with how training data is processed
3. It should be clear whether token shifting is handled internally or needs to be done manually
### Environment
* Keras Version: 3.x
* Python Version: 3.10
* Model: Gemma LLM (but likely affects other LLMs too)
### Additional Context
While there is a working solution using prompt-completion pairs, this differs from traditional language model training where each token predicts the next token. The documentation should clarify this architectural choice and explain the proper way to provide validation data. | closed | 2025-01-10T14:52:19Z | 2025-02-14T02:01:32Z | https://github.com/keras-team/keras/issues/20748 | [
"type:support",
"stat:awaiting response from contributor",
"stale",
"Gemma"
] | che-shr-cat | 4 |
seleniumbase/SeleniumBase | web-scraping | 2,790 | Time Zone Based on IP ,Time Zone,Time From IP,Time From Javascript | IP based time zone, local time zone, IP based time, local time, WebGL, WebGL Report, WebGPU Report,
=================================================================================
Some issues were found: these parameters cannot be set on your own. Some of them, such as WebGL, WebGL Report, and WebGPU Report, are fixed and cannot be randomly generated each time the target website is visited.


Also, in UC mode, the XPath UC selector cannot be used.

| closed | 2024-05-20T10:08:12Z | 2024-05-21T14:41:24Z | https://github.com/seleniumbase/SeleniumBase/issues/2790 | [
"duplicate",
"external",
"UC Mode / CDP Mode"
] | xipeng5 | 6 |
HIT-SCIR/ltp | nlp | 706 | PyTorch and LTP installation problem | **
> pip install -i https://pypi.tuna.tsinghua.edu.cn/simple torch transformers
**
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: torch in e:\anaconda\lib\site-packages (2.3.1+cu118)
Requirement already satisfied: transformers in e:\anaconda\lib\site-packages (4.42.3)
Requirement already satisfied: filelock in e:\anaconda\lib\site-packages (from torch) (3.13.1)
Requirement already satisfied: typing-extensions>=4.8.0 in e:\anaconda\lib\site-packages (from torch) (4.11.0)
Requirement already satisfied: sympy in e:\anaconda\lib\site-packages (from torch) (1.12)
Requirement already satisfied: networkx in e:\anaconda\lib\site-packages (from torch) (3.2.1)
Requirement already satisfied: jinja2 in e:\anaconda\lib\site-packages (from torch) (3.1.4)
Requirement already satisfied: fsspec in e:\anaconda\lib\site-packages (from torch) (2024.2.0)
Requirement already satisfied: mkl<=2021.4.0,>=2021.1.1 in e:\anaconda\lib\site-packages (from torch) (2021.4.0)
Requirement already satisfied: huggingface-hub<1.0,>=0.23.2 in e:\anaconda\lib\site-packages (from transformers) (0.23.4)
Requirement already satisfied: numpy<2.0,>=1.17 in e:\anaconda\lib\site-packages (from transformers) (1.26.3)
Requirement already satisfied: packaging>=20.0 in e:\anaconda\lib\site-packages (from transformers) (23.2)
Requirement already satisfied: pyyaml>=5.1 in e:\anaconda\lib\site-packages (from transformers) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in e:\anaconda\lib\site-packages (from transformers) (2024.5.15)
Requirement already satisfied: requests in e:\anaconda\lib\site-packages (from transformers) (2.32.2)
Requirement already satisfied: safetensors>=0.4.1 in e:\anaconda\lib\site-packages (from transformers) (0.4.3)
Requirement already satisfied: tokenizers<0.20,>=0.19 in e:\anaconda\lib\site-packages (from transformers) (0.19.1)
Requirement already satisfied: tqdm>=4.27 in e:\anaconda\lib\site-packages (from transformers) (4.66.4)
Requirement already satisfied: intel-openmp==2021.* in e:\anaconda\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch) (2021.4.0)
Requirement already satisfied: tbb==2021.* in e:\anaconda\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch) (2021.11.0)
Requirement already satisfied: colorama in e:\anaconda\lib\site-packages (from tqdm>=4.27->transformers) (0.4.6)
Requirement already satisfied: MarkupSafe>=2.0 in e:\anaconda\lib\site-packages (from jinja2->torch) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\anaconda\lib\site-packages (from requests->transformers) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in e:\anaconda\lib\site-packages (from requests->transformers) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in e:\anaconda\lib\site-packages (from requests->transformers) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in e:\anaconda\lib\site-packages (from requests->transformers) (2024.6.2)
Requirement already satisfied: mpmath>=0.19 in e:\anaconda\lib\site-packages (from sympy->torch) (1.3.0)
**
> C:\Users>pip install -i https://pypi.tuna.tsinghua.edu.cn/simple ltp ltp-core ltp-extension
**
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: ltp in e:\anaconda\lib\site-packages (4.2.14)
Requirement already satisfied: ltp-core in e:\anaconda\lib\site-packages (0.1.4)
Requirement already satisfied: ltp-extension in e:\anaconda\lib\site-packages (0.1.13)
Requirement already satisfied: huggingface-hub>=0.8.0 in e:\anaconda\lib\site-packages (from ltp) (0.23.4)
Requirement already satisfied: torch>=1.6.0 in e:\anaconda\lib\site-packages (from ltp-core) (2.3.1+cu118)
Requirement already satisfied: transformers>=4.0.0 in e:\anaconda\lib\site-packages (from ltp-core) (4.42.3)
Requirement already satisfied: filelock in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (3.13.1)
Requirement already satisfied: fsspec>=2023.5.0 in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (2024.2.0)
Requirement already satisfied: packaging>=20.9 in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (23.2)
Requirement already satisfied: pyyaml>=5.1 in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (6.0.1)
Requirement already satisfied: requests in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (2.32.2)
Requirement already satisfied: tqdm>=4.42.1 in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (4.66.4)
Requirement already satisfied: typing-extensions>=3.7.4.3 in e:\anaconda\lib\site-packages (from huggingface-hub>=0.8.0->ltp) (4.11.0)
Requirement already satisfied: sympy in e:\anaconda\lib\site-packages (from torch>=1.6.0->ltp-core) (1.12)
Requirement already satisfied: networkx in e:\anaconda\lib\site-packages (from torch>=1.6.0->ltp-core) (3.2.1)
Requirement already satisfied: jinja2 in e:\anaconda\lib\site-packages (from torch>=1.6.0->ltp-core) (3.1.4)
Requirement already satisfied: mkl<=2021.4.0,>=2021.1.1 in e:\anaconda\lib\site-packages (from torch>=1.6.0->ltp-core) (2021.4.0)
Requirement already satisfied: numpy<2.0,>=1.17 in e:\anaconda\lib\site-packages (from transformers>=4.0.0->ltp-core) (1.26.3)
Requirement already satisfied: regex!=2019.12.17 in e:\anaconda\lib\site-packages (from transformers>=4.0.0->ltp-core) (2024.5.15)
Requirement already satisfied: safetensors>=0.4.1 in e:\anaconda\lib\site-packages (from transformers>=4.0.0->ltp-core) (0.4.3)
Requirement already satisfied: tokenizers<0.20,>=0.19 in e:\anaconda\lib\site-packages (from transformers>=4.0.0->ltp-core) (0.19.1)
Requirement already satisfied: intel-openmp==2021.* in e:\anaconda\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch>=1.6.0->ltp-core) (2021.4.0)
Requirement already satisfied: tbb==2021.* in e:\anaconda\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch>=1.6.0->ltp-core) (2021.11.0)
Requirement already satisfied: colorama in e:\anaconda\lib\site-packages (from tqdm>=4.42.1->huggingface-hub>=0.8.0->ltp) (0.4.6)
Requirement already satisfied: MarkupSafe>=2.0 in e:\anaconda\lib\site-packages (from jinja2->torch>=1.6.0->ltp-core) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\anaconda\lib\site-packages (from requests->huggingface-hub>=0.8.0->ltp) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in e:\anaconda\lib\site-packages (from requests->huggingface-hub>=0.8.0->ltp) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in e:\anaconda\lib\site-packages (from requests->huggingface-hub>=0.8.0->ltp) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in e:\anaconda\lib\site-packages (from requests->huggingface-hub>=0.8.0->ltp) (2024.6.2)
Requirement already satisfied: mpmath>=0.19 in e:\anaconda\lib\site-packages (from sympy->torch>=1.6.0->ltp-core) (1.3.0)
However, PyCharm shows that there is no torch package, and the same goes for ltp. See the images below.


| closed | 2024-07-06T11:32:46Z | 2024-07-08T06:52:06Z | https://github.com/HIT-SCIR/ltp/issues/706 | [] | gbl7212 | 0 |
zihangdai/xlnet | nlp | 36 | Default parameters for base model | Hi @kimiyoung and @zihangdai (and all others from the xlnet team),
thanks for sharing the implementation and pre-trained model(s) :heart:
I have some questions regarding pre-training XLNet:
* Could you provide the default parameters for the `train.py` and `train_gpu.py` script when training a **base** XLNet model? The current readme only shows parameters for a large model.
* Do you think a single TPU v3 is sufficient to pre-train a base model?
Thanks so much,
Stefan | open | 2019-06-23T22:28:30Z | 2019-06-23T22:37:52Z | https://github.com/zihangdai/xlnet/issues/36 | [] | stefan-it | 0 |
tqdm/tqdm | pandas | 1,457 | Appearance in VSCode using a dark theme. | Hi,
I'm really not sure why this bothers me but nevertheless it does.

This is a fresh Conda environment. I tried to follow the workaround from [this](https://stackoverflow.com/questions/71534901/make-tqdm-bar-dark-in-vscode-jupyter-notebook) Stack Overflow post. I managed to change the background color, but not the text, so it didn't help because you then can't read the text.
Is there a way to change the appearance of tqdm to fit in with a dark theme in VSCode?
Thanks
| open | 2023-03-27T11:57:23Z | 2024-09-05T05:22:02Z | https://github.com/tqdm/tqdm/issues/1457 | [] | Chris888-CMD | 2 |
joeyespo/grip | flask | 271 | Internal server error | I issue a command 'grip README.md' and I get a response
```
* Running on http://localhost:6419
```
When I use Chrome to go to that URL I get a error 500. Internal server error.
I look at the terminal window where I invoked the 'grip' command and I see a lot of output but most significantly I see:
```
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte
```
From the output this appears to be in python2.7/codecs.py self._buffer_decode method.
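For context, a first byte of `0xff` is never valid UTF-8, but it is typical of a UTF-16 little-endian byte-order mark, which suggests the README was saved in a non-UTF-8 encoding. A minimal sketch of the failure (the sample bytes below are made up):

```python
data = b"\xff\xfeH\x00i\x00"  # UTF-16-LE BOM followed by "Hi"

try:
    data.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0xff in position 0 ...

print(data.decode("utf-16"))
```

Re-saving the README as UTF-8 (which is what re-saving via an editor such as Paint or Notepad often does implicitly) would sidestep the decode error.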
Thank you. | open | 2018-05-24T00:54:57Z | 2018-06-02T00:40:32Z | https://github.com/joeyespo/grip/issues/271 | [
"bug"
] | KevinBurton | 0 |
itamarst/eliot | numpy | 162 | types documentation gives broken Field examples | https://github.com/ClusterHQ/eliot/blob/master/docs/source/types.rst#message-types says `USERNAME = Field.for_types("username", [str])` (and gives several other examples of `Field.for_types` with two arguments). The method requires a third argument, the field description, so the examples don't work as-is.
| open | 2015-05-08T14:25:39Z | 2018-09-22T20:59:17Z | https://github.com/itamarst/eliot/issues/162 | [
"documentation"
] | exarkun | 0 |
matplotlib/matplotlib | data-science | 29,234 | TODO spotted: Separating xy2 and slope | I was reading through the `_axes.pyi` file when I spotted a [TODO comment](https://github.com/matplotlib/matplotlib/blame/84fbae8eea3bb791ae9175dbe77bf5dee3368275/lib/matplotlib/axes/_axes.pyi#L142)
The aim was to separate the `xy2` and `slope` declarations.
Thought I should point it out | open | 2024-12-05T07:14:55Z | 2024-12-05T07:58:31Z | https://github.com/matplotlib/matplotlib/issues/29234 | [] | adityaraute | 3 |
QingdaoU/OnlineJudge | django | 19 | There is a typo in a command in the test deployment instructions | http://qingdaou.github.io/OnlineJudge/easy_install.html
cp OnlineJudge/oj/custom_setting.example.py OnlineJudge/oj/custom_setting.py
In this command, both occurrences of `setting` in the filenames should be `settings`.
| closed | 2016-03-01T06:25:55Z | 2016-03-01T06:29:35Z | https://github.com/QingdaoU/OnlineJudge/issues/19 | [] | zzh1996 | 1 |
apify/crawlee-python | web-scraping | 908 | Request handler should be able to timeout blocking sync code in user defined handler | Current purely async implementation of request handler is not capable of triggering timeout for blocking sync code. Such code is created by users and we have no control over it. So we can't expect only async blocking code.
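For context, a minimal stdlib sketch (illustrative only, not Crawlee's internals) of why a purely async timeout cannot fire while synchronous code blocks the event loop:

```python
import asyncio
import time

async def blocking_handler() -> None:
    time.sleep(0.5)   # blocking call: never yields control to the event loop

async def run_with_timeout() -> float:
    start = time.perf_counter()
    try:
        # The 0.05s timeout cannot fire: wait_for needs the loop to run,
        # and time.sleep never gives it a chance.
        await asyncio.wait_for(blocking_handler(), timeout=0.05)
    except asyncio.TimeoutError:
        pass
    return time.perf_counter() - start

elapsed = asyncio.run(run_with_timeout())
print(f"returned after {elapsed:.2f}s despite a 0.05s timeout")
```

Offloading the blocking body to a worker thread (e.g. via `asyncio.to_thread`) would let the timeout fire, although the thread itself still runs to completion in the background.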
Following simple user defined handler can't currently trigger timeout even if it should.
```
@crawler.router.default_handler
async def handler(context: BasicCrawlingContext) -> None:
time.sleep(some_big_time)
``` | open | 2025-01-15T11:00:09Z | 2025-01-20T10:22:00Z | https://github.com/apify/crawlee-python/issues/908 | [
"bug",
"t-tooling"
] | Pijukatel | 1 |
yzhao062/pyod | data-science | 278 | No verbose mode for MO_GAAL and SO_GAAL | MO_GAAL and SO_GAAL print a lot during their work: which epoch is being trained, which one is being tested, and so on. While this is useful for some developers, for people who simply consume the functionality the only effect is that their logs explode.
I would like to request to add "verbose" parameter, similarly to many other models.
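In the meantime, a possible user-side stopgap (a sketch, assuming the chatter goes to stdout; `NoisyModel` below is a stand-in so the snippet is self-contained):

```python
import contextlib
import io

def fit_quietly(model, X):
    """Run model.fit(X) while discarding anything printed to stdout."""
    with contextlib.redirect_stdout(io.StringIO()):
        return model.fit(X)

class NoisyModel:                 # stand-in for MO_GAAL / SO_GAAL
    def fit(self, X):
        print("Epoch 1 of 10")    # simulated training chatter
        return self

fit_quietly(NoisyModel(), [[0.0]])
print("done")                     # only this reaches the console
```

Output coming from the underlying deep-learning backend rather than from `print()` may need its own verbosity settings and would not be silenced this way.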
While the implementation is trivial for some diagnostic output (just look for print() statements in the code), the rest apparently comes from packages PYOD uses, and propagation of verbose=False mode to them might be more challenging. | open | 2021-02-02T15:37:03Z | 2021-04-12T15:34:25Z | https://github.com/yzhao062/pyod/issues/278 | [] | GBR-613 | 1 |
fastapi-users/fastapi-users | fastapi | 302 | Weird response from /auth/register | When making a request to /auth/register through curl or python requests I have no problem registering a user. When I run it through Jquery Ajax I get back the error message:
> Expecting value: line 1 column 1 (char 0)
I looked through some of the code to find out why I would get this back without results.
Here is my Ajax call:
```javascript
$.ajax({
    "url": "/auth/register",
    "method": "post",
    "headers": {
        "accept": "application/json",
        "Content-Type": "application/json"
    },
    "data": {
        "email": 'testpassword12@gmail.com',
        "password": 'somestring',
    },
    success: function(resp) {
        console.log(resp);
    },
    error: function(err) {
        console.log(err);
    }
});
```
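A possible cause (an assumption, not confirmed in this thread): jQuery form-encodes a plain `data` object, even when the `Content-Type` header says `application/json`, so the server's JSON parser fails at the very first character. A sketch of serializing the body explicitly:

```javascript
// Only the payload construction below is exercised here; the request call
// itself is shown in the comment and is illustrative.
const payload = JSON.stringify({
  email: "testpassword12@gmail.com",
  password: "somestring",
});

// The request would then pass the pre-serialized string, e.g.:
// $.ajax({ url: "/auth/register", method: "post",
//          contentType: "application/json", data: payload, ... });
console.log(payload);
```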
I didn't have any issues with login and ajax. How can I fix this, and is there a way to make it more obvious how to fix it if someone gets this issue too? | closed | 2020-08-17T15:37:52Z | 2020-09-09T13:11:01Z | https://github.com/fastapi-users/fastapi-users/issues/302 | [
"question",
"stale"
] | Houjio | 4 |
clovaai/donut | nlp | 250 | dataset script missing error | Hi,
I'm using this project with my own custom dataset. I created sample data in the dataset folder as specified in the README.md, with a folder for each split (test/validation/train) together with the metadata.
Then I ran this command:
python train.py --config config/train_cord.yaml --pretrained_model_name_or_path "naver-clova-ix/donut-base" --dataset_name_or_paths 'C:\ocr\2\donut\dataset' --exp_version "test_experiment"
But I'm getting this error:
```
File "C:\ocr\2\donut\train.py", line 176, in <module>
train(config)
File "C:\ocr\2\donut\train.py", line 104, in train
DonutDataset(
File "C:\ocr\2\donut\donut\util.py", line 64, in __init__
self.dataset = load_dataset(dataset_name_or_path, split=self.split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\thaba\miniconda3\Lib\site-packages\datasets\load.py", line 2129, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\thaba\miniconda3\Lib\site-packages\datasets\load.py", line 1815, in load_dataset_builder
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\thaba\miniconda3\Lib\site-packages\datasets\load.py", line 1508, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at C:\ocr\2\donut\C\C.py or any data file in the same directory. Couldn't find 'C' on the Hugging Face Hub either: FileNotFoundError: Dataset 'C' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
Anyone know how to resolve this? | open | 2023-09-14T12:06:50Z | 2023-11-04T17:52:35Z | https://github.com/clovaai/donut/issues/250 | [] | segaranp | 1 |
ets-labs/python-dependency-injector | asyncio | 166 | Update Cython to 0.27 | There was a new release of Cython (0.27), so we need to re-compile current code on it, re-run testing on all currently supported Python versions. | closed | 2017-10-03T21:52:07Z | 2017-10-10T22:13:35Z | https://github.com/ets-labs/python-dependency-injector/issues/166 | [
"enhancement"
] | rmk135 | 0 |
vitalik/django-ninja | django | 1,177 | Some way to deal with auth | - | closed | 2024-05-28T17:01:14Z | 2024-05-28T17:49:28Z | https://github.com/vitalik/django-ninja/issues/1177 | [] | Rey092 | 0 |
PaddlePaddle/models | nlp | 4,955 | Problem with infer.py in DCGAN | When running infer.py in DCGAN,

the data directory only contains mnist, so why is it still loading celeba?


| open | 2020-11-16T12:06:59Z | 2024-02-26T05:09:54Z | https://github.com/PaddlePaddle/models/issues/4955 | [] | jackie8310 | 2 |
OpenInterpreter/open-interpreter | python | 717 | Programmatic chat initial prompt | ### Is your feature request related to a problem? Please describe.
My feature is related to an initial prompt that would set up Interpreter's behavior
### Describe the solution you'd like
Can `interpreter.chat()` have some kind of initial setup for "role", "content" (basic rules of communication)?
Just like for ChatGPT bot.
I need to limit it for a conversation regarding narrow topics.
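For reference, the pattern being requested (seeding the conversation with a system message before any user turns) looks like this in generic chat-API terms; the names below are made up and are not Open Interpreter's actual API:

```python
# Illustrative only: how an initial "system" instruction is usually injected
# before user turns in chat-style APIs.
conversation = [
    {"role": "system", "content": "Only answer questions about astronomy."},
]

def chat(user_message: str) -> list:
    conversation.append({"role": "user", "content": user_message})
    return conversation

history = chat("What is a pulsar?")
print(history[0]["role"])  # the behavioural ground rules come first
```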
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2023-10-30T12:53:55Z | 2023-10-30T17:25:37Z | https://github.com/OpenInterpreter/open-interpreter/issues/717 | [] | SuperMaximus1984 | 2 |
saulpw/visidata | pandas | 1,911 | (Unnecessary) hidden dependency on setuptools; debian package broken | **Small description**
When installing visidata from the package-manager without having python3-setuptools installed, the following error appears when starting visidata:
```
Traceback (most recent call last):
File "/usr/bin/vd", line 3, in <module>
import visidata.main
File "/usr/lib/python3/dist-packages/visidata/__init__.py", line 102, in <module>
import visidata.main
File "/usr/lib/python3/dist-packages/visidata/main.py", line 19, in <module>
from pkg_resources import resource_filename
ModuleNotFoundError: No module named 'pkg_resources'
```
**Expected result**
visidata starts normally
**Additional context**
Python 3.11.2
Visidata 2.11
Debian Sid ARM64 on Crostini (Chromebook)
The dependency is only invoked in main.py and help.py, mostly for displaying the provided help file. I don't think this justifies importing the whole setuptools universe, as the function is only used to find files in the installation directory.
**Possible fixes**
This has previously been fixed for Arch by adding setuptools to the dependencies.
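A minimal stdlib sketch of locating files relative to the installed package without pkg_resources (names are illustrative; this is not VisiData's code):

```python
from pathlib import Path

def resource_path(package_file: str, name: str) -> Path:
    """Locate `name` next to the module that `package_file` belongs to,
    without importing pkg_resources/setuptools."""
    return Path(package_file).resolve().parent / name

# Inside visidata/main.py something like this could replace resource_filename:
# help_path = resource_path(__file__, "some_help_file")
print(resource_path("/usr/lib/python3/dist-packages/visidata/main.py", "some_help_file"))
```

On Python 3.9+, `importlib.resources.files("visidata")` is the stdlib-blessed equivalent for the same job.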
Long term, the fix is probably to stop relying on resource_filename; a possible fallback for when setuptools is not installed is something like https://stackoverflow.com/questions/5137497/find-the-current-directory-and-files-directory to figure out the directory the files are in and invoke the right help files. | closed | 2023-06-05T13:29:50Z | 2023-07-31T03:59:47Z | https://github.com/saulpw/visidata/issues/1911 | [
"bug",
"fixed"
] | LaPingvino | 1 |
explosion/spaCy | machine-learning | 13,595 | Document good practices for caching spaCy models in CI setup | I use spaCy in a Jupyter book which currently downloads multiple spaCy models on every CI run, which wastes time and bandwidth.
The best solution would be to download and cache the models once, and get them restored on subsequent CI runs.
Are there any bits of documentation covering this concern somewhere? I could not find any in the official documentation.
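One common pattern (a sketch assuming GitHub Actions and models installed via pip; not official spaCy guidance) is to cache pip's download cache keyed on the pinned requirements:

```yaml
# Hypothetical CI fragment: model wheels listed in requirements.txt are
# re-downloaded only when the pins change.
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
- run: pip install -r requirements.txt
```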
Cheers. | open | 2024-08-13T15:19:09Z | 2024-08-13T15:19:09Z | https://github.com/explosion/spaCy/issues/13595 | [] | ghisvail | 0 |
wsvincent/awesome-django | django | 239 | Awesome-django | Django awesome | closed | 2023-11-29T05:41:46Z | 2023-11-29T13:40:32Z | https://github.com/wsvincent/awesome-django/issues/239 | [] | Jaewook-github | 0 |
dfm/corner.py | data-visualization | 192 | module 'corner' has no attribute 'corner' | I am having trouble getting corner to run.
I've installed it using:
`python -m pip install corner`
I am trying to run the code from the getting started section of the docs page.
I've tried importing as:
`from corner import corner` or
`import corner as cor`
I keep getting the error:
`AttributeError: module 'corner' has no attribute 'corner'`
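A common cause of this symptom (an assumption, not confirmed in this thread) is a local file named `corner.py` shadowing the installed package. Checking where Python would import the module from reveals it; `email` is used as a stand-in below so the snippet runs anywhere:

```python
import importlib.util

# Diagnostic sketch: where would Python import the module from?
spec = importlib.util.find_spec("email")
print(spec.origin)
# For `corner`, an origin ending in your own corner.py (rather than a path
# under site-packages) means the local file wins and should be renamed.
```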
Any help would be much appreciated. | closed | 2022-02-10T16:16:36Z | 2022-02-11T15:19:56Z | https://github.com/dfm/corner.py/issues/192 | [] | FluidTop3 | 3 |
chatanywhere/GPT_API_free | api | 308 | The free 4o-mini reports the key as invalid once the conversation gets slightly long; starting a new conversation fixes it | At first I thought the service was down, but then I found it is this problem: the error appears as soon as the conversation gets a bit long, while 3.5-turbo works fine. What could be the cause? | closed | 2024-10-23T01:50:57Z | 2024-10-27T18:37:13Z | https://github.com/chatanywhere/GPT_API_free/issues/308 | [] | sofs2005 | 1 |
deepfakes/faceswap | machine-learning | 542 | Possible AI contributor here with ethical concerns | closed | 2018-12-07T21:49:21Z | 2018-12-08T00:21:49Z | https://github.com/deepfakes/faceswap/issues/542 | [] | AndrasEros | 0 | |
charlesq34/pointnet | tensorflow | 231 | Error when training sem_seg model with my own data | I have created hdf5 files and when I ran train.py, it displayed this:
/home/junzheshen/pointnet/JS PointNet/PointNet 1000 ptsm2 without error/provider.py:91: H5pyDeprecationWarning: The default file mode will change to 'r' (read-only) in h5py 3.0. To suppress this warning, pass the mode you need to h5py.File(), or set the global default h5.get_config().default_file_mode, or set the environment variable H5PY_DEFAULT_READONLY=1. Available modes are: 'r', 'r+', 'w', 'w-'/'x', 'a'. See the docs for details.
f = h5py.File(h5_filename)
Traceback (most recent call last):
File "train.py", line 70, in <module>
data_batch, label_batch = provider.loadDataFile(h5_filename)
File "/home/junzheshen/pointnet/JS PointNet/PointNet 1000 ptsm2 without error/provider.py", line 97, in loadDataFile
return load_h5(filename)
File "/home/junzheshen/pointnet/JS PointNet/PointNet 1000 ptsm2 without error/provider.py", line 91, in load_h5
f = h5py.File(h5_filename)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 408, in __init__
swmr=swmr)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 199, in make_fid
fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 88, in h5py.h5f.open
ValueError: Invalid file name (invalid file name)
| open | 2020-02-28T16:31:21Z | 2020-06-14T03:32:59Z | https://github.com/charlesq34/pointnet/issues/231 | [] | shinguncher | 3 |
strawberry-graphql/strawberry | asyncio | 3,171 | Mypy doesn't see attributes on @strawberry.type | Mypy throws error: 'Unexpected keyword argument...' on instance creation for @strawberry.type.
## Describe the Bug
minimal example scratch_9.py:
```python
import strawberry
@strawberry.type
class Dog:
name: str
reksio = Dog(
name="Reksio",
)
```
`mypy scratch_9.py`
output:
...scratch_9.py:9: error: Unexpected keyword argument "name" for "Dog" [call-arg]
## System Information
- Operating system: MacOs Ventura
- Strawberry version: using 0.211.0 (any 0.185.0+)
- mypy version: using 1.6.1 (any 1.5.0+)
- mypy-extensions==1.0.0
- setup:
[mypy]
plugins = strawberry.ext.mypy_plugin
**follow_imports = skip** _(updated 10.11.2024)_
| closed | 2023-10-25T12:54:40Z | 2025-03-20T15:56:26Z | https://github.com/strawberry-graphql/strawberry/issues/3171 | [
"bug"
] | ziehlke | 4 |
matplotlib/matplotlib | data-science | 29,760 | [Bug]: Changing the array returned by get_data affects future calls to get_data but does not change the plot on canvas.draw. | ### Bug summary
I use get_ydata for simplicity. I call it twice, with orig=True and orig=False respectively, storing the returned arrays as yT and yF, which are then modified. The drawn position stays the original one, which seems to be a third entity, maybe cached.
Using set_ydata moves the marker, but does not touch the arrays yT and yF.
If not a bug, it is a usability issue: my application calls set_data conditionally after changing the arrays, and when it does not, the result of future calls to get_data did not reflect the marker position, which took some time to debug.
### Code for reproduction
```Python
import numpy as np
from matplotlib.pyplot import subplots
fig, ax = subplots()
h = ax.plot(1., 1., 'bo')[0]
ax.set_ylim(-2, 4)
fig.show()
yT = h.get_ydata(orig=True)
yF = h.get_ydata(orig=False)
print(yT, yF) # [1.] [1.]
yT += 1
yF -= 1
print(h.get_ydata(orig=True), # [2.], instead of the original value.
h.get_ydata(orig=False)) # [0.], as expected.
fig.canvas.draw() # Still at the truly original position.
# 1/0 # "breakpoint" to verify the above comment.
if False:
h.set_ydata((3,))
print(yT, yF) # set_ydata had no effect on yF, yT, still [2.] [0.]
fig.canvas.draw() # but on the marker.
print(h.get_ydata(orig=True), # (3,), the tuple passed to set_data.
h.get_ydata(orig=False)) # [3.], a new array object.
else:
yT = h.get_ydata(orig=True)
yF = h.get_ydata(orig=False)
yT += 1
yF -= 1
print(yT, yF) # [3.] [-1.], while the marker would still be drawn at 1.
```
### Actual outcome
Given as comments in the code.
### Expected outcome
Either `get_data` shall return a copy, which can be changed independently of the artist, or the canvas shall reflect changes to the artist's state.
The documentation of the `orig` parameter is insufficient.
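A possible user-side workaround (a sketch, not an upstream fix) is to copy the returned array before mutating it, so the artist's internal state is never aliased. The mechanism in miniature, without matplotlib:

```python
store = {"y": [1.0]}           # stand-in for the artist's internal array

def get_ydata():
    return store["y"]          # hands out the internal object itself

y = get_ydata()
y[0] += 1                      # silently mutates the "artist"
assert store["y"] == [2.0]

y_safe = list(get_ydata())     # defensive copy, e.g. h.get_ydata().copy()
y_safe[0] -= 5
assert store["y"] == [2.0]     # internal state untouched
print(store["y"])
```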
### Additional information
#24790 links a pull request mentioning caching.
### Operating system
Windows 10
### Matplotlib Version
3.10.0
### Matplotlib Backend
tkagg
### Python version
3.13.1
### Jupyter version
_No response_
### Installation
pip | closed | 2025-03-15T19:18:59Z | 2025-03-19T18:00:05Z | https://github.com/matplotlib/matplotlib/issues/29760 | [] | Rainald62 | 3 |
ageitgey/face_recognition | python | 1,352 | If I send Image as File Object from React Application, it is not able to find the locations | * face_recognition version: 1.3.0
* Python version: 3.9
* Operating System: Windows 10
### Description
If I send Image as File Object from React Application, it is not able to find the locations. But if I save that image again by just opening and saving using paint, it works fine.
| open | 2021-08-06T12:56:37Z | 2021-08-06T12:56:37Z | https://github.com/ageitgey/face_recognition/issues/1352 | [] | mihir2510 | 0 |
nonebot/nonebot2 | fastapi | 3,002 | Plugin: ZXPM Plugin Management | ### PyPI project name
nonebot-plugin-zxpm
### Plugin import package name
nonebot_plugin_zxpm
### Tags
[{"label":"小真寻","color":"#fbe4e4"},{"label":"多平台适配","color":"#ea5252"},{"label":"插件管理","color":"#456df1"}]
### Plugin configuration
```dotenv
zxpm_db_url="sqlite:data/zxpm/db/zxpm.db"
zxpm_notice_info_cd=300
zxpm_ban_reply="才不会给你发消息."
zxpm_ban_level=5
zxpm_switch_level=1
zxpm_admin_default_auth=5
zxpm_font="msyh.ttc"
```
| closed | 2024-10-05T23:27:23Z | 2024-10-08T02:24:42Z | https://github.com/nonebot/nonebot2/issues/3002 | [
"Plugin"
] | HibiKier | 6 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 637 | Toolbox launches but doesn't work | I execute demo_toolbox.py and the toolbox launches. I click Browse and pull in a wav file. The toolbox shows the wav file loaded and a mel spectrogram displays. The program then says "Loading the encoder \encoder\saved_models\pretrained.pt",
but that is where the program just goes to sleep until I exit out. I verified the pretrained.pt file is there, so I don't understand why the program stops. Any help would be greatly appreciated.


Edlaster58@gmail.com | closed | 2021-01-22T21:56:38Z | 2021-01-23T00:36:12Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/637 | [] | bigdog3604 | 4 |
onnx/onnx | scikit-learn | 6,004 | There is an error when converting the LLaVA model; AutoModelForCausalLM does not support it. How should this model be converted to ONNX? | # Ask a Question
### Question
### Further information
```
`The argument `from_transformers` is deprecated, and will be removed in optimum 2.0. Use `export` instead
Framework not specified. Using pt to export the model.
Traceback (most recent call last):
File "/root/llava-v1.5-7b-hf/2onnx2.py", line 8, in <module>
model = ORTModelForCausalLM.from_pretrained(model_id, from_transformers=True, export=True)
File "/root/miniconda3/envs/llava/lib/python3.10/site-packages/optimum/onnxruntime/modeling_ort.py", line 662, in from_pretrained
return super().from_pretrained(
File "/root/miniconda3/envs/llava/lib/python3.10/site-packages/optimum/modeling_base.py", line 399, in from_pretrained
return from_pretrained_method(
File "/root/miniconda3/envs/llava/lib/python3.10/site-packages/optimum/onnxruntime/modeling_decoder.py", line 603, in _from_transformers
main_export(
File "/root/miniconda3/envs/llava/lib/python3.10/site-packages/optimum/exporters/onnx/__main__.py", line 279, in main_export
model = TasksManager.get_model_from_task(
File "/root/miniconda3/envs/llava/lib/python3.10/site-packages/optimum/exporters/tasks.py", line 1867, in get_model_from_task
model = model_class.from_pretrained(model_name_or_path, **kwargs)
File "/root/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GemmaConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.`
```
- Is this issue related to a specific model?
**Model name**: LLaVA
### Notes
| closed | 2024-03-08T02:26:34Z | 2024-03-09T00:41:18Z | https://github.com/onnx/onnx/issues/6004 | [
"question",
"topic: converters"
] | woaichixihong | 1 |
widgetti/solara | fastapi | 84 | Reported occasional crash of solara.dev | Reported at https://www.reddit.com/r/Python/comments/13fegbp/comment/jjva0xp/?utm_source=reddit&utm_medium=web2x&context=3

```
11vue.runtime.esm.js:1897 Error: Cannot sendat l.send (solara-widget-manager8.min.js:391:640623)at t.send (solara-widget-manager8.min.js:23:32156)at ee.send (solara-widget-manager8.min.js:23:18578)at VueRenderer.js:183:11at Ye (vue.runtime.esm.js:1863:26)at s.n (vue.runtime.esm.js:2188:14)at Ye (vue.runtime.esm.js:1863:26)at e.$emit (vue.runtime.esm.js:3903:9)at s.click (vuetify.js:2477:12)at Ye (vue.runtime.esm.js:1863:26)
``` | open | 2023-05-12T13:19:13Z | 2023-05-12T13:19:13Z | https://github.com/widgetti/solara/issues/84 | [] | maartenbreddels | 0 |
donnemartin/system-design-primer | python | 244 | Article suggestion: Web architecture 101 | I thought this was an exceptionally well written high-level overview of modern web architecture, and short too (10 min)
https://engineering.videoblocks.com/web-architecture-101-a3224e126947
Perhaps worth putting next to the CS075 course notes? | open | 2018-12-29T18:39:27Z | 2019-01-20T20:02:09Z | https://github.com/donnemartin/system-design-primer/issues/244 | [
"needs-review"
] | stevenqzhang | 0 |
napari/napari | numpy | 7,105 | file menu error on closing/opening napari from jupyter | ### 🐛 Bug Report
When running napari from a notebook, after closing the viewer and reopening it, just opening the `File` menu triggers an error and then makes it impossible to open a new viewer. The bug appeared in napari 0.5.0.
### 💡 Steps to Reproduce
1. Open a viewer:
```
import napari
viewer = napari.Viewer()
```
2. Close it.
3. Make a new viewer
```
viewer = napari.Viewer()
```
4. Go to the file menu
Generates the following error message in the notebook:
```python
Traceback (most recent call last):
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/expressions/_expressions.py", line 187](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/expressions/_expressions.py#line=186), in eval
return cast(T, eval(code, {}, context))
^^^^^^^^^^^^^^^^^^^^^^^
File "<Expr>", line 1, in <module>
NameError: name 'new_layer_empty' is not defined
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py", line 838](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py#line=837), in _update_file_menu_state
self._update_menu_state('file_menu')
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py", line 835](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py#line=834), in _update_menu_state
menu_model.update_from_context(get_context(layerlist))
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py", line 90](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=89), in update_from_context
_update_from_context(self.actions(), ctx, _recurse=_recurse)
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py", line 352](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=351), in _update_from_context
menu.update_from_context(ctx)
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py", line 151](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=150), in update_from_context
super().update_from_context(ctx, _recurse=_recurse)
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py", line 90](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=89), in update_from_context
_update_from_context(self.actions(), ctx, _recurse=_recurse)
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py", line 350](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=349), in _update_from_context
action.update_from_context(ctx)
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qaction.py", line 184](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qaction.py#line=183), in update_from_context
self.setVisible(expr.eval(ctx) if (expr := self._menu_item.when) else True)
^^^^^^^^^^^^^^
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/expressions/_expressions.py", line 366](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/expressions/_expressions.py#line=365), in eval
return super().eval(context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/expressions/_expressions.py", line 190](http://localhost:8887/Users/gw18g940/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/expressions/_expressions.py#line=189), in eval
raise NameError(
NameError: Names required to eval this expression are missing: {'new_layer_empty'}
```
At this point the viewer is usable, i.e. one can add a labels layer, draw in it etc. However, upon closing it and opening yet another one, one gets a new error:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 viewer = napari.Viewer()
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/viewer.py:66](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/viewer.py#line=65), in Viewer.__init__(self, title, ndisplay, order, axis_labels, show, **kwargs)
62 from napari.window import Window
64 _initialize_plugins()
---> 66 self._window = Window(self, show=show)
67 self._instances.add(self)
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py:680](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py#line=679), in Window.__init__(self, viewer, show)
677 # import and index all discovered shimmed npe1 plugins
678 index_npe1_adapters()
--> 680 self._add_menus()
681 # TODO: the dummy actions should **not** live on the layerlist context
682 # as they are unrelated. However, we do not currently have a suitable
683 # enclosing context where we could store these keys, such that they
684 # **and** the layerlist context key are available when we update
685 # menus. We need a single context to contain all keys required for
686 # menu update, so we add them to the layerlist context for now.
687 if self._qt_viewer._layers is not None:
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py:909](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/qt_main_window.py#line=908), in Window._add_menus(self)
905 self._main_menu_shortcut.activated.connect(
906 self._toggle_menubar_visible
907 )
908 # file menu
--> 909 self.file_menu = build_qmodel_menu(
910 MenuId.MENUBAR_FILE, title=trans._('&File'), parent=self._qt_window
911 )
912 self._setup_npe1_samples_menu()
913 self.file_menu.aboutToShow.connect(
914 self._update_file_menu_state,
915 )
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/_qapp_model/_menus.py:32](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/napari/_qt/_qapp_model/_menus.py#line=31), in build_qmodel_menu(menu_id, title, parent)
14 """Build a QModelMenu from the napari app model
15
16 Parameters
(...)
28 QMenu subclass populated with all items in `menu_id` menu.
29 """
30 from napari._app_model import get_app
---> 32 return QModelMenu(
33 menu_id=menu_id, app=get_app(), title=title, parent=parent
34 )
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py:53](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=52), in QModelMenu.__init__(self, menu_id, app, title, parent)
51 self._app = Application.get_or_create(app) if isinstance(app, str) else app
52 self.setObjectName(menu_id)
---> 53 self.rebuild()
54 self._app.menus.menus_changed.connect(self._on_registry_changed)
55 self.destroyed.connect(self._disconnect)
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py:96](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=95), in QModelMenu.rebuild(self, include_submenus, exclude)
92 def rebuild(
93 self, include_submenus: bool = True, exclude: Collection[str] | None = None
94 ) -> None:
95 """Rebuild menu by looking up self._menu_id in menu_registry."""
---> 96 _rebuild(
97 menu=self,
98 app=self._app,
99 menu_id=self._menu_id,
100 include_submenus=include_submenus,
101 exclude=exclude,
102 )
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py:323](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qmenu.py#line=322), in _rebuild(menu, app, menu_id, include_submenus, exclude)
321 cast("QMenu", menu).addMenu(submenu)
322 elif item.command.id not in _exclude:
--> 323 action = QMenuItemAction.create(item, app=app, parent=menu)
324 menu.addAction(action)
325 if n < n_groups - 1:
File [~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qaction.py:175](http://localhost:8887/lab/tree/book/~/mambaforge/envs/napari5bug/lib/python3.11/site-packages/app_model/backends/qt/_qaction.py#line=174), in QMenuItemAction.create(cls, menu_item, app, parent)
173 if cache_key in cls._cache:
174 res = cls._cache[cache_key]
--> 175 res.setParent(parent)
176 return res
178 cls._cache[cache_key] = obj = cls(menu_item, app, parent)
RuntimeError: wrapped C[/C](http://localhost:8887/C)++ object of type QMenuItemAction has been deleted
```
### 💡 Expected Behavior
It should be possible to open and close the viewer.
### 🌎 Environment
napari: 0.5.0
Platform: macOS-14.5-arm64-arm-64bit
System: MacOS 14.5
Python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:34:54) [Clang 16.0.6 ]
Qt: 5.15.8
PyQt5: 5.15.9
NumPy: 2.0.0
SciPy: 1.14.0
Dask: 2024.7.0
VisPy: 0.14.3
magicgui: 0.8.3
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.7
npe2: 0.7.6
OpenGL:
- GL version: 2.1 Metal - 88.1
- MAX_TEXTURE_SIZE: 16384
- GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
- screen 1: resolution 1512x982, scale 2.0
Optional:
- numba: 0.60.0
- triangle not installed
Settings path:
- /Users/gw18g940/Library/Application Support/napari/napari5bug_b2501c9bab335f04457c00d9c75c5501857c5b41/settings.yaml
Plugins:
- napari: 0.5.0 (81 contributions)
- napari-console: 0.0.9 (0 contributions)
- napari-svg: 0.2.0 (2 contributions)
### 💡 Additional Context
_No response_ | closed | 2024-07-18T23:09:38Z | 2024-07-22T08:45:55Z | https://github.com/napari/napari/issues/7105 | [
"bug",
"priority:high"
] | guiwitz | 6 |
matplotlib/matplotlib | data-visualization | 29,259 | [Bug]: No module named pyplot | ### Bug summary
No module named pyplot
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
# Parameters
miumax = 0.65
ks = 12.8
a = 1.12
b = 1
Sm = 98.3
Pm = 65.2
Yxs = 0.067
m = 0.230
alfa = 7.3
beta = 0 # 0.15
si = 54.45
xi = 0.05
pi = 0
vi = 1.5
Vf = 4
Flux = 0.4
s0 = 180
tmax = 47  # hours
smin = 20
# Initial variables
var_ini = [xi, si, pi, vi] # x, s, p, v
# Flow profile
F_perfil = []
# Define model functions
def batch(t, var):
    x, s, p, v = var
    mui = miumax * (s / (s + ks)) * (1 - (s / Sm)**a) * (1 - (p / Pm)**b)
    muis = (1 / Yxs) * mui + m
    miup = alfa * mui + beta
    dxdt = mui * x
    dsdt = - muis * x
    dpdt = miup * x
    dvdt = 0
    return [dxdt, dsdt, dpdt, dvdt]

def batchali(t, var, F):
    x, s, p, v = var
    mui = miumax * (s / (s + ks)) * (1 - (s / Sm)**a) * (1 - (p / Pm)**b)
    muis = (1 / Yxs) * mui + m
    miup = alfa * mui + beta
    dxdt = x * (mui - (F / v))
    dsdt = (F / v) * (s0 - s) - muis * x
    dpdt = miup * x - (F / v) * p
    dvdt = F
    return [dxdt, dsdt, dpdt, dvdt]

# Dynamic function with flow control
def switch(t, var):
    x, s, p, v = var
    # Simulated loop to control the flow and stop the system
    while v <= Vf:
        if s <= smin:  # If s >= si, turn the flow off
            F = Flux
            F_perfil.append((t, F))  # Record the flow
            return batchali(t, var, F)
        else:  # Intermediate case
            F = 0
            F_perfil.append((t, F))  # Record the flow
            return batch(t, var)
    if s >= si:  # If s <= smin, turn the flow on
        F = 0
        F_perfil.append((t, F))  # Record the flow
        return batch(t, var)
    else:  # Intermediate case
        F = Flux
        F_perfil.append((t, F))  # Record the flow
        return batchali(t, var, F)
# Solve the system
t_span = (0, tmax)
t_eval = np.linspace(0, tmax, 500)
sol = solve_ivp(switch, t_span, var_ini, t_eval=t_eval, method='RK45')
# Convert the flow profile into arrays for plotting
F_perfil = np.array(F_perfil)
F_time = F_perfil[:, 0]
F_values = F_perfil[:, 1]
# Plot results
labels = ['x (Biomasa)', 's (Sustrato)', 'p (Producto)', 'v (Volumen)']
plt.figure(figsize=(10, 6))
for i in range(3):
    plt.plot(sol.t, sol.y[i], label=labels[i])
plt.xlabel('Tiempo (h)')
plt.ylabel('Concentraciones y Volumen')
plt.title('Simulación dinámica del sistema biológico')
plt.legend()
plt.grid()
plt.show()
# Plot the flow profile
plt.figure(figsize=(10, 4))
plt.plot(F_time, F_values, color='purple', label='Flujo (F)')
plt.xlabel('Tiempo (h)')
plt.ylabel('Flujo (L/h)')
plt.title('Perfil del flujo de alimentación')
plt.legend()
plt.grid()
plt.show()
plt.plot(sol.t, sol.y[3], label='v')
plt.xlabel('Tiempo')
plt.ylabel('volumen')
plt.title('volumen vs t')
plt.legend()
plt.grid()
plt.show()
```
### Actual outcome
The code does not run; it fails at startup with an import error.
### Expected outcome
It should run with no error messages in compilation time.
### Additional information
_No response_
### Operating system
Windows 11
### Matplotlib Version
3.9.3
### Matplotlib Backend
module://backend_interagg
### Python version
Python 3.11.0
### Jupyter version
Not using this environment (Pycharm is used instead)
### Installation
pip | closed | 2024-12-08T21:01:23Z | 2024-12-09T16:15:33Z | https://github.com/matplotlib/matplotlib/issues/29259 | [
"Community support"
] | abelardogit | 3 |
miguelgrinberg/Flask-SocketIO | flask | 1,451 | SocketIO Chat Web app showing weird errors | Hi All,
I am getting some weird errors when I run a Flask-SocketIO chat web app on a development server. See below.
`127.0.0.1 - - [04/Jan/2021 19:23:18] code 400, message Bad request version ('úú\x13\x01\x13\x02\x13\x03À+À/À,À0̨̩À\x13À\x14\x00\x9c\x00\x9d\x00/\x005\x01\x00\x01\x93\x8a\x8a\x00\x00\x00\x17\x00\x00ÿ\x01\x00\x01\x00\x00')
C;D0Ñ^☻Vcdè^ñÛèmÀçI)WQÄ☺é¶7 vYÇq¦¯OÅn↓Õæ?§yßAnr×Ó¡åx? úú‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶↨ÿ☺☺" 400 -
127.0.0.1 - - [04/Jan/2021 19:23:18] code 400, message Bad request version ('B\x94\x91ú¨\x19\x11Yè\x06\x11\x05ÛRÉ\x97\x15\x7fÜ\x00"ZZ\x13\x01\x13\x02\x13\x03À+À/À,À0̨̩À\x13À\x14\x00\x9c\x00\x9d\x00/\x005\x00')
127.0.0.1 - - [04/Jan/2021 19:23:18] "▬♥☺☻☺☺ü♥♥i¾9→PÀÕÿ@@^,∟Q*♣ö]àÖÒ
e::0Fï#z©?↑¸°²¡ B
ú¨↓◄Yè♠◄♣ÛRɧ⌂Ü"ZZ‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶
9NË↔PÌÌ¿À1"jj‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶¡2_à♣¨7²Îþ♂ÆûfØ+ËXÁ_u0G¤¤u ×·y¥ç1âw¥♣/9[►ÙÌ►
127.0.0.1 - - [04/Jan/2021 19:23:30] code 400, message Bad request version ('ªª\x13\x01\x13\x02\x13\x03À+À/À,À0̨̩À\x13À\x14\x00\x9c\x00\x9d\x00/\x005\x01\x00\x01\x93\x9a\x9a\x00\x00\x00\x17\x00\x00ÿ\x01\x00\x01\x00\x00')
127.0.0.1 - - [04/Jan/2021 19:23:30] "▬♥☺☻☺☺ü♥♥LÐñ$ØÃ~eeÓ↓ÄÔA<û»↓Àxdü¾¹ÿa(Ì#3c=u=§@1> ªª‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶
↨ÿ☺☺" 400 -
127.0.0.1 - - [04/Jan/2021 19:23:30] code 400, message Bad request version ('·wu+\x9a2íâ\x00"ÊÊ\x13\x01\x13\x02\x13\x03À+À/À,À0̨̩À\x13À\x14\x00\x9c\x00\x9d\x00/\x005\x00')
K©ÿ¿Ä'ã ëÅ¿!C¬5yn/20 ·wu+2íâ"ÊÊ‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶
↓¶fµÂëoì!*Ôfz| ªª‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶zz↨ÿ☺☺" 400 -0-mb¡0¥ºä◄¶E¯TZ HÒD?Å`\A
127.0.0.1 - - [04/Jan/2021 19:23:36] code 400, message Bad request version ('@À*9âG\x00\x00"ªª\x13\x01\x13\x02\x13\x03À+À/À,À0̨̩À\x13À\x14\x00\x9c\x00\x9d\x00/\x005\x00')
127.0.0.1 - - [04/Jan/2021 19:23:36] "▬♥☺☻☺☺ü♥♥Þ4;yçb↑,ÈÓ%u$?ßÏËW$♦- U¶FãyÒ
vxù↕Q¥8F♣* &
=U ∟@À*9âG"ªª‼☺‼☻‼♥À+À/À,À0̨̩À‼À¶`
Any Ideas Why? | closed | 2021-01-04T13:56:34Z | 2021-04-06T13:16:39Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1451 | [
"question"
] | AAYU6H | 4 |
deepset-ai/haystack | nlp | 8,741 | DocumentSplitter updates Document's meta data after initializing the Document | **Describe the bug**
`_create_docs_from_splits` of the `DocumentSplitter` initializes a new document and then changes its meta data afterward. This means that the document's ID is created without taking into account the additional meta data. Documents that have the same content and only differ in page number will receive the same Document ID and thus might be unwittingly treated as duplicates in a later stage of the pipeline.
Instead of the current
```python
meta = deepcopy(meta)
doc = Document(content=txt, meta=meta)
doc.meta["page_number"] = splits_pages[i]
doc.meta["split_id"] = i
doc.meta["split_idx_start"] = split_idx
documents.append(doc)
```
we should change the code to
```python
meta = deepcopy(meta)
meta["page_number"] = splits_pages[i]
meta["split_id"] = i
meta["split_idx_start"] = split_idx
doc = Document(content=txt, meta=meta)
documents.append(doc)
```
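A toy illustration of why the ordering matters — this is not Haystack's actual hashing, just a stand-in that derives an ID from content plus meta at construction time:

```python
import hashlib
import json

class Doc:
    """Stand-in Document whose ID hashes content + meta at __init__ time."""
    def __init__(self, content, meta):
        self.content, self.meta = content, dict(meta)
        payload = json.dumps({"content": content, "meta": self.meta}, sort_keys=True)
        self.id = hashlib.sha256(payload.encode()).hexdigest()

# Mutating meta after init (the current behavior) leaves the IDs identical:
a, b = Doc("same text", {}), Doc("same text", {})
a.meta["page_number"], b.meta["page_number"] = 1, 2
assert a.id == b.id  # duplicates downstream, despite different pages

# Passing the full meta at init (the proposed fix) yields distinct IDs:
c = Doc("same text", {"page_number": 1})
d = Doc("same text", {"page_number": 2})
assert c.id != d.id
```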
| closed | 2025-01-17T09:43:10Z | 2025-01-20T08:51:49Z | https://github.com/deepset-ai/haystack/issues/8741 | [
"P2"
] | julian-risch | 0 |
dunossauro/fastapi-do-zero | pydantic | 299 | Quiz, lesson 6 | ## Bug found:

It should be 'get_current_user'
ageitgey/face_recognition | python | 1,371 | facerec_from_webcam.py crashing | * face_recognition version: 1.3.0
* Python version: 3.x
* Operating System: Mac OS BS
### Description
Testing this https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam.py
### What I Did
Same code as above link
Camera becomes active, that is visible (green light on), but then this happens:
```
runfile('/my-file.py', wdir='/my-dir')
objc[22704]: Class RunLoopModeTracker is implemented in both /my-user/opt/anaconda3/lib/libQt5Core.5.9.7.dylib (0x1089c1a80) and /my-user/opt/anaconda3/lib/python3.8/site-packages/cv2/.dylibs/QtCore (0x12b1bc7f0). One of the two will be used. Which one is undefined.
Restarting kernel...
```
And camera shuts off.
No Frame is ever produced with recon process.
I tried both the fast and the default version of the example
All other examples work fine.
Ideas? | open | 2021-09-19T14:52:29Z | 2021-09-19T14:52:29Z | https://github.com/ageitgey/face_recognition/issues/1371 | [] | smileBeda | 0 |
dpgaspar/Flask-AppBuilder | flask | 1,628 | KeyError using SQLAlchemy Automap with HStore, JSON Fields | Hi folks,
I'm using SQLAlchemy Automap with Flask-AppBuilder to create an internal dashboard. The database table has fields like HSTORE and JSON, which are probably throwing a KeyError. I'm getting this error log: `:ERROR:flask_appbuilder.forms:Column attachments Type not supported`. Here's the KeyError [traceback](https://pastebin.com/Admwswpa)
views.py

models.py

Thanks a lot :D | closed | 2021-05-01T15:34:55Z | 2022-04-17T16:24:29Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1628 | [
"stale"
] | sameerkumar18 | 2 |
holoviz/panel | plotly | 7,611 | Bokeh Categorical Heatmap not getting updated when used in a panel application | <!--
Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
-->
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc should be added within the dropdown below.)
<details>
<summary>Software Version Info</summary>
```plaintext
OS: Ubuntu 24.10
Browser: Firefox 134.0
python: 3.12.7
bokeh: 3.6.2
panel: 1.5.5
notebook: 7.3.2
```
</details>
#### Description of expected behavior and the observed behavior
I'm attempting to modify the coloring applied to a Bokeh heatmap contained in a panel application. I believe there is some bug in `pn.pane.Bokeh` because by updating the appropriate values, the visualization is wrong.
#### Complete, minimal, self-contained example code that reproduces the issue
This is a modified version of [Bokeh's categorical heatmaps example](https://docs.bokeh.org/en/latest/docs/user_guide/topics/categorical.html#categorical-heatmaps). The dataframe has 4 columns: ``Year, month, rate, new_var``. The coloring of the heatmap depends on the values of ``rate`` or ``new_var``. User is be able to select which column to use for coloring.
Here is the Bokeh code:
```py
from math import pi
import numpy as np
import pandas as pd
from bokeh.models import BasicTicker, PrintfTickFormatter
from bokeh.plotting import figure, show
from bokeh.sampledata.unemployment1948 import data
from bokeh.transform import linear_cmap
from bokeh.io import output_notebook
output_notebook()
data['Year'] = data['Year'].astype(str)
data = data.set_index('Year')
data.drop('Annual', axis=1, inplace=True)
data.columns.name = 'Month'
years = list(data.index)
months = list(reversed(data.columns))
# reshape to 1D array or rates with a month and year for each row.
df = pd.DataFrame(data.stack(), columns=['rate']).reset_index()
df["new_var"] = np.random.random(len(df)) * 15 - 5
# this is the colormap from the original NYTimes plot
colors = ["#75968f", "#a5bab7", "#c9d9d3", "#e2e2e2", "#dfccce", "#ddb7b1", "#cc7878", "#933b41", "#550b1d"]
TOOLS = "hover,save,pan,box_zoom,reset,wheel_zoom"
p = figure(title=f"US Unemployment ({years[0]} - {years[-1]})",
x_range=years, y_range=months,
x_axis_location="above", width=900, height=400,
tools=TOOLS, toolbar_location='below',
tooltips=[('date', '@Month @Year'), ('rate', '@rate%'), ("new_var", "@new_var")])
p.grid.grid_line_color = None
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_text_font_size = "7px"
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = pi / 3
r = p.rect(x="Year", y="Month", width=1, height=1, source=df,
fill_color=linear_cmap("rate", colors, low=df.rate.min(), high=df.rate.max()),
line_color=None)
# NOTE: Modify this line of code to select the appropriate column for coloring
v = "rate"
r.glyph.fill_color.field = v
r.glyph.fill_color.transform.update(low=df[v].min(), high=df[v].max())
show(p)
```

By changing the variable `v = "new_var"` we will see a different coloring:

Now, let's move to a panel application where a selector allows the user to choose the coloring:
```py
import param
import panel as pn
pn.extension()
class application(pn.viewable.Viewer):
    sel = param.Selector(label="Coloring", default="rate", objects={"rate", "new_var"})

    def __init__(self, **params):
        super().__init__(**params)
        p = figure(title=f"US Unemployment ({years[0]} - {years[-1]})",
                   x_range=years, y_range=months,
                   x_axis_location="above", width=900, height=400,
                   tools=TOOLS, toolbar_location='below',
                   tooltips=[('date', '@Month @Year'), ('rate', '@rate%'), ("new_var", "@new_var")])
        p.grid.grid_line_color = None
        p.axis.axis_line_color = None
        p.axis.major_tick_line_color = None
        p.axis.major_label_text_font_size = "7px"
        p.axis.major_label_standoff = 0
        p.xaxis.major_label_orientation = pi / 3
        r = p.rect(x="Year", y="Month", width=1, height=1, source=df,
                   fill_color=linear_cmap("rate", colors, low=df.rate.min(), high=df.rate.max()),
                   line_color=None)
        self.figure = p

    @param.depends("sel", watch=True)
    def update_coloring(self):
        self.figure.renderers[0].glyph.fill_color.field = self.sel
        self.figure.renderers[0].glyph.fill_color.transform.update(low=df[self.sel].min(), high=df[self.sel].max())

    def __panel__(self):
        return pn.Row(self.param.sel, self.figure)
a = application()
a
```
As we launch the application, everything looks fine:

Once we choose a different variable for coloring, something is not getting updated. Note the lack of green colors.

#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
- [ ] I may be interested in making a pull request to address this
| closed | 2025-01-10T10:05:57Z | 2025-01-16T17:25:23Z | https://github.com/holoviz/panel/issues/7611 | [
"bokeh"
] | Davide-sd | 2 |
paperless-ngx/paperless-ngx | machine-learning | 8,118 | [BUG] Creating a superuser password in docker compose drops all characters after "$" | ### Description
Starting up a new paperless-ngx instance with an admin/superuser password set in your environment variables does not respect a quoted password: all characters after a dollar sign ($), including the dollar sign itself, are disregarded. Trying to log in will report a wrong password until you cut off the part from the dollar sign on.
### Steps to reproduce
1. Set up a new paperless-ngx docker-compose with standard values
2. Set `PAPERLESS_ADMIN_PASSWORD` to a password with a dollar sign in it ($), put it in quotes
3. Start the container
4. Try to login with the full password you just put in the variables
### Webserver logs
```bash
[2024-10-30 18:50:20,183] [INFO] [paperless.auth] Login failed for user `admin` from IP `127.0.0.1`.
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.2
### Host OS
Ubuntu 20.04
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
PAPERLESS_ADMIN_PASSWORD=eohwef0ijd324g$1jgfos
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-10-30T18:04:53Z | 2024-11-30T03:13:44Z | https://github.com/paperless-ngx/paperless-ngx/issues/8118 | [
"not a bug"
] | DM2602 | 2 |
jupyter/docker-stacks | jupyter | 1,959 | [ENH] - Document correct way to persist conda packages | ### What docker image(s) is this feature applicable to?
base-notebook
### What change(s) are you proposing?
Documenting a "blessed" way of persisting installed python packages, such that they are not removed after the container is recreated.
Ideally this should be a way which doesn't require persisting the packages automatically installed by the container
### How does this affect the user?
As seen in
- #1836
it's not currently obvious how to do this correctly. Finding this issue is also not easy.
A documentation section for this would help users achieve it quickly without a lot of searching, and allow maintainers to choose one way of persisting Python packages (entire conda env or just relevant folders? Docker host mount or volume?)
### Anything else?
_No response_ | closed | 2023-08-04T11:40:00Z | 2023-08-20T16:19:54Z | https://github.com/jupyter/docker-stacks/issues/1959 | [
"type:Enhancement",
"tag:Documentation"
] | laundmo | 6 |
MaartenGr/BERTopic | nlp | 1,816 | semi-supervised topic modelling with multiple labels per document | Hi there,
I have a dataset with 2000 participants who reported their most negative daily event for 90 days. They completed three questions related to their most negative event:
(1) an open-ended written question: "What was your most negative event today?"
(2) which categories does this event belong to (select all that apply) (e.g., mental health, physical health, relationship with family, etc)
(3) how negative was this event (i.e., 7-point likert with 1 - not at all negative and 7 - very negative)
The known topic categories are rather coarse, so the aim of using BERTopic is to find a more fine-grained understanding of the topics participants wrote about. From https://github.com/MaartenGr/BERTopic/issues/826#issuecomment-1306746031 I understand that the semi-supervised labelling does not support multiple labels.
I have also read https://github.com/MaartenGr/BERTopic/issues/1725#issuecomment-1879975094. However I'm not sure if there are alternatives to the suggestions provided there.
In a nutshell, I'm wondering what would be the best way to combine the above features (written response, known categories, and participant ratings) to improve the performance of BERTopic in finding topics. Any help in how to proceed would be greatly appreciated!
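One direction I've been considering (a workaround sketch, not an official multi-label API): collapse each document's label set to a single integer for the `y` argument of `fit_transform`, using `-1` — which the semi-supervised mode treats as unlabeled — for multi-label or unlabeled responses:

```python
# Hypothetical per-document category sets from question (2).
doc_labels = [{"mental health"}, {"physical health", "family"}, set()]

categories = sorted({c for labels in doc_labels for c in labels})
cat_to_id = {c: i for i, c in enumerate(categories)}

# Keep a label only when it is unambiguous; otherwise mark as -1 (unlabeled).
y = [cat_to_id[next(iter(labels))] if len(labels) == 1 else -1
     for labels in doc_labels]
assert y == [1, -1, -1]  # categories sorted: family=0, mental health=1, physical health=2

# topic_model = BERTopic()
# topics, probs = topic_model.fit_transform(docs, y=y)  # semi-supervised fit
```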
Thanks,
Justin
| open | 2024-02-17T01:40:00Z | 2024-02-19T16:55:01Z | https://github.com/MaartenGr/BERTopic/issues/1816 | [] | justin-boldsen | 2 |
deepfakes/faceswap | machine-learning | 1,386 | Error with tensorflow. | It gives me this message when I try to run it.
I reinstalled everything from drivers to CUDA and TensorFlow, but it still doesn't work.
H:\faceswap\faceswap>"C:\ProgramData\Miniconda3\scripts\activate.bat" && conda activate "faceswap" && python "H:\faceswap\faceswap/faceswap.py" gui
Setting Faceswap backend to NVIDIA
05/01/2024 16:38:55 INFO Log level set to: INFO
05/01/2024 16:38:55 ERROR There was an error importing Tensorflow. This is most likely because you do not have TensorFlow installed, or you are trying to run tensorflow-gpu on a system without an Nvidia graphics card. Original import error: cannot import name 'builder' from 'google.protobuf.internal' (C:\ProgramData\miniconda3\envs\faceswap\lib\site-packages\google\protobuf\internal\__init__.py)
05/01/2024 16:38:55 INFO Press "ENTER" to dismiss the message and close FaceSwap | closed | 2024-05-01T19:41:12Z | 2024-05-10T11:45:48Z | https://github.com/deepfakes/faceswap/issues/1386 | [] | donleo78 | 1 |
erdewit/ib_insync | asyncio | 286 | Trailing Stop Limit Percentage order | Is it possible to create Trailing Stop Limit Percentage orders? Could you point me to part of the code to create this kind of order? Thanks. | closed | 2020-08-04T19:21:26Z | 2020-09-20T13:19:26Z | https://github.com/erdewit/ib_insync/issues/286 | [] | minhnhat93 | 1 |
keras-team/keras | python | 20,723 | Add `keras.ops.rot90` for `tf.image.rot90` | tf api: https://www.tensorflow.org/api_docs/python/tf/image/rot90
torch api: https://pytorch.org/docs/stable/generated/torch.rot90.html
jax api: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.rot90.html
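For reference, all three linked APIs share `numpy.rot90` semantics (`k` counter-clockwise quarter-turns within a chosen plane), so the expected behavior can be sketched with NumPy alone:

```python
import numpy as np

img = np.arange(6).reshape(2, 3)  # [[0, 1, 2],
                                  #  [3, 4, 5]]
rot = np.rot90(img, k=1)          # one 90-degree counter-clockwise turn
assert rot.tolist() == [[2, 5], [1, 4], [0, 3]]
assert np.array_equal(np.rot90(img, k=4), img)  # four turns is the identity
```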
In keras, it provides [RandomRotation](https://keras.io/api/layers/preprocessing_layers/image_augmentation/random_rotation/), which is probably not replaceable with `tf.image.rot90`. | closed | 2025-01-04T17:42:20Z | 2025-01-13T02:16:35Z | https://github.com/keras-team/keras/issues/20723 | [
"type:feature"
] | innat | 4 |
ydataai/ydata-profiling | pandas | 1,479 | Add a report on outliers | ### Missing functionality
I'm missing an easy report to see outliers.
### Proposed feature
An outlier to me is some value more than 3 std dev away from the mean.
I calculate this as:
```python
# X is a numeric pandas Series
mean = X.mean()
std = X.std()
lower, upper = mean - 3*std, mean + 3*std
outliers = X[(X < lower) | (X > upper)]
pct_outliers = 100 * outliers.count() / X.count()
```
It would be nice if there is an interactive report added with the outliers
### Alternatives considered
See code above :)
### Additional context
_No response_ | open | 2023-10-15T08:25:24Z | 2023-10-16T20:56:52Z | https://github.com/ydataai/ydata-profiling/issues/1479 | [
"feature request 💬"
] | svaningelgem | 1 |
stanfordnlp/stanza | nlp | 1,362 | Stanza 1.8.1 failing to split sentence apart | **Describe the bug**
We've encountered a sentence pattern where Stanza fails to split apart two sentences. It appears when certain names are used (e.g. Max, Anna) but not with others (e.g. Ann).
**To Reproduce**
Steps to reproduce the behavior:
1. Go to http://stanza.run/ or input into stanza either sentence:
```
Max has the map? No. Max has no map.
Anna has the map? No. Anna has no map.
```
2. See error – Stanza fails to split "No." into a separate sentence.

**Expected behavior**
The parse returns `No.` as a separate sentence.
**Environment (please complete the following information):**
- OS: MacOS Ventura 13.4
- Python version: Python 3.12.2 using Poetry 1.8.2
- Stanza version: 1.6.1
**Additional context**
This issue also appears in Stanza 1.8.1. Have not tested it with Stanza 1.7.x. Screenshot is from Stanza 1.6.1.
| open | 2024-03-06T21:07:11Z | 2024-03-11T22:31:26Z | https://github.com/stanfordnlp/stanza/issues/1362 | [
"bug"
] | khannan-livefront | 2 |
amdegroot/ssd.pytorch | computer-vision | 563 | Questions about `RandSampleCrop` in `augmentation.py` | Hi, In lines 259 and 260 of `augmentation.py`, the code is
```
left = random.uniform(width - w)
top = random.uniform(height - h)
```
I don't understand why the single positional argument sets the lower limit for `left` and `top` rather than the upper limit, as in
```
left = random.uniform(high=width - w)
top = random.uniform(high=height - h)
``` | closed | 2021-11-11T08:58:00Z | 2021-11-11T10:20:05Z | https://github.com/amdegroot/ssd.pytorch/issues/563 | [] | zhiyiYo | 1 |
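For context, the `random` in `augmentation.py` is numpy's module, whose signature is `uniform(low=0.0, high=1.0, size=None)` — so a single positional argument sets `low`, not `high`. A standard-library mimic of numpy's internal formula (assumed to be `low + (high - low) * u` with `u` in `[0, 1)`) shows what interval `random.uniform(width - w)` actually samples:

```python
# Standard-library mimic of numpy's uniform(low, high) internals (assumed to
# be low + (high - low) * u with u in [0, 1)), showing what
# random.uniform(width - w) — i.e. low = width - w, high = 1.0 — samples.
import random

def uniform_like_numpy(low=0.0, high=1.0):
    u = random.random()            # u in [0, 1)
    return low + (high - low) * u

width, w = 300, 120
samples = [uniform_like_numpy(width - w) for _ in range(1000)]
# with low = 180 > high = 1.0, every draw falls between high and low
print(min(samples) > 1.0, max(samples) <= 180.0)   # → True True
```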
deeppavlov/DeepPavlov | nlp | 1,367 | Review analytics and prepare list of all models that we want to support or deprecate | Moved to internal Trello | closed | 2021-01-12T11:13:22Z | 2021-11-30T10:09:06Z | https://github.com/deeppavlov/DeepPavlov/issues/1367 | [] | danielkornev | 4 |
lexiforest/curl_cffi | web-scraping | 351 | Import problem(for real) |

I accidentally deleted the code contained in the "Session" file, but when I put it back and ran the code again, it still didn't work.
| closed | 2024-07-16T13:36:10Z | 2024-07-22T11:33:48Z | https://github.com/lexiforest/curl_cffi/issues/351 | [
"bug"
] | Jeremyyen0978 | 3 |
huggingface/datasets | pandas | 7,001 | Datasetbuilder Local Download FileNotFoundError | ### Describe the bug
I was trying to download a dataset and save it as Parquet, following the Hugging Face [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage). However, during execution I get a FileNotFoundError.
I debugged the code, and there seems to be a bug: first a `.incomplete` folder is created, and before its contents are moved, the following code deletes that directory:
[Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984)
As a result I get:
``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '```
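The intended pattern can be sketched with the standard library — staging output in a `.incomplete` directory and moving its contents into place *before* deleting it; removing the staging directory first is exactly what raises the FileNotFoundError (illustrative names, not datasets internals):

```python
# Illustrative standard-library sketch of the staging pattern: write into a
# ".incomplete" directory, move its contents into place, and only then
# delete the staging directory.
import shutil
import tempfile
from pathlib import Path

def prepare(output_dir: Path) -> None:
    staging = output_dir.parent / (output_dir.name + ".incomplete")
    staging.mkdir(parents=True, exist_ok=True)
    try:
        (staging / "data.parquet").write_bytes(b"PAR1")  # stand-in for shards
        output_dir.mkdir(parents=True, exist_ok=True)
        for f in staging.iterdir():
            shutil.move(str(f), str(output_dir / f.name))
    finally:
        shutil.rmtree(staging, ignore_errors=True)

root = Path(tempfile.mkdtemp())
prepare(root / "Parquet")
print((root / "Parquet" / "data.parquet").exists())      # → True
```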
### Steps to reproduce the bug
```
from datasets import load_dataset_builder
from pathlib import Path
parquet_dir = "~/data/Parquet/"
Path(parquet_dir).mkdir(parents=True, exist_ok=True)
builder = load_dataset_builder(
"rotten_tomatoes",
)
builder.download_and_prepare(parquet_dir, file_format="parquet")
```
### Expected behavior
Downloads the files and saves them as Parquet.
### Environment info
Ubuntu,
Python 3.10
```
datasets 2.19.1
``` | open | 2024-06-25T15:02:34Z | 2024-06-25T15:21:19Z | https://github.com/huggingface/datasets/issues/7001 | [] | purefall | 1 |