| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 893 | Empty JSON Output for prompt using LLM | I want to extract all providers listed in this URL: https://www.aetna.com/dsepublic/#/contentPage?page=providerResults&parameters=searchText%3D'Primary%20Care%20Physician%20(PCP)';isGuidedSearch%3Dtrue&site_id=asa&language=en
I could use Selenium, BeautifulSoup, etc., but I came across this tool.
I'm getting an empty array as output:
Found providers ['openai', 'azure_openai'] for model gpt-3.5-turbo-0125, using openai.
If it was not intended please specify the model provider in the graph configuration
--- Executing Fetch Node ---
--- (Fetching HTML from: https://www.aetna.com/dsepublic/#/contentPage?page=providerResults&parameters=searchText%3D'Primary%20Care%20Physician%20(PCP)';isGuidedSearch%3Dtrue&site_id=asa&language=en) ---
--- Executing ParseNode Node ---
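A hedged guess at the root cause (an assumption, not confirmed in the issue): the URL is a single-page-application route, so everything after `#` is never sent to the server, and a plain HTML fetch gets back only the empty app shell:

```python
from urllib.parse import urlsplit

# Everything after "#" is a client-side fragment that the browser never
# sends to the server, so a non-JS fetch returns only the SPA shell and
# the LLM has nothing to extract.
url = ("https://www.aetna.com/dsepublic/#/contentPage?page=providerResults"
       "&parameters=searchText%3D'Primary%20Care%20Physician%20(PCP)'")

parts = urlsplit(url)
print(parts.path)      # the only path the server sees: /dsepublic/
print(parts.fragment)  # the provider-search route lives entirely here
```

If that is the case, a JavaScript-rendering loader (headless browser) would be needed rather than a plain HTML fetch.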
Error occurred: not enough values to unpack (expected 2, got 0) | open | 2025-01-15T16:11:39Z | 2025-02-16T10:45:36Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/893 | [
"bug"
] | SumanthMeenan | 12 |
tqdm/tqdm | pandas | 1,520 | Cli: fence-posting problem | - [x] I have marked all applicable categories:
+ [x] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
## Description
## Description
To initialize the `tqdm` progress bar in a shell pipeline, something must first be piped through it before tqdm starts writing to the console. But if the commands in the loop are slow, the initial progress bar does not appear for a while.
## Suggestion
There are two possible ways around it:
- have tqdm cli run once outside the loop with or without stdin but with the same format to create the initial progress-bar (fencepost issue)
- option to parse stdin through regex to trigger an increment or not, e.g. `r"Done (?P<it>\d+)/(?P<total>\d+)"`. Note the group names that can be used to extract other data for tqdm
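The second suggestion could look something like this sketch (the pattern and function names are illustrative; tqdm does not currently implement this):

```python
import re

# Each stdin line is matched against a user-supplied regex; the named
# groups drive the bar's position and total instead of counting lines.
PATTERN = re.compile(r"Done (?P<it>\d+)/(?P<total>\d+)")

def parse_progress(line):
    """Return (it, total) when the line reports progress, else None."""
    m = PATTERN.search(line)
    if m is None:
        return None
    return int(m.group("it")), int(m.group("total"))

for line in ["starting up", "Done 3/10", "Done 10/10"]:
    print(parse_progress(line))
```

Lines that do not match simply would not increment the bar.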
## Consideration
This approach would not be efficient, but the use case here is specifically slow processes that need to be monitored, so this should not be an issue.
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2023-10-05T20:27:28Z | 2023-10-05T20:27:28Z | https://github.com/tqdm/tqdm/issues/1520 | [] | LecrisUT | 0 |
ultralytics/ultralytics | computer-vision | 19,196 | Full-resolution sized ram caching (not linked to training size) | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
From memory, YOLOv5 used to have a "cache the image on disk/RAM" option that stored images at full resolution.
Here, if the training image size is, say, 640 px but we use augmentations like zoom/distortion, an image cached at the lowered resolution (down from an original of, say, 2048 px) will suffer quality degradation and pixelation after zooming.
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-12T06:57:54Z | 2025-02-12T06:58:23Z | https://github.com/ultralytics/ultralytics/issues/19196 | [
"enhancement"
] | ExtReMLapin | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,218 | How can I set TF_XLA_FLAGS=--tf_xla_enable_xla_devices? | Hi,
I have recently upgraded my system to the following configuration:
OS: Windows 10
cuda: 11.0
cuDNN:8.0.5.39
Tensorflow:2.3.0
My GPU spec: device: 0, name: NVIDIA Quadro GV100 32GB
Once the TensorFlow installation completed, I checked the following code:

```python
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))
```
When I execute it in a terminal, I find the following:
RuntimeError: The Session graph is empty. Add operations to the graph before calling run().
2020-12-29 17:16:22.302861: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2020-12-29 17:16:24.125381: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-12-29 17:16:24.126097: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2020-12-29 17:16:24.146346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:65:00.0 name: Quadro GV100 computeCapability: 7.0
coreClock: 1.627GHz coreCount: 80 deviceMemorySize: 32.00GiB deviceMemoryBandwidth: 810.62GiB/s
2020-12-29 17:16:24.147917: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2020-12-29 17:16:24.158299: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2020-12-29 17:16:24.158322: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2020-12-29 17:16:24.162197: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2020-12-29 17:16:24.163382: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2020-12-29 17:16:24.171546: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2020-12-29 17:16:24.174531: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2020-12-29 17:16:24.175259: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2020-12-29 17:16:24.175337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2020-12-29 17:16:24.175904: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-12-29 17:16:24.178463: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:65:00.0 name: Quadro GV100 computeCapability: 7.0
coreClock: 1.627GHz coreCount: 80 deviceMemorySize: 32.00GiB deviceMemoryBandwidth: 810.62GiB/s
2020-12-29 17:16:24.178505: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2020-12-29 17:16:24.178517: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2020-12-29 17:16:24.178527: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2020-12-29 17:16:24.178536: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2020-12-29 17:16:24.178546: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2020-12-29 17:16:24.178554: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2020-12-29 17:16:24.178562: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2020-12-29 17:16:24.178570: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2020-12-29 17:16:24.178633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2020-12-29 17:16:24.814037: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-29 17:16:24.814072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2020-12-29 17:16:24.814154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2020-12-29 17:16:24.814332: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 29496 MB memory) -> physical GPU (device: 0, name: Quadro GV100, pci bus id: 0000:65:00.0, compute capability: 7.0)
2020-12-29 17:16:24.814922: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-12-29 17:16:25.986660: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2020-12-29 17:16:26.352308: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2020-12-29 17:16:26.354257: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:65:00.0 name: Quadro GV100 computeCapability: 7.0
coreClock: 1.627GHz coreCount: 80 deviceMemorySize: 32.00GiB deviceMemoryBandwidth: 810.62GiB/s
2020-12-29 17:16:26.354281: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2020-12-29 17:16:26.354291: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2020-12-29 17:16:26.354299: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2020-12-29 17:16:26.354306: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2020-12-29 17:16:26.354313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2020-12-29 17:16:26.354319: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2020-12-29 17:16:26.354326: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2020-12-29 17:16:26.354333: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2020-12-29 17:16:26.354360: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2020-12-29 17:16:26.354407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-29 17:16:26.354414: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2020-12-29 17:16:26.354419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2020-12-29 17:16:26.354493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 29496 MB memory) -> physical GPU (device: 0, name: Quadro GV100, pci bus id: 0000:65:00.0, compute capability: 7.0)
2020-12-29 17:16:26.354510: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
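A minimal sketch of setting the flag that the log message refers to (assumption: `TF_XLA_FLAGS` is read when TensorFlow is imported, so it must be set beforehand; on Windows cmd, running `set TF_XLA_FLAGS=--tf_xla_enable_xla_devices` before launching Python should work as well):

```python
import os

# The log says "Not creating XLA devices, tf_xla_enable_xla_devices not set".
# Set the environment variable before TensorFlow is imported:
os.environ["TF_XLA_FLAGS"] = "--tf_xla_enable_xla_devices"

# import tensorflow as tf  # import only after the flag is in place
print(os.environ["TF_XLA_FLAGS"])
```

Note that this only affects the informational XLA message; it is unrelated to the "Session graph is empty" error, which comes from using TF1-style `tf.Session` code under TF 2.x.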
| closed | 2020-12-29T16:04:54Z | 2021-11-22T06:08:23Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1218 | [] | omid-ghozatlou | 1 |
gunthercox/ChatterBot | machine-learning | 2,052 | [Question] Whose statements are passed to learn_response() of ChatBot | Hello there!
I want to know whether the current statement and the previous statement are the bot's and the human's, or both the bot's.
e.g. in this conversation:
> human: Hello
> bot: Hi
> human: How are you?
> bot: Doing well
With line 4 as the current statement, is the previous one line 3 or line 2? | open | 2020-10-02T23:04:26Z | 2020-10-02T23:04:39Z | https://github.com/gunthercox/ChatterBot/issues/2052 | [] | farooqkz | 0 |
WeblateOrg/weblate | django | 13,300 | %% is not properly escaped in the java-printf-format check | ### Describe the issue
The `java-printf-format` check checks whether the source and translation strings have the same number of format arguments. But, `%%` escapes the `%` character in Java format strings. Therefore, `foo` and `bar %%s` have the same number of format specifiers because the `%` character before the `s` is escaped by another `%` character before it.
However, the check does not recognize `%%` as an escape (which should be filtered out before counting), so it finds the `%s` at the end of `bar %%s` and wrongly treats it as a format specifier.
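The behaviour the report argues for can be sketched in a few lines (a stdlib-only illustration, not Weblate's actual check code):

```python
import re

# Strip the literal %%-escapes first, then count what remains; this way
# "bar %%s" contains zero real format specifiers.
def count_format_specifiers(s):
    return len(re.findall(r"%\w", s.replace("%%", "")))

print(count_format_specifiers("foo"))
print(count_format_specifiers("bar %%s"))
print(count_format_specifiers("bar %s"))
```

With that ordering, `foo` and `bar %%s` both count zero specifiers and the check stays quiet.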
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
- Enable the `java-printf-format` check in a component (or project, or globally)
- Go to a string with no format specifiers, and type `%%s` into the translation field somewhere
- Save the translation
- Notice that the check erroneously triggers
### Expected behavior
Even though it's a bit weird to see strings like `%%s` in the translation but not the source string, this particular check should not trigger.
### Screenshots

### Exception traceback
_No response_
### How do you run Weblate?
Docker container
### Weblate versions
* Weblate: 5.8.4
* Django: 5.1.3
* siphashc: 2.5
* translate-toolkit: 3.14.1
* lxml: 5.3.0
* pillow: 11.0.0
* nh3: 0.2.18
* python-dateutil: 2.9.0.post0
* social-auth-core: 4.5.4
* social-auth-app-django: 5.4.2
* django-crispy-forms: 2.3
* oauthlib: 3.2.2
* django-compressor: 4.5.1
* djangorestframework: 3.15.2
* django-filter: 24.3
* django-appconf: 1.0.6
* user-agents: 2.2.0
* filelock: 3.16.1
* RapidFuzz: 3.10.1
* openpyxl: 3.1.5
* celery: 5.4.0
* django-celery-beat: 2.7.0
* kombu: 5.4.2
* translation-finder: 2.19
* weblate-language-data: 2024.14
* html2text: 2024.2.26
* pycairo: 1.27.0
* PyGObject: 3.50.0
* diff-match-patch: 20241021
* requests: 2.32.3
* django-redis: 5.4.0
* hiredis: 3.0.0
* sentry-sdk: 2.18.0
* Cython: 3.0.11
* mistletoe: 1.4.0
* GitPython: 3.1.43
* borgbackup: 1.4.0
* pyparsing: 3.2.0
* ahocorasick_rs: 0.22.1
* python-redis-lock: 4.0.0
* charset-normalizer: 3.4.0
* cyrtranslit: 1.1.1
* drf-spectacular: 0.27.2
* Python: 3.12.7
* Git: 2.39.5
* psycopg: 3.2.3
* psycopg-binary: 3.2.3
* phply: 1.2.6
* ruamel.yaml: 0.18.6
* tesserocr: 2.7.1
* boto3: 1.35.65
* aeidon: 1.15
* iniparse: 0.5
* mysqlclient: 2.2.6
* google-cloud-translate: 3.18.0
* openai: 1.54.5
* Mercurial: 6.8.2
* git-svn: 2.39.5
* git-review: 2.4.0
* PostgreSQL server: 17.2
* Database backends: django.db.backends.postgresql
* PostgreSQL implementation: psycopg3 (binary)
* Cache backends: default:RedisCache, avatar:FileBasedCache
* Email setup: django.core.mail.backends.smtp.EmailBackend: smtp.eu.mailgun.org
* OS encoding: filesystem=utf-8, default=utf-8
* Celery: redis://cache:6379/1, redis://cache:6379/1, regular
* Platform: Linux 5.15.0-117-generic (x86_64)
### Weblate deploy checks
```shell
System check identified some issues:
WARNINGS:
?: (security.W004) You have not set a value for the SECURE_HSTS_SECONDS setting. If your entire site is served only over SSL, you may want to consider setting a value and enabling HTTP Strict Transport Security. Be sure to read the documentation first; enabling HSTS carelessly can cause serious, irreversible problems.
?: (security.W008) Your SECURE_SSL_REDIRECT setting is not set to True. Unless your site should be available over both SSL and non-SSL connections, you may want to either set this setting True or configure a load balancer or reverse-proxy server to redirect all connections to HTTPS.
?: (security.W012) SESSION_COOKIE_SECURE is not set to True. Using a secure-only session cookie makes it more difficult for network traffic sniffers to hijack user sessions.
INFOS:
?: (weblate.I021) Error collection is not set up, it is highly recommended for production use
HINT: https://docs.weblate.org/en/weblate-5.8.4/admin/install.html#collecting-errors
?: (weblate.I028) Backups are not configured, it is highly recommended for production use
HINT: https://docs.weblate.org/en/weblate-5.8.4/admin/backup.html
System check identified 5 issues (12 silenced).
```
### Additional context
_No response_ | closed | 2024-12-15T21:28:25Z | 2024-12-28T07:30:16Z | https://github.com/WeblateOrg/weblate/issues/13300 | [
"bug"
] | Earthcomputer | 7 |
gradio-app/gradio | data-science | 10,600 | gradio cli cannot reload scripts with utf8-bom encoding | ### Describe the bug
As stated in this [documentation](https://www.gradio.app/guides/developing-faster-with-reload-mode):
> By default, Gradio uses UTF-8 encoding for scripts.
But it does not work with scripts saved as UTF-8 with BOM.
``` bash
> gradio webui.py
Traceback (most recent call last):
File "D:\miniconda3\envs\voicelab\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\miniconda3\envs\voicelab\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\miniconda3\envs\voicelab\lib\site-packages\gradio\utils.py", line 284, in watchfn
no_reload_source_code = _remove_if_name_main_codeblock(str(reloader.demo_file))
File "D:\miniconda3\envs\voicelab\lib\site-packages\gradio\utils.py", line 202, in _remove_if_name_main_codeblock
tree = ast.parse(code)
File "D:\miniconda3\envs\voicelab\lib\ast.py", line 50, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 1
#-*- coding: utf-8 -*-
^
SyntaxError: invalid non-printable character U+FEFF
* Running on local URL: http://127.0.0.1:8000
To create a public link, set `share=True` in `launch()`.
Keyboard interruption in main thread... closing server.
```
`U+FEFF` is the byte-order mark that identifies the UTF-8 BOM encoding.
So, is this a bug?
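One possible fix, sketched below, is to decode the script with the `utf-8-sig` codec, which transparently strips a leading BOM (this is an illustration, not Gradio's actual implementation):

```python
import ast
import tempfile

# "utf-8-sig" strips a leading byte-order mark on read, so ast.parse no
# longer sees the invisible U+FEFF character that triggers the SyntaxError.
def parse_script(path):
    with open(path, encoding="utf-8-sig") as f:
        return ast.parse(f.read())

# Reproduce the report: a script saved as UTF-8 with BOM (EF BB BF prefix).
with tempfile.NamedTemporaryFile("wb", suffix=".py", delete=False) as f:
    f.write(b"\xef\xbb\xbf#-*- coding: utf-8 -*-\nimport sys\n")
    bom_script = f.name

tree = parse_script(bom_script)  # plain "utf-8" decoding would fail here
print(type(tree).__name__)
```

Reading the same file with plain `encoding="utf-8"` leaves `\ufeff` as the first character, which is exactly what `ast.parse` rejects in the traceback above.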
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
#-*- coding: utf-8 -*-
import gradio as gr
```
Save it in UTF-8 BOM encoding and use `gradio` to run it.
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.16.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.28.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 10.4.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.28.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 11.0.3
```
### Severity
I can work around it | open | 2025-02-16T13:43:19Z | 2025-02-27T21:51:26Z | https://github.com/gradio-app/gradio/issues/10600 | [
"bug"
] | IceSandwich | 2 |
flasgger/flasgger | api | 68 | Create APISpec example | Using the example code from: http://apispec.readthedocs.io/en/latest/
```python
from apispec import APISpec
from flask import Flask, jsonify
from marshmallow import Schema, fields
# Create an APISpec
spec = APISpec(
title='Swagger Petstore',
version='1.0.0',
plugins=[
'apispec.ext.flask',
'apispec.ext.marshmallow',
],
)
# Optional marshmallow support
class CategorySchema(Schema):
id = fields.Int()
name = fields.Str(required=True)
class PetSchema(Schema):
category = fields.Nested(CategorySchema, many=True)
name = fields.Str()
# Optional Flask support
app = Flask(__name__)
@app.route('/random')
def random_pet():
"""A cute furry animal endpoint.
---
get:
description: Get a random pet
responses:
200:
description: A pet to be returned
schema: PetSchema
"""
    pet = get_random_pet()  # get_random_pet() is assumed to be defined elsewhere
return jsonify(PetSchema().dump(pet).data)
ctx = app.test_request_context()
ctx.push()
# Register entities and paths
spec.definition('Category', schema=CategorySchema)
spec.definition('Pet', schema=PetSchema)
spec.add_path(view=random_pet)
```
it should be possible to use Flasgger as:
```python
Swagger(app, template=spec.to_dict())
```
and with the above, Flasgger will generate /apidocs using the apispec. | closed | 2017-03-28T15:15:39Z | 2017-03-30T21:36:15Z | https://github.com/flasgger/flasgger/issues/68 | [] | rochacbruno | 1 |
chainer/chainer | numpy | 7,923 | Fix to `F.max_pooling_2d` test input would make the test fail | https://github.com/chainer/chainer/blob/v7.0.0b2/tests/chainer_tests/functions_tests/pooling_tests/test_max_pooling_2d.py#L52
```
 numpy.random.shuffle(x)
```

This line should be `numpy.random.shuffle(x.ravel())` because `shuffle` only shuffles along the first axis.
But that fix would cause the test to fail (`test_double_backward` with cuDNN enabled).

`pytest -m 'cudnn' -v -rfEX --tb=short tests/chainer_tests/functions_tests/pooling_tests/test_max_pooling_2d.py -k 'test_double_backward'`

```
E chainer.testing.function_link.FunctionTestError: Parameterized test failed.
E
E Base test method: TestMaxPooling2D_use_chainerx_false__chainerx_device_None__use_cuda_true__cuda_device_0__use_cudnn_always__cudnn_deterministic_false__autotune_false__cudnn_fast_batch_normalization_false__use_ideep_never.test_double_backward
E Test parameters:
E contiguous: C
E cover_all: False
E dtype: <class 'numpy.float64'>
E
E
E (caused by)
E FunctionTestError: double backward is not implemented correctly

:
:

E gradients (numeric): -0.005565301559232316
E gradients (backward): 0.14605092276952303
E
E x: numeric gradient, y: backward gradient
E Not equal to tolerance rtol=0.001, atol=0.0001
E
E Mismatch: 100%
E Max absolute difference: 0.15161622
E Max relative difference: 1.03810521
E x: array(-0.005565)
E y: array(0.146051)
E
E assert_allclose failed:
E shape: () ()
E dtype: float64 float64
E i: (0,)
E x[i]: -0.005565301559232316
E y[i]: 0.14605092276952303
E relative error[i]: 1.0381052132619162
E absolute error[i]: 0.15161622432875535
E relative tolerance * |y[i]|: 0.00014605092276952304
E absolute tolerance: 0.0001
E total tolerance: 0.00024605092276952306
E x: -0.0055653
E y: 0.14605092
```
 | closed | 
2019-08-13T17:29:28Z | 2019-10-29T07:20:54Z | https://github.com/chainer/chainer/issues/7923 | [
"cat:test",
"prio:high"
] | niboshi | 2 |
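The first-axis-only behaviour of `shuffle` described in the Chainer issue above can be demonstrated with the stdlib alone; `numpy.random.shuffle` treats a 2-D array the same way:

```python
import random

# Shuffling a 2-D structure only permutes along the first axis: the rows
# are reordered, but values are never mixed between rows.
x = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
random.shuffle(x)
print(x)

rows = {tuple(r) for r in x}  # each original row survives intact
```

Shuffling `x.ravel()` instead permutes every element, because for a contiguous NumPy array `ravel()` returns a view into the same memory.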
jupyterhub/repo2docker | jupyter | 791 | [doc] conda export instructions reference the root environment | The [instructions for exporting a conda environment](https://repo2docker.readthedocs.io/en/latest/howto/export_environment.html#the-solution) state that the user should
> use `conda env export -n root` to print the environment
But the "user" dependencies (i.e. those not related to Binder) seem to be installed in the `notebook` environment (or maybe any environment referenced by `$CONDA_DEFAULT_ENV`?)
Should the export instructions be updated to print the dependencies for the `$CONDA_DEFAULT_ENV` environment?
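A sketch of what the updated instruction might look like (assumption: in repo2docker images the user's packages live in the environment named by `$CONDA_DEFAULT_ENV`, typically `notebook`):

```shell
# Export the environment that holds the user dependencies, falling back
# to "notebook" when $CONDA_DEFAULT_ENV is not set.
ENV_NAME="${CONDA_DEFAULT_ENV:-notebook}"
if command -v conda >/dev/null 2>&1; then
    conda env export -n "$ENV_NAME"
else
    echo "would run: conda env export -n $ENV_NAME"
fi
```

This keeps the docs accurate even if the default environment name changes.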
| open | 2019-09-11T11:33:38Z | 2019-09-12T06:31:42Z | https://github.com/jupyterhub/repo2docker/issues/791 | [] | rprimet | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 966 | make_depth_scale.py - No module named 'read_write_model' | I created a depth image and tried to run `make_depth_scale.py`.
However, I got the following error
`ModuleNotFoundError: No module named 'joblib'`
I did `conda install joblib` to install the module.
However, when I ran python again, I got the following error
`ModuleNotFoundError: No module named 'read_write_model'`
The module was not found by either `conda install` or `pip install`.
Where can I get this from? | closed | 2024-08-31T19:22:31Z | 2024-09-06T06:37:28Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/966 | [] | lileaLab | 2 |
jumpserver/jumpserver | django | 15,043 | [Bug] Web application deploys successfully and node load is normal, but website reports no available connection method. | ### Product Version
3.10.17
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [x] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
OS: win2019
CPU: 4C
Mem: 8G
HD: 100G
### 🐛 Bug Description
The publish machine (applet host) was installed and deployed successfully, and the application deployed successfully, but when using website, a message says no connection method is available.




### Recurrence Steps
After Chrome was upgraded from 1.0 to 1.1.
1. Tried reinstalling the system
2. Tried uninstalling and redeploying
### Expected Behavior
Should be able to connect via website.
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2025-03-17T07:00:37Z | 2025-03-18T00:37:25Z | https://github.com/jumpserver/jumpserver/issues/15043 | [
"🐛 Bug"
] | JzpWorkspace | 3 |
K3D-tools/K3D-jupyter | jupyter | 460 | plot.display does not show graphics |
* K3D version:2.17.0
* Python version:3.12
* Operating System: centos
### Description
I do:

```python
import k3d

plot = k3d.plot()
plot.display()
```
I expect to see a plot, but I only see:
Plot(antialias=3, axes=['x', 'y', 'z'], axes_helper=1.0, axes_helper_colors=[16711680, 65280, 255], background_color=16777215, camera_animation=[], camera_fov=60.0, camera_mode='trackball', camera_pan_speed=0.3, camera_rotate_speed=1.0, camera_up_axis='none', camera_zoom_speed=1.2, fps=25.0, fps_meter=False, grid=[-1, -1, -1, 1, 1, 1], grid_color=15132390, height=512, label_color=4473924, lighting=1.5, manipulate_mode='translate', minimum_fps=-1.0, mode='view', screenshot_scale=2.0, snapshot_type='full')
Output()
I think this is just the `__repr__` of the plot object.
Yes, it works under ipykernel, but I have created a new Python kernel. How can I get it working for mine?
thank you | open | 2025-01-06T22:26:08Z | 2025-01-06T22:26:08Z | https://github.com/K3D-tools/K3D-jupyter/issues/460 | [] | gsohler | 0 |
plotly/dash | data-visualization | 2,566 | [BUG] Opacity update using Patch() for px.density_mapbox, not working. | Using python 3.11.3
```
dash 2.10.2
dash-core-components 2.0.0
dash-html-components 2.0.0
```
- OS: [macos]
- Browser [Chrome]
- Version [113.0.5672.92]
Bug Description:
Using the new Patch() feature, the opacity attribute of a px.density_mapbox figure will not update.
Here is an MWE:
```
from configparser import ConfigParser
import dash
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from dash import Input, Output, Patch, State, callback, dcc, html
cf = ConfigParser()
cf.read("config.ini")
MAPBOX_API_KEY = cf["mapbox"]["api_key"]
def get_heatmap(dff):
"""
This function generates a density heatmap using latitude, longitude, and size data and returns the
resulting plot.
"""
heatmap = px.density_mapbox(
data_frame=dff,
lat="lat",
lon="long",
z="size",
opacity=0.2,
)
heatmap.update_layout(
height=900,
mapbox=dict(
accesstoken=MAPBOX_API_KEY,
center=go.layout.mapbox.Center(lat=38, lon=-96),
zoom=4,
),
)
return heatmap
df = pd.DataFrame(
{
"lat": [40, 41, 42],
"long": [-100, -101, -102],
"size": [3000, 5000, 2000],
}
)
main_map = dcc.Graph(
id="main_map",
figure=get_heatmap(df),
)
# Using Fig State
opacity_slider_with_fig_state = dcc.Slider(
id="opacity_slider_with_fig_state",
min=0,
max=1,
step=0.2,
value=0.2,
)
@callback(
Output("main_map", "figure", allow_duplicate=True),
Input("opacity_slider_with_fig_state", "value"),
State("main_map", "figure"),
prevent_initial_call=True,
)
def update_map_opacity_using_fig_state(opacity, current_fig):
"""
This function updates the opacity of a map using the current figure state.
"""
current_fig["data"][0]["opacity"] = opacity
return current_fig
# Using Patch
opacity_slider_with_patch = dcc.Slider(
id="opacity_slider_with_patch",
min=0,
max=1,
step=0.2,
value=0.2,
)
@callback(
Output("main_map", "figure"),
Input("opacity_slider_with_patch", "value"),
prevent_initial_call=True,
)
def update_map_opacity_using_patch(opacity):
"""
This function updates the opacity of a patch in a figure.
"""
patched_fig = Patch()
patched_fig["data"][0]["opacity"] = opacity
return patched_fig
app = dash.Dash()
app.layout = html.Div(
[
main_map,
html.H3("Opacity Slider using Figure State"),
opacity_slider_with_fig_state,
html.H3("Opacity Slider using Patch"),
opacity_slider_with_patch,
]
)
if __name__ == "__main__":
app.run_server(
debug=True,
)
```
Expected Behavior:
The opacity of the data should change when updated with Patch(); this is the desired behavior. However, it does not update.
The only workaround at the moment is the old approach: acquire the figure via State, change the opacity attribute, then push the figure to the Output.
| closed | 2023-06-15T07:37:20Z | 2024-07-25T13:18:23Z | https://github.com/plotly/dash/issues/2566 | [] | OMBeau | 4 |
ploomber/ploomber | jupyter | 192 | Set extract_product default to True | closed | 2020-07-15T02:52:39Z | 2020-07-20T22:00:37Z | https://github.com/ploomber/ploomber/issues/192 | [] | edublancas | 0 | |
ets-labs/python-dependency-injector | asyncio | 177 | Question on testing with python-dependency-injector | Hello and thank you for such a great library.
I have a question regarding of how to test application, that uses python-dependency-injector library.
Lets take simple usecase:
```
class EmailSender:
    def send(self, email):
        pass

class SmtpEmailSender(EmailSender):
    # implementation with use of the smtp library
    pass

class EchoingEmailSender(EmailSender):
    # logging / printing to stdout implementation
    pass

def notify_users(email_sender, email):
    email_sender.send(email)
```
In production I want to use SmtpEmailSender, but in tests only EchoingEmailSender.
I have configured a container that provides me with the production-ready EmailSender class, and I use it like:
```
Services.notify_users(email)
```
So, notify_users gets the production-ready dependency injected.
So the question is: how do I switch implementations in tests?
Surely I can override this specific dependency and it will work okay, but what if I have 10 containers with different providers used by the application? Should I override them in every test I write?
I think it can become an error-prone approach.
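One way to avoid repeating overrides in every test is to centralise them in a single fixture or setup function. Below is a stdlib-only sketch of the idea; real dependency-injector providers expose a similar `override()` / `reset_override()` API, but the names and shapes here are illustrative:

```python
# Minimal provider with test-time overriding, mimicking the pattern.
class Provider:
    def __init__(self, factory):
        self._factory = factory
        self._override = None

    def override(self, factory):
        self._override = factory

    def reset_override(self):
        self._override = None

    def __call__(self):
        return (self._override or self._factory)()

class SmtpEmailSender:
    def send(self, email):
        raise RuntimeError("no SMTP server available in tests")

class EchoingEmailSender:
    def send(self, email):
        print(f"would send: {email}")

email_sender = Provider(SmtpEmailSender)

# One shared place (e.g. a pytest fixture or setUp) flips every test
# over to the echoing implementation:
email_sender.override(EchoingEmailSender)
sender = email_sender()
sender.send("hello@example.com")
```

Because the override lives in one shared fixture instead of each test body, adding a container or provider means updating a single place.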
Thanks. | closed | 2018-01-14T07:08:37Z | 2018-01-17T13:47:12Z | https://github.com/ets-labs/python-dependency-injector/issues/177 | [
"question"
] | asyncee | 9 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 403 | Will a larger value for partials_n_frames be better? | Hi, all.
In theory, using more frames as input to the model will include more information about the speaker.
So, I set the partials_n_frames to 160(exp1) and 300(exp2) respectively, and trained on the same training set.
On training set, the loss and EER of exp2 is slightly lower than that of exp1.But on test set (we record with phones), EER is higher on exp2.
So, I am really confused:
1. partials_n_frames=160 is the best configuration? Did you try any other values?
2. EER is lower on training, but higher on test set. It seems that it is not caused by overfitting. In fact, on other experiments (eg. Bi-LSTM instead of Uni-LSTM), this also occurs.
Do you have any advice? | closed | 2020-07-06T02:46:40Z | 2020-08-13T06:23:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/403 | [] | Coastchb | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,105 | How to make the perfect training data set? | I want to use this repo to clone my voice. How do I create the perfect training data set? Here are some ideas I have:
- Read a set of words that contains every letter in the alphabet
- Read a set of words in a normal tone, then read those same words with more energy/power, then read the same with more inflection, then sarcasm, then with inquisitiveness, etc.
- Read continuously with no long pauses
- Read words for a minimum of 3 minutes or X number of words, where X is some relatively large number
- Minimize external noise, so maybe reserve some time in a professional audio production studio
@CorentinJ any tips here on how to make the best possible training data set? | open | 2022-08-26T23:00:02Z | 2022-08-26T23:00:02Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1105 | [] | CodingRox82 | 0 |
pennersr/django-allauth | django | 3,939 | headless support for custom form fields? | I'm just in the middle of integrating headless with DRF + Simple JWT for API-based signup/login etc.
I've used a custom adapter using `ACCOUNT_ADAPTER` to save a profile model linked to the User record created on signup, but I would like to pass and validate additional fields as part of the signup API call, e.g. first_name.
I found plenty of docs on how to use a custom class extending `SignupForm` and using the setting
`ACCOUNT_FORMS = {"signup": "my.UserSignupForm"}` but this does not seem to get used.
I debugged out the class type for the `form` param passed to my custom adapter's `save_user` method
```
Message: 'AccountAdapter.save'
api-web-1 | Arguments: (<class 'allauth.headless.account.inputs.SignupInput'>,)
```
And see that it's not using the custom `UserSignupForm`.
I've had a look in this repository, particularly at whether there is different configuration for inputs vs. forms, but I didn't see much.
My question is whether the custom forms configuration is meant to take effect for headless APIs, and if so, what the approach should be, and why I might be not seeing the right class coming into the custom adapter.
Thanks for your time! | closed | 2024-07-01T21:55:37Z | 2024-07-02T18:50:25Z | https://github.com/pennersr/django-allauth/issues/3939 | [] | PorridgeBear | 2 |
dfm/corner.py | data-visualization | 214 | Misplaced **hist_kwargs? | It seems that [this](https://github.com/dfm/corner.py/blob/e65dd4cdeb7a9f7f75cbcecb4f07a07de65e2cea/src/corner/core.py#L240) line inside core.py contains the **hist_kwargs argument, even though it is a matplotlib.pyplot function that is called. Should this instead be passed a few lines above, to the np.histogram call, i.e. [here](https://github.com/dfm/corner.py/blob/main/src/corner/core.py#L236)? Possibly there could be two different kwargs, one passed to np.histogram and one to pyplot? | open | 2022-11-03T13:31:15Z | 2022-12-03T19:36:21Z | https://github.com/dfm/corner.py/issues/214 | [] | Cosmicstring | 4 |
huggingface/transformers | nlp | 36,439 | Apply dualpipe from deepseek-v3 to a trainer or model | ### Feature request
Applying DeepSeek's DualPipe to transformers or accelerate
### Motivation
Recently, DeepSeek released the [DualPipe code](https://github.com/deepseek-ai/DualPipe) proposed in the [DeepSeek-V3](https://arxiv.org/html/2412.19437v1) Technical Report.
Looking at the code structure, it is 100% Python, and it seems easy to apply.
### Your contribution
We can modify the DualPipe code and apply it to transformers. | closed | 2025-02-27T02:44:42Z | 2025-02-27T05:59:56Z | https://github.com/huggingface/transformers/issues/36439 | [
"Feature request"
] | jp1924 | 0 |
strawberry-graphql/strawberry | fastapi | 2,826 | Microservices | ### Discussed in https://github.com/strawberry-graphql/strawberry/discussions/2803
<div type='discussions-op-text'>
<sup>Originally posted by **Roman-Ch-32** June 2, 2023</sup>
The question is how to set up interaction between services. The problem is that I need to forward a request to another server, get the response, and map it onto Strawberry schemas.
How does Strawberry support building microservices?
**the services are written in different languages**
URL = "http://192.168.50.30:3002/graphql"
query = """query {sys_users {
id
name
roles{
id
name
}
avatar}}"""
async def get_user() -> list[Manager]:
async with httpx.AsyncClient() as client:
req = await client.request(method='post', url=URL, json={'query': query},
headers={"allow_origins": "allow_origins", "access-control-allow-origin": "*"})
return [Manager(**i) for i in req.json()['data']['sys_users']]
Unpacking with ** does not let you reach into nested parts of the response, and the schemas accept only explicitly declared keyword arguments. At the same time, passing the list on to another resolver is also not feasible, because the service forbids it. Also, if you use another resolver in a schema field, the resolver is initiated at an unclear moment, and the schema stops paying attention to the field in which the resolver is specified.
@strawberry.type
class UserRole:
id: int
name: str
@strawberry.type
class Manager:
id: strawberry.ID | None = None
name: str | None = None
avatar: str | None = None
surname: str | None = None
patronymic: str | None = None
email: str | None = None
roles: List[UserRole] = strawberry.field(resolver=resolver())
</div>
graph Company
@strawberry.federation.type(keys=["id", "managerId"], extend=True)
class Company:
id: strawberry.ID | None = None
company_name: str | None = None
company_full_name: str | None = None
info_type_buisness: Optional[CompanyBuisnesType] | None = None
autos: list[CompanyAuto] | None = None
autos_in_company: int | None = None
contacts: list[CompanyKontact] | None = None
inn: Optional[str]
manager_id: int | None = strawberry.federation.field(external=True)
graph Orders
@strawberry.federation.type(keys=["id", "managerId"])
class Company:
id: strawberry.ID | None = None
manager_id: int | None = strawberry.federation.field()
orders: List["Order"] = strawberry.field(resolver=get_order)
@classmethod
async def resolve_reference(cls, id: strawberry.ID, info: Info, manager_id: int, **kwargs) -> "Company":
print(info.variable_values)
return Company(id=id, manager_id=2)
{'representations': [{'id': '1', 'info': GraphQLResolveInfo(field_name='_entities', field_nodes=[FieldNode at 53:168], return_type=<GraphQLNonNull <GraphQLList <GraphQLUnionType '_Entity'>>>, parent_type=<GraphQLObjectType 'Query'>, path=Path(prev=None, key='_entities', typename='Query'), schema=<graphql.type.schema.GraphQLSchema object at 0x7fc35bbdd720>, fragments={}, root_value=None, operation=OperationDefinitionNode at 0:169, variable_values={...}, context={'request': <starlette.requests.Request object at 0x7fc35b90db40>, 'background_tasks': <starlette.background.BackgroundTasks object at 0x7fc35b90dc60>, 'response': <starlette.responses.Response object at 0x7fc35b90db70>}, is_awaitable=<function is_awaitable at 0x7fc35d8cc3a0>)}, {'id': '14', 'info': GraphQLResolveInfo(field_name='_entities', field_nodes=[FieldNode at 53:168], return_type=<GraphQLNonNull <GraphQLList <GraphQLUnionType '_Entity'>>>, parent_type=<GraphQLObjectType 'Query'>, path=Path(prev=None, key='_entities', typename='Query'), schema=<graphql.type.schema.GraphQLSchema object at 0x7fc35bbdd720>, fragments={}, root_value=None, operation=OperationDefinitionNode at 0:169, variable_values={...}, context={'request': <starlette.requests.Request object at 0x7fc35b90db40>, 'background_tasks': <starlette.background.BackgroundTasks object at 0x7fc35b90dc60>, 'response': <starlette.responses.Response object at 0x7fc35b90db70>}, is_awaitable=<function is_awaitable at 0x7fc35d8cc3a0>)}, {'id': '19', 'info': GraphQLResolveInfo(field_name='_entities', field_nodes=[FieldNode at 53:168], return_type=<GraphQLNonNull <GraphQLList <GraphQLUnionType '_Entity'>>>, parent_type=<GraphQLObjectType 'Query'>, path=Path(prev=None, key='_entities', typename='Query'), schema=<graphql.type.schema.GraphQLSchema object at 0x7fc35bbdd720>, fragments={}, root_value=None, operation=OperationDefinitionNode at 0:169, variable_values={...}, context={'request': <starlette.requests.Request object at 0x7fc35b90db40>, 
'background_tasks': <starlette.background.BackgroundTasks object at 0x7fc35b90dc60>, 'response': <starlette.responses.Response object at 0x7fc35b90db70>}, is_awaitable=<function is_awaitable at 0x7fc35d8cc3a0>)}, {'id': '20', 'info': GraphQLResolveInfo(field_name='_entities', field_nodes=[FieldNode at 53:168], return_type=<GraphQLNonNull <GraphQLList <GraphQLUnionType '_Entity'>>>, parent_type=<GraphQLObjectType 'Query'>, path=Path(prev=None, key='_entities', typename='Query'), schema=<graphql.type.schema.<starlette.responses.Response object at 0x7fc35b90db70>}, is_awaitable=<function is_awaitable at 0x7fc35d8cc3a0>)}, {'id': '24', 'info': GraphQLResolveInfo(field_name='_entities', field_nodes=[FieldNode at 53:168], return_type=<GraphQLNonNull <GraphQLList <GraphQLUnionType '_Entity'>>>, parent_type=<GraphQLObjectType 'Query'>, path=Path(prev=None, key='_entities', typename='Query'), schema=<graphql.type.schema.GraphQLSchema object at 0x7fc35bbdd720>, fragments={}, root_value=None, operation=OperationDefinitionNode at 0:169, variable_values={...}, context={'request': <starlette.requests.Request object at 0x7fc35b90db40>, 'background_tasks': <starlette.background.BackgroundTasks object at 0x7fc35b90dc60>, 'response': <starlette.responses.Response object at 0x7fc35b90db70>}, is_awaitable=<function is_awaitable at 0x7fc35d8cc3a0>)}]}
Only one field, and only the ID, is passed through the keys. An attempt to pass other fields causes the error "Unable to resolve reference for <class 'query_schemas.Order'>". | closed | 2023-06-08T12:24:54Z | 2025-03-20T15:56:12Z | https://github.com/strawberry-graphql/strawberry/issues/2826 | [] | Roman-Ch-32 | 1 |
mwaskom/seaborn | matplotlib | 3,609 | seaborn issue | Datasets do not load in the seaborn library. | closed | 2024-01-02T06:01:47Z | 2024-01-10T11:55:52Z | https://github.com/mwaskom/seaborn/issues/3609 | [] | ponishadevi | 2 |
unionai-oss/pandera | pandas | 1,352 | Pandera timezone-agnostic datetime type | **Is your feature request related to a problem? Please describe.**
When defining a class that inherits from DataFrameModel, I want to define a field whose values are datetimes. Moreover, those values will have timezones. However, I will not be able to define during the class definition what timezone that may be. In other words, in dataframe A, they may be datetimes with tz="America/New_York". In dataframe B, they may be datetimes with tz="America/Los_Angeles". As far as I can tell, there is no type that I can assign that will allow me to pass datetimes with timezones, but not specify which timezone within the type hint.
**Describe the solution you'd like**
I would like there to be a type that I can use to say "this field will be datetimes, but I can't say what the timezone will be."
**Describe alternatives you've considered**
When setting the type of the field to datetime.datetime, pandera.dtypes.DateTime, etc. I get a pandera SchemaError that the series was expected to have type datetime64[ns], but got datetime64[ns, America/New_York] (for example).
I have also tried with DatetimeTZDtype, but that won't work because I need to specify the timezone I want (which I can't do upfront).
**Additional context**
Example Schema:
class MySchema(DataFrameModel):
local_datetime: <what type do I set here?> | open | 2023-09-26T21:50:44Z | 2024-08-23T22:44:58Z | https://github.com/unionai-oss/pandera/issues/1352 | [
"enhancement"
] | max-raphael | 8 |
hanwenlu2016/web-ui | pytest | 11 | A question about console output | 1. If I run directly in debug mode (run via Ctrl+Shift+F10), the console output contains garbled characters.
2. If I run with `python run.py`, the terminal output has no garbled characters, but the log format also differs from running test_XXX on its own.
3. If I run the method directly, the log format is output normally.
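The symptoms are consistent with GBK-encoded bytes being decoded as UTF-8 somewhere along the way; a minimal illustration of that mismatch:

```python
text = "测试日志"  # some sample log text
gbk_bytes = text.encode("gbk")

# A UTF-8 console decoding GBK bytes produces garbled characters:
print(gbk_bytes.decode("utf-8", errors="replace"))
# Decoding with the matching codec recovers the original text:
print(gbk_bytes.decode("gbk"))  # 测试日志
```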
Also, the YAML files are all GBK-encoded. If the editor's encoding is changed to GBK, the console output is all garbled; it can only be UTF-8, and then the YAML files have to be reloaded as GBK separately, or the YAML files converted to GBK encoding directly. | closed | 2021-11-10T02:53:19Z | 2022-02-25T09:25:54Z | https://github.com/hanwenlu2016/web-ui/issues/11 | [] | kadingzz1 | 5 |
zhiyiYo/Fluent-M3U8 | dash | 24 | [Bug]: URLs with parameters cannot be downloaded? | ### What happened?
版本:0.10.0.0
问题:下面这种带参数的连接贴入地址栏后,“下载”按钮不亮,无法点击。
https://kttvquizmz.cdn-centaurus.com/hls2/01/04120/65yc0r0erwg5_,n,h,x,.urlset/index-f3-v1-a1.m3u8?t=QBd_bR3CHGD5NaUL-2pNk2Ulg86aXptlurEGXBRU3lM&s=1741803549&e=129600&f=20602380&srv=rup99sdwt652&i=0.4&sp=500&p1=rup99sdwt652&p2=rup99sdwt652&asn=16509
注:此链接用IDM可以下载
### Operation System
Windows10
### Fluent-M3U8 Version
0.3.0
### How to Reproduce?
Paste into the address bar:
https://kttvquizmz.cdn-centaurus.com/hls2/01/04120/65yc0r0erwg5_,n,h,x,.urlset/index-f3-v1-a1.m3u8?t=QBd_bR3CHGD5NaUL-2pNk2Ulg86aXptlurEGXBRU3lM&s=1741803549&e=129600&f=20602380&srv=rup99sdwt652&i=0.4&sp=500&p1=rup99sdwt652&p2=rup99sdwt652&asn=16509
### m3u8 URL
https://kttvquizmz.cdn-centaurus.com/hls2/01/04120/65yc0r0erwg5_,n,h,x,.urlset/index-f3-v1-a1.m3u8?t=QBd_bR3CHGD5NaUL-2pNk2Ulg86aXptlurEGXBRU3lM&s=1741803549&e=129600&f=20602380&srv=rup99sdwt652&i=0.4&sp=500&p1=rup99sdwt652&p2=rup99sdwt652&asn=16509 | closed | 2025-03-12T19:22:58Z | 2025-03-13T04:51:47Z | https://github.com/zhiyiYo/Fluent-M3U8/issues/24 | [
"bug"
] | doggybread | 1 |
yuka-friends/Windrecorder | streamlit | 186 | feat: Recognize multiple OCR languages simultaneously | Current situation: Only one language can be specified for Windows Media OCR to recognize. In the case of multiple languages, all texts cannot be recognized. This may cause inconvenience for multilingual users or learners.
Goal: Support users to configure multiple OCR languages for simultaneous recognition.
Specific implementation: Complete the integration of multilingual recognition results by executing Windows Media OCR multiple times.
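A rough sketch of that merge step (illustrative only; `run_ocr` is a hypothetical stand-in for a single Windows Media OCR pass):

```python
def ocr_multi_language(image, languages, run_ocr):
    """Run the OCR engine once per configured language and merge the results.

    `run_ocr(image, lang)` stands in for one Windows Media OCR pass and
    returns a list of (text, bounding_box) tuples.
    """
    merged, seen = [], set()
    for lang in languages:
        for text, box in run_ocr(image, lang):
            key = (text, box)
            if key not in seen:  # drop hits recognized identically by several languages
                seen.add(key)
                merged.append((text, box))
    return merged


# Toy demonstration with a fake engine:
fake_results = {
    "en-US": [("hello", (0, 0)), ("世界", (1, 0))],
    "zh-CN": [("你好", (0, 0)), ("世界", (1, 0))],
}
print(ocr_multi_language(None, ["en-US", "zh-CN"], lambda img, lang: fake_results[lang]))
```

A real implementation would also have to resolve conflicting recognitions that land on the same position, rather than just de-duplicating identical hits.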
User voice: https://github.com/yuka-friends/Windrecorder/discussions/168
UI reference:

| closed | 2024-06-15T10:00:58Z | 2024-08-03T07:56:56Z | https://github.com/yuka-friends/Windrecorder/issues/186 | [
"enhancement"
] | Antonoko | 3 |
dynaconf/dynaconf | flask | 984 | [bug] KeyError: '_bypass_evaluation' in Dynaconf 3.2.1 | **Describe the bug**
The change introduced in PR #966 also introduced a KeyError in our tests.
**To Reproduce**
Steps to reproduce the behavior:
Not currently clear without digging further into the cause, but this was triggered in our tests using multi-threading. You can see the failures in the github action link here: https://github.com/SatelliteQE/broker/actions/runs/5928071069/job/16074442428?pr=227
**Expected behavior**
This KeyError should not be triggered. Instead, there should likely be a check that the key exists before attempting to pop or give a default pop value of None.
```python
self._store._box_config.pop("_bypass_evaluation", None)
```
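To spell out the semantics of the proposed fix on a plain dict (the real attribute lives on the Box instance's `_box_config`):

```python
box_config = {}  # "_bypass_evaluation" was never set on this instance

try:
    box_config.pop("_bypass_evaluation")
except KeyError:
    print("pop() without a default raises KeyError")

# With a default, popping a missing key is a harmless no-op:
print(box_config.pop("_bypass_evaluation", None))  # None
```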
| closed | 2023-08-21T19:26:21Z | 2023-08-22T12:53:40Z | https://github.com/dynaconf/dynaconf/issues/984 | [
"bug"
] | JacobCallahan | 3 |
localstack/localstack | python | 12,139 | bug: Step function -> "Get S3 Object" Action's error handling not matching real AWS behaviour | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
I am using the attached state machine (simplified it for this issue reporting). It relies on a bucket named `test-bucket` (needs to be created to run the state machine).
When it is executed, I expected LocalStack to return exactly the same response as AWS, particularly in the `cause` field. However, it is different, as you can see below.
### AWS Response:
> aws stepfunctions describe-execution --execution-arn "arn:aws:states:eu-west-1:038610054328:execution:Leena-failing-s3:2a7e79d7-892d-4f3a-9669-8a5648a07750"
> {
> "executionArn": "arn:aws:states:eu-west-1:038610054328:execution:Leena-failing-s3:2a7e79d7-892d-4f3a-9669-8a5648a07750",
> "stateMachineArn": "arn:aws:states:eu-west-1:038610054328:stateMachine:Leena-failing-s3",
> "name": "2a7e79d7-892d-4f3a-9669-8a5648a07750",
> "status": "FAILED",
> "startDate": "2025-01-14T19:30:54.206000+00:00",
> "stopDate": "2025-01-14T19:30:54.442000+00:00",
> "input": "{}",
> "inputDetails": {
> "included": true
> },
> "error": "S3.NoSuchKeyException",
> "cause": "{\"errorMessage\":\"object not found\",\"statusCode\":404}",
> "redriveCount": 0,
> "redriveStatus": "REDRIVABLE"
> }
[failing-s3-step-function.json](https://github.com/user-attachments/files/18415609/failing-s3-step-function.json)
### Localstack Response:
> awslocal stepfunctions describe-execution --execution-arn "arn:aws:states:eu-west-1:000000000000:execution:TestSM:dc8b6ce9-09cb-4190-8ae3-49d60a45e0ec"
> {
> "executionArn": "arn:aws:states:eu-west-1:000000000000:execution:TestSM:dc8b6ce9-09cb-4190-8ae3-49d60a45e0ec",
> "stateMachineArn": "arn:aws:states:eu-west-1:000000000000:stateMachine:TestSM",
> "name": "dc8b6ce9-09cb-4190-8ae3-49d60a45e0ec",
> "status": "FAILED",
> "startDate": 1736884055.929379,
> "stopDate": 1736884056.021036,
> "input": "{}",
> "inputDetails": {
> "included": true
> },
> "error": "S3.NoSuchKeyException",
> "cause": "{\"errorMessage\":\"The specified key does not exist. (Service: S3, Status Code: 404, Request ID: 101a0238-f620-4639-856b-fea39c8bb520, Extended Request ID: s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=)\",\"statusCode\":500}"
> }
### Expected Behavior
The "S3.NoSuchKeyException" error handling for the GetObject action should be honoured.
> {
> "executionArn": "arn:aws:states:eu-west-1:038610054328:execution:Leena-failing-s3:2a7e79d7-892d-4f3a-9669-8a5648a07750",
> "stateMachineArn": "arn:aws:states:eu-west-1:038610054328:stateMachine:Leena-failing-s3",
> "name": "2a7e79d7-892d-4f3a-9669-8a5648a07750",
> "status": "FAILED",
> "startDate": "2025-01-14T19:30:54.206000+00:00",
> "stopDate": "2025-01-14T19:30:54.442000+00:00",
> "input": "{}",
> "inputDetails": {
> "included": true
> },
> "error": "S3.NoSuchKeyException",
> "cause": "{\"errorMessage\":\"object not found\",\"statusCode\":404}"
> }
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run -v ~/mycode/:/mycode -d localstack/localstack:4.0.3
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Executed the below client commands in localstack container itself:
cd /mycode/state-machine #this is where I had my `failing-s3-step-function.json` file
awslocal s3api create-bucket --bucket "test-bucket" --region "eu-west-1" --create-bucket-configuration LocationConstraint="eu-west-1"
aws --endpoint-url=http://localhost:4566 stepfunctions create-state-machine \
--name TestSM \
--role-arn arn:aws:iam::123456789012:role/service-role/TestSMRole \
--definition file://failing-s3-step-function.json
awslocal stepfunctions start-execution --state-machine-arn 'arn:aws:states:eu-west-1:000000000000:stateMachine:TestSM'
aws stepfunctions describe-execution --execution-arn "arn:aws:states:eu-west-1:000000000000:execution:TestSM:dc8b6ce9-09cb-4190-8ae3-49d60a45e0ec"
### Environment
```markdown
- OS: macOS Sequoia 15.1
- LocalStack: localstack/localstack
LocalStack version: 4.0.3
LocalStack Docker image sha:
LocalStack build date: 2024-11-29
LocalStack build git hash: aa795ed1c
```
### Anything else?
_No response_ | closed | 2025-01-14T20:05:26Z | 2025-01-24T07:49:39Z | https://github.com/localstack/localstack/issues/12139 | [
"type: bug",
"aws:stepfunctions"
] | lagarwal-uic | 5 |
zappa/Zappa | flask | 1,151 | Remove python3.6 lambda runtime support | ## Context
AWS will deprecate the Lambda `python3.6` runtime: no Lambda functions will be able to be created or updated using `python3.6` from Aug 17, 2022 (this year).
https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
## Possible Fix
Remove python3.6 as a supported runtime from zappa
| closed | 2022-07-16T04:09:45Z | 2022-08-05T10:34:14Z | https://github.com/zappa/Zappa/issues/1151 | [] | monkut | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,502 | Can model sharing be realized for two webui projects? I need to test the effect of different versions, now I have two versions of webui, but my model files need to be migrated a lot, is there any configuration file to support model path sharing? Like comfyui | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Can model sharing be realized for two webui projects? I need to test the effect of different versions, now I have two versions of webui, but my model files need to be migrated a lot, is there any configuration file to support model path sharing? Like comfyui
### Steps to reproduce the problem
### What should have happened?
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
### Console logs
### Additional information
_No response_ | open | 2024-09-19T10:17:23Z | 2024-10-17T15:30:30Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16502 | [
"asking-for-help-with-local-system-issues"
] | huangjianyi0701 | 9 |
plotly/dash | data-science | 3,119 | pattern-matched long callbacks cancelled incorrectly | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.18.2 /home/amorton/gh/dash
dash-core-components 2.0.0
dash_dangerously_set_inner_html 0.0.2
dash-flow-example 0.0.5
dash_generator_test_component_nested 0.0.1 /home/amorton/gh/dash/@plotly/dash-generator-test-component-nested
dash_generator_test_component_standard 0.0.1 /home/amorton/gh/dash/@plotly/dash-generator-test-component-standard
dash_generator_test_component_typescript 0.0.1 /home/amorton/gh/dash/@plotly/dash-generator-test-component-typescript
dash-html-components 2.0.0
dash-table 5.0.0
dash_test_components 0.0.1 /home/amorton/gh/dash/@plotly/dash-test-components
dash-testing-stub 0.0.2
```
**Describe the bug**
Pattern-matched long callbacks incorrectly cancelled based on wildcard output
Consider the following example app:
```python
import time
from dash import Dash, DiskcacheManager, callback, html, MATCH, Output, Input, State
def build_output_message_id(item_id_str: str) -> dict[str, str]:
return {
'component': 'output-message',
'item_id': item_id_str,
}
def build_output_item(item_id: float) -> html.Div:
return html.Div(
[
html.Span(f'{item_id:.2} sec delay:', style={'margin-right': '1rem'}),
html.Span(0, id=build_output_message_id(str(item_id))),
html.Br(),
],
style={"margin-top": "1rem"},
)
def build_app_layout() -> html.Div:
return html.Div([
html.Button('Fire!', id='button', n_clicks=0),
html.Br(),
*[build_output_item(i * 0.2) for i in range(20)],
], style={"display": "block"})
@callback(
Output(build_output_message_id(MATCH), 'children'),
Input('button', 'n_clicks'),
State(build_output_message_id(MATCH), 'children'),
State(build_output_message_id(MATCH), 'id'),
prevent_initial_call=True,
background=True,
interval=200,
)
def update_messages(_, current_value, id_dict):
delay_secs = float(id_dict["item_id"])
time.sleep(delay_secs)
return current_value + 1
app = Dash(
background_callback_manager=DiskcacheManager(),
)
app.layout = build_app_layout()
app.run(
host='0.0.0.0',
debug=True,
)
```
Upon pressing the button you should see many numbers _never_ increment, and many requests being made with a list of `oldJob` values.
This is unexpected, since the outputs don't correspond to the same concrete component.
The following patch resolves the issue in this example app.
```diff
diff --git a/dash/dash-renderer/src/actions/callbacks.ts b/dash/dash-renderer/src/actions/callbacks.ts
index 23da0a3f..a73af9d0 100644
--- a/dash/dash-renderer/src/actions/callbacks.ts
+++ b/dash/dash-renderer/src/actions/callbacks.ts
@@ -561,7 +561,7 @@ function handleServerside(
cacheKey: data.cacheKey as string,
cancelInputs: data.cancel,
progressDefault: data.progressDefault,
- output
+ output: JSON.stringify(payload.outputs),
};
dispatch(addCallbackJob(jobInfo));
job = data.job;
@@ -761,9 +761,10 @@ export function executeCallback(
let lastError: any;
const additionalArgs: [string, string, boolean?][] = [];
+ const jsonOutput = JSON.stringify(payload.outputs);
values(getState().callbackJobs).forEach(
(job: CallbackJobPayload) => {
- if (cb.callback.output === job.output) {
+ if (jsonOutput === job.output) {
// Terminate the old jobs that are not completed
// set as outdated for the callback promise to
// resolve and remove after.
```
| open | 2025-01-08T19:58:02Z | 2025-01-09T19:19:29Z | https://github.com/plotly/dash/issues/3119 | [
"bug",
"P2"
] | apmorton | 0 |
tensorpack/tensorpack | tensorflow | 974 | error when using get_global_step_var() as a parameter to modify the model graph computation | I am using tensorpack0.8.6 and have used the following code to feed the global_step as a parameter for the graph layer. This is an imagenet-resnet training task and the **1-epoch training went well** , but when 1 epoch ends, and the inference runner runs, the following error occurs:
```
[1106 20:21:22 @base.py:237] Start Epoch 1 ...
0%| |0/5000[00:00<?,?it/s][1106 20:21:22 @input_source.py:513] Pre-filling StagingArea ...
[1106 20:21:23 @input_source.py:517] 1 element was put into StagingArea.
2018-11-06 20:22:35.701900: I tensorflow/core/kernels/cuda_solvers.cc:137] Creating CudaSolver handles for stream 0x7fef4912be80
100%|##########################################################################################################################################################################################################9|4999/5000[19:08<00:00, 4.35it/s]
2018-11-06 20:40:31.328630: W tensorflow/core/kernels/queue_base.cc:295] _1_DataParallelInferenceRunner/QueueInput/input_queue: Skipping cancelled enqueue attempt with queue not closed
2018-11-06 20:40:31.328769: W tensorflow/core/kernels/queue_base.cc:295] _0_QueueInput/input_queue: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
File "imagenet.py", line 200, in <module>
launch_train_with_config(config, trainer)
File "/root/anaconda2/lib/python2.7/site-packages/tensorpack-0.8.6-py2.7.egg/tensorpack/train/interface.py", line 90, in launch_train_with_config
extra_callbacks=config.extra_callbacks)
[1106 20:40:31 @input_source.py:150] EnqueueThread QueueInput/input_queue Exited.
File "/root/anaconda2/lib/python2.7/site-packages/tensorpack-0.8.6-py2.7.egg/tensorpack/train/base.py", line 306, in train_with_defaults
steps_per_epoch, starting_epoch, max_epoch)
File "/root/anaconda2/lib/python2.7/site-packages/tensorpack-0.8.6-py2.7.egg/tensorpack/train/base.py", line 278, in train
self.main_loop(steps_per_epoch, starting_epoch, max_epoch)
[1106 20:40:31 @input_source.py:150] EnqueueThread DataParallelInferenceRunner/QueueInput/input_queue Exited.
File "/root/anaconda2/lib/python2.7/site-packages/tensorpack-0.8.6-py2.7.egg/tensorpack/utils/argtools.py", line 181, in wrapper
return func(*args, **kwargs)
File "/root/anaconda2/lib/python2.7/site-packages/tensorpack-0.8.6-py2.7.egg/tensorpack/train/base.py", line 243, in main_loop
self.run_step() # implemented by subclass
File "/root/anaconda2/lib/python2.7/site-packages/tensorpack-0.8.6-py2.7.egg/tensorpack/train/base.py", line 146, in run_step
self.hooked_sess.run(self.train_op)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 518, in run
run_metadata=run_metadata)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 862, in run
run_metadata=run_metadata)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 818, in run
return self._sess.run(*args, **kwargs)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 972, in run
run_metadata=run_metadata)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 818, in run
return self._sess.run(*args, **kwargs)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
options, run_metadata)
File "/root/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[0] does not have value
PrefetchDataZMQ successfully cleaned-up.
PrefetchDataZMQ successfully cleaned-up.
```
@ppwwyyxx | closed | 2018-11-06T12:59:24Z | 2018-12-12T17:55:07Z | https://github.com/tensorpack/tensorpack/issues/974 | [
"upstream issue"
] | brisker | 19 |
Skyvern-AI/skyvern | automation | 1,650 | Add GCP Cloud Storage File Upload Integration | ## Feature Request: GCP Cloud Storage File Upload Support
Currently, the project supports AWS S3 file uploads, but lacks integration with Google Cloud Platform (GCP) Cloud Storage. Adding GCP Cloud Storage support would:
- Provide flexibility for users leveraging GCP infrastructure
- Expand the project's cloud storage compatibility | open | 2025-01-26T19:16:11Z | 2025-01-31T13:34:41Z | https://github.com/Skyvern-AI/skyvern/issues/1650 | [
"help wanted"
] | piyushmittal20 | 3 |
aiogram/aiogram | asyncio | 717 | Improve filters factory resolve error | closed | 2021-10-05T20:54:49Z | 2021-10-07T14:07:37Z | https://github.com/aiogram/aiogram/issues/717 | [] | JrooTJunior | 0 | |
jazzband/django-oauth-toolkit | django | 613 | Introspection not working with JSON | I have 'OAUTH2_BACKEND_CLASS': 'oauth2_provider.oauth2_backends.JSONOAuthLibCore' set so that JSON is read instead of x-www-form-urlencoded, but when I try the /introspect/ endpoint it always tries to read x-www-form-urlencoded rather than JSON.
I already checked introspect.py, and `oauthlib_backend_class = oauth2_settings.OAUTH2_BACKEND_CLASS` is not present there. Could this be a pull request, or can I override this class? | open | 2018-06-28T10:47:01Z | 2023-07-29T19:46:42Z | https://github.com/jazzband/django-oauth-toolkit/issues/613 | [] | tximpa91 | 1 |
yihong0618/running_page | data-visualization | 777 | A question for the maintainers: my Action has GPX data that needs syncing and also syncs COROS data, but when it runs, the logs show that only the COROS script executed and the GPX script did not run. Why? | A question for the maintainers: my Action has GPX data that needs syncing and also syncs COROS data, but when it runs, the logs show that only the COROS script executed and the GPX script did not run. Why? | closed | 2025-02-09T07:25:38Z | 2025-02-10T06:04:54Z | https://github.com/yihong0618/running_page/issues/777 | [] | Leerol | 6 |
nonebot/nonebot2 | fastapi | 2,989 | Plugin: nonechat | ### PyPI project name
nonebot-plugin-nonechat
### Plugin import package name
nonebot_plugin_nonechat
### Tags
[{"label":"LLM","color":"#52eacf"}]
### Plugin config options
_No response_ | closed | 2024-10-01T07:54:41Z | 2024-10-04T02:53:25Z | https://github.com/nonebot/nonebot2/issues/2989 | [
"Plugin"
] | hanasa2023 | 3 |
adbar/trafilatura | web-scraping | 514 | Trafilatura to support more robust async library than standard request | This is a request for Trafilatura to support a more robust async library, such as `asyncio`/`aiohttp`, instead of the underlying `urllib`, which has some issues.
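Not from the issue — a stubbed sketch of the concurrent-fetch pattern such an async backend would enable. `fetch` here is a stand-in for a real aiohttp request; no networking happens:

```python
import asyncio

async def fetch(url):
    # Stand-in for an aiohttp request; real code would await session.get(url).
    await asyncio.sleep(0)
    return f"<html>{url}</html>"

async def fetch_all(urls):
    # Download many pages concurrently instead of one-by-one as with urllib.
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(fetch_all(["https://a.example", "https://b.example"]))
print(results)  # ['<html>https://a.example</html>', '<html>https://b.example</html>']
```

The point of the request is that `gather` lets many downloads overlap, which blocking `urllib` calls cannot do.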
Assisting conversation: https://github.com/adbar/trafilatura/discussions/515 | closed | 2024-02-26T14:56:54Z | 2025-02-25T04:51:35Z | https://github.com/adbar/trafilatura/issues/514 | [
"question"
] | krstp | 8 |
milesmcc/shynet | django | 12 | Single-command Docker-compose deploy | It'd be great to have a way to run shynet from scratch based on a single docker-compose file.
I finished most of the (minor) adaptations that are needed in order to get that running, so I'll PR when I'm done.
For transparency, modifications include:
- exposing the current function arguments as environment variables (https://github.com/milesmcc/shynet/tree/master/shynet/core/management/commands)
- Adding a check to Webserver.sh to query Postgres on startup and check:
- if the db is running (otherwise exit container)
- if the database tables exist, otherwise run migrate
- if the admin user is set, otherwise run createadmin with the env passed as parameter
- if the hostname is set, otherwise run hostname with with the env passed as parameter
- if the whitelabel is set, otherwise run whitelabel with the env passed as parameter
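The startup checks listed above boil down to a small decision function — an editorial sketch (the function and flag names are made up, not shynet code):

```python
def startup_commands(db_running, tables_exist, admin_set, hostname_set, whitelabel_set):
    """Decide which management commands Webserver.sh should run on boot."""
    if not db_running:
        raise SystemExit("database unreachable; exiting container")
    commands = []
    if not tables_exist:
        commands.append("migrate")
    if not admin_set:
        commands.append("createadmin")  # credentials come from env vars
    if not hostname_set:
        commands.append("hostname")
    if not whitelabel_set:
        commands.append("whitelabel")
    return commands

# Fresh database, with hostname/whitelabel already configured via env vars:
print(startup_commands(True, False, False, True, True))  # ['migrate', 'createadmin']
```

A shell implementation in Webserver.sh would perform the same checks in the same order, exiting early if Postgres is unreachable.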
The Compose file itself is pretty much ready, but I suck at Postgres so it's taking me a while to write valid checks in the bash script. | closed | 2020-04-29T16:14:45Z | 2020-05-02T17:00:22Z | https://github.com/milesmcc/shynet/issues/12 | [
"enhancement"
] | Windyo | 2 |
ymcui/Chinese-BERT-wwm | tensorflow | 234 | How can I extract word vectors from a specific layer? | I used the following code to try to extract word vectors from specific layers:
```python
# Get the word embeddings from layers 9 to 12
layer_start = 9  # Starting layer (inclusive)
layer_end = 13  # Ending layer (exclusive)
embeddings1 = []
embeddings2 = []
for layer in range(layer_start, layer_end):
    embeddings1_layer = module.get_embedding(tokens1, use_specified_layer=True, layer_num=layer)
    embeddings2_layer = module.get_embedding(tokens2, use_specified_layer=True, layer_num=layer)
    embeddings1_layer = np.array([emb[0] for emb in embeddings1_layer])
    embeddings2_layer = np.array([emb[0] for emb in embeddings2_layer])
    embeddings1.append(embeddings1_layer)
    embeddings2.append(embeddings2_layer)
```
I got the following error:
```
Traceback (most recent call last):
  in <module>
    embeddings1_layer = module.get_embedding(tokens1, use_specified_layer=True, layer_num=layer)
TypeError: TransformerModule.get_embedding() got an unexpected keyword argument 'use_specified_layer'
```
It seems this argument doesn't exist. How can I extract word vectors from a specific layer? Directly extracted vectors seem to be static word embeddings. | closed | 2023-06-19T03:40:17Z | 2023-09-17T02:51:45Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/234 | [
"stale"
] | Black-Rhen | 2 |
ipython-books/cookbook-2nd | data-visualization | 8 | Error with temp_fft = sp.fftpack.fft(temp) when run in Jupyter Notebook | Hello, the code does not work for me when run in a Jupyter Notebook. I replaced `to_datetime` with `to_pydatetime`, which fixed one error, but now `temp_fft = sp.fftpack.fft(temp)` throws `AttributeError: 'Series' object has no attribute 'flags'`.
I have updated Conda and scipy to the latest available and the error persists.
Thank you
-A | open | 2020-10-09T20:14:47Z | 2020-10-09T20:14:47Z | https://github.com/ipython-books/cookbook-2nd/issues/8 | [] | armarvin | 0 |
serengil/deepface | deep-learning | 1,018 | how to retain only frontal faces with both eyes and face angle? | need help | closed | 2024-02-10T10:11:43Z | 2024-02-10T10:49:48Z | https://github.com/serengil/deepface/issues/1018 | [
"question"
] | Arslan-Mehmood1 | 3 |
twopirllc/pandas-ta | pandas | 116 | Supertrend indicator giving only short values | This is not a bug report, because I'm pretty sure it has to do with the OHLC data I'm feeding to pandas-ta.
I noticed that with some markets I'll get only short values, so on the chart the band will be red for the whole length of the chart. This "issue" seems to happen when I set a higher Accelerator (the multiplier); in this case I'm setting it to 4. What is weird to me is that this won't happen with some other markets with the same number of candles (1550).

Could it be because I don't have enough data to let the ATR be calculated properly?
Here is my code, to replicate my issue:
```python
import pandas as pd
import cfscrape
import pandas_ta as ta
import numpy as np
import json, datetime
BU = cfscrape.create_scraper()
URL = "https://api.binance.com/api/v1/klines?&symbol=TRXBTC&interval=4h&limit=1550"
ResultRaw = BU.get(URL, timeout=(10, 15)).content
Result = json.loads(ResultRaw)
for x in Result:
TimeUnix = float(x[0]) / float(1000)
K = datetime.datetime.fromtimestamp(TimeUnix)
x[0] = K
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
df = pd.DataFrame([x[:6] for x in Result],
columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
format = '%Y-%m-%d %H:%M:%S'
df['Date'] = pd.to_datetime(df['Date'], format=format)
df = df.set_index(pd.DatetimeIndex(df['Date']))
df["Open"] = pd.to_numeric(df["Open"],errors='coerce')
df["High"] = pd.to_numeric(df["High"],errors='coerce')
df["Low"] = pd.to_numeric(df["Low"],errors='coerce')
df["Close"] = pd.to_numeric(df["Close"],errors='coerce')
df["Volume"] = pd.to_numeric(df["Volume"],errors='coerce')
df = df.fillna(value=0)
df = df.rename(columns={'Open': 'open', 'High': 'high', 'Low': 'low', 'Close': 'close', 'Volume': 'Volume'})
ST = ta.supertrend(df['high'], df['low'], df['close'], 10, 4.0)
print(ST)
```
| closed | 2020-09-06T13:54:09Z | 2022-02-20T18:55:01Z | https://github.com/twopirllc/pandas-ta/issues/116 | [] | Jacks349 | 9 |
modAL-python/modAL | scikit-learn | 12 | about learner.teach | It seems that each time we run `learner.teach`, the model fits the initial data plus the new data from the beginning, just like an untrained new model. Can the model learn just the new data, using the weights already trained on the initial data? | closed | 2018-08-09T08:27:49Z | 2018-10-18T16:27:11Z | https://github.com/modAL-python/modAL/issues/12 | [] | luxu1220 | 7
rthalley/dnspython | asyncio | 1,037 | Default Branch Renaming on February 17, 2024 | I plan to rename the default branch, currently "master", to "main" sometime on or after Saturday, February 17, 2024.
If you have a checked out clone, after the renaming occurs you'll need to either [update it](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/renaming-a-branch#updating-a-local-clone-after-a-branch-name-changes) or reclone. | closed | 2024-02-05T22:48:03Z | 2024-02-20T22:02:58Z | https://github.com/rthalley/dnspython/issues/1037 | [
"Pinned"
] | rthalley | 1 |
pyeventsourcing/eventsourcing | sqlalchemy | 103 | Event migration | There are five approaches... it might be useful for the library to support them. | closed | 2017-08-01T05:37:24Z | 2019-06-27T12:49:43Z | https://github.com/pyeventsourcing/eventsourcing/issues/103 | [] | johnbywater | 12 |
microsoft/nni | data-science | 5,715 | How can I review an experiment that has been stopped? It seems the web UI can only show the details of a 'RUNNING' experiment. | closed | 2023-11-27T09:09:32Z | 2023-11-28T12:16:30Z | https://github.com/microsoft/nni/issues/5715 | [] | lightup666 | 1
google/trax | numpy | 1,686 | Multiple heads option is not working in SelfAttention | ### Description
I use just some input activations, one SelfAttention layer and `n_heads=2`, but my code breaks. However, when I set `n_heads=1`, everything works fine.
### Environment information
```
OS: <MacOS>
$ pip freeze | grep trax
trax==1.3.9
$ pip freeze | grep tensor
mesh-tensorflow==0.1.19
tensorboard==2.5.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-datasets==4.3.0
tensorflow-estimator==2.4.0
tensorflow-hub==0.12.0
tensorflow-metadata==0.30.0
tensorflow-text==2.4.3
$ pip freeze | grep jax
jax==0.2.19
jaxlib==0.1.70
$ python -V
Python 3.8.10
```
# Steps to reproduce:
Here is a minimal code:
```
import trax
import numpy as np
attention = trax.layers.SelfAttention(n_heads=2)
activations = np.random.randint(0, 10, (1, 100, 1)).astype(np.float32)
input = (activations, )
init = attention.init(input)
output = attention(input)
```
# Error logs:
```
File [...]/site-packages/jax/linear_util.py, line 166, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File [...]/layers/research/efficient_attention.py, line 1637, in forward_unbatched_h
return forward_unbatched(*i_h, weights=w_h, state=s_h)
File [...]/layers/research/efficient_attention.py, line 1175, in forward_unbatched
q_info = kv_info = np.arange(q.shape[-2], dtype=np.int32)
IndexError: tuple index out of range
```
| open | 2021-08-17T16:39:28Z | 2021-09-01T00:42:59Z | https://github.com/google/trax/issues/1686 | [] | kenenbek | 1 |
ivy-llc/ivy | numpy | 27,930 | Fix Ivy Failing Test: numpy - statistical.min | closed | 2024-01-16T19:14:51Z | 2024-01-17T10:00:32Z | https://github.com/ivy-llc/ivy/issues/27930 | [
"Sub Task"
] | samthakur587 | 1 | |
mwaskom/seaborn | data-visualization | 3,002 | Global matplotlib style with objects interface | Hey! thanks for the amazing 0.12 release 🚀 !
I want to use the new object interface while using a global `matplotlib` style following https://seaborn.pydata.org/generated/seaborn.objects.Plot.theme.html#seaborn.objects.Plot.theme
I have tried:
```python
import matplotlib.pyplot as plt
from matplotlib import style  # needed for style.library below
import seaborn as sns
import seaborn.objects as so
anscombe = sns.load_dataset("anscombe")
plt.style.use("bmh")
sns.set_theme({**style.library["bmh"]})
sns.set_style({**style.library["bmh"]})
sns.set_theme(rc=style.library["bmh"])
sns.set_style(rc=style.library["bmh"])
p = (
so.Plot(anscombe, "x", "y")
.facet("dataset", wrap=2)
.add(so.Line(), so.PolyFit(order=1))
.add(so.Dot())
)
p
```
but I am still getting:

Any tips on how to make this work? Thanks!
**Remark:** With the "old" seaborn's functional API it works as expected. | closed | 2022-09-06T13:18:09Z | 2022-09-08T10:38:00Z | https://github.com/mwaskom/seaborn/issues/3002 | [] | juanitorduz | 2 |
Miserlou/Zappa | flask | 1,668 | [Feature] Ability to pass function timeout to the manage command | Having a timeout of 30s is generally good for an API but when it comes to execute commands (invoke or manage), it would be useful to be able to pass a higher value like 900s. | open | 2018-10-19T10:15:46Z | 2018-10-19T11:11:46Z | https://github.com/Miserlou/Zappa/issues/1668 | [] | khamaileon | 1 |
dsdanielpark/Bard-API | api | 213 | AsyncClient.get() got an unexpected keyword argument 'follow_redirects | ```
async def generate_response(user_prompt):
response = await bard.get_answer(user_prompt)
return response
```
This was working until yesterday, when this error started appearing randomly:
```
AsyncClient.get() got an unexpected keyword argument 'follow_redirects'
``` | closed | 2023-10-17T14:32:58Z | 2023-10-26T19:01:17Z | https://github.com/dsdanielpark/Bard-API/issues/213 | [] | odgtr | 2 |
ageitgey/face_recognition | python | 1,409 | Always get "Segmentation fault" when using threads? | closed | 2022-02-13T18:58:41Z | 2022-04-18T09:16:08Z | https://github.com/ageitgey/face_recognition/issues/1409 | [] | razrabotkajava | 1 | |
python-gino/gino | sqlalchemy | 647 | Gino transaction not rolled back with Quart test_client | * GINO version: 0.8.6
* Python version: 3.8
* asyncpg version: 0.18
* aiocontextvars version:0.2.2
* PostgreSQL version:
### Description
I'm trying to write tests for my Quart application. As per https://github.com/python-gino/gino/issues/512, it appears that pytest-asyncio's fixtures don't allow us to yield the connection and roll back, so it's suggested that we do something like this within each test:
```python
import pytest
from myproject import db  # this is the usual gino db

@pytest.mark.asyncio
async def test_something():
    async with db.bind.transaction() as transaction:
        # - first load whatever data into the db
        # - then perform test logic
        # - finally, clean up the DB by rolling back
        transaction.raise_rollback()
```
My issue is that when I utilize Quart's builtin test_client for testing of my routes that use Gino to perform db insert/update operations, my transactions performed in the endpoints are not rolled back. Is there a way to rollback these transactions? Is another db connection being spawned and used from my Pool? Is it because a new asyncio execution context is being used by test_client?
I can provide additional code if the below example is not clear.
### What I Did
```python
async def test_something():
    async with db.bind.transaction() as transaction:
        # - first load whatever data into the db
        # - then perform test logic
        # The operations performed in this call are not rolled back:
        response = await test_app.post()
        # - finally, clean up the DB by rolling back
        transaction.raise_rollback()
```
| open | 2020-04-02T14:53:31Z | 2020-04-20T23:09:48Z | https://github.com/python-gino/gino/issues/647 | [
"question"
] | lawrencealexander10 | 6 |
Kanaries/pygwalker | pandas | 101 | Colab Demo giving error (missing DC Bikes data URL) | This line of code gives a 404:
`df = pd.read_csv("https://raw.githubusercontent.com/Kanaries/pygwalker/main/tests/bike_sharing_dc.csv", parse_dates=['date'])`
I can make a PR, but this seems like a preference that's good to discuss here first. Keep it in this repo? Where? Fetch from somewhere else? Where? | closed | 2023-04-21T19:26:26Z | 2023-04-22T06:06:57Z | https://github.com/Kanaries/pygwalker/issues/101 | [] | bhollan | 2
keras-team/keras | tensorflow | 20,722 | Is it possible to use tf.data with tf operations while utilizing jax or torch as the backend? | Apart from using TensorFlow as the backend, what is the proper approach to using basic operations (e.g. `tf.concat`) inside tf.data API pipelines? The following code works with the TensorFlow backend, but not with torch or jax.
```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # tensorflow, torch, jax
import numpy as np  # needed for np.ones below
import keras
from keras import layers
import tensorflow as tf
aug_model = keras.Sequential([
keras.Input(shape=(224, 224, 3)),
layers.RandomFlip("horizontal_and_vertical")
])
def augment_data_tf(x, y):
combined = tf.concat([x, y], axis=-1)
z = aug_model(combined)
x = z[..., :3]
y = z[..., 3:]
return x, y
a = np.ones((4, 224, 224, 3)).astype(np.float32)
b = np.ones((4, 224, 224, 2)).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices((a, b))
dataset = dataset.batch(3, drop_remainder=True)
dataset = dataset.map(
augment_data_tf, num_parallel_calls=tf.data.AUTOTUNE
)
```
```bash
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-7-2d25b0c0bbad>](https://localhost:8080/#) in <cell line: 3>()
1 dataset = tf.data.Dataset.from_tensor_slices((a, b))
2 dataset = dataset.batch(3, drop_remainder=True)
----> 3 dataset = dataset.map(
4 augment_data_tf, num_parallel_calls=tf.data.AUTOTUNE
5 )
25 frames
[/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py](https://localhost:8080/#) in _convert_to_array_if_dtype_fails(x)
4102 dtypes.dtype(x)
4103 except TypeError:
-> 4104 return np.asarray(x)
4105 else:
4106 return x
NotImplementedError: in user code:
File "<ipython-input-5-ca4b074b58a5>", line 6, in augment_data_tf *
z = aug_model(combined)
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler **
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.10/dist-packages/optree/ops.py", line 752, in tree_map
return treespec.unflatten(map(func, *flat_args))
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4252, in asarray
return array(a, dtype=dtype, copy=bool(copy), order=order, device=device)
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4058, in array
leaves = [_convert_to_array_if_dtype_fails(leaf) for leaf in leaves]
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4058, in <listcomp>
leaves = [_convert_to_array_if_dtype_fails(leaf) for leaf in leaves]
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4104, in _convert_to_array_if_dtype_fails
return np.asarray(x)
NotImplementedError: Cannot convert a symbolic tf.Tensor (concat:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.
``` | closed | 2025-01-03T19:44:45Z | 2025-01-04T22:44:43Z | https://github.com/keras-team/keras/issues/20722 | [] | innat | 1 |
postmanlabs/httpbin | api | 498 | 404s after deploy on ECS | Hi! Once I've deployed this on an ECS cluster with an ALB, I get 404s from the webserver (response header `Server: gunicorn/19.9.0`).
I appreciate any help to push me in the right direction. | open | 2018-07-31T08:52:32Z | 2018-07-31T13:43:34Z | https://github.com/postmanlabs/httpbin/issues/498 | [] | etiennemunnich | 1 |
Lightning-AI/pytorch-lightning | pytorch | 20,307 | `Trainer`'s `.init_module()` context does not initialize model on target device | ### Bug description
I refer to the documentation at https://lightning.ai/docs/pytorch/stable/advanced/model_init.html, which states "you can force PyTorch to create the model directly on the target device" when using the `.init_module()` context. However, I have verified across different GPU machines that this is not the case. A simple repro is provided below, which prints the model's device after initialization under the context: it always prints 'cpu'.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
from torch import nn, optim
from pytorch_lightning import Trainer, LightningModule
class LitAutoEncoder(LightningModule):
'''
Model taken from https://lightning.ai/docs/pytorch/stable/starter/introduction.html
Details unimportant
'''
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
self.decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))
def training_step(self, batch, batch_idx):
x, _ = batch
x = x.view(x.size(0), -1)
z = self.encoder(x)
x_hat = self.decoder(z)
loss = nn.functional.mse_loss(x_hat, x)
return loss
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=1e-3)
return optimizer
trainer = Trainer(accelerator='gpu', devices=[0])
with trainer.init_module():
model = LitAutoEncoder()
print(model.device) # => cpu
```
### Error messages and logs
### Environment
<details>
<summary>Current environment</summary>
```
* CUDA:
- GPU:
- Tesla V100-SXM2-32GB-LS
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB-LS
- Tesla V100-SXM2-32GB-LS
- Tesla V100-SXM2-32GB-LS
- Tesla V100-SXM2-32GB-LS
- Tesla V100-SXM2-32GB-LS
- Tesla V100-SXM2-32GB-LS
- available: True
- version: 11.8
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.6
- open-clip-torch: 2.26.1
- pytorch-lightning: 2.4.0
- torch: 2.1.0
- torchaudio: 2.1.0
- torchmetrics: 1.4.0.post0
- torchvision: 0.16.0
* Packages:
- aiohappyeyeballs: 2.3.5
- aiohttp: 3.10.3
- aiosignal: 1.3.1
- altair: 5.4.1
- antlr4-python3-runtime: 4.9.3
- appdirs: 1.4.4
- asttokens: 2.4.1
- async-timeout: 4.0.3
- attrs: 24.2.0
- autocommand: 2.2.2
- backports.tarfile: 1.2.0
- blinker: 1.8.2
- brotli: 1.1.0
- cachetools: 5.5.0
- certifi: 2024.8.30
- cffi: 1.17.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- colorama: 0.4.6
- comm: 0.2.2
- datasets: 2.20.0
- debugpy: 1.8.5
- decorator: 5.1.1
- dill: 0.3.8
- docker-pycreds: 0.4.0
- einops: 0.8.0
- exceptiongroup: 1.2.2
- executing: 2.1.0
- filelock: 3.15.4
- frozenlist: 1.4.1
- fsspec: 2024.5.0
- ftfy: 6.2.3
- gitdb: 4.0.11
- gitpython: 3.1.43
- gmpy2: 2.1.5
- h2: 4.1.0
- hpack: 4.0.0
- huggingface-hub: 0.24.5
- hyperframe: 6.0.1
- idna: 3.7
- importlib-metadata: 7.2.1
- importlib-resources: 6.4.5
- inflect: 7.3.1
- ipykernel: 6.29.5
- ipython: 8.27.0
- ipywidgets: 8.1.5
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jedi: 0.19.1
- jinja2: 3.1.4
- jsonlines: 4.0.0
- jsonschema: 4.23.0
- jsonschema-specifications: 2023.12.1
- jupyter-client: 8.6.3
- jupyter-core: 5.7.2
- jupyterlab-widgets: 3.0.13
- lightning: 2.4.0
- lightning-utilities: 0.11.6
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib-inline: 0.1.7
- mdurl: 0.1.2
- more-itertools: 10.3.0
- mpmath: 1.3.0
- multidict: 6.0.5
- multiprocess: 0.70.16
- narwhals: 1.8.2
- nest-asyncio: 1.6.0
- networkx: 3.3
- numpy: 1.26.4
- omegaconf: 2.3.0
- open-clip-torch: 2.26.1
- opencv-python: 4.10.0
- opencv-python-headless: 4.10.0
- ordered-set: 4.1.0
- packaging: 24.1
- pandas: 2.2.2
- parso: 0.8.4
- pathtools: 0.1.2
- pexpect: 4.9.0
- pickleshare: 0.7.5
- pillow: 10.4.0
- pip: 24.2
- pkgutil-resolve-name: 1.3.10
- platformdirs: 4.3.6
- prompt-toolkit: 3.0.47
- protobuf: 4.25.3
- psutil: 6.0.0
- ptyprocess: 0.7.0
- pure-eval: 0.2.3
- pyarrow: 17.0.0
- pyarrow-hotfix: 0.6
- pycparser: 2.22
- pydeck: 0.8.0b4
- pygments: 2.18.0
- pysocks: 1.7.1
- python-dateutil: 2.9.0
- pytorch-lightning: 2.4.0
- pytz: 2024.1
- pyyaml: 6.0.2
- pyzmq: 26.2.0
- referencing: 0.35.1
- regex: 2024.7.24
- requests: 2.32.3
- rich: 13.8.1
- rpds-py: 0.20.0
- safetensors: 0.4.4
- sentry-sdk: 2.12.0
- setproctitle: 1.3.3
- setuptools: 72.1.0
- six: 1.16.0
- smmap: 5.0.0
- stack-data: 0.6.2
- streamlit: 1.38.0
- sympy: 1.13.2
- tenacity: 8.5.0
- timm: 1.0.8
- tokenizers: 0.19.1
- toml: 0.10.2
- tomli: 2.0.1
- torch: 2.1.0
- torchaudio: 2.1.0
- torchmetrics: 1.4.0.post0
- torchvision: 0.16.0
- tornado: 6.4.1
- tqdm: 4.66.5
- traitlets: 5.14.3
- transformers: 4.44.2
- triton: 2.1.0
- typeguard: 4.3.0
- typing-extensions: 4.12.2
- tzdata: 2024.1
- tzlocal: 5.2
- urllib3: 2.2.2
- validators: 0.34.0
- wandb: 0.16.6
- watchdog: 4.0.1
- wcwidth: 0.2.13
- wheel: 0.44.0
- widgetsnbextension: 4.0.13
- xformers: 0.0.22.post7
- xxhash: 3.4.1
- yarl: 1.9.4
- zipp: 3.20.2
- zstandard: 0.23.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.14
- release: 4.15.0-55-generic
- version: #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019
```
</details>
- How you installed Lightning (`conda`, `pip`, source): `conda`
### More info
_No response_ | open | 2024-09-27T05:58:08Z | 2024-10-11T05:29:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20307 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | jin-zhe | 1 |
dynaconf/dynaconf | fastapi | 197 | [RFC] Allow dotted environment variables | **Is your feature request related to a problem? Please describe.**
Parameters defined in environment variables cannot be accessed with dotted key notation in the same way as parameters defined in settings files, which prevents overriding dotted key parameters with environment variables.
**Example:** Environment variable `DYNACONF_SERVICE_PORT` cannot be accessed via `settings['SERVICE.PORT']`. Instead, it must be accessed via `settings['SERVICE_PORT']`.
**Describe the solution you'd like**
One solution would be to use a designated sequence of characters to symbolize a dot separator, possibly two consecutive underscores ( `__` ). This character sequence could be configurable (eg `ENVVAR_DOT_SEPARATOR_FOR_DYNACONF`).
**Example:** Environment variable `DYNACONF_SERVICE__PORT` could be accessed via `settings['SERVICE.PORT']`, and would override any `service.port` parameter defined in a settings file.
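A minimal sketch of the proposed translation (editorial illustration; `envvar_to_dotted_key` and `load_dotted_envvars` are hypothetical names, not dynaconf API):

```python
def envvar_to_dotted_key(name, prefix="DYNACONF_", separator="__"):
    """Translate e.g. DYNACONF_SERVICE__PORT into the dotted key 'SERVICE.PORT'."""
    assert name.startswith(prefix), name
    return name[len(prefix):].replace(separator, ".")

def load_dotted_envvars(environ, prefix="DYNACONF_", separator="__"):
    """Collect prefixed variables from an environment mapping, keyed by dotted name."""
    return {
        envvar_to_dotted_key(key, prefix, separator): value
        for key, value in environ.items()
        if key.startswith(prefix)
    }

env = {"DYNACONF_SERVICE__PORT": "8080", "DYNACONF_DEBUG": "true", "PATH": "/usr/bin"}
print(load_dotted_envvars(env))  # {'SERVICE.PORT': '8080', 'DEBUG': 'true'}
```

Values produced this way would take precedence over `service.port` entries from settings files, matching the override semantics described above.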
**Describe alternatives you've considered**
An alternative to this could be to simply match either dotted or non-dotted key values to the correlated environment variable. As a note, this is how Spring Framework handles this feature.
**Example:** Environment variable `DYNACONF_SERVICE_PORT` could be accessed via `settings['SERVICE.PORT']` or via `settings['SERVICE_PORT']`, and would override any `service.port` or `service_port` parameter defined in a settings file.
**Additional context**
This would be especially nice for:
- allowing natural grouping of related settings (eg `service.host` and `service.port`)
- allowing overriding of existing parameters defined in a settings file
- accessing environment variables via object dot notation (eg `settings.SERVICE.PORT`)
| closed | 2019-06-14T06:21:10Z | 2019-09-02T17:50:26Z | https://github.com/dynaconf/dynaconf/issues/197 | [
"Not a Bug",
"RFC",
"good first issue",
"HIGH"
] | ATXMJ | 3 |
graphql-python/graphene | graphql | 1,479 | Mutation error: must be a mapping (dict / OrderedDict) with field names as keys or a function | I'm trying to return an object/class in my mutation response, but I'm getting this error:
`AssertionError: Success fields must be a mapping (dict / OrderedDict) with field names as keys or a function which returns such a mapping.`
```
class SuccessResponse(ObjectType):
job_id = String
user = String
class Payload(relay.ClientIDMutation):
class Input(object):
emails = List(String, description="List of user emails")
success = List(SuccessResponse)
@classmethod
def mutate_and_get_payload(cls, root, info, **kwargs):
success = []
raw_emails = kwargs.get('emails')
for user in raw_emails:
job_id = do_job(user.id)
success.append(SuccessResponse(job_id='job_id', user='user'))
# Return the payload
return Payload(success=success)
``` | closed | 2022-11-22T01:00:25Z | 2022-11-22T11:06:31Z | https://github.com/graphql-python/graphene/issues/1479 | [
"🐛 bug"
] | simkessy | 0 |
numba/numba | numpy | 9,685 | Installation fails on termux Python 3.11 | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
### Steps to reproduce (run on termux):
`python3.11 -m pip install numba`
### Description:
Installation on termux on python 3.11 fails with the following error:
```c
aarch64-linux-android-clang -DNDEBUG -g -fwrapv -O3 -Wall -fstack-protector-strong -O3 -fstack-protector-strong -O3 -fPIC -I/data/data/com.termux/files/usr/tmp/pip-build-env-xsazcb5s/normal/lib/python3.11/site-packages/numpy/_core/include -I/data/data/com.termux/files/usr/include/python3.11 -c numba/_helpermod.c -o build/temp.linux-aarch64-cpython-311/numba/_helpermod.o
In file included from numba/_helpermod.c:23:
numba/_helperlib.c:147:12: error: use of undeclared identifier 'I'
147 | *out = _complex_float_ctor((float) _out.real, (float) _out.imag);
| ^
numba/_helperlib.c:22:44: note: expanded from macro '_complex_float_ctor'
22 | #define _complex_float_ctor(r, i) (r + I * i)
| ^
1 error generated.
```
So, this line is the cause of this error:
https://github.com/numba/numba/blob/2001717f3321a5082c39c5787676320e699aed12/numba/_helperlib.c#L22
I may be wrong, but it seems to me that there is a typo and it should be like this:
```c
#define _complex_float_ctor(r, i) (r + i * i)
``` | closed | 2024-08-03T17:57:50Z | 2024-08-06T21:30:27Z | https://github.com/numba/numba/issues/9685 | [
"bug - build/packaging"
] | not-lum | 3 |
seleniumbase/SeleniumBase | web-scraping | 3,264 | Add CDP Mode methods for handling `<input>` Sliders and `<select>` Dropdowns | ### Add CDP Mode methods for handling `<input>` Sliders and `<select>` Dropdowns
----
Specifically, I want to handle these:
<img width="368" alt="Screenshot" src="https://github.com/user-attachments/assets/b9a13bc7-ac81-4d94-a92c-fdf28a7c06bc">
----
These methods should be added into the API:
```python
sb.cdp.set_value(selector, text)
sb.cdp.select_option_by_text(dropdown_selector, option)
```
Eg:
```python
sb.cdp.set_value("input#mySlider", "100")
sb.cdp.select_option_by_text("#mySelect", "Set to 75%")
```
----
| closed | 2024-11-14T19:13:11Z | 2024-11-14T21:44:55Z | https://github.com/seleniumbase/SeleniumBase/issues/3264 | [
"enhancement",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
matplotlib/matplotlib | matplotlib | 28,891 | [Bug]: `GridSpecFromSubplotSpec` displayed incorrectly with `layout="constrained"` | ### Bug summary
When creating a nested grid of axes using `GridSpecFromSubplotSpec` (EG by calling `axis.get_subplotspec().subgridspec(...)`), and plotting a figure using `layout="constrained"`, the nested axes are not displayed correctly. Specifically, the inner grids do not respect the spacing between the outer grids, as shown below.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import matplotlib.axes
import matplotlib.gridspec
import matplotlib._layoutgrid
def make_demo(
plot_name,
outer_space=0.1,
inner_space=0.1,
layout=None,
):
file_name = "%s.png" % plot_name
print(file_name)
figure = plt.figure(figsize=[10, 6], layout=layout)
grid_spec = matplotlib.gridspec.GridSpec(
nrows=1,
ncols=2,
figure=figure,
wspace=outer_space,
hspace=outer_space,
width_ratios=[2, 1],
)
axis_array = grid_spec.subplots(squeeze=False)
a0, a1 = axis_array.flatten().tolist()
assert isinstance(a0, matplotlib.axes.Axes)
assert isinstance(a1, matplotlib.axes.Axes)
for axis in [a0, a1]:
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
for s in axis.spines.values():
s.set(color="r", lw=10)
subplot_spec = axis.get_subplotspec()
subgrid_spec = subplot_spec.subgridspec(
nrows=3,
ncols=3,
wspace=inner_space,
hspace=inner_space,
)
axis_array = subgrid_spec.subplots(squeeze=False)
figure.suptitle(plot_name, fontsize=25)
figure.savefig(file_name)
def fix_layout_grid():
defaults = matplotlib._layoutgrid.LayoutGrid.__init__.__defaults__
defaults = list(defaults)
defaults[2] = True
defaults = tuple(defaults)
matplotlib._layoutgrid.LayoutGrid.__init__.__defaults__ = defaults
if __name__ == "__main__":
make_demo("1_original")
make_demo("2_more_outer_space", outer_space=0.8)
make_demo("3_more_inner_space", outer_space=0.8, inner_space=0.8)
make_demo("4_constrained", layout="constrained")
make_demo("5_constrained_more_outer_space", layout="constrained", outer_space=0.3)
make_demo("6_constrained_more_inner_space", layout="constrained", outer_space=0.3, inner_space=0.3)
fix_layout_grid()
make_demo("7_fixed_constrained", layout="constrained")
make_demo("8_fixed_constrained_more_outer_space", layout="constrained", outer_space=0.3)
# filenames = [
# "1_original.png",
# "2_more_outer_space.png",
# "3_more_inner_space.png",
# "4_constrained.png",
# "5_constrained_more_outer_space.png",
# "6_constrained_more_inner_space.png",
# "7_fixed_constrained.png",
# "8_fixed_constrained_more_outer_space.png",
# ]
# from jutility import util, plotting
# mp = plotting.MultiPlot(
# *[plotting.ImShow(util.load_image(s)) for s in filenames],
# colour="grey",
# )
# mp.save()
```
### Actual outcome
In images 1-3 below (not constrained layout), the inner grids respond to the spacing between the outer grids. However, when using `layout="constrained"` in images 4-6 below (before my workaround is applied), the inner grids ignore the spacing between the outer grids.

### Expected outcome
The inner grids should respond to the spacing between the outer grids (which is the case in images 7-8 after my workaround is applied).
### Additional information
I have found a workaround, which doesn't involve modifying the `matplotlib` installation (included in the `fix_layout_grid` function above):
```python
defaults = matplotlib._layoutgrid.LayoutGrid.__init__.__defaults__
defaults = list(defaults)
defaults[2] = True
defaults = tuple(defaults)
matplotlib._layoutgrid.LayoutGrid.__init__.__defaults__ = defaults
```
I have also found 2 possible fixes (either one works, the first is more conservative):
1. In https://github.com/matplotlib/matplotlib/blob/v3.9.2/lib/matplotlib/_constrained_layout.py#L226 , in `layoutgrids[rep] = mlayoutgrid.LayoutGrid(...)`, include `parent_inner=True`
2. In https://github.com/matplotlib/matplotlib/blob/v3.9.2/lib/matplotlib/_layoutgrid.py#L36 , in `LayoutGrid.__init__(...)`, change `parent_inner=False` to `parent_inner=True`
### Operating system
Ubuntu
### Matplotlib Version
3.9.2
### Matplotlib Backend
qtagg
### Python version
Python 3.10.12
### Jupyter version
_No response_
### Installation
pip | open | 2024-09-26T01:08:57Z | 2024-09-26T17:25:25Z | https://github.com/matplotlib/matplotlib/issues/28891 | [] | jakelevi1996 | 7 |
Urinx/WeixinBot | api | 134 | Infinite loop when Selector is 3 | The code does not seem to handle the case where Selector is 3, which leads to an infinite loop. Has anyone solved this problem? | open | 2016-12-09T06:08:24Z | 2017-03-24T03:02:12Z | https://github.com/Urinx/WeixinBot/issues/134 | [] | wzw1990 | 1 |
tortoise/tortoise-orm | asyncio | 1,152 | transaction task not correctly exited at asyncio.exceptions.CancelledError | **Describe the bug**
I was using `get_or_create` as part of an atomic operation and tested it by exiting the task very quickly. I then found errors that blocked the server until it shut down.
**To Reproduce**
[Code]
Here is part of the code in a Sanic server, routing to a websocket blueprint:
```python
bp_ws = Blueprint("Websockets", url_prefix="/ws")
@bp_ws.websocket("/file/<pk>/")
async def feed(request, ws, pk):
await Room.get_or_create(channel_name='test_channel')
```
[Do]
1. Run server
2. use postman to connect to the websocket
3. disconnect in postman real quickly to try exit in the middle of the get_or_create action
[Errors]
```bash
Traceback (most recent call last):
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/models.py", line 1057, in get_or_create
await cls.select_for_update().filter(**kwargs).using_db(connection).get(),
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/queryset.py", line 1006, in _execute
instance_list = await self._db.executor_class(
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/backends/base/executor.py", line 130, in execute_select
_, raw_results = await self.db.execute_query(query.get_sql())
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/backends/mysql/client.py", line 44, in translate_exceptions_
return await func(self, *args)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/backends/mysql/client.py", line 199, in execute_query
await cursor.execute(query, values)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/cursors.py", line 239, in execute
await self._query(query)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/cursors.py", line 457, in _query
await conn.query(q)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 469, in query
await self._read_query_result(unbuffered=unbuffered)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 672, in _read_query_result
await result.read()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 1153, in read
first_packet = await self.connection._read_packet()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 598, in _read_packet
packet_header = await self._read_bytes(4)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 646, in _read_bytes
data = await self._reader.readexactly(num_bytes)
File "/home/yuzixin/usr/lib/python3.10/asyncio/streams.py", line 708, in readexactly
await self._wait_for_data('readexactly')
File "/home/yuzixin/usr/lib/python3.10/asyncio/streams.py", line 502, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/app.py", line 994, in _websocket_handler
await fut
File "/home/yuzixin/workspace/sanicserver/filesystem/blueprint.py", line 19, in feed
await Room.get_or_create(channel_name='test_channel')
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/models.py", line 1054, in get_or_create
async with in_transaction(connection_name=db.connection_name) as connection:
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/backends/base/client.py", line 280, in __aexit__
await self.connection.rollback()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/tortoise/backends/mysql/client.py", line 255, in rollback
await self._connection.rollback()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 398, in rollback
await self._execute_command(COMMAND.COM_QUERY, "ROLLBACK")
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 695, in _execute_command
self._ensure_alive()
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/aiomysql/connection.py", line 1114, in _ensure_alive
raise InterfaceError(self._close_reason)
pymysql.err.InterfaceError: Cancelled during execution
```
Error when shutting down the server (a cold shutdown, as a warm shutdown would take forever):
```bash
Process ForkProcess-4:
Traceback (most recent call last):
File "/home/yuzixin/usr/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/yuzixin/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/yuzixin/workspace/sanicserver/venv/lib/python3.10/site-packages/sanic/server/runners.py", line 191, in serve
loop.run_until_complete(app._server_event("shutdown", "after"))
File "uvloop/loop.pyx", line 1499, in uvloop.loop.Loop.run_until_complete
RuntimeError: Event loop stopped before Future completed.
```
Same error occurs in \_\_aenter\_\_ as well, depending on the timing disconnection is triggered.
I tried adding simple try/except logic to \_\_aexit\_\_ at lines 279 and 281 of tortoise/backends/base/client.py, simply passing on the exception, and the server loop is no longer blocked. I am not an expert in async programming, but I feel this might not be a final solution.
I was actually hoping this situation would rarely occur, since it looks like "trying to close the mysql connection after ensure_connection but before the next several lines of code run". But as I observed, it happened once every two tries.
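A self-contained sketch of that workaround (the `FakeConnection` and `TransactionContext` classes below are minimal stand-ins I wrote for this report, not tortoise's actual classes):

```python
import asyncio

class FakeConnection:
    """Stand-in for a connection whose query was cancelled mid-flight."""
    async def rollback(self):
        raise RuntimeError("Cancelled during execution")  # mimics the InterfaceError

class TransactionContext:
    def __init__(self, connection):
        self.connection = connection

    async def __aexit__(self, exc_type, exc_value, tb):
        try:
            await self.connection.rollback()
        except Exception:
            # swallow errors from a connection that was cancelled mid-query;
            # this unblocks the server loop, but is probably not a proper fix
            pass

# with the try/except in place, exiting the context no longer propagates the error
asyncio.run(TransactionContext(FakeConnection()).__aexit__(RuntimeError, None, None))
```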
**Expected behavior**
Correctly closing the task in the middle of a transaction
| open | 2022-06-09T11:45:17Z | 2022-06-09T12:22:10Z | https://github.com/tortoise/tortoise-orm/issues/1152 | [] | jrayu | 0 |
pandas-dev/pandas | python | 60,515 | DOC: methods in see also section in the pandas.DataFrame.shape and pandas.DataFrame.ndim are not hyperlinks | ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ndim.html
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.shape.html
### Documentation problem
In the See Also section, the ndarray.shape and ndarray.ndim methods are listed, but they are not hyperlinks, so the reader cannot navigate to them easily and has to look for them instead.
### Suggested fix for documentation
Add numpy.ndarray.shape and numpy.ndarray.ndim in the docstring.
| closed | 2024-12-07T06:39:22Z | 2024-12-08T14:04:33Z | https://github.com/pandas-dev/pandas/issues/60515 | [
"Docs"
] | Shubhank-Gyawali | 1 |
frappe/frappe | rest-api | 31,300 | Agkiya : Image required in the excel export report | Dear Team,
Please find below the requirement from our client. They need an Excel report that includes item images.
In the jewelry industry, multiple ERP companies have already implemented this feature. Therefore, I request you to enable a functionality where users can export stock reports or custom reports with item images included in the Excel file.
Current Requirement: Stock Balance Report should be exportable to Excel along with item images.
Regards,
Deepak Singh | open | 2025-02-18T08:44:37Z | 2025-03-06T22:22:25Z | https://github.com/frappe/frappe/issues/31300 | [
"feature-request"
] | Deepakjs5665 | 1 |
supabase/supabase-py | flask | 463 | Functions not working due to invalid URL formatting | Hi all,
Calling functions using supabase-py for me allways results in the following response:
`{'data': b'Function not found', 'error': None}`
I believe this is due to the improper URL format in the `FunctionClient`:
https://github.com/supabase-community/supabase-py/blob/5c752443277de0a4a8dfd7d0d113f0d177efc81f/supabase/client.py#LL74C64-L74C64
Overwriting the `functions_url` in the following format seems to work for me: `"{}/rest/v1/rpc".format(url)` | closed | 2023-06-14T09:21:58Z | 2023-06-15T03:43:48Z | https://github.com/supabase/supabase-py/issues/463 | [
"duplicate"
] | tobias-scheepers | 2 |
shibing624/text2vec | nlp | 6 | Text similarity | Why do some sentence pairs that are completely unrelated still score above 0.5?
a = 你吃饭了吗  # "Have you eaten?"
b = 我会唱歌  # "I can sing"
score = 0.7190231597933535  # Similarity().get_score(a, b) | closed | 2020-02-17T02:47:13Z | 2020-03-15T03:05:01Z | https://github.com/shibing624/text2vec/issues/6 | [
"question"
] | etrigger | 1 |
ultralytics/ultralytics | computer-vision | 18,707 | mAP always zero when training is resumed from a particular epoch | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
I had trained a model and stopped it after a certain number of epochs. Then, to continue training, I used resume=True. Training resumes, but after validation the mask Precision, Recall, and both mAP values are zero.

### Environment
Ultralytics 8.3.61 Python-3.9.18 torch-1.13.1+cu117 CUDA:0 (Quadro P6000, 24444MiB)
Setup complete ✅ (56 CPUs, 62.8 GB RAM, 116.6/468.4 GB disk)
OS Linux-5.4.0-150-generic-x86_64-with-glibc2.27
Environment Linux
Python 3.9.18
Install pip
RAM 62.81 GB
Disk 116.6/468.4 GB
CPU Intel Xeon E5-2680 v4 2.40GHz
CPU count 56
GPU Quadro P6000, 24444MiB
GPU count 1
CUDA 11.7
numpy ✅ 1.26.4>=1.23.0
numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.9.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.3.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.13.0>=1.4.1
torch ✅ 2.4.0>=1.8.0
torch ✅ 2.4.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.14.1+cu116>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 6.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.3>=2.0.0
### Minimal Reproducible Example
```
from ultralytics import YOLO
# Load a model
# model = YOLO("yolo11x-seg.pt") # load a pretrained model (recommended for training)
model = YOLO('/home/dcil/Desktop/VyshakB/RWG_Cracks/Data/runs/segment/Jan10Training/weights/best (copy).pt')
# model.train(data="rwg_crack_3/data.yaml", epochs=500, patience=50, save=True, imgsz=1052,
# device=0, save_period=30, batch=1, verbose=True, scale=0, name='Jan10Training', optimizer='Adam', lr0=0.01)
model.train(resume=True, data="rwg_crack_3/data.yaml", epochs=500, patience=50, save=True, imgsz=1052,
device=0, save_period=30, batch=1, verbose=True, scale=0, name='Jan10Training', optimizer='Adam', lr0=0.01)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-01-16T05:27:11Z | 2025-01-20T06:00:51Z | https://github.com/ultralytics/ultralytics/issues/18707 | [
"bug",
"segment"
] | AbhishekHollaAB | 10 |
mljar/mercury | data-visualization | 250 | strange shadow on mobile screen when app is busy | When the app is in the busy state, there is a strange shadow over the sidebar.

| closed | 2023-04-20T14:03:24Z | 2023-04-20T15:15:09Z | https://github.com/mljar/mercury/issues/250 | [
"bug"
] | pplonski | 0 |
darrenburns/posting | automation | 30 | Soften up python dependancies...? | Good afternoon!
I'm looking into packaging Posting for Fedora, but I'm hitting a wall because the package requires very specific dependency versions. For example:
- click-default-group (==1.2.4)
- pydantic (==2.7.3)
- textual (==0.72)
- textual[syntax] (==0.72)
- xdg-base-dirs (==6.0.1)
Would it be possible to relax those requirements a bit? I guess I could alter the .toml to soften the requirements a bit but I wanted to see beforehand if you'd be willing to look at it.
Thank you! | closed | 2024-07-11T20:04:06Z | 2024-07-12T07:59:59Z | https://github.com/darrenburns/posting/issues/30 | [] | farchord | 1 |
thp/urlwatch | automation | 80 | urlwatch==2.3 package on PyPI is broken | I just wanted to report that the urlwatch package on PyPI was packaged incorrectly. I haven't looked into it much but it's missing a bunch of files. Appears to have been caused by 142a0e5e8c10b37344794a7056594e8079b6decb.
Running `pip install urlwatch` in a fresh virtualenv presents the following:
> ```
> Collecting urlwatch
> Using cached urlwatch-2.3.tar.gz
> Complete output from command python setup.py egg_info:
> Traceback (most recent call last):
> File "<string>", line 1, in <module>
> File "/tmp/pip-build-8gsgi5q8/urlwatch/setup.py", line 26, in <module>
> main_py = open(os.path.join(HERE, 'lib', PACKAGE_NAME, '__init__.py')).read()
> FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-8gsgi5q8/urlwatch/lib/urlwatch/__init__.py'
>
> ----------------------------------------
> Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8gsgi5q8/urlwatch/
> ```
| closed | 2016-07-12T16:05:59Z | 2016-07-12T18:29:36Z | https://github.com/thp/urlwatch/issues/80 | [] | brbsix | 0 |
LAION-AI/Open-Assistant | python | 3,027 | [SMALL Problem] MOVE 'wikihow' dataset to SUMMARISATION Datasets | Version : **v0.0.3-alpha24-3-g26acf953**
In the file `Open-Assistant/model/model_training/custom_datasets/__init__.py`, move the 'wikihow' dataset from the `QA_DATASETS` variable to the `SUMMARIZATION_DATASETS` variable | closed | 2023-05-03T15:25:53Z | 2023-05-05T09:36:50Z | https://github.com/LAION-AI/Open-Assistant/issues/3027 | [] | loloMD | 1 |
2noise/ChatTTS | python | 503 | Generated speech is incomplete | The generated speech is incomplete; for example, with "tell me again" it often stops after just "tell me" | open | 2024-07-01T06:50:43Z | 2024-11-02T18:49:44Z | https://github.com/2noise/ChatTTS/issues/503 | [
"documentation",
"help wanted",
"algorithm"
] | PMPBinZhang | 6 |
igorbenav/FastAPI-boilerplate | sqlalchemy | 8 | Other settings should be based on app.core.config settings | Ex: if settings inherits from RedisCacheSettings, create_redis_cache_pool() and close_redis_cache_pool() should be added to startup and shutdown respectively in create_application | closed | 2023-10-16T07:45:07Z | 2023-10-24T02:36:13Z | https://github.com/igorbenav/FastAPI-boilerplate/issues/8 | [
"enhancement"
] | igorbenav | 1 |
ageitgey/face_recognition | machine-learning | 1,047 | Face recognition produce poor results on Raspbian Buster | * face_recognition version: 1.2.3
* Python version: 3.7.3
* Operating System: Raspbian Linux Buster
### Description
So, I installed the library via pip and everything was OK. I tried to install opencv (latest version) via pip as well; it installs, but when I tested opencv it complained about __atomic_add_fetch_8. I found out that some devs have the same problem. So what did I do? I installed opencv via apt, which gives version 3.2.0. After that, I tried testing it using the facerec_from_webcam_faster script, and it produces poor results.
Additional notes:
One month ago I was testing on the same machine with the same script (though with a custom loader for it) on an older version of Raspbian, on which I could get the latest version of opencv, and it worked. I also have another machine with the same steps taken as this one (with poor results), and somehow it works
### What I Did
```
pip3 install face_recognition
sudo apt install lib-opencv-dev python3-lib-opencv
```
| open | 2020-02-05T13:40:33Z | 2020-03-16T03:42:49Z | https://github.com/ageitgey/face_recognition/issues/1047 | [] | Unicode-Hater | 2 |
wkentaro/labelme | computer-vision | 533 | How to add point for my annotated polygons? | Hi, thanks for your great work, but I recently ran into a problem. I previously annotated some data for semantic segmentation, and now I want to add one or two points to each polygon I annotated, without deleting the existing polygons and annotating them again. I can't find a way to do this. Does labelme support such an operation? Thanks. | closed | 2019-12-31T08:07:31Z | 2021-04-13T11:20:28Z | https://github.com/wkentaro/labelme/issues/533 | [] | chegnyanjun | 4 |
oegedijk/explainerdashboard | dash | 189 | Citation? | Hi - Is there any way you'd like the code to be cited in publications? | closed | 2022-02-25T15:37:10Z | 2022-05-04T19:34:17Z | https://github.com/oegedijk/explainerdashboard/issues/189 | [] | e5k | 6 |
joerick/pyinstrument | django | 145 | How to pass arguments to the script itself | How would I call a script with arguments
```bash
python script.py -n 1
```
with `pyinstrument`?
I've tried something like
```bash
pyinstrument script.py -- -n 1
```
but it doesn't seem to work. Is this possible?
| closed | 2021-08-12T02:32:12Z | 2021-08-12T02:51:41Z | https://github.com/joerick/pyinstrument/issues/145 | [] | dycw | 1 |
home-assistant/core | asyncio | 141,234 | Add support for switchbot roller shades | ### The problem
Add support for switchbot roller shades
https://us.switch-bot.com/products/switchbot-roller-shade
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
n/a
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
SwitchBot Bluetooth
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/switchbot/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-23T17:49:22Z | 2025-03-23T21:04:06Z | https://github.com/home-assistant/core/issues/141234 | [
"integration: switchbot",
"feature-request"
] | joe81tx | 2 |
Farama-Foundation/PettingZoo | api | 744 | [Bug Report] Environment initialization error | I installed the latest 1.19.0 version of pettingzoo (via pip, and also by cloning from GitHub) into [colab](https://colab.research.google.com/drive/1Nt0umnAL4UkEge7yxCBoMoS2J_-3SABS?usp=sharing) and imported the pong_v3 environment. However, when I tried to initialize the environment I got the following error:
env = pong_v3.env(num_players=2)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-8-5d9a5ee49d6c> in <module>()
----> 1 env = pong_v3.env(num_players=2)
1 frames
/usr/local/lib/python3.7/dist-packages/pettingzoo/atari/pong/pong.py in raw_env(num_players, game_version, **kwargs)
36 name = os.path.basename(__file__).split(".")[0]
37 parent_file = glob("./pettingzoo/atari/" + name + "*.py")
---> 38 version_num = parent_file[0].split("_")[-1].split(".")[0]
39 name = name + "_" + version_num
40 return BaseAtariEnv(
IndexError: list index out of range
Python 3.7.13
Google.Colab
Thx! | closed | 2022-07-30T18:54:32Z | 2022-07-31T12:39:34Z | https://github.com/Farama-Foundation/PettingZoo/issues/744 | [] | cinemere | 2 |
frappe/frappe | rest-api | 31,741 | Not possible to login since #31494 |
## Description of the issue
Since #31494 it is impossible to log in; the console shows the following error: ReferenceError: False is not defined
## Context information (for bug reports)
'is_fc_site': False seems to cause the problem. After changing it manually to 0, login works.
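For context, this looks like a Python boolean repr leaking into generated JavaScript. A plain-Python sketch of the mismatch (my guess at the mechanism, not verified against Frappe's code):

```python
import json

site_info = {"is_fc_site": False}

# naive str() templating emits Python-style literals into the page
print(str(site_info))         # -> {'is_fc_site': False}   (ReferenceError in JS)

# json.dumps emits lowercase literals that JavaScript understands
print(json.dumps(site_info))  # -> {"is_fc_site": false}
```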
**Output of `bench version`**
```
15.58.1
```
## Steps to reproduce the issue
1. Have the given version of Frappe
2. Try to log in
## Additional information
Tried with a manual installation and with Docker. Other apps installed: Insights, ERPNext, HRMS
| open | 2025-03-16T00:16:05Z | 2025-03-17T10:39:20Z | https://github.com/frappe/frappe/issues/31741 | [
"bug"
] | DrZoidberg09 | 3 |
Kanaries/pygwalker | matplotlib | 305 | New Environment: Flutter (Dart syntax) | Hi, it would be awesome if this package could support the Flutter Framework (dart). I would be willing to write it or help contribute to it - if someone could break down the services that the existing client environments use!
| closed | 2023-11-04T16:06:05Z | 2023-11-08T04:35:53Z | https://github.com/Kanaries/pygwalker/issues/305 | [] | ForeverAngry | 1 |
deepfakes/faceswap | deep-learning | 1,204 | Crash on starting. Potential fix provided | *Note: For general usage questions and help, please use either our [FaceSwap Forum](https://faceswap.dev/forum)
or [FaceSwap Discord server](https://discord.gg/FC54sYg). General usage questions are liable to be closed without
response.*
**Crash reports MUST be included when reporting bugs.**
**Describe the bug**
Maybe the OS language setting is preventing the GUI from starting.
**To Reproduce**
Steps to reproduce the behaviour:
1. Simplified Chinese as OS language(windows 10)
2. run python3.8 faceswap.py gui
**Expected behaviour**
crashed
**Desktop (please complete the following information):**
- OS: win10
- Python Version 3.8
- Conda Version not using conda
**Additional context**
Fixed by changing `stdout.decode()` to `stdout.decode("utf-8","ignore")` on line 271 of faceswap/lib/gui/menu.py
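For illustration, why the lenient decode avoids the crash (the byte string is a made-up stand-in for GBK-encoded git output, matching the invalid 0xb2 byte in the traceback below):

```python
raw = b"fatal: \xb2\xbb\xca\xc7"  # ASCII prefix followed by GBK bytes

try:
    raw.decode()  # the original call: raises UnicodeDecodeError on byte 0xb2
except UnicodeDecodeError as err:
    print(err)

text = raw.decode("utf-8", "ignore")  # the fix: undecodable bytes are dropped
print(repr(text))  # -> 'fatal: '
```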
**Crash Report**
> Traceback (most recent call last):
File "D:\deepswap\faceswap\lib\cli\launcher.py", line 181, in execute_script
process = script(arguments)
File "D:\deepswap\faceswap\scripts\gui.py", line 179, in __init__
self.root = FaceswapGui(arguments.debug)
File "D:\deepswap\faceswap\scripts\gui.py", line 34, in __init__
self.build_gui()
File "D:\deepswap\faceswap\scripts\gui.py", line 58, in build_gui
self.configure(menu=MainMenuBar(self))
File "D:\deepswap\faceswap\lib\gui\menu.py", line 47, in __init__
self.help_menu = HelpMenu(self)
File "D:\deepswap\faceswap\lib\gui\menu.py", line 207, in __init__
self.build()
File "D:\deepswap\faceswap\lib\gui\menu.py", line 220, in build
if self._build_branches_menu():
File "D:\deepswap\faceswap\lib\gui\menu.py", line 241, in _build_branches_menu
stdout = self._get_branches()
File "D:\deepswap\faceswap\lib\gui\menu.py", line 271, in _get_branches
retcode, stdout.decode().strip().replace("\n", " - "))
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 6: invalid start byte | closed | 2022-01-16T12:30:47Z | 2022-05-15T01:26:41Z | https://github.com/deepfakes/faceswap/issues/1204 | [] | theFSO | 2 |
biolab/orange3 | data-visualization | 6,692 | Is there a limit to the number of algorithm widgets that should be run in a single workflow? | <!--
This is neither a bug report nor a feature request. We are using Orange for Kaggle competitions. To accomplish this we are adjusting every algorithm in multiple ways (HPO) leading to 100+ algorithms (widgets) per workflow. We have noticed that when we have a large number of widgets in a workflow the performance results e.g., F1 score are different when it is run as part of 100 widgets, as opposed to running the algorithm separately. In other words, the F1 might be 0.715 when run with the other 99 algorithms and 0.615 when run separately on the Titanic dataset. We could not find any answer in any Orange resources or an Internet search as to a recommended limit to algorithms per workflow. Should be using DASK? Thanks.
-->
| closed | 2024-01-04T17:17:43Z | 2024-01-05T09:07:48Z | https://github.com/biolab/orange3/issues/6692 | [] | rehoyt | 1 |
plotly/plotly.py | plotly | 4,117 | Plotly Express constructor: lists passed to `x` and `y` params are mutated | When either the `x` or `y` parameter of the Plotly Express constructor is passed a list of values, the list is mutated such that its values are converted to strings.
Expected behaviour: Plotly Express should not mutate objects supplied as arguments by the user.
Minimal example (tested on Plotly 5.13.1 and master branch):
```python
from random import randrange
import pandas as pd
import plotly.express as px
cols, rows = list(range(3)), list(range(10))
print(cols)
df = pd.DataFrame({col: [randrange(100) for _row in rows] for col in cols})
px.bar(df, y=cols, barmode="group")
print(cols)
```
Before the `px.bar()` call, `cols` has the value `[0, 1, 2]` and afterwards it is `["0", "1", "2"]`. The same behaviour occurs for the `x` param and also when using plotting functions other than `px.bar()` (I tried `px.histogram()`).
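Until this is fixed, a defensive copy on the caller's side sidesteps the problem. A minimal sketch (the `buggy_plot` stub below stands in for the plotly call):

```python
def buggy_plot(y):
    """Stand-in for the plotly call: mutates its argument in place."""
    for i, v in enumerate(y):
        y[i] = str(v)

cols = list(range(3))
buggy_plot(list(cols))  # workaround: hand the library a copy
print(cols)             # -> [0, 1, 2]; the caller's list survives
```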
I did some poking around and can see that in the function `build_dataframe` in `plotly/express/_core.py`, some of the fields in the user-supplied args have copies made (which would fix this issue), but that is only applied to args found in the `array_attrables` list, which is `["dimensions", "custom_data", "hover_data", "path", "wide_variable"]` and does not include `x` and `y`. | open | 2023-03-21T06:01:51Z | 2024-08-12T20:50:43Z | https://github.com/plotly/plotly.py/issues/4117 | [
"bug",
"P3"
] | ned2 | 3 |
onnx/onnxmltools | scikit-learn | 477 | Support for Spark variable length features | Hello, I was wondering if there's support for variable length features. For example Spark's CountVectorizer can receive a list of strings of different sizes, and perform a sort of One Hot Encoding on the values.
See below:
+-----------------------+-------------------+
| csv | output |
+-----------------------+-------------------+
|[Val1, Val2] | (5,[1,4],[1.0,1.0]) |
|[Val3, Val4] | (5,[0,3],[1.0,1.0]) |
|[Val5, Val3] | (5,[0,2],[1.0,1.0]) |
|[Val3, Val3, Val3] | (5,[0],[3.0]) |
+-----------------------+-------------------+
To clarify:
1. Is the CountVectorizer Supported?
2. Is there support for variable length features?
3. Is there support for sparse input? (The only way I could get this to work was transforming the feature vector back into dense, which in my context is unacceptable due to latency constraints)
Thanks in advance to anyone that can answer any of these questions. | closed | 2021-06-30T20:56:47Z | 2021-07-20T21:32:00Z | https://github.com/onnx/onnxmltools/issues/477 | [] | mhv777 | 2 |
kizniche/Mycodo | automation | 1,365 | Action - Ramp Value | Add an Action to ramp a value over time to a target value, like the existing "Ramp PWM" Action.
This should be useful in the many cases where PWM is not applicable to the output or Function being used.
| open | 2024-02-27T00:06:37Z | 2024-02-27T00:06:37Z | https://github.com/kizniche/Mycodo/issues/1365 | [] | silverhawk1983 | 0 |
FujiwaraChoki/MoneyPrinterV2 | automation | 83 | Youtube Account With multiple channels | This creates the youtube.json file only the first time; the second time I can't create an account, it just disappears.
So my question is: what if I have more than one YouTube channel? How can I switch between different channels? In headless mode it always uploads the video to the default YouTube channel only.
Multiple profiles take a lot of time and storage. I hope you get what I mean.
Is there a way to make it iterate between different channels in the same YouTube account?
Please share the Discord link if possible, so I can get more help from the community; the current link is not working. | open | 2024-09-01T21:17:21Z | 2024-09-02T05:20:42Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/83 | [] | asonsgh | 2 |
jonaswinkler/paperless-ng | django | 1,679 | [BUG] Scan produces word with spaces between letters |
**Describe the bug**
Scan produces word with spaces between letters: K o n t o a u s z u g v o m 3 0 . 0 8 . 2 0 1 9
This way paperless-ng is absolutely unusable :-(
**To Reproduce**
1. Install docker version of paperless-ng
2. add any pdf-Document
**Expected behavior**
no additional spaces between letters of words
**Screenshots**
Part of pdf:

**Relevant information**
- latest docker version
- debian 10
- Installation method: docker
- No configuration changes made
| closed | 2022-03-04T21:46:39Z | 2022-03-05T12:31:14Z | https://github.com/jonaswinkler/paperless-ng/issues/1679 | [] | tpre | 1 |
automl/auto-sklearn | scikit-learn | 1,448 | [Question] Is there any straight forward way to retrieve the solution and prediction vector during CV? | Hello everyone!
I was wondering if there was any way to retrieve the predictions and associated solutions that were used to compute the metrics during training. Specifically, in the case of a 10-fold CV it would correspond to an array of 10 entries, each in turn holding the predictions and the labels of that fold.
Thank you for your time in advance! | open | 2022-04-21T13:05:59Z | 2022-06-10T13:01:32Z | https://github.com/automl/auto-sklearn/issues/1448 | [
"enhancement",
"question"
] | chogovadze | 3 |
mljar/mljar-supervised | scikit-learn | 75 | Add SHAP explanations to models | Please add
- SHAP summary plot
- SHAP dependence plots
- SHAP decision plots for top-10 best predictions and top-10 worst predictions (the latter might be more important) | closed | 2020-04-24T12:28:16Z | 2023-06-06T06:38:27Z | https://github.com/mljar/mljar-supervised/issues/75 | [
"enhancement"
] | pplonski | 6 |
biolab/orange3 | pandas | 7,048 | Violin Plot: display datetime/time as datetime/time and not as a number | **What's your use case?**
On the axis, Violin Plot displays datetime variables as numbers (i.e., seconds since t=0):
<img width="730" alt="Image" src="https://github.com/user-attachments/assets/bce9cafc-f26c-46e6-bb30-d94250ff4141" />
**What's your proposed solution?**
Just like Box Plot, Violin plot should display datetime variables as datetime variables so that they can be interpreted straightforwardly.
**Are there any alternative solutions?**
Not that I know of
| open | 2025-03-13T09:05:43Z | 2025-03-13T09:05:43Z | https://github.com/biolab/orange3/issues/7048 | [] | wvdvegte | 0 |
microsoft/nni | tensorflow | 5,018 | ERROR: aten::norm is not Supported! | When I use the NNI L1 method to prune the facenet model and perform speedup, I encounter an error indicating that `aten::norm` is not supported. How can I solve this problem? My PyTorch network structure is defined as follows:

| open | 2022-07-25T09:01:37Z | 2022-11-17T03:31:10Z | https://github.com/microsoft/nni/issues/5018 | [
"user raised",
"support",
"ModelSpeedup",
"v2.9.1"
] | jia0511 | 11 |
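One workaround sometimes used in this situation (an assumption here, not official NNI guidance) is to rewrite the offending `torch.norm` call in terms of primitives the speedup pass does support, e.g. replacing `torch.norm(x, p=2, dim=1)` with `x.pow(2).sum(dim=1).sqrt()`, which traces to elementwise and reduction ops. The pure-Python check below only demonstrates that the two formulations are numerically equivalent:

```python
import math

def l2_norm_direct(v):
    """L2 norm as a single call, analogous to torch.norm(x, p=2)."""
    return math.sqrt(sum(x * x for x in v))

def l2_norm_decomposed(v):
    """Same value built from primitives, analogous to x.pow(2).sum().sqrt()."""
    squares = [x * x for x in v]
    return sum(squares) ** 0.5

v = [3.0, 4.0]
print(l2_norm_direct(v), l2_norm_decomposed(v))  # both 5.0
```

Whether the rewritten model then passes speedup depends on the NNI version; the decomposition itself changes nothing numerically.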
---

**litl/backoff#102 — Option to use the "Retry-After" header** · topic: asyncio · closed · created 2020-09-07 · updated 2022-04-26 · labels: none · author: tiptop96 · 4 comments
https://github.com/litl/backoff/issues/102

I think it would be a good idea if we could honor the "Retry-After" header, either as the `base` argument to `backoff.expo` or as a mechanism of its own.

I could create a PR this week if this sounds interesting.
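Whatever shape the library API ends up taking, the idea can be sketched as a plain retry loop that prefers the server's `Retry-After` value (in seconds) over the exponential schedule. Everything below is illustrative and independent of `backoff` itself:

```python
import time

def call_with_retry(request, max_tries=5, base=2.0, sleep=time.sleep):
    """`request` returns (status, headers); retry on HTTP 429.

    If the response carries a Retry-After header, honor it; otherwise
    fall back to exponential backoff (1, 2, 4, ... seconds).
    """
    for attempt in range(max_tries):
        status, headers = request()
        if status != 429:
            return status
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)  # server-supplied hint wins
        else:
            delay = base ** attempt     # exponential fallback
        sleep(delay)
    return status
```

Injecting `sleep` keeps the sketch testable without real waiting.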
---

**tensorflow/tensor2tensor#1624 — How to do online training?** · topic: machine-learning · open · created 2019-07-07 · updated 2019-07-07 · labels: none · author: PromptExpert · 0 comments
https://github.com/tensorflow/tensor2tensor/issues/1624

Online training, or incremental training, means continuing to train the model as new data comes in. Does t2t support online training, and if so, how?
---

**pandas-dev/pandas#60394 — ENH: Add first_inverted and last_inverted options to keep in DataFrame.duplicated** · topic: data-science · open · created 2024-11-22 · updated 2024-11-29 · labels: Enhancement, Needs Discussion, duplicated, Closing Candidate · author: tommycarstensen · 4 comments
https://github.com/pandas-dev/pandas/issues/60394

### Feature Type

- [X] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas

### Problem Description

I suggest adding `first_inverted` and `last_inverted` as `keep` options to `pandas.DataFrame.duplicated`. Below is an example of how it would work and what it would return.

```python
df = pd.DataFrame({
    'brand': ['Yum Yum', 'Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
    'style': ['cup', 'cup', 'cup', 'cup', 'pack', 'pack'],
    'rating': [4, 4, 4, 3.5, 15, 5],
})

df.duplicated(keep='first_inverted')
```

```
0     True
1    False
2    False
3    False
4    False
5    False
dtype: bool
```

### Feature Description

.

### Alternative Solutions

.

### Additional Context

_No response_
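Until such an option exists, the proposed semantics (flag only the first occurrence of each duplicated group) can already be expressed with the existing `keep` options as `df.duplicated(keep=False) & ~df.duplicated(keep='first')`. The pure-Python sketch below mirrors that logic so the expected output from the example can be checked without pandas:

```python
from collections import Counter

def duplicated_first_inverted(rows):
    """True only for the first occurrence of rows that appear more than once,
    i.e. the proposed keep='first_inverted' semantics. Equivalent in pandas to
    df.duplicated(keep=False) & ~df.duplicated(keep='first')."""
    counts = Counter(rows)
    seen = set()
    flags = []
    for row in rows:
        is_first = row not in seen
        seen.add(row)
        flags.append(is_first and counts[row] > 1)
    return flags

rows = [
    ("Yum Yum", "cup", 4), ("Yum Yum", "cup", 4), ("Yum Yum", "cup", 4),
    ("Indomie", "cup", 3.5), ("Indomie", "pack", 15), ("Indomie", "pack", 5),
]
print(duplicated_first_inverted(rows))
# [True, False, False, False, False, False]
```

`last_inverted` would follow by scanning in reverse, so a dedicated `keep` option is mostly a convenience over this two-call combination.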
---

**dpgaspar/Flask-AppBuilder#1395 — Documentation incomplete** · topic: flask · closed · created 2020-06-09 · updated 2020-09-15 · labels: stale · author: pichlerpa · 1 comment
https://github.com/dpgaspar/Flask-AppBuilder/issues/1395

Hi, the `views.py` of the "Simple contacts application" outlined in the documentation is missing some imports needed to make it runnable. The corresponding project on GitHub appears to be complete, though:

```python
from .models import Contact, ContactGroup
from . import appbuilder, db
```

Patrick