repo_name: stringlengths (9 to 75)
topic: stringclasses (30 values)
issue_number: int64 (1 to 203k)
title: stringlengths (1 to 976)
body: stringlengths (0 to 254k)
state: stringclasses (2 values)
created_at: stringlengths (20 to 20)
updated_at: stringlengths (20 to 20)
url: stringlengths (38 to 105)
labels: listlengths (0 to 9)
user_login: stringlengths (1 to 39)
comments_count: int64 (0 to 452)
paperless-ngx/paperless-ngx
django
9,480
[BUG] Consumer losing connection to redis behind HA proxy
### Description

Hello, I have a Redis cluster behind HAProxy in Kubernetes. The consumer seems to regularly lose its connection to Redis and recovers it immediately after the following error message. The cycle continues indefinitely, spamming the log file. It doesn't seem to cause other disruptions and the app otherwise seems to be working OK. Thanks for looking into this.

```
[2025-03-24 08:12:59,761] [INFO] [celery.worker.consumer.connection] Connected to redis://:**@redis-prod-redis-ha-haproxy.database.svc.cluster.local:6379//
[2025-03-24 08:13:29,888] [WARNING] [celery.worker.consumer.consumer] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/celery/worker/consumer/consumer.py", line 340, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.12/site-packages/celery/bootsteps.py", line 116, in start
    step.start(parent)
  File "/usr/local/lib/python3.12/site-packages/celery/worker/consumer/consumer.py", line 746, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.12/site-packages/celery/worker/loops.py", line 97, in asynloop
    next(loop)
  File "/usr/local/lib/python3.12/site-packages/kombu/asynchronous/hub.py", line 373, in create_loop
    cb(*cbargs)
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 1352, in on_readable
    self.cycle.on_readable(fileno)
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 569, in on_readable
    chan.handlers[type]()
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 918, in _receive
    ret.append(self._receive_one(c))
               ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 928, in _receive_one
    response = c.parse_response()
               ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 865, in parse_response
    response = self._execute(conn, try_read)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 841, in _execute
    return conn.retry.call_with_retry(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/retry.py", line 65, in call_with_retry
    fail(error)
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 843, in <lambda>
    lambda error: self._disconnect_raise_connect(conn, error),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 830, in _disconnect_raise_connect
    raise error
  File "/usr/local/lib/python3.12/site-packages/redis/retry.py", line 62, in call_with_retry
    return do()
           ^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 842, in <lambda>
    lambda: command(*args, **kwargs),
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 863, in try_read
    return conn.read_response(disconnect_on_error=False, push_request=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/connection.py", line 592, in read_response
    response = self._parser.read_response(disable_decoding=disable_decoding)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/_parsers/hiredis.py", line 128, in read_response
    self.read_from_socket()
  File "/usr/local/lib/python3.12/site-packages/redis/_parsers/hiredis.py", line 90, in read_from_socket
    raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
redis.exceptions.ConnectionError: Connection closed by server.
[2025-03-24 08:13:32,175] [WARNING] [py.warnings] /usr/local/lib/python3.12/site-packages/celery/worker/consumer/consumer.py:391: CPendingDeprecationWarning: In Celery 5.1 we introduced an optional breaking change which on connection loss cancels all currently executed tasks with late acknowledgement enabled.
These tasks cannot be acknowledged as the connection is gone, and the tasks are automatically redelivered back to the queue. You can enable this behavior using the worker_cancel_long_running_tasks_on_connection_loss setting. In Celery 5.1 it is set to False by default. The setting will be set to True by default in Celery 6.0.
  warnings.warn(CANCEL_TASKS_BY_DEFAULT, CPendingDeprecationWarning)
[2025-03-24 08:13:32,175] [INFO] [celery.worker.consumer.consumer] Temporarily reducing the prefetch count to 4 to avoid over-fetching since 8 tasks are currently being processed. The prefetch count will be gradually restored to 16 as the tasks complete processing.
```

### Steps to reproduce

1. Set up Paperless to connect to a Redis cluster behind HAProxy
2. Check logs

### Webserver logs

```bash
[2025-03-24 08:12:59,687] [INFO] [celery.worker.consumer.consumer] Temporarily reducing the prefetch count to 4 to avoid over-fetching since 8 tasks are currently being processed. The prefetch count will be gradually restored to 16 as the tasks complete processing.
[2025-03-24 08:12:59,761] [INFO] [celery.worker.consumer.connection] Connected to redis://:**@redis-prod-redis-ha-haproxy.database.svc.cluster.local:6379//
[2025-03-24 08:13:29,888] [WARNING] [celery.worker.consumer.consumer] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/celery/worker/consumer/consumer.py", line 340, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.12/site-packages/celery/bootsteps.py", line 116, in start
    step.start(parent)
  File "/usr/local/lib/python3.12/site-packages/celery/worker/consumer/consumer.py", line 746, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.12/site-packages/celery/worker/loops.py", line 97, in asynloop
    next(loop)
  File "/usr/local/lib/python3.12/site-packages/kombu/asynchronous/hub.py", line 373, in create_loop
    cb(*cbargs)
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 1352, in on_readable
    self.cycle.on_readable(fileno)
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 569, in on_readable
    chan.handlers[type]()
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 918, in _receive
    ret.append(self._receive_one(c))
               ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/kombu/transport/redis.py", line 928, in _receive_one
    response = c.parse_response()
               ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 865, in parse_response
    response = self._execute(conn, try_read)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 841, in _execute
    return conn.retry.call_with_retry(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/retry.py", line 65, in call_with_retry
    fail(error)
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 843, in <lambda>
    lambda error: self._disconnect_raise_connect(conn, error),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 830, in _disconnect_raise_connect
    raise error
  File "/usr/local/lib/python3.12/site-packages/redis/retry.py", line 62, in call_with_retry
    return do()
           ^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 842, in <lambda>
    lambda: command(*args, **kwargs),
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/client.py", line 863, in try_read
    return conn.read_response(disconnect_on_error=False, push_request=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/connection.py", line 592, in read_response
    response = self._parser.read_response(disable_decoding=disable_decoding)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/redis/_parsers/hiredis.py", line 128, in read_response
    self.read_from_socket()
  File "/usr/local/lib/python3.12/site-packages/redis/_parsers/hiredis.py", line 90, in read_from_socket
    raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
redis.exceptions.ConnectionError: Connection closed by server.
[2025-03-24 08:13:32,175] [WARNING] [py.warnings] /usr/local/lib/python3.12/site-packages/celery/worker/consumer/consumer.py:391: CPendingDeprecationWarning: In Celery 5.1 we introduced an optional breaking change which on connection loss cancels all currently executed tasks with late acknowledgement enabled.
These tasks cannot be acknowledged as the connection is gone, and the tasks are automatically redelivered back to the queue. You can enable this behavior using the worker_cancel_long_running_tasks_on_connection_loss setting. In Celery 5.1 it is set to False by default. The setting will be set to True by default in Celery 6.0.
  warnings.warn(CANCEL_TASKS_BY_DEFAULT, CPendingDeprecationWarning)
[2025-03-24 08:13:32,175] [INFO] [celery.worker.consumer.consumer] Temporarily reducing the prefetch count to 4 to avoid over-fetching since 8 tasks are currently being processed. The prefetch count will be gradually restored to 16 as the tasks complete processing.
[2025-03-24 08:13:32,259] [INFO] [celery.worker.consumer.connection] Connected to redis://:**@redis-prod-redis-ha-haproxy.database.svc.cluster.local:6379//
```

### Browser logs

```bash
```

### Paperless-ngx version

2.14.7

### Host OS

docker official image

### Installation method

Docker - official image

### System status

```json
```

### Browser

_No response_

### Configuration changes

_No response_

### Please confirm the following

- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description.
closed
2025-03-24T15:18:49Z
2025-03-24T15:32:16Z
https://github.com/paperless-ngx/paperless-ngx/issues/9480
[ "not a bug" ]
fcarucci
2
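The log timestamps in the issue above show the broker connection dropping almost exactly 30 seconds after each reconnect, which is consistent with a proxy-side idle timeout closing quiet Redis connections rather than with a Redis failure. A minimal haproxy.cfg sketch of the usual mitigation — the values are illustrative assumptions, not taken from the reporter's setup:

```
# haproxy.cfg fragment (illustrative values)
defaults
    mode tcp
    # allow long-lived idle celery/kombu connections instead of the
    # common ~30s default that matches the drop interval in the logs
    timeout client 30m
    timeout server 30m
```

Alternatively, the client side can be tuned (for example Redis socket keepalives or Celery broker heartbeats) so that some traffic always flows within the proxy's idle window.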
InstaPy/InstaPy
automation
6,747
Candra
closed
2023-08-22T07:18:10Z
2023-08-22T07:18:34Z
https://github.com/InstaPy/InstaPy/issues/6747
[]
Persia48
0
voila-dashboards/voila
jupyter
865
New release with PR [#841]?
Hi everyone. I would like to use the multi_kernel_manager_class configuration, in combination with the [hotpot_km](https://github.com/voila-dashboards/hotpot_km) library in order to warm up kernels. Unfortunately, I've found that the latest release (0.2.7) available from PyPI doesn't contain it. Would it be possible to do a new release of voilà including PR [#841](https://github.com/voila-dashboards/voila/pull/841/files)?
closed
2021-03-30T07:48:35Z
2021-04-12T14:55:46Z
https://github.com/voila-dashboards/voila/issues/865
[]
mcornudella
6
remsky/Kokoro-FastAPI
fastapi
124
[Solved] Docker container crashes on generation
**Describe the bug**
Attempting to generate any output, the Docker container just crashes without any error message. It makes no difference whether you use the integrated Web UI or the Gradio UI.

**Screenshots or console output**
```
==========
== CUDA ==
==========

CUDA Version 12.4.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

11:18:54 AM | INFO | Loading TTS model and voice packs...
11:18:54 AM | INFO | Initializing Kokoro V1 on cuda
11:18:54 AM | DEBUG | Searching for model in path: /app/api/src/models
11:18:54 AM | INFO | Loading Kokoro model on cuda
11:18:54 AM | INFO | Config path: /app/api/src/models/v1_0/config.json
11:18:54 AM | INFO | Model path: /app/api/src/models/v1_0/kokoro-v1_0.pth
11:18:56 AM | DEBUG | Scanning for voices in path: /app/api/src/voices/v1_0
11:18:56 AM | DEBUG | Searching for voice in path: /app/api/src/voices/v1_0
11:18:56 AM | DEBUG | Generating audio for text: 'Warmup text for initialization....'
11:18:56 AM | DEBUG | Got audio chunk with shape: torch.Size([57600])
11:18:56 AM | INFO | Warmup completed in 1993ms
11:18:56 AM | INFO |
░░░░░░░░░░░░░░░░░░░░░░░░
    ╔═╗┌─┐┌─┐┌┬┐
    ╠╣ ├─┤└─┐ │
    ╚  ┴ ┴└─┘ ┴
    ╦╔═┌─┐┬┌─┌─┐
    ╠╩╗│ │├┴┐│ │
    ╩ ╩└─┘┴ ┴└─┘
░░░░░░░░░░░░░░░░░░░░░░░░
Model warmed up on cuda: kokoro_v1
CUDA: True
67 voice packs loaded
Beta Web Player: http://0.0.0.0:8880/web/
░░░░░░░░░░░░░░░░░░░░░░░░
11:19:04 AM | INFO | Created global TTSService instance
11:19:04 AM | DEBUG | Scanning for voices in path: /app/api/src/voices/v1_0
INFO: xxx:51832 - "GET /v1/audio/voices HTTP/1.1" 200 OK
11:19:17 AM | DEBUG | Scanning for voices in path: /app/api/src/voices/v1_0
INFO: xxx:46778 - "GET /v1/audio/voices HTTP/1.1" 200 OK
11:19:17 AM | DEBUG | Scanning for voices in path: /app/api/src/voices/v1_0
INFO: xxx:46792 - "POST /v1/audio/speech HTTP/1.1" 200 OK
11:19:17 AM | DEBUG | Scanning for voices in path: /app/api/src/voices/v1_0
11:19:17 AM | DEBUG | Searching for voice in path: /app/api/src/voices/v1_0
11:19:17 AM | DEBUG | Using single voice path: /app/api/src/voices/v1_0/af_heart.pt
11:19:17 AM | DEBUG | Using voice path: /app/api/src/voices/v1_0/af_heart.pt
11:19:17 AM | INFO | Starting smart split for 4 chars

** Press ANY KEY to close this window **
```

**Branch / Deployment used**
Docker container for `kokoro-fastapi-gpu:v0.2.0`

**Operating System**
Linux w/ Nvidia GPU
closed
2025-02-07T11:30:55Z
2025-02-08T05:20:58Z
https://github.com/remsky/Kokoro-FastAPI/issues/124
[]
0pac1ty
2
lux-org/lux
pandas
107
Widget can not show in jupyter notebook
I used the following commands to install the lux widget:

```
pip install git+https://github.com/lux-org/lux-widget
jupyter nbextension install --py luxWidget
jupyter nbextension enable --py luxWidget
```

They completed without errors, but the widget does not appear in Jupyter Notebook. My environment is conda 4.8.5 with Python 3.7.8.
closed
2020-10-06T09:12:27Z
2020-10-15T12:34:40Z
https://github.com/lux-org/lux/issues/107
[]
Jack-ee
11
apachecn/ailearning
python
647
第11章_Apriori算法
# Use of the Apriori algorithm

Typo: "燃尽后对生下来的集合进行组合以声场包含两个元素的项集。" should read "然后对剩下来的集合进行组合以生成包含两个元素的项集。" (i.e. "then combine the remaining sets to generate itemsets containing two elements"; the original sentence uses several wrong homophone characters).
closed
2023-12-15T16:54:46Z
2024-01-03T01:51:23Z
https://github.com/apachecn/ailearning/issues/647
[]
SydCS
2
python-restx/flask-restx
api
381
Swagger Topbar Missing from Swagger UI
Swagger UI have a topbar, that is generally used as a search bar for all the APIs that are in the API documentation. Is there any roadmap to include that topbar in Restx Swagger UI?
open
2021-10-25T08:42:27Z
2022-07-21T13:39:03Z
https://github.com/python-restx/flask-restx/issues/381
[ "enhancement" ]
anandtripathi5
1
JoeanAmier/TikTokDownloader
api
425
[Bug] TikTok connection times out; videos cannot be downloaded
**Problem description**

I have set a proxy address but still cannot download TikTok videos. I am not sure whether I configured something incorrectly; I simply started the program after turning on my VPN client, and I don't know whether I missed a step.

![Image](https://github.com/user-attachments/assets/2a36740e-2a40-422f-b4e5-415e008904a9)
![Image](https://github.com/user-attachments/assets/139da453-b063-4cf6-aa79-3cb081e53430)
open
2025-03-10T08:25:08Z
2025-03-10T11:25:44Z
https://github.com/JoeanAmier/TikTokDownloader/issues/425
[]
oyoy131
2
TheKevJames/coveralls-python
pytest
20
Nothing gets reported to coveralls.io
I think I have set up everything correctly. As you can see in Travis CI's logs, coveralls gets called and returns 200: https://travis-ci.org/rubik/radon

However, on the coveralls.io dashboard the coverage percentage is not displayed: https://coveralls.io/r/rubik/radon

How is that possible? Thanks, rubik
closed
2013-05-31T15:27:44Z
2013-06-11T13:12:01Z
https://github.com/TheKevJames/coveralls-python/issues/20
[]
rubik
7
ARM-DOE/pyart
data-visualization
1,408
BUG: read_odim_hdf5 - TypeError: 'numpy.float32' object cannot be interpreted as an integer
* Py-ART version: 1.14.6 * Python version: 3.11 (same with 3.10) * Operating System: Linux / Ubuntu ### Description An error is raised when using `pyart.aux_io.odim_hdf5.read_odim_hdf5`: `TypeError: 'numpy.float32' object cannot be interpreted as an integer` The line to blame is https://github.com/ARM-DOE/pyart/blob/75cb88466a74b48680c0cdf1400e3ed8a92d35f1/pyart/aux_io/odim_h5.py#L333 The `datetime.datetime.utcfromtimestamp` function doesn't seem to like `numpy.float32` dtype anymore. A fix (that worked for me) is to specify "float64" in the dtype of `t_data`, instead of "float32": https://github.com/ARM-DOE/pyart/blob/75cb88466a74b48680c0cdf1400e3ed8a92d35f1/pyart/aux_io/odim_h5.py#L327
closed
2023-03-29T17:07:27Z
2023-05-03T18:47:35Z
https://github.com/ARM-DOE/pyart/issues/1408
[ "Bug" ]
Vforcell
7
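The reporter's fix, widening `t_data` to "float64", works because `datetime` timestamp APIs accept builtin floats but can reject NumPy `float32` scalars. A minimal sketch of the same idea, using a hypothetical timestamp value and the timezone-aware `fromtimestamp` variant (the deprecated `utcfromtimestamp` behaves the same for this purpose):

```python
from datetime import datetime, timezone

# hypothetical sweep-start time in seconds since the epoch; passing a
# NumPy float32 scalar here can raise the TypeError quoted above, so we
# cast to a builtin float (equivalently, keep the array as float64)
raw_ts = 1680110847.0

t = datetime.fromtimestamp(float(raw_ts), tz=timezone.utc)
print(t.isoformat())
```

The cast is cheap and sidesteps the dtype sensitivity entirely, which is why changing `t_data` to "float64" in `odim_h5.py` resolves the reported error.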
ExpDev07/coronavirus-tracker-api
rest-api
251
Add docker compatibility
Hello everyone!!! To get the benefit of running the API with one command (and with isolation), how about adding a Dockerfile and a docker-compose file to this project? Do you see any problem with this?
closed
2020-04-01T21:06:26Z
2020-04-07T17:33:18Z
https://github.com/ExpDev07/coronavirus-tracker-api/issues/251
[ "enhancement", "dev" ]
GabrielDS
8
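As a sketch of what the request above asks for — not the project's actual files — a compose service for a Python API might look like the following; the service name, port, and build context are assumptions to be adjusted to the repository:

```
# docker-compose.yml sketch (illustrative values)
version: "3.8"
services:
  api:
    build: .
    ports:
      - "8000:8000"
    restart: unless-stopped
```

With a matching Dockerfile in the build context, `docker-compose up --build` would then start the API isolated in a container with a single command, which is exactly the benefit the issue describes.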
kubeflow/katib
scikit-learn
2,175
Error "Objective metric accuracy is not found in training logs, unavailable value is reported. metric:<name:"accuracy" value:"unavailable"
/kind bug

**What steps did you take and what happened:**

I have been trying to create a simple Katib experiment with the sklearn iris dataset, but am facing the error "Objective metric accuracy is not found in training logs, unavailable value is reported. metric:<name:"accuracy" value:"unavailable"".

**Below is my code:**

```python
import argparse
import os
import hypertune
import logging
import pandas as pd

# YOUR IMPORTS HERE
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--neighbors', type=int, default=3,
                        help='value of k')
    parser.add_argument("--log-path", type=str, default="",
                        help="Path to save logs. Print to StdOut if log-path is not set")
    parser.add_argument("--logger", type=str, choices=["standard", "hypertune"],
                        help="Logger", default="standard")
    args = parser.parse_args()

    if args.log_path == "" or args.logger == "hypertune":
        logging.basicConfig(
            format="%(asctime)s %(levelname)-8s %(message)s",
            datefmt="%Y-%m-%dT%H:%M:%SZ",
            level=logging.DEBUG)
    else:
        logging.basicConfig(
            format="%(asctime)s %(levelname)-8s %(message)s",
            datefmt="%Y-%m-%dT%H:%M:%SZ",
            level=logging.DEBUG,
            filename=args.log_path)

    if args.logger == "hypertune" and args.log_path != "":
        os.environ['CLOUD_ML_HP_METRIC_FILE'] = args.log_path

    # For JSON logging
    hpt = hypertune.HyperTune()

    # LOAD DATA HERE
    iris_data = load_iris()
    iris_df = pd.DataFrame(data=iris_data['data'], columns=iris_data['feature_names'])
    iris_df['Iris type'] = iris_data['target']
    iris_df['Iris name'] = iris_df['Iris type'].apply(
        lambda x: 'sentosa' if x == 0 else ('versicolor' if x == 1 else 'virginica'))

    def f(x):
        if x == 0:
            val = 'setosa'
        elif x == 1:
            val = 'versicolor'
        else:
            val = 'virginica'
        return val

    iris_df['test'] = iris_df['Iris type'].apply(f)
    iris_df.drop(['test'], axis=1, inplace=True)

    X = iris_df[['sepal length (cm)', 'sepal width (cm)',
                 'petal length (cm)', 'petal width (cm)']]
    y = iris_df['Iris name']
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=args.neighbors)
    knn.fit(X_train, y_train)
    accuracy = knn.score(X_test, y_test)

    logging.info("{{metricName: accuracy, metricValue: {:.4f}}}\n".format(accuracy))

    if args.logger == "hypertune":
        hpt.report_hyperparameter_tuning_metric(
            hyperparameter_metric_tag='accuracy',
            metric_value=accuracy)


if __name__ == '__main__':
    main()
```

**Below is my yaml file:**

```yaml
---
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  namespace: kubeflow
  name: iris-1
spec:
  parallelTrialCount: 1
  maxTrialCount: 2
  maxFailedTrialCount: 3
  objective:
    type: maximize
    goal: 0.99
    objectiveMetricName: accuracy
  metricsCollectorSpec:
    collector:
      kind: StdOut
  algorithm:
    algorithmName: random
  parameters:
    - name: neighbors
      parameterType: int
      feasibleSpace:
        min: "3"
        max: "5"
  trialTemplate:
    primaryContainerName: training-container
    trialParameters:
      - name: neighbors
        description: KNN neighbors
        reference: neighbors
    trialSpec:
      apiVersion: batch/v1
      kind: Job
      spec:
        template:
          metadata:
            annotations:
              sidecar.istio.io/inject: "false"
          spec:
            containers:
              - name: training-container
                image: e-dpiac-docker-local.docker.lowes.com/katib-sklearn:v3
                command:
                  - "python3"
                  - "/app/iris.py"
                  - "--neighbors=${trialParameters.neighbors}"
                  - "--logger=hypertune"
                resources:
                  requests:
                    memory: "6Gi"
                    cpu: "2"
                  limits:
                    memory: "10Gi"
                    cpu: "4"
            restartPolicy: Never
```

**What did you expect to happen:**

The metrics should have been collected and the trials should have succeeded.

**Anything else you would like to add:**

[Miscellaneous information that will assist in solving the issue.]

**Environment:**

- Katib version (check the Katib controller image version): katib-controller:v0.12.0
- Kubernetes version: (`kubectl version`):
- OS (`uname -a`):

---

<!-- Don't delete this message to encourage users to support your issue! -->

Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
closed
2023-07-19T16:45:32Z
2023-07-24T20:34:25Z
https://github.com/kubeflow/katib/issues/2175
[ "kind/bug" ]
mChowdhury-91
23
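A common cause of the "unavailable" metric in reports like the one above is that the StdOut metrics collector never sees a line in a format it parses. Assuming the collector expects plain `name=value` pairs on stdout (an assumption worth checking against the Katib version in use — the timestamped logging format in the issue's code may not match the collector's pattern), a sketch of emitting the objective metric:

```python
# hypothetical helper: format the objective metric as a bare "name=value"
# line for Katib's StdOut collector, without logging timestamps/prefixes
def katib_metric_line(name: str, value: float) -> str:
    return f"{name}={value:.4f}"

line = katib_metric_line("accuracy", 0.9736)
print(line)  # accuracy=0.9736
```

If the collector's regex does accept the `{metricName: ..., metricValue: ...}` form the code already logs, the remaining suspects are the log line prefix added by `logging.basicConfig` and whether the output actually reaches the container's stdout.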
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,593
poor quality for testing with a custom dataset
Hello, I want to train pix2pix on Cityscapes and test it on segmentation masks from another dataset. This is my train command: `python train.py --name label2city_1024p --label_nc 0 --no_instance` And my test command: `python test.py --name label2city_1024p --label_nc 0 --no_instance` These are the results: ![101_input_label](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/assets/63915243/11b48e8b-0c85-431c-9f7b-bd8e17940bff) ![101_synthesized_image](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/assets/63915243/1d92775f-2620-41c3-88e1-ff7a556e4f74) Thank you for helping
open
2023-07-31T10:08:28Z
2023-07-31T10:08:28Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1593
[]
At-Walid
0
jupyterhub/repo2docker
jupyter
1,272
Packagemanager.rstudio.com migration broke R buildpack
### Bug description

Per https://status.posit.co/ the RStudio-now-Posit Package Manager server was migrated from https://packagemanager.rstudio.com to https://packagemanager.posit.co/. `RBuildPack.get_rspm_snapshot_url` doesn't handle the redirect properly.

#### Expected behaviour

r2d outputs a Dockerfile to stdout

#### Actual behaviour

```
[Repo2Docker] Looking for repo2docker_config in /tmp/a
Picked Local content provider.
Using local repo ..
Traceback (most recent call last):
  File "/home/xarth/codes/jupyterhub/repo2docker/venv/lib/python3.10/site-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/xarth/codes/jupyterhub/repo2docker/venv/bin/jupyter-repo2docker", line 33, in <module>
    sys.exit(load_entry_point('jupyter-repo2docker', 'console_scripts', 'jupyter-repo2docker')())
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/__main__.py", line 469, in main
    r2d.start()
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/app.py", line 850, in start
    self.build()
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/app.py", line 799, in build
    print(picked_buildpack.render(build_args))
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/buildpacks/base.py", line 464, in render
    for user, script in self.get_build_scripts():
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/buildpacks/r.py", line 278, in get_build_scripts
    cran_mirror_url = self.get_cran_mirror_url(self.checkpoint_date)
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/buildpacks/r.py", line 241, in get_cran_mirror_url
    return self.get_rspm_snapshot_url(snapshot_date)
  File "/home/xarth/codes/jupyterhub/repo2docker/repo2docker/buildpacks/r.py", line 209, in get_rspm_snapshot_url
    ).json()
  File "/home/xarth/codes/jupyterhub/repo2docker/venv/lib/python3.10/site-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Extra data: line 1 column 5 (char 4)
```

### How to reproduce

1. `mkdir /tmp/foo`
2. `echo "r-2022-05-05" > /tmp/foo/runtime.txt`
3. `jupyter-repo2docker --no-build --debug /tmp/foo`
4. See error

### Your personal set up

- OS: Linux
- Docker version: 24.0.0
- repo2docker version: 2022.10.0+69.g364bf2e.dirty (dirty because I already tested a patch)
closed
2023-05-22T17:51:50Z
2023-05-30T14:29:03Z
https://github.com/jupyterhub/repo2docker/issues/1272
[]
Xarthisius
1
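The failing `.json()` call in the traceback above suggests the retired hostname now answers with something other than the expected JSON. A hedged sketch of the obvious client-side fix — pointing requests at the new Posit host up front so no redirect is involved; the helper name and constants are mine, not repo2docker's:

```python
# hypothetical URL migration helper for the Package Manager move
OLD_HOST = "https://packagemanager.rstudio.com"
NEW_HOST = "https://packagemanager.posit.co"

def migrate_pm_url(url: str) -> str:
    """Rewrite the retired RStudio host to the new Posit host, if present."""
    if url.startswith(OLD_HOST):
        return NEW_HOST + url[len(OLD_HOST):]
    return url

print(migrate_pm_url(OLD_HOST + "/__api__/repos"))
```

A more defensive variant would also check the response's `Content-Type` before calling `.json()`, so a future migration fails with a clear error instead of a `JSONDecodeError`.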
svc-develop-team/so-vits-svc
pytorch
59
Not saving checkpoints
I am running the Colab 4.0 notebook and everything works very well, but when I ran the actual training step I noticed that it's not saving any checkpoints. It does so at the beginning, but then it just stops entirely. I've run it for over 80 epochs now with absolutely no updates.
closed
2023-03-19T20:59:54Z
2023-03-23T08:39:19Z
https://github.com/svc-develop-team/so-vits-svc/issues/59
[]
Wacky817
1
yihong0618/running_page
data-visualization
195
TODO List
@geekplux @ben-29 @shaonianche what we can do.

- [x] Make dashed lines on the map an option
- [ ] Consider switching to next.js
- [x] fix all alerts
- [ ] python code refactor
- [ ] pnpm consider?
- [ ] Performance improvements
- [x] doc improve
- [ ] geojson support
- [ ] rss3 maybe
- [ ] #122
- [ ] more status charts
- [ ] iOS app new --> GitHubPoster
- [ ] oppo
- [x] #187
- [x] dockerfile
open
2022-01-07T05:25:52Z
2023-11-11T14:54:57Z
https://github.com/yihong0618/running_page/issues/195
[ "help wanted" ]
yihong0618
0
dsdanielpark/Bard-API
api
250
How do I use cookies with ChatBard
I can only use Bard with a full cookie dict.
closed
2023-12-12T22:02:54Z
2023-12-23T12:08:09Z
https://github.com/dsdanielpark/Bard-API/issues/250
[]
tinker1234
2
lepture/authlib
flask
409
in v0.15.5, flask_client do not contain UserInfo in `authorize_access_token()` method
I used pipenv to install Authlib; the latest version is `v0.15.5`. However, the documentation says:

```
when .authorize_access_token, the provider will include a id_token in the response. This id_token contains the UserInfo we need so that we don’t have to fetch userinfo endpoint again.
```

but when I look into the source code of Authlib, I don't think it contains the UserInfo: https://github.com/lepture/authlib/blob/d8e428c9350c792fc3d25dbaaffa3bfefaabd8e3/authlib/integrations/flask_client/remote_app.py#L67 Will the source code in the master branch be tagged in a future release?
closed
2021-12-07T09:25:37Z
2022-01-14T06:53:59Z
https://github.com/lepture/authlib/issues/409
[]
tan-i-ham
1
JaidedAI/EasyOCR
machine-learning
614
ModuleNotFoundError: No module named 'easyocr'
Hey, I tried every method to install easyocr. I installed PyTorch without GPU support (`pip3 install torch torchvision torchaudio`) and then tried `pip install easyocr`, but I still got an error. Afterwards, following one of the solved issues, I tried `pip uninstall easyocr` and then `pip install git+git://github.com/jaidedai/easyocr.git`, but I am still unable to import easyocr. Also, when I first ran `pip uninstall easyocr`, it showed **WARNING: Skipping easyocr as it is not installed.** @rkcosmos please help with this problem
closed
2021-12-09T06:57:13Z
2022-08-07T05:01:26Z
https://github.com/JaidedAI/EasyOCR/issues/614
[]
harshitkd
4
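A frequent cause of `ModuleNotFoundError` after a seemingly successful `pip install` — hinted at by the "Skipping easyocr as it is not installed" warning above — is that `pip` and `python` resolve to different interpreters. A diagnostic sketch (the commented install line is what you would run once the two match):

```shell
# show which interpreter/site-packages pip targets vs which python runs
python3 -m pip --version
python3 -c "import sys; print(sys.executable)"
# installing via the interpreter itself removes the ambiguity:
# python3 -m pip install easyocr
```

If the two paths point at different environments (for example a conda env vs the system Python), installing with `python3 -m pip` into the interpreter you actually launch usually resolves the import error.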
pyeve/eve
flask
1,097
Application does not work when I upload photos to an eve dict schema properties
Hi: I have an Eve schema set up as such:

```python
'schema': {
    'email': {
        'type': 'string',
        'unique': True,
    },
    'isEmailVerified': {
        'type': 'boolean',
        'default': False,
        'readonly': True,
    },
    'profile': {
        'type': 'dict',
        'schema': {
            'avatar': {
                'type': 'media',
            },
            'bio': {
                'type': 'string',
                'minlength': 1,
                'maxlength': 30,
            },
            'location': {
                'type': 'dict',
                'schema': {
                    'state': {
                        'type': 'string',
                    },
                    'city': {
                        'type': 'string',
                    },
                },
            },
            'birth': {
                'type': 'datetime',
            },
            'gender': {
                'type': 'string',
                'allowed': ["male", "famale"],
            }
        },
    },
}
```

And I'm using cURL to make a PATCH request:

```
curl -X PATCH "http://127.0.0.1:5000/api/v1/accounts/5a461646b2fcb841401ea437" -H 'If-Match: a1944717f279751875f6f6794147b258df7e2af1' -F "profile.avatar=@WechatIMG21.jpeg"
```

The application does not respond and blocks other requests. How can I upload the photo?
closed
2017-12-29T12:17:22Z
2018-07-04T14:01:13Z
https://github.com/pyeve/eve/issues/1097
[ "stale" ]
zzzhouuu
1
nltk/nltk
nlp
2,666
Encountered an old issue #1387 with the latest version
I've got the same error after doing similar stuff. (See #1387.)

```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
G:\Anaconda3\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
    343             method = get_real_method(obj, self.print_method)
    344             if method is not None:
--> 345                 return method()
    346             return None
    347         else:

G:\Anaconda3\lib\site-packages\nltk\tree.py in _repr_png_(self)
    817                     raise LookupError
    818
--> 819             with open(out_path, "rb") as sr:
    820                 res = sr.read()
    821             os.remove(in_path)

FileNotFoundError: [Errno 2] No such file or directory: 'G:\\Users\\01\\AppData\\Local\\Temp\\tmpkyszdc9g.png'
```

(Python)

```
Error: /syntaxerror in (binary token, type=155)
Operand stack:
   --nostringval--   寰蒋?
Execution stack:
   %interp_exit   .runexec2   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   --nostringval--   false   1   %stopped_push   1926   1   3   %oparray_pop   1925   1   3   %oparray_pop   --nostringval--   1909   1   3   %oparray_pop   1803   1   3   %oparray_pop   --nostringval--   %errorexec_pop   .runexec2   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push
Dictionary stack:
   --dict:1173/1684(ro)(G)--   --dict:0/20(G)--   --dict:82/200(L)--   --dict:23/50(L)--
Current allocation mode is local
Last OS error: No such file or directory
GPL Ghostscript 9.05: Unrecoverable error, exit code 1
```

(command line)

Seems the problem is that Ghostscript failed - strange. It is reproducible as in https://github.com/nltk/nltk/issues/1387#issuecomment-216426399 (in an IPython notebook):

```
import nltk
nltk.tree.Tree.fromstring("(test (this tree))")
```

This old issue is closed and referenced by some PRs, so I thought it was fixed - or is this a problem with Jupyter Notebook?
open
2021-02-08T14:02:44Z
2021-10-27T20:06:50Z
https://github.com/nltk/nltk/issues/2666
[]
yanhuihang
1
scikit-optimize/scikit-optimize
scikit-learn
1,127
Bug with using n_calls, n_initial_points and (x0, y0)
This is regarding the `gp_minimize` method from skopt. The problem is using the `n_calls` and `n_initial_points` parameters when (x0, y0) is already present. From the documentation I find: `The total number of evaluations, n_calls, are performed like the following. If x0 is provided but not y0, then the elements of x0 are first evaluated, followed by n_initial_points evaluations. Finally, n_calls - len(x0) - n_initial_points evaluations are made guided by the surrogate model. If x0 and y0 are both provided then n_initial_points evaluations are first made then n_calls - n_initial_points subsequent evaluations are made guided by the surrogate model.`

This works well when `(x0, y0)` is not provided. However, when we provide `(x0, y0)` along with `n_calls` and `n_initial_points`, it does not behave the way it's supposed to.

**Scenario 1**
(x0, y0) are provided.
len(x0)=10
n_initial_points=5
n_calls=20

Expected: According to the above documentation, the 10 provided points are evaluated --> 5 initial random points are evaluated --> (20-5)=15 optimization steps are taken guided by the surrogate model.

Observed: 10 provided points are evaluated --> (len(x0) + n_initial_points) = 10+5 = 15 initial random points are evaluated --> (20-15) = 5 optimization steps are evaluated. If I had provided n_calls < (len(x0) + n_initial_points), e.g. n_calls=10, it complains that `n_calls` should be higher than `len(x0)` + `n_initial_points`.

**Scenario 2 (hack)**
(x0, y0) are provided.
len(x0)=10
n_initial_points = -len(x0) + 5 = -10 + 5 = -5
n_calls=20

Expected: 10 provided points are evaluated --> -5 initial random points are evaluated (which basically means no new points are evaluated) --> (20-(-5)) = 25 optimization steps are taken guided by the surrogate model.

Observed: 10 provided points are evaluated --> (-len(x0) + 5) = -10+5 = -5 initial random points are evaluated, which means it should not evaluate anything, but it takes -5 as 5 (absolute value) and evaluates 5 random points --> (20 - 10 - (-5)) = 15 optimization steps are evaluated.

It would be great if you could fix the accounting or explain what is actually expected.
open
2022-10-04T12:00:53Z
2024-01-19T15:34:33Z
https://github.com/scikit-optimize/scikit-optimize/issues/1127
[]
skshahidur
2
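The budget arithmetic in the report above can be made explicit with a small, dependency-free sketch. The function names are invented for illustration; this is not skopt code.

```python
# Hypothetical budget accounting, mirroring the two behaviours described
# in the report above. Neither function is skopt code; they only make
# the arithmetic explicit. Each returns a tuple of
# (random_evals, model_guided_evals).

def documented_budget(n_calls, n_initial_points, len_x0):
    # What the docstring promises when x0 AND y0 are both given:
    # n_initial_points random evaluations, then
    # n_calls - n_initial_points model-guided evaluations
    # (the provided x0/y0 are not re-evaluated).
    return n_initial_points, n_calls - n_initial_points

def observed_budget(n_calls, n_initial_points, len_x0):
    # What the reporter observes: the provided points are counted into
    # the random phase, so len(x0) + n_initial_points random
    # evaluations happen before any model-guided step.
    random_evals = len_x0 + n_initial_points
    return random_evals, n_calls - random_evals

# Scenario 1 from the report: len(x0)=10, n_initial_points=5, n_calls=20
assert documented_budget(20, 5, 10) == (5, 15)
assert observed_budget(20, 5, 10) == (15, 5)
```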
SALib/SALib
numpy
131
Add new method - New Sampling Strategy for Method of Morris
[Khare et al. (2015)](http://www.sciencedirect.com/science/article/pii/S1364815214003399) describe an improved method for sampling for the method of morris.
open
2017-01-20T11:03:46Z
2019-05-22T17:09:09Z
https://github.com/SALib/SALib/issues/131
[ "enhancement", "add_method" ]
willu47
2
graphql-python/graphene-sqlalchemy
sqlalchemy
320
How to convert a json response to graphql SQLAlchemyObjectType
I am trying to build an application with graphene_sqlalchemy to manage quite a few micro-services. I have the data models (SQLAlchemyObjectType) of each micro-service, and want to reuse them so that I don't need to duplicate data models. But the challenge is how to convert json responses from micro-services to graphql SQLAlchemyObjectType. Is there any reference? Or do I have to redefine data models for graphql? Thank you!
open
2021-10-25T13:40:07Z
2022-04-27T19:18:42Z
https://github.com/graphql-python/graphene-sqlalchemy/issues/320
[ "question" ]
zhaoxwei
0
apify/crawlee-python
automation
662
Memory usage average periodically printed from `AutoscaledPool` seems wrong
This issue was observed on the Apify platform - see for example https://api.apify.com/v2/logs/g7piQaNbAdteXnkFC. The problem is that the average memory usage starts at 0 and suddenly hops to 1.0. I can testify that the increase was way more gradual :slightly_smiling_face:

```
2024-11-06T15:11:50.471Z [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 1; desired_concurrency = 1; cpu = 0.581; mem = 0.0; event_loop = 0.227; client_info = 0.0
2024-11-06T15:12:50.474Z [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 1; cpu = 0.491; mem = 0.0; event_loop = 0.1; client_info = 0.0
2024-11-06T15:13:50.487Z [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 1; cpu = 0.671; mem = 0.0; event_loop = 0.104; client_info = 0.0
2024-11-06T15:14:50.496Z [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 3; cpu = 0.451; mem = 0.206; event_loop = 0.039; client_info = 0.0
2024-11-06T15:15:50.491Z [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 1; cpu = 0.242; mem = 1.0; event_loop = 0.0; client_info = 0.0
```
closed
2024-11-06T16:37:13Z
2024-11-15T16:23:43Z
https://github.com/apify/crawlee-python/issues/662
[ "bug", "t-tooling" ]
janbuchar
15
sigmavirus24/github3.py
rest-api
792
Latest PyPI wheel not installing all dependencies
Is it possible that the recently released version 1.0.0 does not install all required dependencies? On Linux and Windows environments (Python 3.6.3, 3.6.4) we get an error like this during our unit test execution, which involves calling into the package API:

```
2018-03-14 10:39:11.684617 | node | ImportError while importing test module '/workspace/test/test_github.py'.
2018-03-14 10:39:11.684809 | node | Hint: make sure your test modules/packages have valid Python names.
2018-03-14 10:39:11.684862 | node | Traceback:
2018-03-14 10:39:11.684974 | node | test/test_github.py:1: in <module>
2018-03-14 10:39:11.685046 | node |     import github3
2018-03-14 10:39:11.685267 | node | /usr/local/lib/python3.6/site-packages/github3/__init__.py:18: in <module>
2018-03-14 10:39:11.685349 | node |     from .api import (
2018-03-14 10:39:11.686490 | node | /usr/local/lib/python3.6/site-packages/github3/api.py:11: in <module>
2018-03-14 10:39:11.686651 | node |     from .github import GitHub, GitHubEnterprise
2018-03-14 10:39:11.686855 | node | /usr/local/lib/python3.6/site-packages/github3/github.py:10: in <module>
2018-03-14 10:39:11.686939 | node |     from . import auths
2018-03-14 10:39:11.687134 | node | /usr/local/lib/python3.6/site-packages/github3/auths.py:6: in <module>
2018-03-14 10:39:11.687243 | node |     from .models import GitHubCore
2018-03-14 10:39:11.687441 | node | /usr/local/lib/python3.6/site-packages/github3/models.py:8: in <module>
2018-03-14 10:39:11.687557 | node |     import dateutil.parser
2018-03-14 10:39:11.687711 | node | E   ModuleNotFoundError: No module named 'dateutil'
```

On the ``develop`` branch (is this where you released from?) ``setup.py`` does contain the required dependency ``python-dateutil``; however, it is not installed on Windows or Linux. While not sure, I just noticed that ``setup.cfg`` does _not_ mention the new package in ``requires-dist``. Maybe this overrides the settings?
closed
2018-03-14T12:05:50Z
2018-03-14T17:24:49Z
https://github.com/sigmavirus24/github3.py/issues/792
[]
moltob
1
iperov/DeepFaceLab
deep-learning
932
Error extracting frames from data_src in deepfacelab colab
I was trying to run DFL_Colab. I imported the workspace folder successfully but got this error when I ran the extract-frames cell: "/content /!\ input file not found. Done." Please, any help regarding this?
open
2020-10-30T00:34:27Z
2023-06-08T21:51:12Z
https://github.com/iperov/DeepFaceLab/issues/932
[]
jamesbright
2
hpcaitech/ColossalAI
deep-learning
6,091
[BUG]: Got nan during backward with zero2
### Is there an existing issue for this bug?

- [X] I have searched the existing issues

### 🐛 Describe the bug

My code is based on Open-Sora and runs without any issue on 32 GPUs, using zero2. However, when using 64 GPUs, nan appears in the tensor gradients after the second backward step. I have made a workaround that patches [`colossalai/zero/low_level/low_level_optim.py`](https://github.com/hpcaitech/ColossalAI/blob/89a9a600bc4802c912b0ed48d48f70bbcdd8142b/colossalai/zero/low_level/low_level_optim.py#L315) with

```python
# line 313, in _run_reduction
flat_grads = bucket_store.get_flatten_grad()
flat_grads /= bucket_store.world_size
if torch.isnan(flat_grads).any():  # here
    raise RuntimeError(f"rank {dist.get_rank()} got nan on flat_grads")  # here
...
if received_grad.dtype != grad_dtype:
    received_grad = received_grad.to(grad_dtype)
if torch.isnan(received_grad).any():  # here
    raise RuntimeError(f"rank {dist.get_rank()} got nan on received_grad")  # here
...
```

With the patch above, my code runs normally and the loss seems fine. I think it may be related to unsynchronized state between cuda streams. I do not know the exact reason, and I do not think my workaround really solves the issue. Any ideas from the team members?

### Environment

Nvidia H20
ColossalAI version: 0.4.3
cuda 12.4
pytorch 2.4
open
2024-10-16T06:39:10Z
2024-10-29T01:40:58Z
https://github.com/hpcaitech/ColossalAI/issues/6091
[ "bug" ]
flymin
9
huggingface/datasets
nlp
7,297
wrong return type for `IterableDataset.shard()`
### Describe the bug

`IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy.

### Steps to reproduce the bug

look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?

### Expected behavior

Correct return type as `"IterableDataset"`

### Environment info

datasets==3.1.0
closed
2024-11-22T17:25:46Z
2024-12-03T14:27:27Z
https://github.com/huggingface/datasets/issues/7297
[]
ysngshn
1
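The annotation concern above is reproducible without `datasets` installed. A minimal sketch with stand-in classes (the names mirror the report, but these are not the real `datasets` classes):

```python
# Minimal stand-ins (NOT the real datasets classes) showing why the
# return annotation matters: an IDE or type checker resolves attributes
# of the sharded result from the annotated type, not the runtime type.

class Dataset:
    pass

class IterableDataset:
    def shard(self, num_shards: int, index: int) -> "IterableDataset":
        # Annotating this as -> "Dataset" (the bug reported above) would
        # make type checkers reject IterableDataset-only methods on the
        # result, even though the runtime object is an IterableDataset.
        return self

sharded = IterableDataset().shard(num_shards=2, index=0)
assert isinstance(sharded, IterableDataset)
```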
jina-ai/serve
machine-learning
5,898
Streaming for HTTP for Deployment
closed
2023-05-30T07:13:50Z
2023-06-19T08:23:38Z
https://github.com/jina-ai/serve/issues/5898
[]
alaeddine-13
0
graphql-python/graphene-sqlalchemy
sqlalchemy
111
How to generate GraphiQL documentation?
Hello, I'm trying to figure out how to provide descriptions for the attributes of my SQLAlchemy class used to define my Graphene `ObjectType`:

```python
from graphene_sqlalchemy import SQLAlchemyObjectType
from database.model_people import ModelPeople
import graphene


class People(SQLAlchemyObjectType):
    """People node."""

    class Meta:
        model = ModelPeople
        interfaces = (graphene.relay.Node,)
```

My "People node." docstring gets displayed properly in GraphiQL, but I'm not able to get a description for the attributes of my node, which come from the ModelPeople class defined with SQLAlchemy.

![image](https://user-images.githubusercontent.com/13064696/36012967-895b1ebe-0d9c-11e8-885a-4049bddddf5a.png)

Any idea? Thank you
closed
2018-02-09T05:27:15Z
2023-02-25T06:58:45Z
https://github.com/graphql-python/graphene-sqlalchemy/issues/111
[]
alexisrolland
3
waditu/tushare
pandas
1,033
Read Time Out exceptions keep occurring over the last 2 days
# Calling tushare's pro.query fails: HTTPConnectionPool(host='api.tushare.pro', port=80): Read timed out. (read timeout=10)

I run queries with pro.query, and over the last 2 days I have hit this error almost every time.
closed
2019-05-07T22:55:49Z
2019-05-08T04:28:23Z
https://github.com/waditu/tushare/issues/1033
[]
yyxochen
1
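A note on the report above: until the upstream timeout is fixed, a generic retry-with-backoff wrapper can absorb transient read timeouts. This is plain Python, not part of tushare; the commented usage line assumes a configured `pro` client.

```python
import time

# Generic retry helper for flaky network calls such as the pro.query
# read timeouts described above. Purely illustrative; not tushare code.
def retry(call, attempts=3, base_delay=1.0, retriable=(OSError,)):
    for attempt in range(attempts):
        try:
            return call()
        except retriable:
            if attempt == attempts - 1:
                raise  # exhausted all attempts; re-raise the last error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Usage sketch (assumes a configured tushare `pro` client):
# df = retry(lambda: pro.query('daily', ts_code='000001.SZ'))
```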
Johnserf-Seed/TikTokDownload
api
374
Do you also often get "获取作品数据失败,正在重新获取" (failed to fetch post data, retrying)?
The message "获取作品数据失败,正在重新获取" (failed to fetch post data, retrying) appears frequently, and after a few retries the data is fetched successfully. In other words, it works, but this message keeps popping up. Does the same happen to you?
open
2023-03-28T04:52:20Z
2023-03-28T04:52:21Z
https://github.com/Johnserf-Seed/TikTokDownload/issues/374
[ "故障(bug)", "额外求助(help wanted)", "无效(invalid)" ]
zhengjinzhj
0
sczhou/CodeFormer
pytorch
273
scripts/crop_align_face.py broken
Hello. This file uses some deprecated functions, which can be fixed (like `ANTIALIAS`). The other big problem is that if `-o` is set to anything other than ".", the whole thing breaks: it combines the input and output paths and then uses the result. Another thing: why save only the biggest picture? Are there any flags to save everything?
open
2023-07-22T06:18:49Z
2023-07-22T06:18:49Z
https://github.com/sczhou/CodeFormer/issues/273
[]
Astra060
0
plotly/dash
jupyter
2,975
add new persistence_type: history_state
When basing the app around internal links which add entries to the browser history, it could be useful to have a persistence type which stores the values in the history state entries (`history.pushState(state, ...)`). I know that _ideally_ most/all such state would be part of the URL, as this has more advantages beyond preserving the state on back/forward. But sometimes this can be a bit tricky - using URL parameters for state does require a certain app architecture - so I think it could be useful to easily add some state to the history entry "under the hood". I think this could be reasonably easy to implement: do a `replaceState({...history.state, ...historyProps}, ...)` each time a "history-state persisted" property is changed, then restore these in a "popstate" event handler. (I could not find any previous discussion on this topic)
open
2024-09-03T18:23:57Z
2024-09-04T13:22:42Z
https://github.com/plotly/dash/issues/2975
[ "feature", "P3" ]
olejorgenb
0
keras-team/keras
data-science
20,857
Several Problems when using Keras Image Augmentation Layers to Augment Images and Masks for a Semantic Segmentation Dataset
In the following, I will try to tell you a bit of my “story” about how I read through the various layers and (partly outdated) tutorials on image augmentation for semantic segmentation, what problems and bugs I encountered, and how I was able to solve them, at least for my use case. I hope that I am reporting these issue(s) in the right place and that I can help the responsible developers with my “story”. I am currently using the latest versions of Keras (3.8) and Tensorflow (2.18.0) on an Ubuntu 23.10 machine, but have also observed the issues with other setups. My goal was to use some of the Image Augmentation Layers from Keras to augment the images AND masks in my semantic segmentation dataset. To do this I followed the [segmentation tutorial from TensorFlow](https://www.tensorflow.org/tutorials/images/segmentation) where a custom `Augment` class is used for augmentation. This code works fine because the `Augment` class uses only a single augmentation layer. But, for my custom code I wanted to use more augmentations. Therefore, I "combined" several of the augmentation layers, trying both the normal [Pipeline layer](https://keras.io/api/layers/preprocessing_layers/image_augmentation/pipeline/) and my preferred [RandomAugmentationPipeline layer from keras_cv](https://github.com/keras-team/keras-cv/blob/master/keras_cv/src/layers/preprocessing/random_augmentation_pipeline.py). I provide an example implementation in [this gist](https://colab.research.google.com/gist/RabJon/9f27d600301dcf6a734e8b663c802c7f/modification_of_segmentation.ipynb). The gist is a modification of the aforementioned [image segmentation tutorial from TensorFlow](https://www.tensorflow.org/tutorials/images/segmentation) where I intended to only change the `Augment` class slightly to fit my use case and to remove all the unnecessary parts. Unfortunately, I also had to touch the code that loads the dataset, because the dataset version used in the notebook was no longer supported.
But, this is another issue. Anyway, at the bottom of the notebook you can see some example images and masks from the data pipeline, and if you inspect them in more detail you can see that for some examples the images and masks do not match because they were augmented differently, which is obviously not the expected behaviour. You can also see that these mismatches already happen from the first epoch on. On my system with my custom use case (where I was using the RandomAugmentationPipeline layer) it happened only from the second epoch onward, which made debugging much more difficult. I first assumed that it was one specific augmentation layer that caused the problems, but after trying them one-by-one I found out that it is the combination of layers that causes the problems. So, I started to think about possible solutions and I also tried to use the Sequential model from Keras to combine the layers instead of the Pipeline layer, but the result remained the same. I found that the random seed is the only parameter which can be used to "control and sync" the augmentation of the images and masks, but I had already ensured that I always used the same seed. So, I started digging into the source code of the different augmentation layers and I found that most of the ones that I was using implement a `transform_segmentation_masks()` method which is, however, not used if the layers are called as described in the tutorial. This is because, in order to enforce the call of this method, images and masks must be passed to the same augmentation layer object as a dictionary with the keys “images” and “segmentation_masks”, instead of using two different augmentation layer objects for images and masks. However, I had not seen this type of call using a dictionary in any tutorial, neither for TensorFlow nor for Keras.
Nevertheless, I decided to change my code to make use of the `transform_segmentation_masks()` method, as I hoped that if such a method already existed, it would also process the images and masks correctly and thus avoid the mismatches. Unfortunately, this was not directly the case, because some of the augmentation layers changed the masks to float32 data types although the input was uint8 - even layers such as “RandomFlip”, which should not change or interpolate the data at all, but only shift it. So, I had to wrap all layers again in a custom layer, which casts the data back to the input data type before returning it:

```python
class ApplyAndCastAugmenter(BaseImageAugmentationLayer):

    def __init__(self, other_augmenter, **kwargs):
        super().__init__(**kwargs)
        self.other_augmenter = other_augmenter

    def call(self, inputs):
        output_dtypes = {}
        is_dict = isinstance(inputs, dict)
        if is_dict:
            for key, value in inputs.items():
                output_dtypes[key] = value.dtype
        else:
            output_dtypes = inputs.dtype

        outputs = self.other_augmenter(inputs)

        if is_dict:
            for key in outputs.keys():
                outputs[key] = keras.ops.cast(outputs[key], output_dtypes[key])
        else:
            outputs = keras.ops.cast(outputs, output_dtypes)

        return outputs
```

With this (admittedly unpleasant) work-around, I was able to fix all the remaining problems. Images and masks finally matched perfectly after augmentation, and this was immediately noticeable in a significant performance improvement in the training of my segmentation model. For the future I would now like to see that wrapping with my custom `ApplyAndCastAugmenter` is no longer necessary but is handled directly by the `transform_segmentation_masks()` method, and it would also be good to have a proper tutorial on image augmentation for semantic segmentation, or to update the old tutorials.
open
2025-02-04T11:08:42Z
2025-03-12T08:32:24Z
https://github.com/keras-team/keras/issues/20857
[ "type:Bug" ]
RabJon
3
jschneier/django-storages
django
567
The root folder is unsupported
I run the following code:

```
from django.core.management import call_command

def create_db_backup():
    call_command('dbbackup', compress=True, clean=True)
```

The backup is created and uploaded to my Dropbox account, but at the end I get the following error:

```
[2018-08-25 17:18:13,416: INFO/ForkPoolWorker-2] Writing file to default-2018-08-25-17-17-50.psql.gz
[2018-08-25 17:18:13,417: INFO/ForkPoolWorker-2] Request to files/get_metadata
[2018-08-25 17:18:13,951: INFO/ForkPoolWorker-2] Request to files/upload_session/start
[2018-08-25 17:18:32,623: INFO/ForkPoolWorker-2] Request to files/upload_session/append_v2
[2018-08-25 17:18:50,764: INFO/ForkPoolWorker-2] Request to files/upload_session/append_v2
[2018-08-25 17:21:52,031: INFO/ForkPoolWorker-2] Request to files/upload_session/finish
[2018-08-25 17:21:54,395: INFO/ForkPoolWorker-2] Request to files/get_metadata

BadInputError: BadInputError('d8e906319f920d75ffc769f3fe86fba4', 'Error in call to API function "files/get_metadata": request body: path: The root folder is unsupported.')
```

If I set any directory in root_path of DBBACKUP_STORAGE_OPTIONS, e.g.:

```
DBBACKUP_STORAGE_OPTIONS = {
    'oauth2_access_token': 'token',
    'root_path': '/dir/'
}
```

then the error changes to: TypeError: 'FolderMetadata' object is not subscriptable. The same as in the issue: https://github.com/jschneier/django-storages/issues/396

Do you have any idea what may be wrong? Below I enclose information about my configuration.

Requirements:

```
dropbox==9.0.0
django-dbbackup==3.2.0
django-storages==1.6.6
```

Settings:

```
DBBACKUP_STORAGE = 'storages.backends.dropbox.DropBoxStorage'
DBBACKUP_STORAGE_OPTIONS = {
    'oauth2_access_token': 'token'
}
```
closed
2018-08-25T17:45:17Z
2019-09-08T04:49:31Z
https://github.com/jschneier/django-storages/issues/567
[ "dropbox" ]
pwierzgala
3
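Background on the error above: Dropbox API v2 addresses the root folder as an empty string (not "/"), other paths start with "/" and carry no trailing slash, and - as the error message itself says - `files/get_metadata` cannot be called on the root at all. A small illustrative normalizer (not django-storages code):

```python
# Sketch of the Dropbox API v2 path convention that trips this storage
# backend up: the root is addressed as "" (empty string), every other
# path starts with "/" and has no trailing slash, and files/get_metadata
# cannot be called on the root at all. Illustrative helper only; this
# is not django-storages code.
def dropbox_path(root_path, name):
    parts = [p for p in (root_path + "/" + name).split("/") if p]
    full = "/" + "/".join(parts)
    return "" if full == "/" else full

assert dropbox_path("", "") == ""                       # root -> "" (get_metadata would fail here)
assert dropbox_path("/dir/", "backup.gz") == "/dir/backup.gz"
assert dropbox_path("", "backup.gz") == "/backup.gz"
```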
davidsandberg/facenet
tensorflow
459
Is SVM a good choice for large amount class classification?
I am trying to use SVM to classify unknown images. In my use case, I have almost 1400 classes of people to classify. But when I apply SVM in my pipeline, the performance is so bad that I really doubt SVM is the best choice. My pipeline is shown below.

1. Collect images in each class
   The amount of faces per person is not fixed, from about max 600 to min 20, and the angle of the face is not fixed either.
2. Embed with facenet & remove outliers
   After collecting all class images, input all of them to facenet class by class. I remove outliers in each class using [Local Outlier Factor](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) with n_neighbors=10.
3. Apply SVM as the final classifier
   This is the most difficult stage I have met.

After surveying the internet, I think maybe the reason is that facenet outputs a 128-dimensional vector, which is much less than the number of classes. Apart from that reason (the dimension difference between the embedding vector & the number of classes), I think there are still other places that may cause such a problem:

- the size of the bounding box is too small or the resolution may be too low
- the algorithm I chose, or some parameter that can be tuned

I hope I describe my problem well enough for everyone to understand. If there is still some information I need to provide, please let me know. Thank you.
closed
2017-09-17T09:53:56Z
2017-11-07T02:49:00Z
https://github.com/davidsandberg/facenet/issues/459
[]
posutsai
4
google-research/bert
nlp
609
Order of classes in test_results.tsv
When using BERT to make a prediction, I can see the prediction probabilities in the output file test_results.tsv, but I am not sure what the order of the classes is. How can I determine that?
open
2019-04-28T20:03:57Z
2019-07-25T18:47:59Z
https://github.com/google-research/bert/issues/609
[]
Crista23
1
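For the question above: as far as I can tell, the probability columns in test_results.tsv follow the label list returned by the processor's `get_labels()` method in run_classifier.py (e.g. `["0", "1"]` for ColaProcessor - check your own processor's list). Under that assumption, mapping predictions back to class names is a few lines of stdlib Python; the label list and TSV contents below are stand-ins.

```python
import csv
import io

# test_results.tsv has one row per input example and one probability
# column per class; the column order is assumed to follow the
# processor's get_labels() list. Both `labels` and `tsv` below are
# illustrative stand-ins for your own data.
labels = ["0", "1"]
tsv = "0.1\t0.9\n0.8\t0.2\n"  # stand-in for test_results.tsv contents

predictions = []
for row in csv.reader(io.StringIO(tsv), delimiter="\t"):
    probs = [float(p) for p in row]
    predictions.append(labels[probs.index(max(probs))])  # argmax -> label

print(predictions)  # → ['1', '0']
```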
dpgaspar/Flask-AppBuilder
flask
1,645
How can a flask Addon overwrite a template?
Sorry to post here, but no result in other places like Stack Overflow or the mailing list. I created a Flask addon using "flask fab create-addon". I would like to change the template appbuilder/general/security/login_oauth.html of a third-party app, so I have in my addon:

```
templates
  appbuilder
    general
      security
        login_oauth.html
```

But when I load the third-party application, my version of login_oauth.html is not loaded. I tried registering a blueprint as in [this post](https://stackoverflow.com/questions/32415245/how-to-load-templates-from-flask-extension) with the following code:

```
from flask import Blueprint

bp = Blueprint('fab_addon_fslogin', __name__, template_folder='templates')

class MyAddOnManager(BaseManager):
    def __init__(self, appbuilder):
        """
        Use the constructor to setup any config keys specific for your app.
        """
        super(MyAddOnManager, self).__init__(appbuilder)
        self.appbuilder.get_app.config.setdefault("MYADDON_KEY", "SOME VALUE")
        self.appbuilder.register_blueprint(bp)

    def register_views(self):
        """
        This method is called by AppBuilder when initializing, use it to add your views
        """
        pass

    def pre_process(self):
        pass

    def post_process(self):
        pass
```

But register_blueprint(bp) returns:

```
File "/home/cquiros/data/projects2017/personal/software/superset/addons/fab_addon_fslogin/fab_addon_fslogin/manager.py", line 24, in __init__
    self.appbuilder.register_blueprint(bp)
File "/home/cquiros/data/projects2017/personal/software/superset/env_superset/lib/python3.8/site-packages/Flask_AppBuilder-3.3.0-py3.8.egg/flask_appbuilder/base.py", line 643, in register_blueprint
    baseview.create_blueprint(
AttributeError: 'Blueprint' object has no attribute 'create_blueprint'
```
closed
2021-05-25T11:14:46Z
2021-09-06T20:42:00Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/1645
[ "stale" ]
qlands
2
deeppavlov/DeepPavlov
nlp
896
Bert_Squad context length longer than 512 tokens sequences
Hi, Currently I have a long context, and the answer cannot be extracted from the context when it exceeds a certain length. Is there any Python code to deal with long contexts? P.S. I am using a BERT-based model for context question answering.
closed
2019-06-23T14:37:55Z
2020-05-13T11:41:45Z
https://github.com/deeppavlov/DeepPavlov/issues/896
[]
Chunglwc
2
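A common workaround for contexts longer than BERT's 512-token limit is to split the context into overlapping windows, run QA on each window, and keep the best-scoring answer. A minimal, model-free sketch of just the windowing step (pure Python; the window and stride values are illustrative, not DeepPavlov defaults):

```python
# Split a long token sequence into overlapping windows so each piece
# fits a transformer's max sequence length. Window/stride values here
# are illustrative; a real pipeline must also reserve room for the
# question tokens and special tokens ([CLS]/[SEP]).
def sliding_windows(tokens, max_len=384, stride=128):
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += max_len - stride  # step forward, keeping `stride` overlap
    return windows

chunks = sliding_windows(list(range(1000)), max_len=384, stride=128)
print(len(chunks))  # → 4
```

Each window overlaps its neighbor by `stride` tokens so an answer span near a boundary still appears whole in at least one window.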
python-arq/arq
asyncio
220
New release?
Hello, It seems arq hasn't been updated since October. Also, there are some good PRs waiting to be reviewed. @samuelcolvin do you plan any updates to the project?
closed
2020-12-21T21:51:04Z
2021-04-26T11:55:53Z
https://github.com/python-arq/arq/issues/220
[]
erakli
7
modelscope/modelscope
nlp
581
AttributeError: SiameseUiePipeline: 'torch.device' object has no attribute 'lower'
In ModelScope, when I use the 'damo/nlp_structbert_siamese-uninlu_chinese-base' model and test the finetune example code from the model page, I get the error: AttributeError: SiameseUiePipeline: 'torch.device' object has no attribute 'lower'
closed
2023-10-10T06:22:25Z
2023-12-20T13:18:23Z
https://github.com/modelscope/modelscope/issues/581
[]
CourseAI2015
5
blacklanternsecurity/bbot
automation
1,533
HTTP_RESPONSE header_dict can't handle multiple headers with the same name
This is most impactful with Set-Cookie headers, where only the last Set-Cookie header will end up in the dict, even though multiple such headers are often expected.
closed
2024-07-07T18:32:19Z
2024-07-08T00:52:57Z
https://github.com/blacklanternsecurity/bbot/issues/1533
[ "bug" ]
liquidsec
1
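A generic stdlib illustration of the collapse described above (not bbot code): a dict keyed by header name can hold only one Set-Cookie value, while keeping the raw (name, value) pairs preserves them all.

```python
# A plain dict keyed by header name can only hold one value per name,
# so repeated Set-Cookie headers collapse to the last one. Keeping the
# raw (name, value) pairs preserves all of them.
raw_headers = [
    ("Content-Type", "text/html"),
    ("Set-Cookie", "a=1; Path=/"),
    ("Set-Cookie", "b=2; Path=/"),
]

header_dict = dict(raw_headers)  # lossy: last duplicate wins
cookies = [v for k, v in raw_headers if k.lower() == "set-cookie"]

print(header_dict["Set-Cookie"])  # → b=2; Path=/
print(cookies)                    # → ['a=1; Path=/', 'b=2; Path=/']
```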
pytest-dev/pytest-cov
pytest
425
pytest and local pytest plugin running in two separate processes, talking over http
This is a first general approach. I can go into details, but that would require some time since I need to filter out some sensitive data/code. But I'll summarise the best I can for the moment.

Essentially, we have a local pytest plugin api in `tests/fix_api.py` (snippet; method bodies elided):

```python
"""
Pytest plugin to interact with the api at http level
"""
import json
from urllib.parse import urljoin

import pytest
import requests


def pytest_addoption(parser):
    ...


@pytest.fixture(scope="session")
def api_url(request):
    ...


@pytest.fixture(scope="function")
def api(api_url):
    # Users readily available in the test suite
    users = {
        "demo": "demo123",
    }

    class Api:
        def __init__(self, url=None):
            ...

        def get(self, user, url, data=None, headers=None):
            return self._request("get", user, url, data=data, headers=headers)

        def put(self, user, url, data=None, headers=None):
            ...

        def post(self, user, url, data=None, headers=None):
            ...

        def patch(self, user, url, data=None, headers=None):
            ...

        def delete(self, user, url, data=None, headers=None):
            ...

        def _request(self, method_name, user, url, data=None, headers=None):
            ...

        def login(self, user, password=None):
            ...

        def logout(self, user):
            ...
```

And our `conftest.py`:

```python
import pytest

pytest_plugins = ("tests.fix_api",)
```

Then, a test example `test_auth.py`:

```python
def test_login_logout(api):
    resp = api.get(None, "/is_logged_in")
    assert resp.status_code == 401
    assert resp.json() == {"error": "Unauthenticated"}

    resp = api.post(None, "/login", {"user": "demo", "password": "demo1234"})
    assert resp.status_code == 401
    assert resp.json() == {"error": "Invalid Credentials. Please try again."}

    resp = api.get(None, "/is_logged_in")
    assert resp.status_code == 401

    resp = api.post(None, "/login", {"user": "demo", "password": "demo123"})
    assert resp.status_code == 200
    assert resp.json() == {"success": "Authenticated", "username": "demo"}

    resp = api.get(None, "/is_logged_in")
    assert resp.status_code == 200
    assert resp.json() == {"username": "demo"}

    resp = api.post(None, "/logout")
    assert resp.status_code == 200
    assert resp.json() == {"success": "logged out"}

    resp = api.get(None, "/is_logged_in")
    assert resp.status_code == 401
```

Running `pytest tests/test_auth.py --cov --cov-report term-missing`, I got:

```
views/auth.py      45     26    42%   20-24, 30-39, 48-55, 63-65, 71
```

Basically, none of the functions `def login()`, `def logout()` and `def is_logged_in()` is being reported as covered. Since pytest and the local pytest plugin are running in two separate processes and talking over http, I'm wondering how I could make coverage work here, if that is possible.
closed
2020-08-17T20:29:23Z
2020-09-06T07:03:52Z
https://github.com/pytest-dev/pytest-cov/issues/425
[]
alanwilter
13
robotframework/robotframework
automation
4,523
Unit test `test_parse_time_with_now_and_utc` fails around DST change
Very minor, but reporting it as it was creating me headaches running tests just exactly _today_. It seems to me that the `test_parse_time_with_now_and_utc` test will fail if run on the day of a DST change (i.e. **today**) or the day before, as it does comparisons with +/- 1 day and a hardcoded seconds difference value, which is incorrect if the DST change happens between the two times compared.

Specifically, today (last night DST changed here) I see these cases failing:

```
('now - 1 day 100 seconds', -86500),
('NOW - 1D 10H 1MIN 10S', -122470)]:
```

Probably yesterday (so just before the DST change) the failing ones would be those with `+`. As written above, I'm not sure this is worth investigating and fixing, but I just wanted to report it, as it by chance gave me some worries while rebuilding and retesting the new version of Robot Framework. I will just redo the tests tomorrow ;-)

**Version Information:**
Tried with both 6.0 and 5.0.1

**Steps to reproduce:**
Run the unit tests, specifically on the day immediately before or after a DST change.

**Error message and traceback:**

```
======================================================================
FAIL: test_parse_time_with_now_and_utc (test_robottime.TestTime)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/foo/rpmbuild/BUILD/robotframework-6.0/utest/utils/test_robottime.py", line 341, in test_parse_time_with_now_and_utc
    assert_true(expected <= parsed <= expected + 1),
  File "/home/foo/rpmbuild/BUILD/robotframework-6.0/utest/../src/robot/utils/asserts.py", line 115, in assert_true
    _report_failure(msg)
  File "/home/foo/rpmbuild/BUILD/robotframework-6.0/utest/../src/robot/utils/asserts.py", line 218, in _report_failure
    raise AssertionError()
AssertionError
----------------------------------------------------------------------
```
closed
2022-10-30T11:37:08Z
2022-10-30T18:36:13Z
https://github.com/robotframework/robotframework/issues/4523
[ "bug", "priority: low" ]
fedepell
1
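The failure mode above can be reproduced with plain `datetime` arithmetic: "1 day" as elapsed time is 86400 seconds, but across a DST transition the same wall-clock difference is 23 or 25 hours of elapsed time. The sketch hardcodes the 2022 European fall-back offsets (CEST → CET) so it does not depend on the system timezone database; it is a generic illustration, not Robot Framework code.

```python
from datetime import datetime, timedelta, timezone

# Fixed offsets standing in for the 2022-10-30 European fall-back
# change: CEST (UTC+2) the day before, CET (UTC+1) the day after.
cest = timezone(timedelta(hours=2))  # before the change
cet = timezone(timedelta(hours=1))   # after the change

before = datetime(2022, 10, 29, 12, 0, tzinfo=cest)
after = datetime(2022, 10, 30, 12, 0, tzinfo=cet)  # same wall-clock hour

elapsed = (after - before).total_seconds()
print(elapsed)  # → 90000.0 (25 h of elapsed time, not the 86400 s a hardcoded "1 day" assumes)
```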
polarsource/polar
fastapi
4,782
Subscription (Webhook): Renewal event
### Description Currently, you need to listen to `order.created` with `billing_reason=subscription_cycle` to get notified about subscription renewals. Trigger a `subscription.renewed` event to offer a more intuitive and self-explanatory webhook event.
open
2025-01-03T15:25:39Z
2025-01-03T15:25:43Z
https://github.com/polarsource/polar/issues/4782
[ "feature", "changelog", "dx" ]
birkjernstrom
0
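The renewal filter described above reduces to a few lines; the payload shape here is a minimal stand-in, not Polar's full webhook schema.

```python
# Schematic webhook dispatch: treat `order.created` with
# billing_reason == "subscription_cycle" as a renewal, as described in
# the issue above. The payload layout is a minimal stand-in reduced to
# the two fields mentioned, not Polar's full schema.
def is_renewal(event):
    return (
        event.get("type") == "order.created"
        and event.get("data", {}).get("billing_reason") == "subscription_cycle"
    )

assert is_renewal({"type": "order.created",
                   "data": {"billing_reason": "subscription_cycle"}})
assert not is_renewal({"type": "order.created",
                       "data": {"billing_reason": "purchase"}})
```

A dedicated `subscription.renewed` event would let the handler dispatch on `type` alone instead of inspecting the payload.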
tflearn/tflearn
data-science
928
IOError: [Errno socket error] [Errno 110] Connection timed out
hello, When I run `python convnet_mnist.py`, it still shows this error:

```
File "/home/xxx/local/anaconda2/lib/python2.7/socket.py", line 575, in create_connection
    raise err
IOError: [Errno socket error] [Errno 110] Connection timed out
```

Is the URL bad?
open
2017-10-10T04:07:16Z
2017-10-10T04:07:16Z
https://github.com/tflearn/tflearn/issues/928
[]
PapaMadeleine2022
0
aimhubio/aim
data-visualization
3,286
Remove or prune artifacts along with the run
## 🚀 Feature

When deleting a run, artifacts should be removed too. If that is not possible, maybe `aim storage prune` could remove dangling artifacts?

### Motivation

Not filling up my hard drive with checkpoints.

### Additional context

https://github.com/aimhubio/aim/pull/3164
https://github.com/aimhubio/aim/issues/3234
open
2025-02-07T15:57:52Z
2025-02-22T21:48:00Z
https://github.com/aimhubio/aim/issues/3286
[ "type / enhancement" ]
nlgranger
2
PaddlePaddle/models
computer-vision
5,696
Please add a table of contents to this page; otherwise scrolling through it from top to bottom is very inconvenient
https://github.com/PaddlePaddle/models/blob/release/2.4/docs/official/README.md Best wishes~
open
2023-01-03T06:37:34Z
2024-02-26T05:07:48Z
https://github.com/PaddlePaddle/models/issues/5696
[]
we-enjoy-today
0
numba/numba
numpy
9,598
NumPy 2.0 incompatibility - A use of `np.complex_` still remains
When using `np.corrcoef`, the following error is produced with NumPy 2.0: ``` AttributeError: `np.complex_` was removed in the NumPy 2.0 release. Use `np.complex128` instead. ``` This is because its implementation still uses `np.complex_`: https://github.com/numba/numba/blob/556545c5b2b162574c600490a855ba8856255154/numba/np/arraymath.py#L2904
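Until a patched release is out, the general pattern for coping with such removed aliases is an attribute lookup with a fallback chain. A stdlib-only sketch of the pattern (using `math` to illustrate, since the idea is library-independent):

```python
def first_attr(module, *names):
    # Return the first attribute that exists on the module. This lets
    # code support names renamed or removed across library versions,
    # e.g. np.complex_ -> np.complex128 in NumPy 2.0.
    for name in names:
        if hasattr(module, name):
            return getattr(module, name)
    raise AttributeError(f"none of {names!r} found on {module.__name__}")

import math
print(first_attr(math, "tau", "pi"))      # 6.283185307179586 (tau exists)
print(first_attr(math, "no_such", "pi"))  # 3.141592653589793 (falls back)
```

For this specific bug the direct fix is simply replacing `np.complex_` with `np.complex128` in `arraymath.py`, which is valid on both NumPy 1.x and 2.0.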
closed
2024-05-31T12:04:16Z
2024-06-10T11:33:03Z
https://github.com/numba/numba/issues/9598
[ "bug - failure to compile", "NumPy 2.0" ]
gmarkall
0
python-security/pyt
flask
46
Trim the "Reassigned in:" nodes to the ones that are relevant
So if we have the following code:
```python
@app.route('/menu', methods=['POST'])
def menu():
    param = request.form['suggestion']
    command = 'echo ' + param + ' >> ' + 'menu.txt'
    hey = 'echo ' + param + ' >> ' + 'menu.txt'
    yo = 'echo ' + hey + ' >> ' + 'menu.txt'
    subprocess.call(command, shell=True)
    with open('menu.txt','r') as f:
        menu = f.read()
    return render_template('command_injection.html', menu=menu)
```
We show the vulnerability output as:
```
1 vulnerability found:
Vulnerability 1:
File: example/vulnerable_code/command_injection.py
 > User input at line 15, trigger word "form[":
     param = request.form['suggestion']
Reassigned in:
    File: example/vulnerable_code/command_injection.py
     > Line 16: command = 'echo ' + param + ' >> ' + 'menu.txt'
    File: example/vulnerable_code/command_injection.py
     > Line 17: hey = 'echo ' + param + ' >> ' + 'menu.txt'
    File: example/vulnerable_code/command_injection.py
     > Line 18: yo = 'echo ' + hey + ' >> ' + 'menu.txt'
File: example/vulnerable_code/command_injection.py
 > reaches line 20, trigger word "subprocess.call(":
    subprocess.call(command,shell=True)
```
Where we don't really care about Line 17 and 18 in the output, right?

I ran into this while doing https://github.com/python-security/pyt/issues/46, once I fix this then I can make the PR fixing both of them.
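Trimming the chain amounts to a backward slice: keep only the assignments whose targets feed (transitively) into the variable the sink reads. A toy sketch of the idea, using plain `(target, used_vars)` pairs rather than pyt's internal node types:

```python
def backward_slice(assignments, sink_uses):
    # assignments: ordered (target, used_vars) pairs between source and sink.
    # sink_uses: the variables the sink call actually reads.
    # Walk backwards, collecting only assignments that feed the sink.
    needed = set(sink_uses)
    kept = []
    for target, uses in reversed(assignments):
        if target in needed:
            kept.append(target)
            needed |= set(uses)
    kept.reverse()
    return kept

chain = [
    ("command", {"param"}),  # line 16
    ("hey", {"param"}),      # line 17
    ("yo", {"hey"}),         # line 18
]
# subprocess.call(command, ...) only reads `command`, so lines 17-18 drop out.
print(backward_slice(chain, {"command"}))  # ['command']
```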
closed
2017-05-22T23:36:39Z
2017-06-05T17:36:28Z
https://github.com/python-security/pyt/issues/46
[]
KevinHock
5
awtkns/fastapi-crudrouter
fastapi
184
Overriding GET route does not properly remove existing route
Just stumbled upon this (and admit I haven't actually tested it yet, apologies for that) but it _looks_ like `CRUDGenerator.get` would not properly remove its existing GET route when overriding it: https://github.com/awtkns/fastapi-crudrouter/blob/9b829865d85113a3f16f94c029502a9a584d47bb/fastapi_crudrouter/core/_base.py#L149 Looks like a typo to me - shouldn't it rather be `"GET"` instead of `"Get"` here? The corresponding removal code compares to `route.methods` and IIRC FastAPI does uppercase all the methods: https://github.com/awtkns/fastapi-crudrouter/blob/26702027aa0b823c105ee1b96e1b2b3f46a3b742/fastapi_crudrouter/core/_base.py#L170-L178 Best regards, Holger
open
2023-03-29T14:19:07Z
2023-03-29T14:19:07Z
https://github.com/awtkns/fastapi-crudrouter/issues/184
[]
hjoukl
0
keras-team/autokeras
tensorflow
1,466
How to furnish AutoKeras in Anaconda on M1-chip MacBook Air?
First, sorry for my basic question. This is my first time using macOS 11.1, and I wonder how to install and build a new AutoKeras-compatible virtual environment. Could you kindly advise? During the "usual" steps below, the kernel dies :<

I'm using Jupyter Notebook in Anaconda, and the environment is a virtual one.

---

```python
import pandas as pd
import gc
import random
import numpy as np
from datetime import datetime, date, timedelta
import math
import re
import sys
import codecs
import pickle
import autokeras as ak
```
closed
2020-12-18T11:12:49Z
2021-09-28T22:06:35Z
https://github.com/keras-team/autokeras/issues/1466
[ "wontfix" ]
jisho-iemoto
9
huggingface/transformers
pytorch
36,045
Optimization -OO crashes docstring handling
### System Info

transformers = 4.48.1
Python = 3.12

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Use the `pipeline_flux.calculate_shift` function with `python -OO`.

Code runs fine with no Python interpreter optimization but crashes with `-OO` with:

```
File "/home/me/Documents/repo/train.py", line 25, in <module>
    from diffusers.pipelines.flux.pipeline_flux import calculate_shift
File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 20, in <module>
    from transformers import (
File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1806, in __getattr__
    value = getattr(module, name)
            ^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1805, in __getattr__
    module = self._get_module(self._class_to_module[name])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1819, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
'NoneType' object has no attribute 'split'
```

This is related to the removal of docstrings with `-OO`.

### Expected behavior

I'd expect no crash. I assume `lines = func_doc.split("\n")` could be replaced with:
`lines = func_doc.split("\n") if func_doc else []`
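A self-contained sketch of the proposed guard, showing why it matters: under `python -OO` docstrings are stripped, so `__doc__` is `None` for every function, and an unguarded `.split()` raises exactly the reported `AttributeError`/`'NoneType' object has no attribute 'split'`:

```python
def doc_lines(func):
    # Guard before splitting: __doc__ is None under -OO (and for any
    # function written without a docstring).
    func_doc = func.__doc__
    return func_doc.split("\n") if func_doc else []

def documented():
    """First line.
    Second line."""

def undocumented():
    pass

print(doc_lines(documented)[0])  # First line.
print(doc_lines(undocumented))   # []
```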
closed
2025-02-05T11:31:43Z
2025-02-06T15:31:24Z
https://github.com/huggingface/transformers/issues/36045
[ "bug" ]
donthomasitos
3
ultralytics/ultralytics
deep-learning
18,769
Can yolo11 count objects in Classification?
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

Hi! I want to know if a classification model can count classes. For example, there are multiple bottles in a picture and classification only returns "bottle". Can the classification model be used to count how many bottles are present? Thanks.

### Additional

_No response_
open
2025-01-20T06:05:16Z
2025-01-20T08:40:11Z
https://github.com/ultralytics/ultralytics/issues/18769
[ "question", "classify" ]
Taimoor505
6
sczhou/CodeFormer
pytorch
384
macOS Video Enhancement Issue
Environment: macOS Monterey 12.2.1

Command:
```
sudo python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 0.1 --input_path /Users/justinhan/Desktop/test/7.mp4
```

Error info:
```
[vost#0:0/rawvideo @ 0x7fb4824118c0] Error submitting a packet to the muxer: Broken pipe
    Last message repeated 1 times
[out#0/rawvideo @ 0x7fb48240e840] Error muxing a packet
[out#0/rawvideo @ 0x7fb48240e840] Task finished with error code: -32 (Broken pipe)
[out#0/rawvideo @ 0x7fb48240e840] Terminating thread with return code -32 (Broken pipe)
zsh: killed     sudo python inference_codeformer.py --bg_upsampler realesrgan --face_upsample
[out#0/rawvideo @ 0x7fb48240e840] Error writing trailer: Broken pipe
[out#0/rawvideo @ 0x7fb48240e840] Error closing file: Broken pipe
```
open
2024-07-01T07:39:46Z
2024-07-01T07:39:46Z
https://github.com/sczhou/CodeFormer/issues/384
[]
qunge5211314
0
google-research/bert
nlp
725
how to view the prediction result of each sample?
BERT is used for text prediction. The output only contains the accuracy rate; how can I view the prediction result for each sample?
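If you ran `run_classifier.py` with `--do_predict`, the per-sample class probabilities should be written to `test_results.tsv` in the output directory (one row per example, one tab-separated probability per class). A minimal sketch that turns each row into a predicted label (the label order is an assumption; it must match the order of your `get_labels()`):

```python
import csv
import io

def predicted_labels(tsv_text, label_list):
    # Each row of test_results.tsv holds one probability per class;
    # the predicted label for that example is the argmax of the row.
    labels = []
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        probs = [float(p) for p in row]
        labels.append(label_list[probs.index(max(probs))])
    return labels

sample = "0.1\t0.9\n0.8\t0.2\n"  # two examples, two classes
print(predicted_labels(sample, ["neg", "pos"]))  # ['pos', 'neg']
```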
open
2019-06-27T08:01:21Z
2019-08-29T15:28:56Z
https://github.com/google-research/bert/issues/725
[]
zhangrenjie666666
1
developmentseed/lonboard
jupyter
632
Pass color to `.viz` for a given table
Not sure if the title is the best description but this is what I'd like to do. At the moment I have two different tables (minor_lines and power_lines) that have more than one type of geometry (lines and polygons) in their geometry column. I'd like to plot both tables with different colors that I can choose. At the moment these colors are different but assigned automatically. ``` lonboard.viz([minor_lines, power_lines]) ``` ![Screenshot 2024-09-13 at 2 02 04 PM](https://github.com/user-attachments/assets/55dd4354-7e7e-4ecf-a001-bf8230753f3f) I want everything related to `minor_lines` to be one color, and for `power_lines` a different one.
closed
2024-09-13T20:03:28Z
2024-09-16T19:57:34Z
https://github.com/developmentseed/lonboard/issues/632
[]
ncclementi
4
Sanster/IOPaint
pytorch
304
Hi author, I would like to support you in an effective way, but Ko-fi does not allow users who are also from China to pay you
According to PayPal, it considers supporters who are also from China to be violating "international law", which leaves me helpless. I hope you can provide a payment method that Chinese users can use, such as Alipay or WeChat Pay. If you think this is possible, please reply. My sincere thanks for your hard work and dedication! ![Snipaste_2023-05-10_04-53-22](https://github.com/Sanster/lama-cleaner/assets/72090841/5e1fc5e8-b17d-498a-9d0e-6e04d62b08cd)
closed
2023-05-09T20:51:50Z
2023-05-10T17:42:24Z
https://github.com/Sanster/IOPaint/issues/304
[]
ieatchili2024
2
dbfixtures/pytest-postgresql
pytest
540
Unable to connect to Postgresql in a docker
### What action do you want to perform

Connect to a postgresql db in a docker container

### What are the results

I copied the example from the README.md:

```python
postgresql_in_docker = factories.postgresql_noproc()
postresql = factories.postgresql("postgresql_in_docker", db_name="test")

def test_postgres_docker(postresql):
    """Run test."""
    cur = postgresql.cursor()
    cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);")
    postgresql.commit()
    cur.close()
```

First of all, there are two mistakes in this example: the name of the fixture has to be `postgresql` instead of `postresql`, and the argument `db_name` has to be `dbname`. After I made the changes and the code was syntactically correct, I got the error:

AttributeError: 'function' object has no attribute 'cursor'

### What are the expected results

Get a cursor and use it to insert some data in the database in the docker container
closed
2021-12-28T10:05:24Z
2022-01-10T11:11:25Z
https://github.com/dbfixtures/pytest-postgresql/issues/540
[]
igondiu
1
babysor/MockingBird
pytorch
794
Error during PPG training, please help take a look
PPG preprocessing went smoothly and I changed the ppg2mel.yaml path, but I cannot resolve the following error no matter what. Could anyone with experience please take a look and share?

```
D:\MockingBird-main>python ppg2mel_train.py --config .\ppg2mel\saved_models\ppg2mel.yaml --oneshotvc
Traceback (most recent call last):
  File "D:\MockingBird-main\ppg2mel_train.py", line 67, in <module>
    main()
  File "D:\MockingBird-main\ppg2mel_train.py", line 50, in main
    config = HpsYaml(paras.config)
  File "D:\MockingBird-main\utils\load_yaml.py", line 44, in __init__
    hps = load_hparams(yaml_file)
  File "D:\MockingBird-main\utils\load_yaml.py", line 8, in load_hparams
    for doc in docs:
  File "C:\Users\benny\AppData\Local\Programs\Python\Python39\lib\site-packages\yaml\__init__.py", line 127, in load_all
    loader = Loader(stream)
  File "C:\Users\benny\AppData\Local\Programs\Python\Python39\lib\site-packages\yaml\loader.py", line 34, in __init__
    Reader.__init__(self, stream)
  File "C:\Users\benny\AppData\Local\Programs\Python\Python39\lib\site-packages\yaml\reader.py", line 85, in __init__
    self.determine_encoding()
  File "C:\Users\benny\AppData\Local\Programs\Python\Python39\lib\site-packages\yaml\reader.py", line 124, in determine_encoding
    self.update_raw()
  File "C:\Users\benny\AppData\Local\Programs\Python\Python39\lib\site-packages\yaml\reader.py", line 178, in update_raw
    data = self.stream.read(size)
UnicodeDecodeError: 'gbk' codec can't decode byte 0x86 in position 176: illegal multibyte sequence
```
open
2022-12-01T14:03:43Z
2022-12-03T02:45:27Z
https://github.com/babysor/MockingBird/issues/794
[]
benny1227
1
AirtestProject/Airtest
automation
1,103
Airtest report issue
When generating the report, I used --export to output static files. The /static-related files were exported, but the screenshot files were not.
open
2023-02-08T06:49:48Z
2023-02-08T06:49:48Z
https://github.com/AirtestProject/Airtest/issues/1103
[]
jinx0321
0
Yorko/mlcourse.ai
plotly
664
jupyter_russian/project_alice/week2_analysis_hypotheses.ipynb Coursera specialization
On Coursera, the first question differs from the one given in the file. Instead of the number of unique lengths of the new sparse matrices, it asks: Question 1. What are the lengths (number of sessions) of the new 16 sparse matrices? These lengths need to be written to a file, separated by spaces. It can be checked here: https://www.coursera.org/learn/data-analysis-project/programming/lWVOk/podghotovka-i-piervichnyi-analiz-dannykh/submission It is also worth adding to the second question that the grader accepts only "YES/NO" as yes/no answers; case matters.
closed
2020-04-21T12:16:17Z
2020-04-21T13:20:09Z
https://github.com/Yorko/mlcourse.ai/issues/664
[]
eye-shield-77
1
explosion/spaCy
machine-learning
12,659
FIXED: Pydantic issubclass error for python 3.8 and 3.9
**UPDATE**: This is now fixed for spacy v3.4 and v3.5 with the release of Pydantic v1.10.8. New installs with `pip install spacy` should work without issue.

To fix an existing venv for spacy v3.4 or v3.5, please upgrade Pydantic to the latest compatible version with:

```shell
python -m pip install -U pydantic spacy
```

Or for a specific version of spacy:

```shell
python -m pip install -U pydantic spacy==3.4.4
```

For spacy v3.2 and v3.3, we have published patch releases with fixes for the `typing_extension` requirement. Upgrade to spacy v3.2.6+ or v3.3.3+:

```shell
python -m pip install 'spacy~=3.2.6'
```

```shell
python -m pip install 'spacy~=3.3.3'
```

---------

Original report:

There appears to be a bug in Pydantic v1.10.7 and earlier related to the recent release of `typing_extensions` v4.6.0 that causes errors for `import spacy` and any other spacy commands for python 3.8 and python 3.9. We're hoping that there will be an upstream fix soon. You can follow this issue for more details: https://github.com/pydantic/pydantic/issues/5821

As a workaround, you can add `typing_extensions<4.6.0` to your requirements.

----------

The error traceback looks like this:

```
  File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
  File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
  File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
  File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
  File "pydantic/fields.py", line 661, in pydantic.fields.ModelField._type_analysis
  File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
  File "/usr/lib/python3.8/typing.py", line 774, in __subclasscheck__
    return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
```
closed
2023-05-23T08:43:17Z
2023-08-19T00:02:05Z
https://github.com/explosion/spaCy/issues/12659
[ "bug", "third-party" ]
adrianeboyd
13
aiortc/aiortc
asyncio
112
Issues using together with Janus gateway
Hello, thanks again for the great lib! I am trying to use an aiortc-based client to send video via [Janus gateway](https://janus.conf.meetecho.com/). However, I am encountering an issue: when connecting with the client, ICE etc. always goes through fine, but it takes up to 1-2 minutes before video starts streaming. Looking into `chrome://webrtc-internals`, it seems the reason is that Chrome initially cannot determine which encoding is used. In the stats for the aiortc peer connection it says `codecImplementationName: unknown` and there are lots of PLIs sent back to the client. After 1-2 minutes it changes to `libvpx` and video starts streaming, as can be seen in the PLI graph: ![image](https://user-images.githubusercontent.com/24483099/50352443-7b3d2d80-0545-11e9-9d5c-4fbabda10c1d.png)

Streaming video directly from browser to browser via Janus works fine. Also streaming aiortc to browser works fine.

aiortc SDP: https://pastebin.com/4v6JRALa
Janus SDP: https://pastebin.com/gmJdgs5b
My aiortc code: https://gist.github.com/tobiasfriden/736d62cd103d70936558e967463fa8f5
Using this to run Janus with docker: https://github.com/linagora/docker-janus-gateway

Greatly appreciate any help!

Edit: The issue seems to be that the PLI packets are not forwarded to the `RTCRtpSender` correctly. If I add some prints I can see PLIs received in the `RTCDtlsTransport`, but it cannot find the corresponding receiver. Printing `self._rtp_router.ssrc_tables` shows an empty dict, so it seems the receiver is never registered.
closed
2018-12-21T16:34:27Z
2018-12-25T12:08:40Z
https://github.com/aiortc/aiortc/issues/112
[]
tobiasfriden
4
widgetti/solara
jupyter
508
task decorator & cancelled error
Hello! Nice work on this, very cool. I am using (potentially misusing) the task decorator functionality, and I am getting the errors below, on the bleeding-edge version. You can see it takes a few slides of the slider until it errors. Maybe because I'm creating so many tasks? I tried wrapping different parts of the code in try/excepts to no avail.

```python
import asyncio

from solara.lab import task
from reacton import use_state
import numpy as np
import solara


@task()
async def debounce_update(value):
    await asyncio.sleep(1)
    return value


@solara.component
def Page():
    im_idx, set_im_idx = use_state(0)
    plotly_im_idx, set_plotly_im_idx = use_state(0)

    def on_slider_change(value):
        set_im_idx(value)
        if debounce_update.pending:
            debounce_update.cancel()
        debounce_update(value)
        if debounce_update.finished:
            new_idx = debounce_update.value
            if new_idx == im_idx:
                set_plotly_im_idx(new_idx)

    slider = solara.SliderInt(
        label="Image Index",
        min=0,
        max=len(image_data) - 1,
        step=1,
        value=im_idx,
        on_value=on_slider_change,
    )

    with solara.Card() as main:
        solara.VBox([slider])

    if debounce_update.finished:
        print("finished")
    if debounce_update.cancelled:
        print("cancelled")
    if debounce_update.pending:
        print("pending")
    if debounce_update.error:
        print("ERRRRRROOOOOOOR")

    return main
```

<details><summary>Details</summary>
<p>

```
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
finished
finished
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
finished
finished
pending
pending
pending
pending
pending
pending
pending
pending
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
    thread_event_loop.run_until_complete(current_task)
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
    await runner()
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
    self._last_value = value = await self.function(*args, **kwargs)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
    await asyncio.sleep(1)
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
    return await future
           ^^^^^^^^^^^^
asyncio.exceptions.CancelledError
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
    thread_event_loop.run_until_complete(current_task)
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
    await runner()
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
    self._last_value = value = await self.function(*args, **kwargs)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
    await asyncio.sleep(1)
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
    return await future
           ^^^^^^^^^^^^
asyncio.exceptions.CancelledError
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
    thread_event_loop.run_until_complete(current_task)
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
    await runner()
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
    self._last_value = value = await self.function(*args, **kwargs)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
    await asyncio.sleep(1)
  File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
    return await future
           ^^^^^^^^^^^^
asyncio.exceptions.CancelledError
finished
finished
```

</p>
</details>
open
2024-02-18T05:51:34Z
2024-02-19T18:21:23Z
https://github.com/widgetti/solara/issues/508
[ "bug", "good first issue", "help wanted" ]
swelborn
3
fastapi-admin/fastapi-admin
fastapi
22
When will Postgres be supported?
git@github.com:tiangolo/full-stack-fastapi-postgresql.git
closed
2020-11-27T06:31:18Z
2020-12-01T01:24:09Z
https://github.com/fastapi-admin/fastapi-admin/issues/22
[]
harrywu001
1
PokeAPI/pokeapi
graphql
542
API's slow response
I have been using this API for some weeks and it was working perfectly (I am using its Pokepy module), but today the API is taking 6 minutes to respond when getting a Pokémon or anything. I have tested a lot; I used pokebase and pokepy to check, but both take a long time to get the data from the website. Please look into it, since my Discord bot is totally shut down due to this.

Here is proof of the output when I try it in the console; it takes 6 minutes to respond: ![Screenshot_2020-12-02-16-15-56-92_c759c44d10a956b96f85cc66750ff86e](https://user-images.githubusercontent.com/74323090/100862890-c4c87e00-34b9-11eb-86f5-636922f6266c.jpg)

Thank you
closed
2020-12-02T10:46:50Z
2020-12-04T18:50:29Z
https://github.com/PokeAPI/pokeapi/issues/542
[ "invalid", "cloudflare" ]
AryanXdProYT
11
miguelgrinberg/python-socketio
asyncio
131
Async send() not working
Hi all, I have a small app which I want to read a log file on disk and send its contents live to connected web clients. I can `socketio.send()` messages successfully in a connect handler, but when I add my `follow_file()` loop nothing is sent, before/during/after the loop. I understand file I/O is blocking and I hope to have accounted for that, but I am clearly missing something.

In `on_connect_send_logs()`, each time a client connects I see the contents of `build.log` printed to stdout, and when the end of the file is reached the app prints the following continuously every second:

```
follow_file - for
follow_file - blocking/sleep
```

If I append something to `build.log` it gets printed to stdout, so `on_connect_send_logs()` seems to work, but the `ws.send()` calls never seem to have any effect. If I remove the `for line in follow_file('/build.log'):` loop my client receives the frames 'A', 'B', 'C'.

I am running this via uWSGI in single-threaded mode, with `gevent_uwsgi`. Nginx is acting as a reverse proxy using `proxy_pass`.

This file is the app logic, `project/views.py`:

```python
from flask_cas import CAS
from flask_cas import login_required
import flask_socketio as ws

from project import app, socketio
from project.util import is_admin

# Initialize SSO provider
cas = CAS(app)

[...] TRIMMED [...]

@socketio.on('connect', namespace='/logs')
@login_required
def on_connect_send_logs():
    user = cas.username
    if is_admin(user):
        ws.send("A")
        for line in follow_file('/build.log'):
            print("for line")
            print(line)
            ws.send(line)
            sys.stdout.flush()
            socketio.sleep(0)
        ws.send("B")
        ws.send("C")
    else:
        ws.disconnect()

'''
Follow the live contents of a text file.
https://code.activestate.com/recipes/578424-tailing-a-live-log-file-with-python/
'''
def follow_file(filename):
    stream = open(filename, 'r')
    line = ''
    for block in iter(lambda: stream.read(1024), None):
        print("follow_file - for")
        if '\n' in block:
            print("follow_file - newline")
            # Only enter this block if we have at least one line to yield.
            # The +[''] part is to catch the corner case of when a block
            # ends in a newline, in which case it would repeat a line.
            for line in (line+block).splitlines(True)+['']:
                if line.endswith('\n'):
                    yield line
            # When exiting the for loop, 'line' has any remaining text.
        elif not block:
            # Wait for data.
            print("follow_file - blocking/sleep")
            socketio.sleep(1)
```

This file starts the Flask and SocketIO server and loads the logic, `project/__init__.py`:

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
app.config.from_object('config')
app.secret_key = app.config['SECRET_KEY']

socketio = SocketIO(app, async_mode="gevent_uwsgi", allow_upgrades=True, debug=True)

# app.config is now an object containing the values of config.py
from project import views
```

This file is called by uWSGI, `project/wsgi.py`:

```python
"""This is for running a production server via uWSGI"""
from project import app, socketio

application = app

# Don't start the built-in uwsgi server when called from uWSGI
# Start the app by calling socketio's own run method, which starts the app too
if __name__ == '__main__':
    socketio.run(debug=True)
```

Thanks in advance! Would love to get this working.
closed
2017-08-18T16:42:57Z
2019-01-13T22:21:11Z
https://github.com/miguelgrinberg/python-socketio/issues/131
[ "question" ]
vladionescu
12
wkentaro/labelme
deep-learning
1,564
Does not launch after installing with pip on windows11
### Provide environment information

```
(labelme_env) PS C:\Users\irema> python --version
Python 3.12.9
(labelme_env) PS C:\Users\irema> python -m pip list
Package            Version
------------------ -----------
annotated-types    0.7.0
beautifulsoup4     4.13.3
certifi            2025.1.31
charset-normalizer 3.4.1
click              8.1.8
colorama           0.4.6
coloredlogs        15.0.1
contourpy          1.3.1
cycler             0.12.1
filelock           3.18.0
flatbuffers        25.2.10
fonttools          4.56.0
gdown              5.2.0
humanfriendly      10.0
idna               3.10
imageio            2.37.0
imgviz             1.7.6
kiwisolver         1.4.8
labelme            5.8.0
lazy_loader        0.4
loguru             0.7.3
matplotlib         3.10.1
mpmath             1.3.0
natsort            8.4.0
networkx           3.4.2
numpy              2.2.4
onnxruntime        1.21.0
onnxruntime-gpu    1.21.0
osam               0.2.3
packaging          24.2
pillow             11.1.0
pip                25.0.1
protobuf           6.30.1
pydantic           2.10.6
pydantic_core      2.27.2
pyparsing          3.2.1
PyQt5              5.15.11
PyQt5-Qt5          5.15.2
PyQt5_sip          12.17.0
pyreadline3        3.5.4
PySocks            1.7.1
python-dateutil    2.9.0.post0
PyYAML             6.0.2
QtPy               2.4.3
requests           2.32.3
scikit-image       0.25.2
scipy              1.15.2
six                1.17.0
soupsieve          2.6
sympy              1.13.3
termcolor          2.5.0
tifffile           2025.3.13
tqdm               4.67.1
typing_extensions  4.12.2
urllib3            2.3.0
win32_setctime     1.2.0
```

### What OS are you using?

Windows 11 Pro 10.0.26100

### Describe the Bug

I'm using powershell on windows 11. I do the following:

```
python -m venv labelme_env
.\labelme_env\Scripts\activate
pip install labelme
python -m labelme
```

and this is the output:

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\irema\labelme_env\Lib\site-packages\labelme\__main__.py", line 15, in <module>
    from labelme.app import MainWindow
  File "C:\Users\irema\labelme_env\Lib\site-packages\labelme\app.py", line 21, in <module>
    from labelme._automation import bbox_from_text
  File "C:\Users\irema\labelme_env\Lib\site-packages\labelme\_automation\bbox_from_text.py", line 6, in <module>
    import osam
    from . import apis  # noqa: F401
    ^^^^^^^^^^^^^^^^^^
  File "C:\Users\irema\labelme_env\Lib\site-packages\osam\apis.py", line 5, in <module>
    import onnxruntime
  File "C:\Users\irema\labelme_env\Lib\site-packages\onnxruntime\__init__.py", line 61, in <module>
    raise import_capi_exception
  File "C:\Users\irema\labelme_env\Lib\site-packages\onnxruntime\__init__.py", line 24, in <module>
    from onnxruntime.capi._pybind_state import (
  File "C:\Users\irema\labelme_env\Lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
    from .onnxruntime_pybind11_state import *  # noqa
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: DLL load failed while importing onnxruntime_pybind11_state: A dynamic link library (DLL) initialization routine failed.
```

### Expected Behavior

The GUI to launch.

### To Reproduce

_No response_
open
2025-03-20T23:23:08Z
2025-03-20T23:23:08Z
https://github.com/wkentaro/labelme/issues/1564
[]
sodiumnitrate
0
python-restx/flask-restx
api
413
Implement writeOnly for documentation purpose
This framework already has the `readonly` parameter for swagger generation, while `writeOnly` is still missing. It's very important when implementing models like a user model, where the password should only be assigned and shouldn't appear in any response body.

```python
"password": fields.String(
    required=True,
    writeonly=True,
    description="Password",
    example="password",
),
```
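Until the framework supports `writeOnly` natively, the usual workaround is a separate input model, or stripping write-only fields before marshalling a response. A framework-free sketch of the stripping idea (the `WRITE_ONLY` set and function names are hypothetical, not flask-restx API):

```python
WRITE_ONLY = {"password"}  # fields accepted on input but never returned

def to_response(obj: dict, write_only=WRITE_ONLY) -> dict:
    # Drop write-only fields from any outgoing representation.
    return {k: v for k, v in obj.items() if k not in write_only}

user = {"username": "alice", "password": "secret"}
print(to_response(user))  # {'username': 'alice'}
```

In flask-restx terms this corresponds to defining one model for request parsing (with `password`) and a second, password-free model passed to `marshal_with` for responses.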
closed
2022-02-13T13:30:58Z
2022-02-13T13:32:45Z
https://github.com/python-restx/flask-restx/issues/413
[ "enhancement" ]
morland96
0
uriyyo/fastapi-pagination
fastapi
564
Create Params that allow for unlimited results in a single page by default
Hello everyone,

I'm using FastAPI Pagination in my API. It works like a charm, but I have a special request: I would like the results from my endpoints to be returned in a single page with an unlimited size by default. My first instinct would be to achieve it like this:

```py
import sys

from fastapi import Query
from fastapi_pagination import Params

class DefaultPaginationConfiguration(Params):
    page: int = Query(1, ge=1, description="Page number")
    size: int = Query(sys.maxsize, ge=1, description="Page size")
```

But this seems like a rather crude approach to me. Is there some better way that I didn't spot in the documentation?

Thanks in advance!
Joshua
closed
2023-03-07T14:45:40Z
2023-04-12T08:24:10Z
https://github.com/uriyyo/fastapi-pagination/issues/564
[ "question" ]
Joshua-Schroijen
2
saulpw/visidata
pandas
2,198
Table view is blank when table saved from .tsv to .vds - I think there is data but it is invisible
**Brief Description**
When I save an existing .tsv file to .vds, the screen is blank.

Version 2.11.1

**Expected result**
For data to be visible in the table view.

**Actual result with screenshot**
![Screenshot from 2023-12-29 21-09-25](https://github.com/saulpw/visidata/assets/5024267/655e3834-8789-434a-9556-6eab1cfbe5d7)

If you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.
`Ctrl-E` only produces a "no error" message.

**Steps to reproduce with sample data and a .vd**
1. Make a database
2. Save it as a .tsv file
3. Close the file and quit
4. Open the file
5. Save the file as a .vds file
6. Close the file
7. Open the .vds file; there is no visible data in the table view

First try reproducing without any user configuration by using the flag `-N`. e.g. `echo "abc" | vd -f txt -N`
Same result from `visidata -N`.

Please attach the commandlog (saved with `Ctrl-D`) to show the steps that led to the issue. See [here](http://visidata.org/docs/save-restore/) for more details.
No steps led to the issue except opening the .vds file after it was saved.

**Additional context**
Please include the version of VisiData and Python.
VisiData 2.11.1
Python 3.10.12

`Shift+O` (changing the colors) did not help. I could tell the data was there because I could enter the row view and read the data for each individual row.

[RolfClients.txt](https://github.com/saulpw/visidata/files/13796900/RolfClients.txt)
closed
2023-12-30T02:19:16Z
2023-12-30T22:14:14Z
https://github.com/saulpw/visidata/issues/2198
[ "bug", "fixed" ]
AaronFeldman
2
dmlc/gluon-nlp
numpy
688
[DEMO] GluonNLP demos
Similar to https://demo.allennlp.org, it would be great to have online demo applications for various models available in gluonnlp. List of demos from [AllenNLP](https://demo.allennlp.org/reading-comprehension) for reference:

Annotate a sentence
- Semantic Role Labeling
- Named Entity Recognition
- Constituency Parsing
- Dependency Parsing
- Open Information Extraction
- Sentiment Analysis

Annotate a passage
- Coreference Resolution

Answer a question
- Reading Comprehension

Semantic parsing
- WikiTableQuestions Semantic Parser
- Cornell NLVR Semantic Parser
- Text to SQL (ATIS)
- QuaRel Zero

Other
- Textual Entailment
- Event2Mind
- Language Modeling
- Masked Language Modeling
open
2019-05-02T03:34:15Z
2020-01-09T18:49:27Z
https://github.com/dmlc/gluon-nlp/issues/688
[ "help wanted" ]
eric-haibin-lin
7
mkhorasani/Streamlit-Authenticator
streamlit
39
Implementing a "register user" fails
I've added a widget to allow users to register (per the doc):

```python
try:
    if authenticator.register_user('Register user', preauthorization=False):
        st.success('User registered successfully')
except Exception as e:
    st.error(e)
```

But when loading the app, I get: "Pre-authorization argument must not be None"

streamlit == 1.9.2
streamlit-authenticator == 0.2.1
OS == Ubuntu 16.04
Python == 3.6.13

![Screen Shot 2022-11-30 at 6 18 04 PM](https://user-images.githubusercontent.com/22056950/204935748-862c6067-7cdd-4a91-9cd8-12466ba44e00.png)
closed
2022-12-01T00:20:34Z
2022-12-01T17:29:45Z
https://github.com/mkhorasani/Streamlit-Authenticator/issues/39
[]
daytonjones
5
deezer/spleeter
deep-learning
648
Can we use this model to do VAD?
<!-- Please respect the title [Discussion] tag. --> Hello, I started looking at this model for VAD recently. At present, we do VAD on the vocal audio separated by this model (using a neural network and an LSTM). I have an idea: since this model can separate vocals from background, it should already "know" whether a given frame is voice or background, so we could build a VAD directly from it. I have tried using the mask matrix, but the effect is not very good: after visualizing it, the mask looks much like the separated vocal spectrogram. Still, I don't want to give up on the idea. Where should I modify it? Thanks
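For what it's worth, a crude frame-level VAD from the soft vocal mask could be a simple threshold on the per-frame mask mean, plus a short "hangover" that bridges brief dips inside words. This is only a sketch; the threshold and hangover length are made-up values you would need to tune on real data:

```python
def mask_to_vad(frame_means, threshold=0.4, hangover=3):
    """frame_means: mean of the soft vocal mask per time frame, each in [0, 1].

    Returns one True/False (speech / non-speech) decision per frame.
    The hangover keeps the decision 'on' for a few frames after the mask
    drops, so short gaps inside words are not cut.
    """
    decisions = []
    remaining = 0
    for m in frame_means:
        if m >= threshold:
            remaining = hangover      # re-arm the hangover counter
            decisions.append(True)
        elif remaining > 0:
            remaining -= 1
            decisions.append(True)    # still inside the hangover window
        else:
            decisions.append(False)
    return decisions
```

If the raw mask looks too much like the vocal spectrogram itself, averaging it over frequency (as `frame_means` assumes) and then thresholding is usually the first thing to try before anything learned.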
open
2021-08-24T12:14:29Z
2021-08-25T06:13:50Z
https://github.com/deezer/spleeter/issues/648
[ "question" ]
YLQY
2
scikit-learn/scikit-learn
python
30,281
scipy.optimize._optimize.BracketError in some cases of power transformer
### Describe the bug Similar to #27499, in very few cases the power transformation fails. Edit: Actually, it starts with a `RuntimeWarning: overflow encountered in power` because at this point the lambda is 292.8… And thus the out becomes `[inf, inf, inf]`. ### Steps/Code to Reproduce ```python from sklearn.preprocessing import PowerTransformer transformer = PowerTransformer() transformer.fit([[23.81], [23.98], [23.97]]) ``` ### Expected Results No error. ### Actual Results ```python /tmp/test_venv/lib/python3.12/site-packages/sklearn/preprocessing/_data.py:3438: RuntimeWarning: overflow encountered in power out[pos] = (np.power(x[pos] + 1, lmbda) - 1) / lmbda --------------------------------------------------------------------------- BracketError Traceback (most recent call last) Cell In[1], line 3 1 from sklearn.preprocessing import PowerTransformer 2 transformer = PowerTransformer() ----> 3 transformer.fit([[23.80762687], [23.97982808], [23.97586205]]) File /tmp/test_venv/lib/python3.12/site-packages/sklearn/base.py:1473, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs) 1466 estimator._validate_params() 1468 with config_context( 1469 skip_parameter_validation=( 1470 prefer_skip_nested_validation or global_skip_validation 1471 ) 1472 ): -> 1473 return fit_method(estimator, *args, **kwargs) File /tmp/test_venv/lib/python3.12/site-packages/sklearn/preprocessing/_data.py:3251, in PowerTransformer.fit(self, X, y) 3231 @_fit_context(prefer_skip_nested_validation=True) 3232 def fit(self, X, y=None): 3233 """Estimate the optimal parameter lambda for each feature. 3234 3235 The optimal lambda parameter for minimizing skewness is estimated on (...) 3249 Fitted transformer. 
3250 """ -> 3251 self._fit(X, y=y, force_transform=False) 3252 return self File /tmp/test_venv/lib/python3.12/site-packages/sklearn/preprocessing/_data.py:3304, in PowerTransformer._fit(self, X, y, force_transform) 3301 self.lambdas_[i] = 1.0 3302 continue -> 3304 self.lambdas_[i] = optim_function(col) 3306 if self.standardize or force_transform: 3307 X[:, i] = transform_function(X[:, i], self.lambdas_[i]) File /tmp/test_venv/lib/python3.12/site-packages/sklearn/preprocessing/_data.py:3493, in PowerTransformer._yeo_johnson_optimize(self, x) 3491 x = x[~np.isnan(x)] 3492 # choosing bracket -2, 2 like for boxcox -> 3493 return optimize.brent(_neg_log_likelihood, brack=(-2, 2)) File /tmp/test_venv/lib/python3.12/site-packages/scipy/optimize/_optimize.py:2625, in brent(func, args, brack, tol, full_output, maxiter) 2553 """ 2554 Given a function of one variable and a possible bracket, return 2555 a local minimizer of the function isolated to a fractional precision (...) 2621 2622 """ 2623 options = {'xtol': tol, 2624 'maxiter': maxiter} -> 2625 res = _minimize_scalar_brent(func, brack, args, **options) 2626 if full_output: 2627 return res['x'], res['fun'], res['nit'], res['nfev'] File /tmp/test_venv/lib/python3.12/site-packages/scipy/optimize/_optimize.py:2662, in _minimize_scalar_brent(func, brack, args, xtol, maxiter, disp, **unknown_options) 2659 brent = Brent(func=func, args=args, tol=tol, 2660 full_output=True, maxiter=maxiter, disp=disp) 2661 brent.set_bracket(brack) -> 2662 brent.optimize() 2663 x, fval, nit, nfev = brent.get_result(full_output=True) 2665 success = nit < maxiter and not (np.isnan(x) or np.isnan(fval)) File /tmp/test_venv/lib/python3.12/site-packages/scipy/optimize/_optimize.py:2432, in Brent.optimize(self) 2429 def optimize(self): 2430 # set up for optimization 2431 func = self.func -> 2432 xa, xb, xc, fa, fb, fc, funcalls = self.get_bracket_info() 2433 _mintol = self._mintol 2434 _cg = self._cg File 
/tmp/test_venv/lib/python3.12/site-packages/scipy/optimize/_optimize.py:2401, in Brent.get_bracket_info(self) 2399 xa, xb, xc, fa, fb, fc, funcalls = bracket(func, args=args) 2400 elif len(brack) == 2: -> 2401 xa, xb, xc, fa, fb, fc, funcalls = bracket(func, xa=brack[0], 2402 xb=brack[1], args=args) 2403 elif len(brack) == 3: 2404 xa, xb, xc = brack File /tmp/test_venv/lib/python3.12/site-packages/scipy/optimize/_optimize.py:3031, in bracket(func, xa, xb, args, grow_limit, maxiter) 3029 e = BracketError(msg) 3030 e.data = (xa, xb, xc, fa, fb, fc, funcalls) -> 3031 raise e 3033 return xa, xb, xc, fa, fb, fc, funcalls BracketError: The algorithm terminated without finding a valid bracket. Consider trying different initial points. ``` ### Versions ```shell System: python: 3.12.6 (main, Sep 8 2024, 13:18:56) [GCC 14.2.1 20240805] executable: /usr/bin/python machine: Linux-6.6.54-2-MANJARO-x86_64-with-glibc2.40 Python dependencies: sklearn: 1.5.2 pip: 24.2 setuptools: 69.5.1 numpy: 2.1.3 scipy: 1.14.1 Cython: 3.0.11 pandas: 2.2.2 matplotlib: 3.9.2 joblib: 1.4.2 threadpoolctl: 3.5.0 Built with OpenMP: True threadpoolctl info: user_api: blas internal_api: openblas num_threads: 4 prefix: libscipy_openblas filepath: /tmp/test_venv/lib/python3.12/site-packages/numpy.libs/libscipy_openblas64_-ff651d7f.so version: 0.3.27 threading_layer: pthreads architecture: Haswell user_api: blas internal_api: openblas num_threads: 4 prefix: libscipy_openblas filepath: /tmp/test_venv/lib/python3.12/site-packages/scipy.libs/libscipy_openblas-c128ec02.so version: 0.3.27.dev threading_layer: pthreads architecture: Haswell user_api: openmp internal_api: openmp num_threads: 4 prefix: libgomp filepath: /tmp/test_venv/lib/python3.12/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0 version: None ```
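The overflow itself can be reproduced without scikit-learn: with near-constant data the likelihood keeps improving as lambda grows, and once Brent's search tries a lambda around 292.8 the Yeo-Johnson power leaves the float64 range. A minimal stand-alone sketch of the failing branch (numpy returns `inf` with the `RuntimeWarning` shown above, while plain Python floats raise `OverflowError`, which makes the failure easier to see):

```python
def yeo_johnson_pos(x: float, lmbda: float) -> float:
    """Yeo-Johnson transform for x >= 0 and lmbda != 0, i.e. the
    (np.power(x + 1, lmbda) - 1) / lmbda branch from the warning."""
    return ((x + 1.0) ** lmbda - 1.0) / lmbda


print(yeo_johnson_pos(23.98, 2.0))  # a sane lambda: roughly 311.5
try:
    yeo_johnson_pos(23.98, 292.8)   # the lambda the optimizer reaches here
except OverflowError as exc:
    print("overflow, matching the RuntimeWarning:", exc)
```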
closed
2024-11-15T14:41:56Z
2024-11-15T16:25:05Z
https://github.com/scikit-learn/scikit-learn/issues/30281
[ "Bug" ]
mahlzahn
2
onnx/onnx
scikit-learn
6,550
Is it possible to customize a warpPerspective operator from OpenCV with ONNX?
I'm currently using yolov11s-obb and want to rotate the result to a front view. Is there a way to create my own warpPerspective operator?
open
2024-11-21T03:51:50Z
2024-11-21T03:52:06Z
https://github.com/onnx/onnx/issues/6550
[ "question" ]
lehoangHUST
0
onnx/onnx
machine-learning
6,380
Invalid protobuf error when loading successfully exported onnx model
# Bug Report

### Is the issue related to model conversion?

<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->

Error reported in the after-stage of model conversion, but possibly caused by unreported flaws during conversion.

### Describe the bug

<!-- Please describe the bug clearly and concisely -->

On a GCP Vertex VM, my UNet classifier model has been successfully exported to onnx format without raising a warning:

```
import torch
import torch.onnx
from model.UNet3D import UNETCLF

model = UNETCLF(in_channels=7, out_channels=1, features=[64, 128, 256, 512, 1024])
model.load_state_dict(torch.load('clf.pth'))
model.eval()
model = model.to('cuda')
dummy_input = torch.randn(1, 7, 16, 64, 64, device='cuda')

torch.onnx.export(model, dummy_input, "clf.onnx",
                  export_params=True,
                  opset_version=18,
                  do_constant_folding=True,
                  input_names=['input'],
                  output_names=['output'],
                  dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}})
```

I then tried to load it locally on VCS with:

```
def get_model():
    client = storage.Client()
    bucket = client.get_bucket('model-storage-bucket')
    blob = bucket.blob('clf.onnx')
    model = io.BytesIO()
    blob.download_to_file(model)
    model.seek(0)
    return InferenceSession(model.read(), providers=["CPUExecutionProvider"])
```

It threw an invalid protobuf error:

```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Failed to load model because protobuf parsing failed.
```

When I looked more closely by:

```
onnx_model = onnx.load(local_file_path)
onnx.checker.check_model(onnx_model)
```

It threw this error message:

```
Error parsing message with type 'onnx.ModelProto'
```

### System information

- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Ubuntu 22.04.4 LTS
- ONNX version (*e.g. 1.13*): 1.16.2 (same on both GCP VM and local device)
- Python version: 3.10.13
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version: 3.20.2 (same on both GCP VM and local device)
- Visual Studio version (if applicable): 1.89.1

### Reproduction instructions

<!-- Please let me know how you would reproduce the bug if required. I am happy to provide more information and instructions.
- Describe the code to reproduce the behavior.
```
import onnx
model = onnx.load('model.onnx')
...
```
- Attach the ONNX model to the issue (where applicable)-->

### Expected behavior

<!-- A clear and concise description of what you expected to happen. -->

The model to be successfully loaded locally with the above `get_model` function.

### Notes

<!-- Any additional information -->

I have spent quite some time modifying the model so that onnx export finally worked without throwing a warning.
It might help if I provide the model here:

```
class DoubleConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(DoubleConv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, 3, 1, 1, bias=False),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, 3, 1, 1, bias=False),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)


class UNETCLF(nn.Module):
    def __init__(self, in_channels=7, out_channels=1, features=[64, 128, 256, 512, 1024]):
        super(UNETCLF, self).__init__()
        self.downs = nn.ModuleList()
        self.ups = nn.ModuleList()
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))

        # Down Part of UNET
        for feature in features:
            self.downs.append(DoubleConv(in_channels, feature))
            in_channels = feature

        # Up Part of UNET
        for feature in reversed(features):
            self.ups.append(
                nn.ConvTranspose3d(feature*2, feature, kernel_size=(1, 2, 2), stride=(1, 2, 2))
            )
            self.ups.append(DoubleConv(feature*2, feature))

        self.bottleneck = DoubleConv(features[-1], features[-1]*2)
        self.final_cov = nn.Conv3d(features[0], out_channels, kernel_size=1)

    def forward(self, x):
        skip_connections = []
        for down in self.downs:
            x = down(x)
            skip_connections.append(x)
            x = self.pool(x)

        x = self.bottleneck(x)
        skip_connections = skip_connections[::-1]

        for idx in range(0, len(self.ups), 2):
            x = self.ups[idx](x)
            skip_connection = skip_connections[idx//2]
            # Use padding instead of dynamic resizing
            x = F.pad(x, [0, skip_connection.shape[4] - x.shape[4],
                          0, skip_connection.shape[3] - x.shape[3],
                          0, skip_connection.shape[2] - x.shape[2]])
            concat_skip = torch.cat((skip_connection, x), dim=1)
            x = self.ups[idx+1](concat_skip)

        x = self.final_cov(x)
        # Use a fixed index for the time dimension
        o = x[:, :, x.shape[2]//2, :, :]
        return o
```
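One thing worth ruling out before blaming the export: an `INVALID_PROTOBUF` / "Error parsing message with type 'onnx.ModelProto'" pair is exactly what a truncated or empty download produces. `blob.size` and `blob.reload()` are real google-cloud-storage attributes, but the checking logic itself is framework-free; a sketch:

```python
import io


def read_exact(chunks, expected_size):
    """Accumulate download chunks and fail loudly if bytes went missing.

    A ModelProto that lost even its tail bytes shows the same protobuf
    parse failure as a genuinely corrupt export.
    """
    buf = io.BytesIO()
    for chunk in chunks:
        buf.write(chunk)
    data = buf.getvalue()
    if len(data) != expected_size:
        raise IOError(f"truncated download: got {len(data)} of {expected_size} bytes")
    return data
```

In `get_model` this would mean calling `blob.reload()` first (so the size metadata is populated) and asserting that the buffer length equals `blob.size` before handing the bytes to `InferenceSession`.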
closed
2024-09-21T00:27:56Z
2024-11-01T14:57:29Z
https://github.com/onnx/onnx/issues/6380
[]
DagonArises
2
ufoym/deepo
jupyter
140
Custom CPU version | MacOs
I am using the CPU version on macOS but I don't want to import all the DL libraries (I just want Keras and PyTorch). I tried this command `docker pull ufoym/deepo:keras pytorch` and I get an error message:

```
macs-MBP-606:docker_machine macbook$ docker pull ufoym/deepo:keras pytorch
"docker pull" requires exactly 1 argument.
See 'docker pull --help'.

Usage:  docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Pull an image or a repository from a registry
```

I also tried to follow the instructions of this [customization section](https://github.com/ufoym/deepo#build-your-own-customized-image-with-lego-like-modules) but I got another error:

```
macs-MBP-606:docker_machine macbook$ docker build -t deepo/ .
invalid argument "deepo/" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
```

Any help on this, please?
closed
2020-10-18T09:39:45Z
2020-10-18T13:06:52Z
https://github.com/ufoym/deepo/issues/140
[]
SoufianeDataFan
1
dgtlmoon/changedetection.io
web-scraping
1,695
[feature] cron or exact time scheduler / or trigger check via webhook
**Version and OS**
Newest docker image.

**Is your feature request related to a problem? Please describe.**
I have automations that run at an exact time, so I need the site check to run at an exact time, only once a day. The current setup doesn't meet my needs.

**Describe the solution you'd like**
To have an option to either:
1. provide the exact time at which the check will run, or
2. invoke the check via a webhook (for instance from Home Assistant).

**Describe the use-case and give concrete real-world examples**
I check some sites in the morning and want to run some automations then; right now the run time is more or less random.
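Until something like this lands, a workaround could be an external trigger script (run from Home Assistant or plain cron) that waits for the configured wall-clock time and then calls whatever recheck endpoint your instance exposes; the endpoint is instance-specific, but the scheduling arithmetic is just:

```python
from datetime import datetime, timedelta


def seconds_until(hour, minute, now=None):
    """Seconds from `now` until the next occurrence of hour:minute (local time)."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already past today's slot, so wait for tomorrow
    return (target - now).total_seconds()
```

A trigger script would `time.sleep(seconds_until(7, 0))` and then fire the HTTP request once, which gives the "exactly once a day, at a fixed time" behavior this request asks for.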
closed
2023-07-16T09:46:19Z
2023-07-17T15:20:43Z
https://github.com/dgtlmoon/changedetection.io/issues/1695
[ "enhancement" ]
MG-Sky
4
modelscope/data-juicer
streamlit
224
Potential performance Issue: Slow read_csv() Function with pandas 2.0.0
**Issue Description:** Hello. I have discovered a performance degradation in the `read_csv` function of pandas versions below 2.0.1. I notice that some parts of the repository depend on pandas 2.0.0 in `environments/minimal_requires.txt`, and some other dependencies require pandas below 2.0.1. I am not sure whether this performance problem in pandas affects this repository. I found some discussions on the pandas GitHub related to this issue, including [#52546](https://github.com/pandas-dev/pandas/issues/52546) and [#52548](https://github.com/pandas-dev/pandas/pull/52548). I also found that `app.py` and `demos/data_process_loop/app.py` use the affected API. There may be more files using it. **Suggestion** I would recommend considering an upgrade to a pandas version >= 2.0.1 or exploring other solutions to optimize the performance of `read_csv`. Any other workarounds or solutions would be greatly appreciated. Thank you!
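If it helps triage: per the linked pandas issues, the regression shipped in 2.0.0 and the fix in 2.0.1, so the affected window is a single release. A guard in code could be as simple as a version compare; a sketch, assuming plain `X.Y.Z` version strings:

```python
def read_csv_is_slow(pandas_version: str) -> bool:
    """True for pandas releases carrying the slow read_csv path
    (2.0.0 only, per pandas-dev/pandas#52546 and #52548)."""
    parts = tuple(int(p) for p in pandas_version.split(".")[:3])
    return (2, 0, 0) <= parts < (2, 0, 1)
```

In practice just pinning `pandas>=2.0.1` in the requirements files avoids the check entirely.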
closed
2024-03-02T08:26:50Z
2024-05-18T09:32:02Z
https://github.com/modelscope/data-juicer/issues/224
[ "stale-issue" ]
TendouArisu
5
Nemo2011/bilibili-api
api
516
[Feature request] Add captcha recognition
Using this library to submit videos, I found that even when submitting only one video every 20 minutes, a captcha appeared after about 10 submissions. If the captcha problem could be solved, the practicality of the upload feature would improve greatly.
closed
2023-09-23T21:03:08Z
2023-09-28T10:08:32Z
https://github.com/Nemo2011/bilibili-api/issues/516
[ "need", "feature", "GUGU" ]
HowHsu
9
streamlit/streamlit
data-visualization
10,231
`st.code` is not displayed correctly if the code block is not in the viewport when the page loads
### Checklist

- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.

### Summary

Please go to https://docs.streamlit.io/develop/api-reference/text/st.code to see the issue. The code block will display: ![Image](https://github.com/user-attachments/assets/1d9c5ac2-d978-4b18-afeb-edb322eaeed4) as long as the code block is not in the viewport when the page loads.

### Reproducible Code Example

```Python
import streamlit as st

code = '''def hello():
    print("Hello, Streamlit!")'''
st.code(code, language="python")
```

### Steps To Reproduce

1. Go to https://docs.streamlit.io/develop/api-reference/text/st.code
2. Make sure the code block is not in the current viewport: ![Image](https://github.com/user-attachments/assets/755169e2-584a-45b9-84f9-212fa312bc23)
3. Scroll down to see what's going on. ![Image](https://github.com/user-attachments/assets/8789d732-b3db-4761-b76f-2cebced794b0)

### Expected Behavior

In Chrome, the browser remembers the scroll position, so just reload, and the code block will display correctly as it is in the viewport now. ![Image](https://github.com/user-attachments/assets/e30e4dff-7557-4247-b656-a2c5d747d61f)

### Current Behavior

`st.code` is not displayed correctly if the code block is not in the viewport when the page loads. It shows:

```py
[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object]
```

The copy function is correct:

```py
def hello():
    print("Hello, Streamlit!")
```

### Is this a regression?

- [x] Yes, this used to work in a previous version.

### Debug info

- Streamlit version: 1.41.1
- Python version:
- Operating System: Windows 10
- Browser: Chrome

### Additional Information

_No response_
closed
2025-01-22T17:41:57Z
2025-01-24T16:19:34Z
https://github.com/streamlit/streamlit/issues/10231
[ "type:bug", "status:confirmed", "priority:P1", "feature:st.code" ]
Lexachoc
5
babysor/MockingBird
deep-learning
259
Does the warning "Creating a tensor from a list of numpy.ndarrays is extremely slow." during model training affect training?
![image](https://user-images.githubusercontent.com/68454150/145553333-0e8dfc70-384f-4968-87bf-09f511243dd4.png)
closed
2021-12-10T03:49:55Z
2021-12-26T03:35:02Z
https://github.com/babysor/MockingBird/issues/259
[]
luobingit
11
Avaiga/taipy
automation
2,292
Have Menu expanded by default or not
### Description The goal would be to have, like *expandable*, a property *expanded* that controls whether the menu starts expanded. This will allow users to choose the default look of the menu: either expanded or not. This was asked here: https://stackoverflow.com/questions/79234418/taipy-menu-open-by-default ### Acceptance Criteria - [ ] Any new code is covered by a unit test. - [ ] Check code coverage is at least 90%. ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [ ] I am willing to work on this issue (optional)
open
2024-11-29T09:53:11Z
2025-03-19T12:57:24Z
https://github.com/Avaiga/taipy/issues/2292
[ "📈 Improvement", "🖰 GUI", "🟨 Priority: Medium" ]
FlorianJacta
6
coleifer/sqlite-web
flask
105
MacBook air
It is not working on my MacBook. It says: > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 3133, in connect > self._state.set_connection(self._connect()) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 3478, in _connect > conn = sqlite3.connect(self.database, timeout=self._timeout, > sqlite3.OperationalError: unable to open database file > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/3.10/bin/sqlite_web", line 33, in <module> > sys.exit(load_entry_point('sqlite-web==0.4.0', 'console_scripts', 'sqlite_web')()) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlite_web/sqlite_web.py", line 870, in main > initialize_app(args[0], options.read_only, password, options.url_prefix, > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlite_web/sqlite_web.py", line 837, in initialize_app > dataset = SqliteDataSet('sqlite:///%s' % filename, bare_fields=True) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/playhouse/dataset.py", line 44, in __init__ > self._database.connect(reuse_if_open=True) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 3132, in connect > with __exception_wrapper__: > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 2970, in __exit__ > reraise(new_type, new_type(exc_value, *exc_args), traceback) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 191, in reraise > raise value.with_traceback(tb) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 3133, in connect > 
self._state.set_connection(self._connect()) > File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/peewee.py", line 3478, in _connect > conn = sqlite3.connect(self.database, timeout=self._timeout, > peewee.OperationalError: unable to open database file I am not sure if it's me or something else, but when I run `sqlite_web /path/to/database.db` it gives me this error message.
closed
2022-07-29T14:54:23Z
2022-07-29T15:01:35Z
https://github.com/coleifer/sqlite-web/issues/105
[]
HerodinSSS
1
tflearn/tflearn
data-science
840
How to get the true labels corresponding to 'net' within the graph?
```python
with tf.Graph().as_default():
    net = tflearn.input_data(shape=[None, 27, 27, 1])
    # As I understand, a mini-batch of 'train_data' is fed into 'net'.
    # How can I get the true labels (train_label) corresponding to 'net' here?
    net = tflearn.fully_connected(net, 128, activation='relu')
    net = tflearn.fully_connected(net, 2, activation='softmax')
    mom = tflearn.Momentum(0.1, lr_decay=0.1, decay_step=7200, staircase=True)
    net = tflearn.regression(net, optimizer=mom, loss='categorical_crossentropy')
    model = tflearn.DNN(net, tensorboard_verbose=0, clip_gradients=0.)
    model.fit(train_data, train_label, n_epoch=10, batch_size=128, snapshot_epoch=False, snapshot_step=None, shuffle=True, run_id='nn')
```
open
2017-07-15T21:52:26Z
2017-07-20T19:31:17Z
https://github.com/tflearn/tflearn/issues/840
[]
zhao62
2
JaidedAI/EasyOCR
pytorch
759
TypeError: 'NoneType' object is not iterable in recognize method
I am using the recognize method provided with my own preset horizontal_list_agg list. The documentation says that free_list by default is None and that I only need either one of horizontal or free list, yet I get this error as seen below: ``` for bbox in free_list: TypeError: 'NoneType' object is not iterable in recognize method ```
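Until this is fixed upstream, the usual workaround is to pass an empty list instead of relying on the documented `None` default, i.e. `reader.recognize(img, horizontal_list=my_boxes, free_list=[])`. The normalization the library could apply internally amounts to the following (a sketch, with a hypothetical function name):

```python
def normalize_box_lists(horizontal_list=None, free_list=None):
    """Coerce missing box lists to empty ones so a later
    `for bbox in free_list:` never iterates over None."""
    return list(horizontal_list or []), list(free_list or [])
```

With that in place, callers could genuinely supply only one of the two lists, as the documentation promises.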
closed
2022-06-19T11:45:26Z
2022-11-29T19:59:57Z
https://github.com/JaidedAI/EasyOCR/issues/759
[]
amroghoneim
1
noirbizarre/flask-restplus
api
456
Issue with @api.representation('application/xml')
I was following the example in the help doc to create the below decorator.

```python
@api.representation('application/xml')
def xml(data, code, headers=None):
    resp = make_response(dicttoxml.dicttoxml(data), code)
    resp.headers.extend(headers or {})
    return resp
```

And I also set up

```
api = Api(app, default_mediatype='application/json')
```

The problem is that when I launched the Swagger dev portal and clicked the link http://localhost:5000/swagger.json, it returned XML format rather than JSON format, which doesn't make sense.
closed
2018-06-01T15:02:18Z
2018-06-03T21:19:17Z
https://github.com/noirbizarre/flask-restplus/issues/456
[]
manbeing
0
openapi-generators/openapi-python-client
fastapi
176
Add a note about supported OpenAPI versions
**Describe the bug** The configuration I'm generating a client for declares `parameters` with `in: body`; the generation fails with openapi-python-client version 0.5.4: ``` ERROR parsing POST /alerts within alert. Endpoint will not be generated. Parameter must be declared in path or query Parameter(name='alerts', param_in='body', description='The alerts to create', required=True, deprecated=False, allowEmptyValue=False, style=None, explode=False, allowReserved=False, param_schema=Reference(ref='#/definitions/postableAlerts'), example=None, examples=None, content=None) ``` **To Reproduce** Steps to reproduce the behavior: `openapi-python-client generate --url https://raw.githubusercontent.com/prometheus/alertmanager/master/api/v2/openapi.yaml` I'm not familiar with OpenAPI specs and versions; is this relevant? https://swagger.io/docs/specification/2-0/describing-request-body/
closed
2020-09-02T17:42:17Z
2020-09-26T14:07:33Z
https://github.com/openapi-generators/openapi-python-client/issues/176
[ "📝 documentation", "✨ enhancement" ]
filippog
4
huggingface/diffusers
pytorch
10,315
cogvideo training error
### Describe the bug Fine tuning the model on both Gpus reports the following error: RuntimeError: CUDA driver error: invalid argument Do you know what the problem is? ### Reproduction [rank1]: ^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl [rank1]: return self._call_impl(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl [rank1]: return forward_call(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/diffusers-0.32.0.dev0-py3.11.egg/diffusers/models/transformers/cogvideox_transformer_3d.py", line 148, in forward [rank1]: ff_output = self.ff(norm_hidden_states) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl [rank1]: return self._call_impl(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl [rank1]: return forward_call(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/diffusers-0.32.0.dev0-py3.11.egg/diffusers/models/attention.py", line 1242, in forward [rank1]: hidden_states = module(hidden_states) [rank1]: ^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl [rank1]: return self._call_impl(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl [rank1]: return forward_call(*args, **kwargs) [rank1]: 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/diffusers-0.32.0.dev0-py3.11.egg/diffusers/models/activations.py", line 88, in forward [rank1]: hidden_states = self.proj(hidden_states) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl [rank1]: return self._call_impl(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl [rank1]: return forward_call(*args, **kwargs) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 125, in forward [rank1]: return F.linear(input, self.weight, self.bias) [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: RuntimeError: CUDA driver error: invalid argument Steps: 0%| | 0/133600000 [00:12<?, ?it/s] [rank0]:[W1220 14:39:33.155016577 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_proce ss_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. 
This constraint ha s always been present, but this warning has only been added since PyTorch 2.4 (function operator()) W1220 14:39:35.723000 381051 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 381224 closing signal SIGTERM E1220 14:39:36.039000 381051 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 381223) of binary: /home/conda_env/controlnet/bin/python Traceback (most recent call last): File "/home/conda_env/controlnet/bin/accelerate", line 8, in <module> sys.exit(main()) ^^^^^^ File "/home/conda_env/controlnet/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main args.func(args) File "/home/conda_env/controlnet/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1159, in launch_command multi_gpu_launcher(args) File "/home/conda_env/controlnet/lib/python3.11/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher distrib_run.run(args) File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/distributed/run.py", line 910, in run elastic_launch( File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/conda_env/controlnet/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ train_controlnet.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2024-12-20_14:39:35 host : robot rank : 0 (local_rank: 0) exitcode : 1 (pid: 381223) error_file: <N/A> traceback : 
To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ ### Logs _No response_ ### System Info ubuntu20.04 cuda 12.0 torch 2.5 diffusers 0.32.0.dev0 ### Who can help? _No response_
closed
2024-12-20T06:47:32Z
2025-01-12T05:44:35Z
https://github.com/huggingface/diffusers/issues/10315
[ "bug", "training" ]
linwenzhao1
4
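The `traceback : <N/A>` line in the failure summary above means the child process's exception was not recorded. The linked elastic-errors page describes the `@record` decorator from `torch.distributed.elastic.multiprocessing.errors`; a minimal sketch (assuming a local torch install, and with the training body elided) of wrapping a script's entrypoint so child tracebacks are captured in the error file:

```python
# Sketch: wrap the training entrypoint with torch.distributed.elastic's
# @record decorator so an exception raised inside a worker process is
# written to the error file instead of surfacing as "traceback: <N/A>".
from torch.distributed.elastic.multiprocessing.errors import record


@record
def main():
    ...  # training loop, e.g. the body of train_controlnet.py


if __name__ == "__main__":
    main()
```

When launched via `torchrun`/`accelerate launch`, the recorded traceback then appears in the "Root Cause" section of the `ChildFailedError` report.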
sktime/pytorch-forecasting
pandas
1,178
How to make the model use partially known future information
PyTorch-Forecasting version: 0.10.2 PyTorch version: Python version: 3.8.5 Operating System: Windows

The TFT accepts 3 inputs: static, observed (known up to t), and future (known up to t+k), where k is the final point. Now assume you want a forecast up to t+10. I have some time series as Xs which are known up to t, others up to t+1, others up to t+4, etc., but my Y is known only up to t. So far I have only one choice: drop those additional future values, cut the series back to t like Y, and plug these Xs into the bucket of observed inputs. In my opinion this wastes information, so I suppose there is a solution to this. On the other hand, I could insert these variables into the known-future bucket, but they are not known up to the end t+10, only up to t+2, so I might zero-pad them from t+3 to t+10. Would this work? I am concerned that the padding might propagate to the end somehow; can I trust this, or is there another workaround?

This feature would also be useful for scenario analysis: what if GDP (an x variable) drops 5% in the next two samples (up to t+2)? How is my t+10 forecast of Y affected by this? Can anybody advise?
open
2022-11-10T10:21:55Z
2022-12-12T16:58:19Z
https://github.com/sktime/pytorch-forecasting/issues/1178
[]
LuigiSimeone
1
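The zero-padding workaround discussed in the question above can be sketched in pandas. This is a minimal illustration with made-up values and column names; the padding-plus-indicator trick (an extra 0/1 column marking which future values are real) is a common way to let a model distinguish genuine observations from padding, not an API that pytorch-forecasting prescribes:

```python
import numpy as np
import pandas as pd

# Toy setup: a covariate (e.g. GDP) known up to t+2, forecast horizon t+10.
horizon = 10
known_until = 2
idx = pd.RangeIndex(horizon + 1, name="step")  # steps t .. t+10

gdp = pd.Series(np.nan, index=idx)
gdp.iloc[: known_until + 1] = [100.0, 99.0, 94.0]  # values known at t, t+1, t+2

# Zero-pad the unknown tail so the series fits the "time-varying known"
# bucket, and add an indicator so the model can learn to ignore padding.
padded = gdp.fillna(0.0)
is_observed = gdp.notna().astype(int)

frame = pd.DataFrame({"gdp": padded, "gdp_observed": is_observed})
print(frame)
```

Feeding `gdp` and `gdp_observed` together as known covariates gives the model a chance to learn that a padded zero carries no information, which is safer than padding alone.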
aio-libs/aiohttp
asyncio
10,141
403 "Please enable cookies." when requesting a specific Cloudflare-protected website
### Describe the bug

When I access the website using aiohttp, a 403 "Please enable cookies" error occurs. But when I use other clients to access it, it works fine.

### To Reproduce

```python
import sys
import asyncio
import aiohttp
# ```pip install aiohttp[speedups]```
# https://docs.aiohttp.org/en/stable/index.html#installing-all-speedups-in-one-command

# Version constants
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0 LightnovelSpider/5'
# Client user agent: the front matches the latest Edge browser; the tail marks this program's version.

# Environment constants: values that may need to be changed manually before each run, depending on the actual environment.
PROXY = 'http://127.0.0.1:7897'
# Required. If connecting through a proxy pool, check the availability of each node in the pool.
COOKIES = '''Hidden for Privacy'''
# Required. A set of cookies must be provided to initialize the session.


async def fetch(url: str, referer: str = None, callback=lambda *args: None, cb_kwargs: dict = dict()) -> None:
    headers = {
        'User-Agent': USER_AGENT
    }
    async with aiohttp.ClientSession(
        proxy=PROXY,
        cookies={i.split("=")[0]: i.split("=")[-1] for i in COOKIES.split("; ")},
    ) as session:
        async with session.get(url, headers=headers, ssl=False) as response:
            print(response.status)


async def main():
    await fetch('https://tw.linovelib.com/novel/1.html')
    await asyncio.sleep(1)
    # https://docs.aiohttp.org/en/stable/client_advanced.html#graceful-shutdown


if __name__ == '__main__':
    # https://github.com/aio-libs/aiodns?tab=readme-ov-file#note-for-windows-users
    # https://github.com/aio-libs/aiodns/issues/86#issuecomment-1674906449
    # https://github.com/nathom/streamrip/issues/729#issuecomment-2388503896
    if sys.platform == 'win32':
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    asyncio.run(main())
```

`403`

```html
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en-US"> <![endif]-->
<!--[if IE 7]> <html class="no-js ie7 oldie" lang="en-US"> <![endif]-->
<!--[if IE 8]> <html class="no-js ie8 oldie" lang="en-US"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en-US"> <!--<![endif]-->
<head>
<title>Attention Required!
| Cloudflare</title>
<meta charset="UTF-8" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<meta name="robots" content="noindex, nofollow" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<link rel="stylesheet" id="cf_styles-css" href="https://tw.linovelib.com/cdn-cgi/styles/cf.errors.css" />
<!--[if lt IE 9]><link rel="stylesheet" id='cf_styles-ie-css' href="/cdn-cgi/styles/cf.errors.ie.css" /><![endif]-->
<style>body{margin:0;padding:0}</style>
<!--[if gte IE 10]><!-->
<script>
  if (!navigator.cookieEnabled) {
    window.addEventListener('DOMContentLoaded', function () {
      var cookieEl = document.getElementById('cookie-alert');
      cookieEl.style.display = 'block';
    })
  }
</script>
<!--<![endif]-->
</head>
<body>
<div id="cf-wrapper">
<div class="cf-alert cf-alert-error cf-cookie-error" id="cookie-alert" data-translate="enable_cookies">Please enable cookies.</div>
<div id="cf-error-details" class="cf-error-details-wrapper">
<div class="cf-wrapper cf-header cf-error-overview">
<h1 data-translate="block_headline">Sorry, you have been blocked</h1>
<h2 class="cf-subheadline"><span data-translate="unable_to_access">You are unable to access</span> dns01.org</h2>
</div><!-- /.header -->
<div class="cf-section cf-highlight">
<div class="cf-wrapper">
<div class="cf-screenshot-container cf-screenshot-full">
<span class="cf-no-screenshot error"></span>
</div>
</div>
</div><!-- /.captcha-container -->
<div class="cf-section cf-wrapper">
<div class="cf-columns two">
<div class="cf-column">
<h2 data-translate="blocked_why_headline">Why have I been blocked?</h2>
<p data-translate="blocked_why_detail">This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.</p>
</div>
<div class="cf-column">
<h2 data-translate="blocked_resolve_headline">What can I do to resolve this?</h2>
<p data-translate="blocked_resolve_detail">You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.</p>
</div>
</div>
</div><!-- /.section -->
<div class="cf-error-footer cf-wrapper w-240 lg:w-full py-10 sm:py-4 sm:px-8 mx-auto text-center sm:text-left border-solid border-0 border-t border-gray-300">
<p class="text-13">
<span class="cf-footer-item sm:block sm:mb-1">Cloudflare Ray ID: <strong class="font-semibold">8ee307ac08f48591</strong></span>
<span class="cf-footer-separator sm:hidden">&bull;</span>
<span id="cf-footer-item-ip" class="cf-footer-item hidden sm:block sm:mb-1">
Your IP: <button type="button" id="cf-footer-ip-reveal" class="cf-footer-ip-reveal-btn">Click to reveal</button>
<span class="hidden" id="cf-footer-ip">141.11.132.229</span>
<span class="cf-footer-separator sm:hidden">&bull;</span>
</span>
<span class="cf-footer-item sm:block sm:mb-1"><span>Performance &amp; security by</span> <a rel="noopener noreferrer" href="https://www.cloudflare.com/5xx-error-landing" id="brand_link" target="_blank">Cloudflare</a></span>
</p>
<script>(function(){function d(){var b=a.getElementById("cf-footer-item-ip"),c=a.getElementById("cf-footer-ip-reveal");b&&"classList"in b&&(b.classList.remove("hidden"),c.addEventListener("click",function(){c.classList.add("hidden");a.getElementById("cf-footer-ip").classList.remove("hidden")}))}var a=document;document.addEventListener&&a.addEventListener("DOMContentLoaded",d)})();</script>
</div><!-- /.error-footer -->
</div><!-- /#cf-error-details -->
</div><!-- /#cf-wrapper -->
<script>
window._cf_translation = {};
</script>
<script>(function(){function
c(){var b=a.contentDocument||a.contentWindow.document;if(b){var d=b.createElement('script');d.innerHTML="window.__CF$cv$params={r:'8ee307ac08f48591',t:'MTczMzU1ODkyOS4wMDAwMDA='};var a=document.createElement('script');a.nonce='';a.src='/cdn-cgi/challenge-platform/scripts/jsd/main.js';document.getElementsByTagName('head')[0].appendChild(a);";b.getElementsByTagName('head')[0].appendChild(d)}}if(document.body){var a=document.createElement('iframe');a.height=1;a.width=1;a.style.position='absolute';a.style.top=0;a.style.left=0;a.style.border='none';a.style.visibility='hidden';document.body.appendChild(a);if('loading'!==document.readyState)c();else if(window.addEventListener)document.addEventListener('DOMContentLoaded',c);else{var e=document.onreadystatechange||function(){};document.onreadystatechange=function(b){e(b);'loading'!==document.readyState&&(document.onreadystatechange=e,c())}}}})();</script></body>
</html>
```

### Expected behavior

```python
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
import requests

# Version constants
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0 LightnovelSpider/5'
# Client user agent: the front matches the latest Edge browser; the tail marks this program's version.

# Environment constants: values that may need to be changed manually before each run, depending on the actual environment.
PROXY = 'http://127.0.0.1:7897'
# Required. If connecting through a proxy pool, check the availability of each node in the pool.
COOKIES = '''Hidden for Privacy'''
# Required. A set of cookies must be provided to initialize the session.

if __name__ == '__main__':
    headers = {
        'User-Agent': USER_AGENT
    }
    response = requests.get(
        url='https://tw.linovelib.com/novel/1.html',
        cookies={i.split("=")[0]: i.split("=")[-1] for i in COOKIES.split("; ")},
        proxies={
            'http': PROXY,
            'https': PROXY
        },
        headers=headers,
        verify=False
    )
    print(response.status_code)
```

`200`

```html
<!DOCTYPE html>
<html lang="zh-Hant">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>惡魔高校DxD線上看_嗶哩輕小說</title>
<meta name="keywords" content="惡魔高校DxD(High School
DxD)輕小說,石踏一榮,富士見文庫,嗶哩輕小說"> <meta name="description" content="惡魔高校DxD(High School DxD)是石踏一榮所寫的富士見文庫輕小說,嗶哩輕小說免費提供惡魔高校DxD(High School DxD)最新清爽乾淨的文字章節在線閱讀"> <meta name="applicable-device" content="mobile" /> <meta name="apple-mobile-web-app-capable" content="yes" /> <meta name="viewport" content="initial-scale=1.0,maximum-scale=1.0,minimum-scale=1.0,user-scalable=0,width=device-width" /> <meta name="theme-color" content="#232323" media="(prefers-color-scheme: dark)" /> <meta property="og:type" content="novel" /> <meta property="og:title" content="惡魔高校DxD" /> <meta property="og:description" content="我,兵藤一誠是個年齡=沒女朋友經歷的高二生。這樣的我也交到女朋友了!抱歉了朋友,我要先你們一步踏上大人的階梯!照理來說應該是這樣才對,為什麼我會被女朋友殺死? 明明什麼甜頭都沒嘗到,這個世界還有天理嗎? 然而全校第一美少女,莉雅絲·吉蒙里學姊救了這樣的我。自稱是惡魔的她,告訴我一個衝擊的事實。 「你已經轉生變成惡魔了。為我工作吧!」 在學姊的胸部和獎勵的引誘之下,我身為惡魔僕人的人生就此展開。 純粹靠著氣勢與煩惱,校園奇幻戰鬥戀愛物語第一集!" /> <meta property="og:image" content="https://tw.linovelib.com/files/article/image/0/1/1s.jpg" /> <meta property="og:novel:category" content="富士見文庫" /> <meta property="og:novel:author" content="石踏一榮" /> <meta property="og:novel:book_name" content="惡魔高校DxD" /> <meta property="og:novel:read_url" content="https://tw.linovelib.com/novel/1/catalog" /> <meta property="og:url" content="https://tw.linovelib.com/novel/1.html" /> <meta property="og:novel:status" content="完結" /> <meta property="og:novel:author_link" content="https://tw.linovelib.com/authorarticle/石踏一榮.html" /> <meta property="og:novel:update_time" content='2024-02-25 23:45:52' /> <meta property="og:novel:latest_chapter_name" content="短篇集 DX.4 學生會與利維坦 後記" /> <meta property="og:novel:latest_chapter_url" content="/novel/1/225508.html" /> <link rel="stylesheet" href="https://tw.linovelib.com/themes/zhmb/css/info.css?v0118b5"> <link rel="alternate" hreflang="zh-Hans" href="https://www.bilinovel.com/novel/1.html" /> <script src="https://tw.linovelib.com/themes/zhmb/js/jquery-3.3.1.js"></script> <script type="text/javascript" src="/scripts/darkmode.js"></script> <script 
src="https://tw.linovelib.com/themes/zhmb/js/core.js?v0821b2"></script> <script async src="https://tw.linovelib.com/themes/zhmb/js/lazysizes.min.js"></script> <script type="text/javascript" src="https://tw.linovelib.com/scripts/common.js?v0316b1" charset="UTF-8"></script> <script type="text/javascript">var ual = navigator.language.toLowerCase();if(ual == 'zh-cn'){window.location.replace("https://w.linovelib.com/novel/1.html");}</script> <style>}.guide-content{backdrop-filter: blur(50px);-webkit-backdrop-filter: blur(50px);background-color: rgba(255, 255, 255, 0.1);}.sp-dark-mode .guide-content{background-color: rgba(0, 0, 0, 0.1);color: #eee;}.sp-dark-mode .header{background-color: unset;}.book-detail-btn .icon {margin-right: 0.25rem;}.score-tip{position: relative;display: flex; justify-content: space-between; align-items: center;color: rgba(255,255,255,.7);line-height: .65rem; font-size: .6rem;}.orange{color:rgba(255,126,0,.65);}@media (prefers-color-scheme: dark){.guide-content{color: #eee;}}@media screen and (max-width: 600px){.header a, .header-operate>.icon, .header-operate-a{color: rgba(255,57,85, .15);}}</style> <style>@media screen and (min-width:768px){.page-fans .book-ol li, .trigger-book-comment .book-ol li{width: calc(50% - 6px);display: inline-block;vertical-align: top;}.trigger-book-comment .book-li:nth-child(3)::after, .page-fans .book-ol-comment .book-li:nth-child(3)::after{border-bottom: none;}.module-slide-ol{white-space: normal;padding-bottom: .75rem;}.module-slide-li {width: calc(19% - 5px);}.module-slide-a {width: 6.125rem;padding: 0.75rem 1.45rem 0.5rem;}.module-slide-img {height: 8.5rem;}.book-comment-p{min-height: 2.5625rem;}.corner{width: 45px;height: 45px;}.corner>em{right: -20px;line-height: 22px;font-size: 15px;}}</style> </head> <body id="scroll"> <script src="https://tw.linovelib.com/themes/zhmb/js/sprite.js"></script> <div class="page page-book-detail"> <div class="content"> <header id="header" class="header"><a href="/" 
class="header-back jsBack"><svg class="icon icon-arrow-l"><title>返回</title><use xlink:href="#icon-arrow-l"></use></svg></a> <!--<span class="header-back-title">惡魔高校DxD</span>--> <div class="header-operate"> <a id="openSearchPopup" href="javascript:" class="icon icon-search" title="搜索"><svg><use xlink:href="#icon-search"></use></svg></a> <a id="openGuide" href="javascript:" class="icon icon-more" title="更多" data-rel="guide"></a> </div> </header> <!-- 更多內容的導航 S --> <div id="guide" class="guide"> <i id="guideOverlay1" class="guide-overlay"></i> <div class="guide-content"> <nav class="guide-nav"> <a href="/" class="guide-nav-a"> <i class="icon icon-home"></i> <h4 class="guide-nav-h">首頁</h4> </a> <a href="/alltopics" class="guide-nav-a"> <i class="icon icon-sort"></i> <h4 class="guide-nav-h">圈子</h4> </a> <a href="/top.html" class="guide-nav-a"> <i class="icon icon-rank"></i> <h4 class="guide-nav-h">排行榜</h4> </a> <a href="/wenku/" class="guide-nav-a"> <i class="icon icon-fuli"></i> <h4 class="guide-nav-h">文庫</h4> </a> <a href="/topfull/postdate/1.html" class="guide-nav-a"> <i class="icon icon-end"></i> <h4 class="guide-nav-h">完本</h4> </a> <a href="/user.php" class="guide-nav-a "> <i class="icon icon-account"></i> <h4 class="guide-nav-h">賬戶</h4> </a> </nav> <div class="guide-footer"> <a href="/bookcase.php" class="btn-primary" data-size="14">我的收藏</a> </div> </div> </div> <!-- 更多內容的導航 E --> <!-- 公用頭部 E --> <div id="bookDetailWrapper" class="module module-merge book-detail-x"> <img src="https://tw.linovelib.com/files/article/image/0/1/1s.jpg?1708873318" class="book-cover-blur" alt="惡魔高校DxD"> <div class="book-detail-info"> <div class="book-layout"> <div class="module-book-cover"> <div class="module-item-cover"> <img src="https://tw.linovelib.com/files/article/image/0/1/1s.jpg?1708873318" class="book-cover" alt="惡魔高校DxD"> </div> </div> <div class="book-cell"> <h1 class="book-title">惡魔高校DxD</h1> <div class="book-rand-a"> <span class="authorname"><a 
href="/authorarticle/石踏一榮.html">石踏一榮</a></span><i class="char-pipe">,</i><span class="illname"><a href="/illustratorarticle/みやま零.html"><ruby>みやま零<rt>(插畫)</rt></ruby></a></span> 著 </div> <p class="book-meta book-layout-inline"><!-- 次閱讀<span class="char-pipe">/</span> -->26733 人收藏<span class="char-pipe">/</span>4361 次推薦<b class="dot-pipe">/</b></p><p class="book-meta book-layout-inline">398.1 萬字<span class="char-pipe">|</span>完結<span class="char-pipe">|</span>已動畫化</p> <p class="book-meta"> <span class="tag-small-group origin-left"> <em class="tag-small red"><a href="/tagarticle/63/1.html">校園</a></em><em class="tag-small red"><a href="/tagarticle/15/1.html">奇幻</a></em><em class="tag-small red"><a href="/tagarticle/18/1.html">戰鬥</a></em><em class="tag-small red"><a href="/tagarticle/48/1.html">後宮</a></em><em class="tag-small red"><a href="/tagarticle/225/1.html"> 青梅竹馬</a></em><em class="tag-small red"><a href="/tagarticle/223/1.html">人外</a></em><em class="tag-small orange"><a href="/wenku/lastupdate_223_0_0_0_0_0_0_1_0.html">富士見文庫</a></em><em class="tag-small gray"><a href="/wenku/lastupdate_0_0_0_1_0_0_0_1_0.html">日本輕小說</a></em> </span> </p> </div> </div> <div class="book-detail-btn" data-read-status="" data-bookshelf-status=""> <ul class="btn-group"> <li class="btn-group-cell"><a href="/novel/1/catalog" class="btn-normal red" id="btnReadBook">開始閱讀</a></li> <li class="btn-group-cell"><a href="javascript:" id="a_addbookcase" onclick="toggleBookcaseStatus(this, 1);" class="btn-normal white"><svg class="icon blue"><use xlink:href="#icon-booklist-star"></use></svg>書架</a></li> <li class="btn-group-cell"><a href="/download/1.html" class="btn-normal white"><svg class="icon blue"><use xlink:href="#icon-down"></use></svg>下載</a></li> <li class="btn-group-cell"><a href="/wiki/1.html" class="btn-normal purple"><svg class="icon white"><use xlink:href="#icon-qi"></use></svg>詳解</a></li> </ul><input id="shareurl" type="text" readonly style='opacity: 0;position:fixed;z-index: -1000;' 
value="share.linovelib.net/1-0" /> </div> </div> </div> <div class="module module-merge"> <section id="bookSummary" class="book-summary"> <content>我,兵藤一誠是個年齡=沒女朋友經歷的高二生。這樣的我也交到女朋友了!抱歉了朋友,我要先你們一步踏上大人的階梯!照理來說應該是這樣才對,為什麼我會被女朋友殺死?<br /> 明明什麼甜頭都沒嘗到,這個世界還有天理嗎?<br /> 然而全校第一美少女,莉雅絲·吉蒙里學姊救了這樣的我。自稱是惡魔的她,告訴我一個衝擊的事實。<br /> 「你已經轉生變成惡魔了。為我工作吧!」<br /> 在學姊的胸部和獎勵的引誘之下,我身為惡魔僕人的人生就此展開。<br /> 純粹靠著氣勢與煩惱,校園奇幻戰鬥戀愛物語第一集!</content> <div class="backupname"><em>別名</em><span class="bkname-body gray">High School DxD</span></div> <span class="book-summary-more"><svg class="icon icon-arrow-r"><use xlink:href="#icon-arrow-r"></use></svg></span></section> <a href="/novel/1/catalog" class="book-meta book-status"> <div class="book-meta-l"><strong class="book-spt">最后更新</strong><span class="char-dot">·</span>2024-02-25</div> <div class="book-meta-r"> <p class="gray ell">短篇集 DX.4 學生會與利維坦 後記</p> <svg class="icon icon-arrow-r"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-arrow-r"></use></svg> </div> </a> </div> <!-- 互動區 --> <div class="module"> <ul id="payTicketsX" class="btn-group"> <li class="btn-group-cell"> <a href="/gift.php?id=1" id="a_gift" class="book-pay book-pay-a"> <i class="icon icon-gift"></i> <h5 class="book-pay-h">鮮花</h5> <p class="book-pay-p"><span class="month-ticket-cnt">8547</span>束</p> </a> </li> <li class="btn-group-cell"> <a href="javascript:" id="a_uservote" onclick="Ajax.Tip('https://tw.linovelib.com/modules/article/uservote.php?id=1', {method: 'POST'});" class="book-pay book-pay-a"> <i class="icon icon-pay-like"></i> <h5 class="book-pay-h">推薦票</h5> <p class="book-pay-p">本週<span class="recomm-ticket-cnt">12</span>推薦</p> </a> </li> <li class="btn-group-cell"> <a href="/gift.php?id=1&type=egg" class="book-pay book-pay-a"> <i class="icon icon-checkin-egg"></i> <h5 class="book-pay-h">臭雞蛋</h5> <p class="book-pay-p"><span class="reward-week-cnt">20</span>個</p> </a> </li> <li class="btn-group-cell"> <a href="/fansheat_1_1.html" class="book-pay"> <i class="icon 
icon-fans"></i> <h5 class="book-pay-h">互動</h5> <p class="book-pay-p"><span class="reward-week-cnt">99</span>⁺</p> </a> </li> </ul> <!-- 評分區 --> <a href="javascript:" class="subject-rating-root" id="asideTrigger"> <div class="sub-score"> <div class="score-tip"><span>參與評價,表達對作品的態度</span><div class="myrating">我的評價<div class="star-display"><div class="rating"><div class="rating-stars"><div class="rateunit"><span style="width:calc(0/10*100%);" class="rpercent"></span></div></div></div></div><svg class="icon icon-arrow-r"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-arrow-r"></use></svg></div></div> <div class="sub-content" id="a_userate"> <div class="sub-rating"> <div class="score-num">9.9</div> <div class="star-display"><div class="rating"><div class="rating-stars"><div class="rateunit"><span style="width:calc(9.9/10*100%);" class="rpercent"></span></div></div></div><p class="done-count"><em>1799 人評價</em></p></div></div> <div class="sub-chart"><div class="star-stats"><div class="stars-wrap"><span class="rating-star-icon"></span><span class="rating-star-icon"></span><span class="rating-star-icon"></span><span class="rating-star-icon"></span><span class="rating-star-icon"></span></div><div class="chart-wrap"><span class="rating-progress" style="width: calc(1764/1799*100%);"></span></div></div><div class="star-stats"><div class="stars-wrap"><span class="rating-star-icon"></span><span class="rating-star-icon"></span><span class="rating-star-icon"></span><span class="rating-star-icon"></span></div><div class="chart-wrap"><span class="rating-progress" style="width: calc(19/1799*100%);"></span></div></div><div class="star-stats"><div class="stars-wrap"><span class="rating-star-icon"></span><span class="rating-star-icon"></span><span class="rating-star-icon"></span></div><div class="chart-wrap"><span class="rating-progress" style="width: calc(8/1799*100%);"></span></div></div><div class="star-stats"><div class="stars-wrap"><span 
class="rating-star-icon"></span><span class="rating-star-icon"></span></div><div class="chart-wrap"><span class="rating-progress" style="width: calc(3/1799*100%);"></span></div></div><div class="star-stats"><div class="stars-wrap"><span class="rating-star-icon"></span></div><div class="chart-wrap"><span class="rating-progress" style="width: calc(5/1799*100%);"></span></div></div> </div> </div> </div> </a> <!-- 粉絲榜 --> <style>.page-fans .book-ol-rank .book-layout {padding-top: 0.75rem;padding-bottom: 0.75rem;padding-left: 3.125rem;margin-left: -3.125rem;}.page-fans .book-title{font-size: .8125rem;}.fans-point {color: gray;font-size: .725rem;font-family: pingfang sc;}.page-fans .book-title-r img{position: absolute; top: 55%; bottom: 0; right: 1.125rem; width: 3.125rem;margin: auto; margin-top: -0.70625rem;}.book-ol-rank .book-li::before{font:1em/1.5em 'PingFang SC';position:absolute;top:50%;bottom:0;left:-2.125rem;margin:auto;margin-top:-.70625rem;content:counter(bookrank);counter-increment:bookrank;color:#b8b8b8;width:1.375rem;height:1.8125rem;text-align:center}.book-ol-rank 
.book-li-fir::before{top:46%;content:'1';width:1.375rem;height:1.9375rem;text-align:center;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAA8CAYAAAFqVWgMAAAABGdBTUEAALGPC/xhBQAAAt5JREFUaAXtmj9ok0EUwN+7rxFSG0nTuLs4dEkTFx3ERRB1dFHp0A6VuHy1LkXc/QM6tM1iEFErFKnOLoKDVdwSrYOCg4KbJI1YqZWSO98Ndx5fLv8kF0O5D8J7d/fey7vfd/nyeAkCXZVSKKQ0L7RNSgNmWpl6XxfShwooXxCwIzIJa7rWjPo22YBVErXmaVI0dWuqpoGp7xpjBHyMiN/NzUkdK+VwEwSMRBeiY3K+wDoxlI5CiLu7Bl2UhBr7DSoSUlppMGTzppHSh5QCiCvp3NKkHgPcknqlNLtFxygudayWZ9foRB2Vg15ezEVQmaCVRS8y94E1RY/Co9AEtOLsVPx9Yun3MhX8hUPs+Fhm4Y05K/VqKbxDdWA+Oq/GuFEOr3EBVxFgJwiCk8mJhRdqsVtZLYcrQsB56ScfmzPyi7jbIE3tEX6mc4VEV9VW02CRBSovXjm5efIZ7ySw3IAPrG+jR+FRaAJa8afCo9AEtOJPhUehCWjlv5+KLcbgChUiD3VKbZSWRSFDWE7lClORGNO18ly2DvW1Vr2UxhIL8XcQ4InRzOLLSEDrkArB+1QITkcXdWDa5moqu3iOZEOXMupkG9fWLx3jdfGcyqs9tNPruFGaO8Wh/sxmPGhzDILTjCN/MmiJNctH5krNN7G3mcHAzVOuzj7PrjbrE3ZFVsX1hBUJV9ITdkVWxfWEFQlX0hN2RVbF9YQVCVfSE3ZFVsX1hBUJV9ITdkVWxfWEFQlX0hN2RVbFbdkEU0btJP34+Z5aUUUxIh6NHSz8MO3F18vxakWcReAXqd102Fz7F133wDp1pv9OfQCEYhAMLyczN2ud+pl24uN8ora9PUlNuTxtImuutdPlL7mfyemA1RDxEwosxuLDD/aN36habXo0Sa3LJAc+Ra3LvAAxbgtLd/ELCrEa1N69nhGc5xDZeozFniYmbn+zOfR7brMU7t9h7AwXIkN39m0qO3rvD+RJ0v+birTCAAAAAElFTkSuQmCC);color:#d7943e;background-repeat:no-repeat;background-size:cover}.book-ol-rank 
.book-li-sec::before{top:46%;content:'2';width:1.375rem;height:1.9375rem;text-align:center;background:url(/readnovelm/img/second-9b77d0630e.png);background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAA8CAYAAAFqVWgMAAAABGdBTUEAALGPC/xhBQAAAtFJREFUaAXtmj9o00EUx9+7xEqtlYKINS6uLlInHcRFEHV0UenQDpWo+CddRNx1cfilTgYpaoUi1dlFcFDBVXRwcNAlv0hFKlpapOb3fDfccb1c0kRyMZT3g/Deu3v3zbvP7/rr/Y4g8JVU66Ste2GoUScoN8v1e9oxvTeH+oM5dVgXESw3WFHPGhuwaqLBOl2Krh8s1U1w/U2TjABPEPCHOzntYznNfhHRdr/DjxWq86qdRD0wo+z+pkHnkzCxTNCQ0DZIA5W67iYZP28cRJgvFXLjJmZ7R/tJNVsBoEHt6yX6mlfeER1089LLueuiusAgi25ULsKWoqAQFJaAdaKtCvvEsl+1zsFVyOOx6d34dl0zB+W0fo8Iin67ifnpVr/FCTcRcS2n8MSVUXxpOju1rDXPWuf0OJxJaUr/I+5UpFk+F7hcKqjhjnZbzcT8dhZ/E+Xm6Wd8FGE9AxG291FQCApLwDqyKgSFJWAdWRWCwhKwzv9dFbwBWUFQNwDhkS1pA6flppBfbef41XbC05hMFmkM/5B+o2166NKwxeKjm9+AeLxUwFeeYDAsV+sP+PRx0u90hRd4M3eWp91wSukPCsXllI7yDF7wS/oAz/Q2JjU6CVn2PJTcd21KnVJA9LTvCmtWENeqCx5q1t937VxrtL/nWJOVgmORNbpC2JCIZYVwLLJGVwgbErGsEI5F1ugKYUMilhXCscgaXSFsSMSyQjgWWaMrhA2JWFYIxyJrdFsegpmkDS3iB4VYwQF4fHUn/nTzF4gGq1/hDBJd4COnQ27fv/juGVhb4/mM7CMfwFW2bIO5SyO41NYgL2n2Gw0vr8E4T6BIQGNed8tQ/wDmMw/cF8riwj6Bwkp+CB5e3oHfQzndakuWaEStwgRPoMj17A/pMqwvyLcsl9Zgin8JdhBQvd+K8OziKC6GBvS67W6NdmUEp4GyA1zbu2t7YPYvGM7VPd16ihYAAAAASUVORK5CYII=);color:#6c9acb;background-repeat:no-repeat;background-size:cover}.book-ol-rank 
.book-li-thi::before{top:46%;content:'3';width:1.375rem;height:1.9375rem;text-align:center;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAA8CAYAAAFqVWgMAAAABGdBTUEAALGPC/xhBQAAAuRJREFUaAXtmUFrU0EQgGdeYrBoQRRvgmKTgxepJz2IF0VN0pMXlYhJQ6X+AhHvevEPGMS8tkKR6EVIchEUbNGr6MFDohE8CIpWaxop2oyz6G7WZE36QjaFsg8eM7s7O2/e9zb7JvMQ+KgU8ySkfqCpUxh4upWuD3UgNpFFcYYxfEQEYQzXGNHQOjuwCqLGOHWKum4MVTfQ9U1jjID3+Pyq35zQBbrvLLe3D3S0ES4JGr0NxUyC25sGXQeJvx3uBnUyRhqIeEU3knpYKoA4H0tOplQb4KbQq0W/QUAjQsdqMb/AW+lR0Rjk4dlwKgI0shhE5M6xouhQOBSKgFKsrYrWjqWu1VL4xfqDQt7xWDz9vNX7R6uW/FtENN3eL9u8u/nXeau7Bgg/EeF0NJF9LAeDymopP08E58U8rJTyU+JFHNRJF/s6Z4ejgbKtLs7+GUKARSsPT+zxVhyL8J1j9RAdCodCEVCKWxUOhSKgFLcqHApFQCkbuyo4AWl4gFc5l5tVIfVQuieFCHPRZDbd5iNTK/njv4gWuP+/tZTOFAthNQTeyf3JzNM2h8YmZ50+Z52Z9kHlmDPNwlhi8hz/Q+cMKfjxtjRzbI3oEZd6IozsBlbLs3FqrpWDuxr+DPRCCY+oeX/4l+7viiJWD4i29Td9A2ZxrNZ+z7ZuxwVsi6z06whLErakI2yLrPTrCEsStqQjbIus9OsISxK2pCNsi6z06whLErakI2yLrPTrCEsStqQjbIus9Nu1CCaNekkuQ70iwNxODN/dlbiwrNu/f1YYWV1aOcsfSS/z98zD+lg/uqqBrXsywmsuDebCFJnbO5FaWvc8zfDT4sPR5W9fUk2AaS7kjGtDPVXkal6Nq3n7TJZcfKuAh7mtEZzZcyL92WQzqL7aE39HswHpJvD3bIIDJr/8JN8hUSH0plyf4q/Qh7i++JK2hB5ET138aJow7L4P5cLuFaqf4eV2kKugL8bimTu/AXHgyM0f56+xAAAAAElFTkSuQmCC);background-repeat:no-repeat;color:#947360;background-size:cover}.book-ol-rank .book-li-fou::before{top:46%;content:'4';width:1.375rem;height:1.9375rem;text-align:center;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAA8CAMAAAAqjKioAAAAP1BMVEVHcEze3t7e3t7e3t7f39/e3t7e3t7e3t7k5OTf39/e3t7////f39/d3d3f39/d3d3e3t7i4uLd3d3e3t7e3t6sJ12jAAAAFHRSTlMAp2PFadQ99RFzlwIzoEnfhQmh2oqDbp8AAAB/SURBVHja7c5BDoMwDETRIXGwk7RA29z/rAWBkLMBV+rSf+PN08hA4GaKAzA1cxOiHUewHTPaDzl27NixY8eOHf8J85Dzk004kWDtFeId5veIs0fhC5yyoEsodVh9qkbVvPoe834/erRP8jE/Y9lHK64ah21+QaVCgruqUAn1C6rCwGvN0XspAAAAAElFTkSuQmCC);background-repeat:no-repeat;color:#bbb2ac;background-size:cover}.page-book-detail .page-fans::before{margin: 0 1rem;}.page-fans .book-ol-comment .book-li::after {margin-left: 
3.375rem;margin-right: 1rem;}@media screen and (min-width:768px){.book-li-fir .book-title-r img, .book-li-thi .book-title-r img{right: 1.5rem;}.page-fans .book-li-sec .book-layout, .page-fans .book-li-fou .book-layout{margin-left: -2.125rem;}.book-ol-rank .book-li-sec::before, .book-ol-rank .book-li-fou::before{left: -1.125rem;}}</style> <div class="module-content page-fans"> <ol class="book-ol book-ol-comment book-ol-rank"> <li class="book-li book-li-fir"><!--<em></em>--> <div class="book-layout"> <div class="book-author-vv fans-avatar"> <img class="book-author-avatar lazyload" data-src="https://tw.linovelib.com/files/system/avatar/334/334259.jpg" src="https://tw.linovelib.com/images/noavatar.jpg" data-expand='50' onerror="this.src='https://tw.linovelib.com/images/noavatar.jpg';this.onerror=null;" alt="國家隊-零貳頭像"> </div> <div class="book-cell"> <div class="book-title-x"> <div class="book-title-r"> <img src="/images/fans/label_top01.png"> </div> <div class="book-title-l"> <p class="book-title" title="粉絲榜第1位">國家隊-零貳</p> <span class="fans-point">熱力值 4784</span> </div> </div> </div> </div> </li> <li class="book-li book-li-sec"><!--<em></em>--> <div class="book-layout"> <div class="book-author-vv fans-avatar"> <img class="book-author-avatar lazyload" data-src="https://tw.linovelib.com/files/system/avatar/531/531512.jpg" src="https://tw.linovelib.com/images/noavatar.jpg" data-expand='50' onerror="this.src='https://tw.linovelib.com/images/noavatar.jpg';this.onerror=null;" alt="真尋歐尼醬頭像"> </div> <div class="book-cell"> <div class="book-title-x"> <div class="book-title-r"> <img src="/images/fans/label_top03.png"> </div> <div class="book-title-l"> <p class="book-title" title="粉絲榜第2位">真尋歐尼醬</p> <span class="fans-point">熱力值 621</span> </div> </div> </div> </div> </li> <li class="book-li book-li-thi"><!--<em></em>--> <div class="book-layout"> <div class="book-author-vv fans-avatar"> <img class="book-author-avatar lazyload" 
data-src="https://tw.linovelib.com/files/system/avatar/243/243404.jpg" src="https://tw.linovelib.com/images/noavatar.jpg" data-expand='50' onerror="this.src='https://tw.linovelib.com/images/noavatar.jpg';this.onerror=null;" alt="kinson72002頭像"> </div> <div class="book-cell"> <div class="book-title-x"> <div class="book-title-r"> <img src="/images/fans/label_top03.png"> </div> <div class="book-title-l"> <p class="book-title" title="粉絲榜第3位">kinson72002</p> <span class="fans-point">熱力值 541</span> </div> </div> </div> </div> </li> <li class="book-li book-li-fou"><!--<em></em>--> <div class="book-layout"> <div class="book-author-vv fans-avatar"> <img class="book-author-avatar lazyload" data-src="https://tw.linovelib.com/files/system/avatar/248/248331.jpg" src="https://tw.linovelib.com/images/noavatar.jpg" data-expand='50' onerror="this.src='https://tw.linovelib.com/images/noavatar.jpg';this.onerror=null;" alt="1051207658頭像"> </div> <div class="book-cell"> <div class="book-title-x"> <div class="book-title-r"> <img src="/images/fans/label_top03.png"> </div> <div class="book-title-l"> <p class="book-title" title="粉絲榜第4位">1051207658</p> <span class="fans-point">熱力值 500</span> </div> </div> </div> </div> </li> </ol> <a href="/fans/1" class="book-li-more book-li-more-comments">更多粉絲</a> </div> </div> <!-- 書評區 --> <div class="module trigger-book-comment"> </div> <!-- 作者區 --> <div class="module"> <div class="module-header"> <div class="module-header-l"> <h3 class="module-title">本書作者</h3> </div> </div> <div class="module-content"> <ol class="book-ol book-ol-author"> <li class="book-li"> <a href="https://tw.linovelib.com/authorarticle/石踏一榮.html" class="book-layout books-author"> <div class="book-author-vv" role="option"> <img class="book-author-avatar" src="/images/noauthor.jpg" alt="石踏一榮的頭像"> <div><aria>作者等級:</aria><em class="tag-honor orange"><i>Lv.5</i></em></div> </div> <div class="book-cell"> <div class="book-title-x"> <h4 class="book-title">石踏一榮</h4> </div> <p 
class="book-desc">輕小說作家</p> </div> <svg class="icon icon-arrow-r" aria-hidden="true"><use xlink:href="#icon-arrow-r"></use></svg> </a> </li> </ol> <style>.page-book-detail .book-ol-author .books-author{padding-bottom:0}</style><div class="module-slide"><ol class="module-slide-ol" id="book-friend-list-container"><li class="module-slide-li"><a href="/novel/1.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/0/1/1s.jpg?1708873318" class="module-slide-img lazyload" data-expand='50' alt="惡魔高校DxD"><figcaption class="module-slide-caption">惡魔高校DxD</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/2570.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/2/2570/2570s.jpg?1717421328" class="module-slide-img lazyload" data-expand='50' alt="真惡魔高校DxD"><figcaption class="module-slide-caption">真惡魔高校DxD</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/2510.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/2/2510/2510s.jpg?1708879030" class="module-slide-img lazyload" data-expand='50' alt="墮天的狗神 -SLASHDOG-"><figcaption class="module-slide-caption">墮天的狗神 -SLASHDOG-</figcaption><p class="module-slide-author"></p></a></li></ol></div> </div> </div> <!-- 譯者者區 --> <!-- 標籤區 --> <div class="module"> <div class="module-header"> <div class="module-header-l"> <h3 class="module-title">作品標籤</h3> </div> </div> <div class="module-content"> <div class="search-tags"> <a href="/wenku/lastupdate_63_0_0_0_0_0_0_1_0.html" class="btn-line-gray">校園</a> <a 
href="/wenku/lastupdate_15_0_0_0_0_0_0_1_0.html" class="btn-line-gray">奇幻</a> <a href="/wenku/lastupdate_18_0_0_0_0_0_0_1_0.html" class="btn-line-gray">戰鬥</a> <a href="/wenku/lastupdate_48_0_0_0_0_0_0_1_0.html" class="btn-line-gray">後宮</a> <a href="/wenku/lastupdate_225_0_0_0_0_0_0_1_0.html" class="btn-line-gray">青梅竹馬</a> <a href="/wenku/lastupdate_223_0_0_0_0_0_0_1_0.html" class="btn-line-gray">人外</a> </div> </div> </div> <!-- 推薦區 --> <div class="module" data-load-status="wait"> <div class="module-header"> <div class="module-header-l"> <h3 class="module-title">富士見文庫輕小說推薦</h3> </div> </div> <div class="module-content"> <div class="module-slide"> <ol class="module-slide-ol" id="book-friend-list-container"> <li class="module-slide-li"><a href="/novel/41.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/0/41/41s.jpg?1705762530" class="module-slide-img lazyload" data-expand='50' alt="不起眼女主角培育法"><figcaption class="module-slide-caption">不起眼女主角培育法</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/2399.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/2/2399/2399s.jpg?1730201784" class="module-slide-img lazyload" data-expand='50' alt="放學後,到異世界咖啡廳喝杯咖啡"><figcaption class="module-slide-caption">放學後,到異世界咖啡廳喝杯咖啡</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/39.html" class="module-slide-a"><span class="corner orange"><em>8.1</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/0/39/39s.jpg?1726420675" class="module-slide-img lazyload" data-expand='50' alt="爆肝工程師的異世界狂想曲"><figcaption 
class="module-slide-caption">爆肝工程師的異世界狂想曲</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/3805.html" class="module-slide-a"><span class="corner orange"><em>9.7</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/3/3805/3805s.jpg?1729580634" class="module-slide-img lazyload" data-expand='50' alt="我買下了與她的每周密會 ~以五千圓為借口,共度兩人時光~"><figcaption class="module-slide-caption">我買下了與她的每周密會 ~以五千圓為借口,共度兩人時光~</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/4109.html" class="module-slide-a"><span class="corner orange"><em>9.8</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/4/4109/4109s.jpg?1727421064" class="module-slide-img lazyload" data-expand='50' alt="貴族千金只願意親近我。"><figcaption class="module-slide-caption">貴族千金只願意親近我。</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/2117.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/2/2117/2117s.jpg?1728826516" class="module-slide-img lazyload" data-expand='50' alt="不正經的魔術講師與禁忌教典"><figcaption class="module-slide-caption">不正經的魔術講師與禁忌教典</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/2734.html" class="module-slide-a"><span class="corner orange"><em>9.8</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/2/2734/2734s.jpg?1721415581" class="module-slide-img lazyload" data-expand='50' alt="轉生公主與天才千金的魔法革命"><figcaption class="module-slide-caption">轉生公主與天才千金的魔法革命</figcaption><p class="module-slide-author"></p></a></li><li 
class="module-slide-li"><a href="/novel/2865.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/2/2865/2865s.jpg?1700724513" class="module-slide-img lazyload" data-expand='50' alt="不起眼女主角培育法 Memorial"><figcaption class="module-slide-caption">不起眼女主角培育法 Memorial</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/1.html" class="module-slide-a"><span class="corner orange"><em>9.9</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/0/1/1s.jpg?1708873318" class="module-slide-img lazyload" data-expand='50' alt="惡魔高校DxD"><figcaption class="module-slide-caption"> 惡魔高校DxD</figcaption><p class="module-slide-author"></p></a></li><li class="module-slide-li"><a href="/novel/784.html" class="module-slide-a"><span class="corner orange"><em>9.8</em></span><img src="https://tw.linovelib.com/themes/zhmb/images/book-cover-no.svg" data-src="https://tw.linovelib.com/files/article/image/0/784/784s.jpg?1698675991" class="module-slide-img lazyload" data-expand='50' alt="夜想譚教誨師"><figcaption class="module-slide-caption">夜想譚教誨師</figcaption><p class="module-slide-author"></p></a></li> </ol> </div> </div> </div> </div> </div> <!-- 呼出評分功能 --> <link rel="stylesheet" href="/sfont/font_1356455_c5d3d3ohlbq.css"> <style>.noscroll{position: relative;}.rate-content input{-webkit-appearance:none;-moz-appearance:none;border:none;outline:none;cursor:pointer;}.rate-content{display:inline-block;}.stars-content{display:flex;flex-flow:row-reverse}.stars-content input[type="radio"]{font-family:"iconfont";font-size:1.875rem;width:1.75rem;height:1.75rem;margin:0.45rem 0.75rem 0rem 0.75rem;text-align:center;transition:transform .2s ease;}input[type="radio"]::after{content:"\e645";color:#999;transition:color .4s 
ease;}input[type="radio"]:checked::after,input[type="radio"]:checked~input[type='radio']::after,input[type="radio"]:hover::after,input[type="radio"]:hover~input[type='radio']::after{content:"\e73c";color:#ffa822;}input[type="radio"]:hover{transform:scale(1.3);}.popup-pay-footer{max-width: 1000px;margin: auto;border-top:0;height:6.75rem;}.popup-pay-main{min-height:5.0625rem;}.gifts-options{margin-top:0;padding:0;height:0;}.rate-content .tip{position:absolute;top:2rem;}.rate-content .tip li{margin:0rem 0.875rem 0rem 0.875rem;display:inline-block;font-size:.75rem;text-align:center;}.popup-pay-submit [type=submit]{position:unset;width:100%;height:100%;right:0;top:0;margin:0;opacity:unset;}.popup-pay-footer .btn-submit{min-width:6.125rem;padding:0 1rem;height:2.75rem;font-size:1rem;text-align:center;border-color:unset;border-width:0;border-style:unset;}.sp-dark-mode .aside-content,.sp-dark-mode .aside-popup{background-color:#232323;}.sp-dark-mode .rate-content .tip li{color: #9e9e9e;}@media (prefers-color-scheme:dark){.aside-content,.aside-popup{background-color:#232323;}}</style> <aside id="myModal" class="modal"> <div class="modal-content"> <i style="font-size:xx-small;">提示:收藏可以選擇是否進行分組</i> <ul class="group-list"> <li onclick="selectGroup(0)"> <svg class="icon blue"><use xlink:href="#icon-booklist-detail"></use></svg> <label for="group0">正在閱讀</label> </li> <li onclick="selectGroup(1)"> <svg class="icon blue"><use xlink:href="#icon-booklist-detail"></use></svg> <label for="group1">新書關注</label> </li> <li onclick="selectGroup(2)"> <svg class="icon blue"><use xlink:href="#icon-booklist-detail"></use></svg> <label for="group2">以後再看</label> </li> <li onclick="selectGroup(3)"> <svg class="icon blue"><use xlink:href="#icon-booklist-detail"></use></svg> <label for="group3">已經看完</label> </li> </ul> </div> </aside> <aside id="aside" class="aside" data-is-bind-scroll="true"> <i id="asideOverlay" class="aside-overlay"></i> <div class="aside-popup"> <div class="popup-pay-body"> 
<!-- 容器 --> <form id="popupPayMain1" class="popup-pay-content active" name="rate" id="rate" action="/modules/article/rating.php?do=submit" method="post" enctype="mulgiftart/form-data"> <div class="popup-pay-footer"> <div class="gift-options-lf"> <div class="rate-content"> <div class="stars-content"> <input type="radio" name="score" value="10" > <input type="radio" name="score" value="8" > <input type="radio" name="score" value="6" > <input type="radio" name="score" value="4" > <input type="radio" name="score" value="2" > </div> <ul class="tip"><li>差勁</li><li>無聊</li><li>一般</li><li>好看</li><li>力薦</li></ul> </div> </div> <!-- 提交按鈕 --> <div class="popup-pay-submit"> <button type="button" class="btn-submit btn-charge-orignal">提交評價</button> <input type="hidden" name="id" value="1" /> </div> </div> </form> </div> </div> </aside> <footer class="footer"> <div class="footer-link fuli-footer-link"> <a href="/bookcase.php" class="footer-link-a dark">書架</a> <a href="/report.php" class="footer-link-a dark">反饋</a> <a href="/help.html" class="footer-link-a dark">幫助</a> <a href="//w.linovelib.com/novel/1.html" class="footer-link-a dark">简体</a> <a href="//cdn.a.ln.yodu.app/" class="footer-link-a dark" id="app"><b>客戶端</b></a> </div> <div class="footer-copy">Copyright &copy; 2024 嗶哩輕小說</div></footer> <div id="searchPopup" class="search-popup" style="display: none;"> <header class="header"> <form id="searchForm" action="https://cse.google.com/cse" class="search-form"> <div class="search-area"> <svg class="icon icon-search"> <use xlink:href="#icon-search"></use> </svg> <input name="cx" type="hidden" value="649de34f5e63448cb"> <input autofocus="autofocus" id="keyword" type="text" name="q" class="search-input" autocomplete="off"> <button id="clearSearchKeyword" type="button" class="search-reset" hidden> <i class="icon icon-clear"> <svg> <use xlink:href="#icon-clear"></use> </svg> </i> </button> </div> <a id="closeSearchPopup" href="javascript:" class="search-cancel">取消</a></form> </header> 
<div id="searchHotHistory" class="search-hot-history"> <div id="searchPopularWords" class="search-popular loading-animation" style="overflow:hidden;transition:height .2s ease 0s;height:auto"> <div class="search-title-bar"> <h5 class="search-title">大家都在搜</h5></div> <div class="search-tags"> <a href="/novel/1.html" class="btn-line-gray jsSearchLink">惡魔高校DxD</a> <a href="/novel/2.html" class="btn-line-gray jsSearchLink">果然我的青春戀愛喜劇搞錯了</a> <a href="/novel/3.html" class="btn-line-gray jsSearchLink">在地下城尋求邂逅是否搞錯了什麼</a> <a href="/novel/4.html" class="btn-line-gray jsSearchLink">精靈使的劍舞</a> <a href="/novel/5.html" class="btn-line-gray jsSearchLink">問題兒童都來自異世界?</a> <a href="/novel/6.html" class="btn-line-gray jsSearchLink">關於我轉生變成史萊姆這檔事</a> <a href="/novel/7.html" class="btn-line-gray jsSearchLink">Campione 弒神者!</a> <a href="/novel/8.html" class="btn-line-gray jsSearchLink">歡迎來到實力至上主義的教室</a> </div><hr><div style="line-height:1.4em;padding:1.2em;">支持 書名、作者、標籤<br><br><div>如果不記得書名,請輸入關鍵詞進行搜索<br>比如:青春|異世界|豬頭少年</div></div></div> </div> </div> <script async src="https://www.googletagmanager.com/gtag/js?id=G-NG72YQN6TX"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-NG72YQN6TX'); </script> <script> var _hmt = _hmt || []; (function() { var hm = document.createElement("script"); hm.src = "https://hm.baidu.com/hm.js?1251eb70bc6856bd02196c68e198ee56"; var s = document.getElementsByTagName("script")[0]; s.parentNode.insertBefore(hm, s); })(); </script> <script> if ('serviceWorker' in navigator) { navigator.serviceWorker.getRegistrations().then(function(registrations) { for (let registration of registrations) { registration.unregister(); } }); } </script> <script>(function(){function c(){var b=a.contentDocument||a.contentWindow.document;if(b){var d=b.createElement('script');d.innerHTML="window.__CF$cv$params={r:'8ee32f819c200447',t:'MTczMzU2MDU2MS4wMDAwMDA='};var 
a=document.createElement('script');a.nonce='';a.src='/cdn-cgi/challenge-platform/scripts/jsd/main.js';document.getElementsByTagName('head')[0].appendChild(a);";b.getElementsByTagName('head')[0].appendChild(d)}}if(document.body){var a=document.createElement('iframe');a.height=1;a.width=1;a.style.position='absolute';a.style.top=0;a.style.left=0;a.style.border='none';a.style.visibility='hidden';document.body.appendChild(a);if('loading'!==document.readyState)c();else if(window.addEventListener)document.addEventListener('DOMContentLoaded',c);else{var e=document.onreadystatechange||function(){};document.onreadystatechange=function(b){e(b);'loading'!==document.readyState&&(document.onreadystatechange=e,c())}}}})();</script></body> </html> ``` ### Logs/tracebacks ```python-traceback None. ``` ### Python Version ```console $ python --version Python 3.13.0 ``` ### aiohttp Version ```console $ python -m pip show aiohttp Name: aiohttp Version: 3.11.8 Summary: Async http client/server framework (asyncio) Home-page: https://github.com/aio-libs/aiohttp Author: Author-email: License: Apache-2.0 Location: C:\Users\14485\AppData\Roaming\Python\Python313\site-packages Requires: aiohappyeyeballs, aiosignal, attrs, frozenlist, multidict, propcache, yarl Required-by: ``` ### multidict Version ```console $ python -m pip show multidict Name: multidict Version: 6.1.0 Summary: multidict implementation Home-page: https://github.com/aio-libs/multidict Author: Andrew Svetlov Author-email: andrew.svetlov@gmail.com License: Apache 2 Location: C:\Users\14485\AppData\Roaming\Python\Python313\site-packages Requires: Required-by: aiohttp, yarl ``` ### propcache Version ```console $ python -m pip show propcache Name: propcache Version: 0.2.0 Summary: Accelerated property cache Home-page: https://github.com/aio-libs/propcache Author: Andrew Svetlov Author-email: andrew.svetlov@gmail.com License: Apache-2.0 Location: C:\Users\14485\AppData\Roaming\Python\Python313\site-packages Requires: Required-by: 
aiohttp, yarl ``` ### yarl Version ```console $ python -m pip show yarl Name: yarl Version: 1.18.0 Summary: Yet another URL library Home-page: https://github.com/aio-libs/yarl Author: Andrew Svetlov Author-email: andrew.svetlov@gmail.com License: Apache-2.0 Location: C:\Users\14485\AppData\Roaming\Python\Python313\site-packages Requires: idna, multidict, propcache Required-by: aiohttp ``` ### OS Windows 11 ### Related component Client ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow the aio-libs Code of Conduct
closed
2024-12-07T09:32:54Z
2024-12-10T17:02:26Z
https://github.com/aio-libs/aiohttp/issues/10141
[ "bug" ]
CodingMoeButa
14
horovod/horovod
tensorflow
2,940
[MAC OS] libc++abi.dylib: terminating with uncaught exception of type gloo::EnforceNotMet
**Environment:** 1. Framework: (TensorFlow, Keras, PyTorch, MXNet) Tensorflow 2. Framework version: 2.4.1 3. Horovod version: 0.22.0 4. MPI version: (NO, with gloo) 5. CUDA version: 6. NCCL version: 7. Python version: 3.7 8. Spark / PySpark version: 9. Ray version: 10. OS and version: macOS Big Sur 11.2.3 11. GCC version: ``` Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1 Apple clang version 12.0.0 (clang-1200.0.32.29) Target: x86_64-apple-darwin20.3.0 Thread model: posix ``` 12. CMake version: 3.20.1 **Bug report:** Please describe erroneous behavior you're observing and steps to reproduce it. ```python import horovod.tensorflow as hvd hvd.init() ``` Then the Python command line terminated with the following exception message: ``` libc++abi.dylib: terminating with uncaught exception of type gloo::EnforceNotMet: [enforce fail at /private/var/folders/l3/18pljl2s6q3d3h_kk1y5nv8h0000gn/T/pip-install-xy0ow8d0/horovod_98a796e5f3cf40929f5431d36dccb56b/third_party/compatible_gloo/gloo/transport/uv/device.cc:129] rp != nullptr. Unable to find address for: 314159 ``` On other platforms, Linux (CPU) and Linux (NCCL2 + GPU), this works fine. Why does this error occur on macOS, and how can it be solved? Any help will be highly appreciated.
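The enforce failure comes from gloo's uv transport while it resolves an address for the local endpoint. One plausible culprit on macOS (an assumption, not confirmed by the report) is that the machine's hostname does not resolve to a reachable address. A stdlib-only check:

```python
import socket

# Check whether the local hostname resolves; gloo's transport needs a
# resolvable, reachable local endpoint to exchange addresses.
host = socket.gethostname()
try:
    addr = socket.gethostbyname(host)
    print(f"{host} resolves to {addr}")
except socket.gaierror:
    # A common macOS remedy is adding "127.0.0.1 <hostname>" to /etc/hosts.
    addr = None
    print(f"{host} does not resolve")
```

If the hostname does not resolve, fixing /etc/hosts before retrying `hvd.init()` is a reasonable first step.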
open
2021-05-26T07:19:13Z
2021-08-14T00:53:01Z
https://github.com/horovod/horovod/issues/2940
[ "bug" ]
ericxsun
4
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
682
[HELP WANTED]: <Errno2 during job application: No such file or directory: 'data_folder\\output\\failed.json'>
### Issue description Everything works until it attempts to apply to a job, then I get the following error: 2024-10-30 12:47:20.673 | DEBUG | src.aihawk_job_manager:extract_job_information_from_tile:459 - Job information extracted: Senior Quality Engineer at Brightpath Associates LLC 2024-10-30 12:47:20.694 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Fire Protection Engineer at Insight Global 2024-10-30 12:47:20.694 | ERROR | src.aihawk_job_manager:start_applying:158 - Error during job application: [Errno 2] No such file or directory: 'data_folder\\output\\failed.json' ### Specific tasks _No response_ ### Additional resources _No response_ ### Additional context _No response_
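Since the traceback points at a missing `data_folder\output\failed.json`, a hedged workaround (assuming the application only appends to this file and never creates it) is to pre-create the directory and an empty JSON list before starting a run:

```python
from pathlib import Path

# Pre-create the output directory and an empty failed.json so the
# application can open it for writing (path taken from the error message).
out_dir = Path("data_folder") / "output"
out_dir.mkdir(parents=True, exist_ok=True)

failed = out_dir / "failed.json"
if not failed.exists():
    failed.write_text("[]")  # an empty JSON array as a safe default

print(failed.exists())
```

`Path` handles the separator difference between `data_folder\output` on Windows and `data_folder/output` elsewhere.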
closed
2024-10-30T16:49:25Z
2024-11-07T00:45:30Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/682
[ "help wanted" ]
LouisLeBoeuf
5
Evil0ctal/Douyin_TikTok_Download_API
fastapi
255
[BUG] Cannot read property 'JS_MD5_NO_COMMON_JS' of null
***Platform where the error occurred?*** `Douyin` ***The endpoint where the error occurred?*** `Web APP` ***Submitted input value?*** `https://v.douyin.com/iJoaujh1/` ***Have you tried again?*** Yes, the error still exists after X time after the error occurred. ***Have you checked the readme or interface documentation for this project?*** Yes, and it is very sure that the problem is caused by the program. All parsing logs are as follows ``` Use http://10.1.1.2:80/ to access the application 当前链接平台为:douyin 正在获取视频ID... 正在通过抖音分享链接获取原始链接... 获取原始链接成功, 原始链接为: https://www.iesdouyin.com/share/video/7268837076525665596/ 获取到的抖音视频ID为: 7268837076525665596 获取视频ID成功,视频ID为:7268837076525665596 正在获取视频数据... 正在获取抖音视频数据... 获取抖音视频数据失败!原因:TypeError: Cannot read property 'JS_MD5_NO_COMMON_JS' of null 正在获取抖音视频数据... 获取抖音视频数据失败!原因:TypeError: Cannot read property 'JS_MD5_NO_COMMON_JS' of null 正在获取抖音视频数据... 获取抖音视频数据失败!原因:TypeError: Cannot read property 'JS_MD5_NO_COMMON_JS' of null 正在获取抖音视频数据... 获取抖音视频数据失败!原因:TypeError: Cannot read property 'JS_MD5_NO_COMMON_JS' of null exception calling callback for <Future at 0x10b8b3d90 state=finished raised RetryError> Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__ result = await fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/Caelebs/Desktop/Douyin_TikTok_Download_API-main/scraper.py", line 268, in get_douyin_video_data raise e File "/Users/Caelebs/Desktop/Douyin_TikTok_Download_API-main/scraper.py", line 253, in get_douyin_video_data api_url = self.generate_x_bogus_url(api_url) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/Caelebs/Desktop/Douyin_TikTok_Download_API-main/scraper.py", line 199, in generate_x_bogus_url xbogus = execjs.compile(open('./X-Bogus.js').read()).call('sign', query, self.headers['User-Agent']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execjs/_abstract_runtime_context.py", line 37, in call return self._call(name, *args) ^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execjs/_external_runtime.py", line 92, in _call return self._eval("{identifier}.apply(this, {args})".format(identifier=identifier, args=args)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execjs/_external_runtime.py", line 78, in _eval return self.exec_(code) ^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execjs/_abstract_runtime_context.py", line 18, in exec_ return self._exec_(source) ^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execjs/_external_runtime.py", line 88, in _exec_ return self._extract_result(output) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execjs/_external_runtime.py", line 167, in _extract_result raise ProgramError(value) execjs._exceptions.ProgramError: TypeError: Cannot read property 'JS_MD5_NO_COMMON_JS' of null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 340, in _invoke_callbacks callback(self) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pywebio/session/coroutinebased.py", line 347, in _wakeup self.step(future.result()) ^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/Users/Caelebs/Desktop/Douyin_TikTok_Download_API-main/scraper.py", line 432, in hybrid_parsing data = await self.get_douyin_video_data( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped return await fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__ do = self.iter(retry_state=retry_state) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter raise retry_exc from fut.exception() tenacity.RetryError: RetryError[<Future at 0x10b497d90 state=finished raised ProgramError>] ```
closed
2023-08-28T14:21:48Z
2023-08-28T14:26:41Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/255
[ "BUG", "enhancement" ]
Caelebs
1
pydata/pandas-datareader
pandas
828
Yahoo Finance data for an specific hour
Hi, I want to download historical stock price data **for a specific hour**. Does anyone know how to do this? I am using this code, but it only extracts end-of-day data. I want to extract data for midday: ``` import pandas as pd import pandas_datareader.data as wb failed=[] passed=[] def collect_data(data): mydata = pd.DataFrame() for t in data: try: mydata[t] = wb.DataReader(t, data_source='yahoo', start='01-10-2019')['Adj Close'] passed.append(t) except (IOError, KeyError): failed.append(t) print(mydata) return mydata ``` Thank you!
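pandas-datareader's Yahoo endpoint only returns daily bars, so selecting a specific hour needs an intraday source (yfinance with `interval="60m"` is one assumed option). The filtering side can be sketched on synthetic data with `DataFrame.between_time`:

```python
import pandas as pd

# Synthetic hourly prices standing in for an intraday download.
idx = pd.date_range("2019-10-01 09:00", periods=8, freq="h")
prices = pd.DataFrame({"Adj Close": [10.0 + i for i in range(8)]}, index=idx)

# Keep only the 12:00 observation of each session.
midday = prices.between_time("12:00", "12:00")
print(midday)
```

`between_time` works on any DatetimeIndex, so the same call applies once the hourly download is in place.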
open
2020-09-22T17:25:17Z
2021-08-08T22:32:34Z
https://github.com/pydata/pandas-datareader/issues/828
[]
victorsm123
2
pydantic/logfire
pydantic
859
Add support to track disk usage
### Description Currently, there is no support for tracking disk usage. Adding it would make it possible to set up alerts when disk usage grows beyond a set threshold.
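Until such a metric exists, a stdlib-only snapshot of disk usage can be collected and fed into whatever gauge mechanism is available (the exact Logfire metrics call is omitted here, since it is an assumption):

```python
import shutil

# Snapshot disk usage for the root filesystem; percent_used is the
# number an alert threshold would compare against.
usage = shutil.disk_usage("/")
percent_used = usage.used / usage.total * 100

print(f"{percent_used:.1f}% used of {usage.total / 2**30:.1f} GiB")
```

Running this on a schedule and reporting `percent_used` as a gauge would cover the alerting use case described above.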
open
2025-02-12T13:49:37Z
2025-02-14T09:42:27Z
https://github.com/pydantic/logfire/issues/859
[ "Feature Request" ]
rishabhc32
2