Dataset schema (one GitHub issue per row):

- repo_name: string (length 9 to 75)
- topic: string (30 classes)
- issue_number: int64 (1 to 203k)
- title: string (length 1 to 976)
- body: string (length 0 to 254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38 to 105)
- labels: list (length 0 to 9)
- user_login: string (length 1 to 39)
- comments_count: int64 (0 to 452)
slackapi/python-slack-sdk
asyncio
1,218
SlackObjectFormationError: options attribute must have between 2 and 5 items whereas API allows a single option
### Reproducible in:

#### The Slack SDK version
slack-sdk==3.16.1

#### Python runtime version
Python 3.10.3

#### OS info
ProductName: macOS
ProductVersion: 12.3.1
BuildVersion: 21E258
Darwin Kernel Version 21.4.0: Fri Mar 18 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000

#### Steps to reproduce:

```py
Message(
    text="placeholder",
    blocks=[
        SectionBlock(
            text="placeholder",
            accessory=OverflowMenuElement(
                options=[Option(value="placeholder", text="placeholder")]
            ),
        )
    ],
).to_dict()
```

^ This will fail with `slack_sdk.errors.SlackObjectFormationError: options attribute must have between 2 and 5 item`

It used to be that the API required a minimum of two option entries in the overflow menu, but for a number of years now it has permitted just a single one.

### Expected result:

No error is thrown and the object is converted to a dict.

Block Kit Builder screenshot to confirm that the API allows a single option:

<img width="1316" alt="image" src="https://user-images.githubusercontent.com/7718702/170281869-ce3b3f66-29b8-4bc5-9860-2445efd366f2.png">

```json
{
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "This is a section block with an overflow menu with a single option."
      },
      "accessory": {
        "type": "overflow",
        "options": [
          {
            "text": {
              "type": "plain_text",
              "text": "A single option",
              "emoji": true
            }
          }
        ]
      }
    }
  ]
}
```

### Actual result:

An error is thrown: `slack_sdk.errors.SlackObjectFormationError: options attribute must have between 2 and 5 item`
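Until the SDK's validator accepts a single option, a possible workaround (a sketch, not the SDK's own recommendation) is to build the block payload out of plain dicts: `chat_postMessage` accepts `blocks` as a list of dicts, so the typed-class validation never runs. The `overflow_section` helper below is hypothetical; the payload shape is copied from the Block Kit Builder JSON above.

```python
# Workaround sketch (hypothetical helper): build the overflow-menu block as
# plain dicts, sidestepping the slack_sdk model validation that rejects a
# single option. Shape taken from the Block Kit Builder example above.

def overflow_section(section_text: str, option_text: str) -> dict:
    """Return a section block whose overflow menu holds a single option."""
    return {
        "type": "section",
        "text": {"type": "mrkdwn", "text": section_text},
        "accessory": {
            "type": "overflow",
            "options": [
                {"text": {"type": "plain_text", "text": option_text, "emoji": True}}
            ],
        },
    }

payload = {"blocks": [overflow_section("placeholder", "A single option")]}
print(len(payload["blocks"][0]["accessory"]["options"]))  # 1
```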
closed
2022-05-25T14:08:33Z
2022-05-27T04:44:44Z
https://github.com/slackapi/python-slack-sdk/issues/1218
[ "bug", "web-client", "Version: 3x" ]
wilhelmklopp
2
OthersideAI/self-operating-computer
automation
133
'source' is not recognized as an internal or external command, operable program or batch file.
'source' is not recognized as an internal or external command, operable program or batch file. ![Screenshot (111)](https://github.com/OthersideAI/self-operating-computer/assets/155235981/df756526-17a4-480b-9a04-c790a9be415d)
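For context, `source` is a Unix shell builtin, so it does not exist in Windows `cmd.exe`; the usual fix is to run the venv's activation script directly (`venv\Scripts\activate.bat`, or `venv\Scripts\Activate.ps1` in PowerShell). A small sketch that picks the right command per platform, assuming the environment directory is named `venv`:

```python
import os
import sys

def activation_command(venv_dir: str = "venv") -> str:
    """Return the shell command that activates a virtualenv on this OS.

    `source` only exists in Unix shells (bash/zsh); on Windows cmd.exe the
    equivalent is to run the activate batch script directly.
    """
    if sys.platform == "win32":
        return os.path.join(venv_dir, "Scripts", "activate.bat")
    return f"source {venv_dir}/bin/activate"

print(activation_command())
```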
closed
2024-01-13T19:18:50Z
2024-01-14T13:53:26Z
https://github.com/OthersideAI/self-operating-computer/issues/133
[]
theexpeert
4
ultralytics/ultralytics
pytorch
19,253
(HUB) Export HMZ-Compatible .onnx or .hef
### Search before asking - [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests. ### Description The Hailo model zoo compiler expects specific names for Yolo output layers. Add support for hailo-compatible onnx or hailo-optimized .hef models. Unfortunately, you cannot simply export an ultralytics onnx model and compile it for Hailo 8 at the moment. ### Use case Quickly training models for edge devices integrating Hailo-8 ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
open
2025-02-14T20:50:09Z
2025-02-15T10:29:50Z
https://github.com/ultralytics/ultralytics/issues/19253
[ "enhancement", "HUB", "embedded", "exports" ]
bhjelstrom
2
microsoft/nni
tensorflow
5,607
Error on creating custom trial
**Describe the issue**:

Hi, I have a problem when I add a custom trial to the experiment from the web UI. When I press the "customized trial" button in the trial jobs list, create a new trial, and start it running, the experiment suddenly errors out because the trial doesn't have the `parameter_id` field. Is there any solution for it? Thank you.

**Environment**:
- NNI version: 2.10.1
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 22.04.2 LTS
- Server OS (for remote mode only):
- Python version: 3.10.11
- PyTorch/TensorFlow version: PyTorch 1.13.1
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No

**Configuration**:
```
search_space:
  n_rec:
    _type: choice
    _value: [ 64, 128, 256 ]
  lr:
    _type: loguniform
    _value: [ 0.000001, 0.1 ]
  thr:
    _type: uniform
    _value: [ 0.0, 10.0 ]
  n_ref:
    _type: loguniform
    _value: [ 0.0001, 0.5 ]
  tau_v:
    _type: loguniform
    _value: [ 3.0, 1000. ]
  tau_a:
    _type: loguniform
    _value: [ 3.0, 1000. ]
  tau_o:
    _type: loguniform
    _value: [ 3.0, 1000. ]
  theta:
    _type: uniform
    _value: [ 0.0, 10.0 ]
  sg_scale:
    _type: uniform
    _value: [ 0.03, 1.0 ]
  freq:
    _type: uniform
    _value: [ 10.0, 100.0 ]
trial_command: python nni_experiment.py
trial_code_directory: .
trial_concurrency: 1
max_trial_number: 100
assessor:
  name: Medianstop
  class_args:
    start_step: 50
tuner:
  name: TPE
  class_args:
    optimize_mode: maximize
training_service:
  platform: local
```

**Log message**:
- nnimanager.log:
```
[2023-06-12 11:03:46] INFO (main) Start NNI manager
[2023-06-12 11:03:46] INFO (NNIDataStore) Datastore initialization done
[2023-06-12 11:03:46] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2023-06-12 11:03:46] INFO (RestServer) REST server started.
[2023-06-12 11:03:46] INFO (NNIManager) Starting experiment: c1qxbaym
[2023-06-12 11:03:47] INFO (NNIManager) Setup training service...
[2023-06-12 11:03:47] INFO (LocalTrainingService) Construct local machine training service.
[2023-06-12 11:03:47] INFO (NNIManager) Setup tuner... [2023-06-12 11:03:47] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING [2023-06-12 11:03:48] INFO (NNIManager) Add event listeners [2023-06-12 11:03:48] INFO (LocalTrainingService) Run local machine training service. [2023-06-12 11:03:48] INFO (NNIManager) NNIManager received command from dispatcher: ID, [2023-06-12 11:03:48] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"n_rec": 256, "lr": 0.011448074517379067, "thr": 3.597306519090515, "n_ref": 0.008715046131089057, "tau_v": 7.573031603546667, "tau_a": 29.990669571487363, "tau_o": 128.18680794623737, "theta": 2.102562278413914, "sg_scale": 0.9624626013431545, "freq": 45.56603200372938}, "parameter_index": 0} [2023-06-12 11:03:48] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"n_rec": 256, "lr": 3.3218768175937776e-05, "thr": 7.576678438876874, "n_ref": 0.4766486382392336, "tau_v": 4.566249658132258, "tau_a": 34.182390708234216, "tau_o": 3.093741894351405, "theta": 8.826553595697263, "sg_scale": 0.5791697231669481, "freq": 68.55767486427999}, "parameter_index": 0} [2023-06-12 11:03:53] INFO (NNIManager) submitTrialJob: form: { sequenceId: 0, hyperParameters: { value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"n_rec": 256, "lr": 0.011448074517379067, "thr": 3.597306519090515, "n_ref": 0.008715046131089057, "tau_v": 7.573031603546667, "tau_a": 29.990669571487363, "tau_o": 128.18680794623737, "theta": 2.102562278413914, "sg_scale": 0.9624626013431545, "freq": 45.56603200372938}, "parameter_index": 0}', index: 0 }, placementConstraint: { type: 'None', gpus: [] } } [2023-06-12 11:03:53] INFO (NNIManager) submitTrialJob: form: { sequenceId: 1, hyperParameters: { value: '{"parameter_id": 1, "parameter_source": "algorithm", "parameters": 
{"n_rec": 256, "lr": 3.3218768175937776e-05, "thr": 7.576678438876874, "n_ref": 0.4766486382392336, "tau_v": 4.566249658132258, "tau_a": 34.182390708234216, "tau_o": 3.093741894351405, "theta": 8.826553595697263, "sg_scale": 0.5791697231669481, "freq": 68.55767486427999}, "parameter_index": 0}', index: 0 }, placementConstraint: { type: 'None', gpus: [] } } [2023-06-12 11:03:58] INFO (NNIManager) Trial job ExCBx status changed from WAITING to RUNNING [2023-06-12 11:03:58] INFO (NNIManager) Trial job ciNRn status changed from WAITING to RUNNING [2023-06-12 11:14:50] INFO (NNITensorboardManager) tensorboard: ExCBx,ciNRn /home/sic/nni-experiments/c1qxbaym/trials/ExCBx/tensorboard,/home/sic/nni-experiments/c1qxbaym/trials/ciNRn/tensorboard [2023-06-12 11:14:50] INFO (NNITensorboardManager) tensorboard start command: tensorboard,--bind_all --logdir_spec=0-ExCBx:/home/sic/nni-experiments/c1qxbaym/trials/ExCBx/tensorboard,1-ciNRn:/home/sic/nni-experiments/c1qxbaym/trials/ciNRn/tensorboard,--port=6006 [2023-06-12 11:14:50] INFO (NNITensorboardManager) tensorboard task id: zSWdn [2023-06-12 11:14:50] INFO (NNIRestHandler) TensorboardTaskDetail { id: 'zSWdn', status: 'RUNNING', trialJobIdList: [ 'ExCBx', 'ciNRn' ], trialLogDirectoryList: [ '/home/sic/nni-experiments/c1qxbaym/trials/ExCBx/tensorboard', '/home/sic/nni-experiments/c1qxbaym/trials/ciNRn/tensorboard' ], pid: 2519812, port: '6006' } [2023-06-12 11:14:50] INFO (NNITensorboardManager) tensorboardTask zSWdn status update: RUNNING to DOWNLOADING_DATA [2023-06-12 11:14:50] INFO (NNITensorboardManager) tensorboardTask zSWdn status update: DOWNLOADING_DATA to RUNNING [2023-06-12 11:17:08] INFO (NNIManager) User cancelTrialJob: ExCBx [2023-06-12 11:17:09] INFO (NNIManager) Trial job ExCBx status changed from RUNNING to USER_CANCELED [2023-06-12 11:17:09] INFO (NNIManager) submitTrialJob: form: { sequenceId: 2, hyperParameters: { value: 
'{"parameter_id":null,"parameter_source":"customized","parameters":{"n_rec":256,"lr":0.0005,"thr":1,"n_ref":0.05,"tau_v":1000,"tau_a":1000,"tau_o":1000,"theta":5,"sg_scale":0.3,"freq":100}}', index: 0 } } [2023-06-12 11:17:09] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"n_rec": 256, "lr": 2.228695710339134e-05, "thr": 2.6451070628903253, "n_ref": 0.002383796963784488, "tau_v": 21.571986754085653, "tau_a": 3.8068845699351344, "tau_o": 3.970901210844971, "theta": 5.8263479076965226, "sg_scale": 0.7507240392417996, "freq": 30.6873476711282}, "parameter_index": 0} [2023-06-12 11:17:14] INFO (NNIManager) Trial job SjU2n status changed from WAITING to RUNNING [2023-06-12 11:17:32] INFO (NNITensorboardManager) Forced stopping all tensorboard task. [2023-06-12 11:17:32] INFO (NNITensorboardManager) tensorboardTask zSWdn status update: RUNNING to STOPPING [2023-06-12 11:17:32] INFO (NNITensorboardManager) tensorboardTask zSWdn status update: STOPPING to STOPPED [2023-06-12 11:17:32] INFO (NNITensorboardManager) All tensorboard task stopped. 
[2023-06-12 11:19:15] ERROR (tuner_command_channel.WebSocketChannel) Error: Error: tuner_command_channel: Tuner closed connection at WebSocket.handleWsClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:83:26) at WebSocket.emit (node:events:538:35) at WebSocket.emitClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10) at Socket.socketOnClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15) at Socket.emit (node:events:526:28) at TCP.<anonymous> (node:net:687:12) ``` - dispatcher.log: ``` [2023-06-12 11:03:48] INFO (nni.tuner.tpe/MainThread) Using random seed 606063535 [2023-06-12 11:03:48] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started [2023-06-12 11:17:09] WARNING (medianstop_Assessor/Thread-2 (command_queue_worker)) trial_end: trial_job_id does not exist in running_history [2023-06-12 11:18:34] ERROR (nni.runtime.msg_dispatcher_base/Thread-2 (command_queue_worker)) '<=' not supported between instances of 'NoneType' and 'int' Traceback (most recent call last): File "/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 109, in command_queue_worker self.process_command(command, data) File "/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 155, in process_command command_handlers[command](data) File "/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni/runtime/msg_dispatcher.py", line 135, in handle_report_metric_data if self.is_created_in_previous_exp(data['parameter_id']): File "/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni/recoverable.py", line 47, in is_created_in_previous_exp return param_id <= self.recovered_max_param_id TypeError: '<=' not supported between instances of 'NoneType' and 
'int' [2023-06-12 11:19:15] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher exiting... [2023-06-12 11:19:15] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher terminiated ``` - nnictl stdout and stderr: nnictl_stderr.log ``` -------------------------------------------------------------------------------- Experiment c1qxbaym start: 2023-06-12 11:03:45.935189 -------------------------------------------------------------------------------- node:events:504 throw er; // Unhandled 'error' event ^ Error: tuner_command_channel: Tuner closed connection at WebSocket.handleWsClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:83:26) at WebSocket.emit (node:events:538:35) at WebSocket.emitClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10) at Socket.socketOnClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15) at Socket.emit (node:events:526:28) at TCP.<anonymous> (node:net:687:12) Emitted 'error' event at: at WebSocketChannelImpl.handleError (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:135:22) at WebSocket.handleWsClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:83:14) at WebSocket.emit (node:events:538:35) [... lines matching original stack trace ...] 
at TCP.<anonymous> (node:net:687:12) Thrown at: at handleWsClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:83:26) at emit (node:events:538:35) at emitClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10) at socketOnClose (/home/sic/anaconda3/envs/ENV/lib/python3.10/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15) at emit (node:events:526:28) at node:net:687:12 ``` nnictl_stdout.log ``` -------------------------------------------------------------------------------- Experiment c1qxbaym start: 2023-06-12 11:03:45.935189 -------------------------------------------------------------------------------- {'n_rec': {'_type': 'choice', '_value': [64, 128, 256]}, 'lr': {'_type': 'loguniform', '_value': [1e-06, 0.1]}, 'thr': {'_type': 'uniform', '_value': [0, 10]}, 'n_ref': {'_type': 'loguniform', '_value': [0.0001, 0.5]}, 'tau_v': {'_type': 'loguniform', '_value': [3, 1000]}, 'tau_a': {'_type': 'loguniform', '_value': [3, 1000]}, 'tau_o': {'_type': 'loguniform', '_value': [3, 1000]}, 'theta': {'_type': 'uniform', '_value': [0, 10]}, 'sg_scale': {'_type': 'uniform', '_value': [0.03, 1]}, 'freq': {'_type': 'uniform', '_value': [10, 100]}} 2 {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 0, 'value': '0.04779411852359772'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 0, 'value': '0.04779411852359772'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 1, 'value': '0.048253677785396576'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 1, 'value': '0.048253677785396576'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 2, 'value': '0.04641544073820114'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 2, 
'value': '0.04641544073820114'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 3, 'value': '0.048713237047195435'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 3, 'value': '0.048713237047195435'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 4, 'value': '0.048253677785396576'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 4, 'value': '0.048253677785396576'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 5, 'value': '0.049172792583703995'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 5, 'value': '0.049172792583703995'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 6, 'value': '0.04963235184550285'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 6, 'value': '0.04963235184550285'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 7, 'value': '0.046875'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 7, 'value': '0.046875'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 8, 'value': '0.04779411852359772'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 8, 'value': '0.04779411852359772'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 9, 'value': '0.048253677785396576'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 9, 'value': '0.048253677785396576'} {'parameter_id': 0, 'trial_job_id': 'ExCBx', 'type': 'PERIODICAL', 'sequence': 10, 'value': '0.046875'} {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 10, 'value': '0.046875'} {'trial_job_id': 'ExCBx', 'event': 'USER_CANCELED', 'hyper_params': '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"n_rec": 256, "lr": 
0.011448074517379067, "thr": 3.597306519090515, "n_ref": 0.008715046131089057, "tau_v": 7.573031603546667, "tau_a": 29.990669571487363, "tau_o": 128.18680794623737, "theta": 2.102562278413914, "sg_scale": 0.9624626013431545, "freq": 45.56603200372938}, "parameter_index": 0}'} 1 {'parameter_id': 1, 'trial_job_id': 'ciNRn', 'type': 'PERIODICAL', 'sequence': 11, 'value': '0.048713237047195435'} {'parameter_id': None, 'trial_job_id': 'SjU2n', 'type': 'PERIODICAL', 'sequence': 0, 'value': '0.04779411852359772'} ```
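For what it's worth, the dispatcher traceback reduces to comparing a `parameter_id` of `None` (customized trials are submitted with `"parameter_id": null`, as the submitTrialJob log above shows) against an `int`. A minimal sketch of the failure and a hypothetical None-safe guard (an illustration, not NNI's actual fix):

```python
def is_created_in_previous_exp(param_id, recovered_max_param_id=-1):
    """Sketch of a None-safe version of the failing comparison.

    Customized trials arrive with parameter_id = None, so the raw
    `param_id <= recovered_max_param_id` raises TypeError. Treating None
    as "not from a previous experiment" avoids the crash.
    """
    if param_id is None:
        return False
    return param_id <= recovered_max_param_id

# The unguarded comparison reproduces the dispatcher error:
try:
    None <= -1
except TypeError as exc:
    print(exc)  # '<=' not supported between instances of 'NoneType' and 'int'

print(is_created_in_previous_exp(None))   # False instead of crashing
print(is_created_in_previous_exp(0, 5))   # True
```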
closed
2023-06-12T09:52:39Z
2023-06-13T14:54:54Z
https://github.com/microsoft/nni/issues/5607
[]
ferqui
2
CorentinJ/Real-Time-Voice-Cloning
python
768
Error training in Spanish
Hello, I am trying to train in Spanish and I don't know how to solve this error. Windows 10, RTX 3090. I used a dataset like LibriTTS, modified the synthesizer utils symbols for Spanish, and in hparams.py I changed the cleaners to basic_cleaners.

The first step went well, I think; it did not raise any errors.

![syntetizador](https://user-images.githubusercontent.com/68554553/120906721-2251a900-c65c-11eb-878d-294d9358ec18.jpg)

In that step I had to modify the texts, since uppercase accented vowels like "Á" at the beginning of a word gave me an error, while lowercase "á" worked fine.

The second step, using: python synthesizer_preprocess_embeds.py datasets_root/SV2TTS/synthesizer, gives me the following error:

![error](https://user-images.githubusercontent.com/68554553/120906761-807e8c00-c65c-11eb-9055-fb0befb75c09.jpg)

I am a self-taught beginner in programming; I spent hours on Google and did not find how to solve it.
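One way to avoid editing every transcript: if the uppercase accented vowels are simply missing from the symbol set, the cleaner can lowercase them before filtering. The sketch below is hypothetical (`SPANISH_CHARS` is an assumed symbol set, not the repo's, and `spanish_basic_cleaner` is an invented name):

```python
import unicodedata

# Assumed Spanish symbol set for illustration; the real one lives in the
# repo's synthesizer utils symbols module.
SPANISH_CHARS = set("abcdefghijklmnopqrstuvwxyzáéíóúüñ ?!.,;:¿¡-'")

def spanish_basic_cleaner(text: str) -> str:
    """Lowercase text so 'Á', 'É', ... map onto the lowercase symbols,
    then drop anything outside the (assumed) symbol set."""
    text = text.lower()                        # 'Árbol' -> 'árbol'
    text = unicodedata.normalize("NFC", text)  # compose combining accents
    return "".join(ch for ch in text if ch in SPANISH_CHARS)

print(spanish_basic_cleaner("Árbol GRANDE, ¿verdad?"))  # árbol grande, ¿verdad?
```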
closed
2021-06-05T22:34:05Z
2021-06-06T12:06:13Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/768
[]
necrcon
2
OFA-Sys/Chinese-CLIP
nlp
280
When fine-tuning on the three image-text retrieval datasets, did you use two-stage fine-tuning?
I trained directly with the muge_finetune_vit-b-16_rbt-base.sh script provided on the latest master, with freeze_vision="", a single V100, and all other parameters unchanged. After fine-tuning I submitted the results to the official site, and the evaluation scores came out lower than zero-shot.

zero-shot: Recall@1=52.16, Recall@5=76.22, Recall@10=83.97, Mean Recall=70.78
finetune: Recall@1=48.82, Recall@5=75.8, Recall@10=84.59, Mean Recall=69.74

If you used two-stage fine-tuning, did you adjust any parameters, such as the learning rate or number of epochs?

Also, have you recently pretrained any other image-text matching models, such as ALBEF or BLIP-2?
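As a sanity check, the reported Mean Recall values are exactly the arithmetic mean of Recall@1/5/10, so the numbers above are internally consistent:

```python
def mean_recall(r1: float, r5: float, r10: float) -> float:
    """Mean Recall as the arithmetic mean of Recall@1/5/10, rounded to 2 dp."""
    return round((r1 + r5 + r10) / 3, 2)

print(mean_recall(52.16, 76.22, 83.97))  # zero-shot -> 70.78
print(mean_recall(48.82, 75.8, 84.59))   # finetune  -> 69.74
```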
open
2024-03-27T07:06:20Z
2024-04-21T11:16:02Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/280
[]
xiuxiuxius
6
HIT-SCIR/ltp
nlp
484
No function named chomp in src/utils/strutils.hpp:54
src/parser/conllreader.h:54 calls the function `chomp(line);`, but there is no function named `chomp` in any of the source files. In fact, `trim` should be called here. C++, ltp version 3.4.0.
open
2021-01-26T06:37:45Z
2021-01-26T06:38:48Z
https://github.com/HIT-SCIR/ltp/issues/484
[]
dooothink
0
PrefectHQ/prefect
data-science
17,085
Prefect Worker logging concurrency empty queue error
### Bug summary We've run into a logging error which occurs in our Prefect worker deployed in AWS EKS. This is only a logging issue. The worker seems to work fine and flows can run to completion as expected. This logging error occurs with or without flows running. Traceback with `PREFECT_DEBUG_MODE=1` ``` │ 21:29:53.117 | DEBUG | QueueingSpanExporterThread | prefect._internal.concurrency - Running call get(timeout=1.999997219 │ │ 999841) in thread 'QueueingSpanExporterThread' │ │ 21:29:53.117 | DEBUG | QueueingSpanExporterThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get │ │ ' RUNNING, runtime=0.00> entered │ │ 21:29:53.338 | DEBUG | QueueingLogExporterThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get' │ │ COMPLETED, runtime=2.00> exited │ │ 21:29:53.338 | DEBUG | QueueingLogExporterThread | prefect._internal.concurrency - Encountered exception in call get(<dr │ │ opped>) │ │ Traceback (most recent call last): │ │ File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 363, in _run_sync │ │ result = self.fn(*self.args, **self.kwargs) │ │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │ │ File "/usr/local/lib/python3.11/queue.py", line 179, in get │ │ raise Empty │ │ _queue.Empty ``` ### Version info ```Text # Prefect version I’ve tried various Prefect + Python versions for our Prefect worker. The Docker images tried are: `prefecthq/prefect:3.1.15-python3.10-kubernetes` `prefecthq/prefect:3.1.15-python3.11-kubernetes` `prefecthq/prefect:3.1.15-python3.12-kubernetes` # AWS EKS information * AWS EKS v1.3.1 or v1.3.2 (bug exists in both versions) * EKS cluster has stable, typical addons: kube-proxy, coredns, etc. 
* EKS default Managed Nodegroups or Managed Nodepools (bug occurs in both compute environments) * AWS EKS AutoMode (bug exists even with AWS managed EKS services such as kube-proxy, coredns, etc) # HELM Information helm version: version.BuildInfo{Version:"v3.14.4", GitCommit:"81c902a123462fd4052bc5e9aa9c513c4c8fc142", GitTreeState:"clean", GoVersion:"go1.22.2 helm chart version info: NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION prefect-worker-3 prefect 1 2025-01-31 08:09:41.422611 -0700 MST deployed ``` ### Additional context My `values.yaml` ends up being ``` ## Common parameters # -- partially overrides common.names.name nameOverride: "" # -- fully override common.names.fullname fullnameOverride: "prefect-worker-3" # -- fully override common.names.namespace namespaceOverride: "prefect" # -- labels to add to all deployed objects commonLabels: {} # -- annotations to add to all deployed objects commonAnnotations: {} ## Deployment Configuration worker: autoscaling: # -- enable autoscaling for the worker enabled: false # -- minimum number of replicas to scale down to minReplicas: 1 # -- maximum number of replicas to scale up to maxReplicas: 1 # -- target CPU utilization percentage for scaling the worker targetCPUUtilizationPercentage: 80 # -- target memory utilization percentage for scaling the worker targetMemoryUtilizationPercentage: 80 # -- unique cluster identifier, if none is provided this value will be infered at time of helm install clusterUid: "" initContainer: # -- the resource specifications for the sync-base-job-template initContainer # Defaults to the resources defined for the worker container resources: {} # -- the requested resources for the sync-base-job-template initContainer # requests: # memory: 256Mi # cpu: 100m # -- the requested limits for the sync-base-job-template initContainer # limits: # memory: 1Gi # cpu: 1000m ## ref: 
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container # -- security context for the sync-base-job-template initContainer containerSecurityContext: # -- set init containers' security context runAsUser runAsUser: 1001 # -- set init containers' security context runAsNonRoot runAsNonRoot: true # -- set init containers' security context readOnlyRootFilesystem readOnlyRootFilesystem: true # -- set init containers' security context allowPrivilegeEscalation allowPrivilegeEscalation: false # -- set init container's security context capabilities capabilities: {} image: # -- worker image repository repository: prefecthq/prefect ## prefect tag is pinned to the latest available image tag at packaging time. Update the value here to ## override pinned tag # -- prefect image tag (immutable tags are recommended) prefectTag: 3.1.15-python3.11-kubernetes # -- worker image pull policy pullPolicy: IfNotPresent ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## e.g: ## pullSecrets: ## - myRegistryKeySecretName # -- worker image pull secrets pullSecrets: [] # -- enable worker image debug mode debug: false ## general configuration of the worker config: # -- the work pool that your started worker will poll. workPool: "jupiter-eos-prefect-3" # -- one or more work queue names for the worker to pull from. if not provided, the worker will pull from all work queues in the work pool workQueues: [jupiter-eos-prefect-3] # -- how often the worker will query for runs queryInterval: 5 # -- when querying for runs, how many seconds in the future can they be scheduled prefetchSeconds: 10 # -- connect using HTTP/2 if the server supports it (experimental) http2: true ## You can set the worker type here. ## The default image includes only the type "kubernetes". 
## Custom workers must be properly registered with the prefect cli. ## See the guide here: https://docs.prefect.io/2.11.3/guides/deployment/developing-a-new-worker-type/ # -- specify the worker type type: kubernetes ## one of 'always', 'if-not-present', 'never', 'prompt' # -- install policy to use workers from Prefect integration packages. installPolicy: prompt # -- the name to give to the started worker. If not provided, a unique name will be generated. name: null # -- maximum number of flow runs to start simultaneously (default: unlimited) limit: null ## If unspecified, Prefect will use the default base job template for the given worker type. If the work pool already exists, this will be ignored. ## e.g.: ## baseJobTemplate: ## configuration: | ## { ## "variables": { ## ... ## }, ## "job_configuration": { ## ... ## } ## } ## OR ## baseJobTemplate: ## existingConfigMapName: "my-existing-config-map" baseJobTemplate: # -- the name of an existing ConfigMap containing a base job template. NOTE - the key must be 'baseJobTemplate.json' existingConfigMapName: "" # -- JSON formatted base job template. 
If data is provided here, the chart will generate a configmap and mount it to the worker pod configuration: null # -- optionally override the default name of the generated configmap # name: "" ## connection settings # -- one of 'cloud', 'selfHosted', or 'server' apiConfig: "cloud" cloudApiConfig: # -- prefect account ID accountId: "824f4d2f-31c4-4b2a-a025-42b8433da9fe" # -- prefect workspace ID workspaceId: "38c82ec2-e6c7-47ca-8218-d3173a4b4667" apiKeySecret: # -- prefect API secret name name: prefect-api-key # -- prefect API secret key key: api_key # -- prefect cloud API url; the full URL is constructed as https://cloudUrl/accounts/accountId/workspaces/workspaceId cloudUrl: https://api.prefect.cloud/api # selfHostedCloudApiConfig: # # -- prefect API url (PREFECT_API_URL) # apiUrl: "" # # -- prefect account ID # accountId: "" # # -- prefect workspace ID # workspaceId: "" apiKeySecret: # -- prefect API secret name name: prefect-api-key # -- prefect API secret key key: api_key # -- self hosted UI url # uiUrl: "" serverApiConfig: # If the prefect server is located external to this cluster, set a fully qualified domain name as the apiUrl # If the prefect server pod is deployed to this cluster, use the cluster DNS endpoint: http://<prefect-server-service-name>.<namespace>.svc.cluster.local:<prefect-server-port>/api # -- prefect API url (PREFECT_API_URL) apiUrl: "" # -- prefect UI url uiUrl: http://localhost:4201 # -- the number of old ReplicaSets to retain to allow rollback revisionHistoryLimit: 10 # -- number of worker replicas to deploy replicaCount: 1 resources: # -- the requested resources for the worker container requests: memory: 256Mi cpu: 100m # -- the requested limits for the worker container limits: memory: 1Gi cpu: 1000m # ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ livenessProbe: enabled: false config: # -- The number of seconds to wait before starting the first probe. 
initialDelaySeconds: 10
# -- The number of seconds to wait between consecutive probes.
periodSeconds: 10
# -- The number of seconds to wait for a probe response before considering it as failed.
timeoutSeconds: 5
# -- The number of consecutive failures allowed before considering the probe as failed.
failureThreshold: 3
# -- The minimum consecutive successes required to consider the probe successful.
successThreshold: 1

## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
podSecurityContext:
  # -- set worker pod's security context runAsUser
  runAsUser: 1001
  # -- set worker pod's security context runAsNonRoot
  runAsNonRoot: true
  # -- set worker pod's security context fsGroup
  fsGroup: 1001

# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
# -- priority class name to use for the worker pods; if the priority class is empty or doesn't exist, the worker pods are scheduled without a priority class
priorityClassName: ""

## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
containerSecurityContext:
  # -- set worker containers' security context runAsUser
  runAsUser: 1001
  # -- set worker containers' security context runAsNonRoot
  runAsNonRoot: true
  # -- set worker containers' security context readOnlyRootFilesystem
  readOnlyRootFilesystem: true
  # -- set worker containers' security context allowPrivilegeEscalation
  allowPrivilegeEscalation: false
  # -- set worker container's security context capabilities
  capabilities: {}

## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
# -- extra labels for worker pod
podLabels: {}

## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
# -- extra annotations for worker pod
podAnnotations: {}

## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# -- affinity for worker pods assignment
affinity: {}

## ref: https://kubernetes.io/docs/user-guide/node-selection/
# -- node labels for worker pods assignment
nodeSelector: {}

## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
# -- tolerations for worker pods assignment
tolerations: []

## List of extra env vars
## e.g:
## extraEnvVars:
##   - name: FOO
##     value: "bar"
# -- array with extra environment variables to add to worker nodes
extraEnvVars: []
# -- name of existing ConfigMap containing extra env vars to add to worker nodes
extraEnvVarsCM: ""
# -- name of existing Secret containing extra env vars to add to worker nodes
extraEnvVarsSecret: ""
# -- additional sidecar containers
extraContainers: []
# -- array with extra volumes for the worker pod
extraVolumes: []
# -- array with extra volumeMounts for the worker pod
extraVolumeMounts: []
# -- array with extra Arguments for the worker container to start with
extraArgs: []

## ServiceAccount configuration
serviceAccount:
  # -- specifies whether a ServiceAccount should be created
  create: false
  # -- the name of the ServiceAccount to use. if not set and create is true, a name is generated using the common.names.fullname template
  name: ""
  # -- additional service account annotations (evaluated as a template)
  annotations: {}

## Role configuration
role:
  # -- specifies whether a Role should be created
  create: true
  ## List of extra role permissions
  ## e.g:
  extraPermissions:
    - apiGroups: [""]
      resources: ["pods", "services"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # -- array with extra permissions to add to the worker role
  # extraPermissions: []

## RoleBinding configuration
rolebinding:
  # -- specifies whether a RoleBinding should be created
  create: true
```

We’ve tried setting `PREFECT_CLOUD_ENABLE_ORCHESTRATION_TELEMETRY: false` but the logging error persists with a slightly different traceback (`APILogWorkerThread` instead of `QueueingLogExporterThread`):

```
│ 15:46:49.701 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get' │
│ 15:46:49.701 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - Encountered exception in call get(<dro │
│ Traceback (most recent call last): │
│   File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 364, in _run_sync │
│     result = self.fn(*self.args, **self.kwargs) │
│              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │
│   File "/usr/local/lib/python3.11/queue.py", line 179, in get │
│     raise Empty │
│ _queue.Empty │
│ 15:46:49.702 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - Running call get(timeout=1.99996715999 │
```
closed
2025-02-11T00:31:04Z
2025-02-11T21:45:56Z
https://github.com/PrefectHQ/prefect/issues/17085
[ "bug" ]
JupiterPucciarelli
2
widgetti/solara
jupyter
406
Discord link broken
The discord link is invalid.
closed
2023-11-30T19:22:57Z
2023-11-30T19:51:33Z
https://github.com/widgetti/solara/issues/406
[]
langestefan
2
aio-libs-abandoned/aioredis-py
asyncio
638
Connection pool is idle for a long time, resulting in unreachable redis
When the connection pool has been idle for a long time (about 10 minutes) and I then request a connection from the pool again, my program blocks for about 15 minutes and then raises an exception: connection timed out. What could be the cause of this? I hope I can get your help. Thank you.
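A common cause of this pattern is a NAT gateway or firewall silently dropping idle TCP connections: the pooled socket is dead, but the client only notices after the OS's own (very long) timeout. Enabling TCP keepalive on the connection — or running a periodic `PING` health check — usually fixes it. A rough stdlib sketch of the keepalive side (the `idle`/`interval`/`count` values are illustrative, and the fine-grained options are Linux-specific):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=3):
    """Ask the OS to probe an idle connection, so a dead peer is detected
    in roughly idle + interval * count seconds instead of many minutes."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Fine-grained knobs only exist on some platforms; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock
```

With a client library you would apply this to the underlying socket (or simply ping the server before reusing a pooled connection).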
closed
2019-09-19T07:13:08Z
2021-03-18T23:55:33Z
https://github.com/aio-libs-abandoned/aioredis-py/issues/638
[ "need investigation", "resolved-via-latest" ]
Zhao-Panpan
0
ymcui/Chinese-LLaMA-Alpaca
nlp
308
How to set batch_size according to the memory of different GPUs
### Describe the problem in detail

I have three NVIDIA Tesla V100 cards: one with 16 GB and two with 32 GB. How should I set things up so that all three cards are fully utilized? I hope someone can help answer this question; I have tried several times without success. The best I can manage right now is to train on the two 32 GB cards, while the 16 GB card sits idle and unused.
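One practical approach with heterogeneous cards (here 16 GB + 2×32 GB) is to weight each GPU's share of the work by its memory — frameworks such as HF Accelerate expose this via a per-device `max_memory` map, and for data-parallel training you can split the global batch proportionally. A stdlib sketch of the proportional split (the function and memory sizes are illustrative, not part of this repo):

```python
def split_batch(total_batch, mem_gb):
    """Split a global batch across GPUs proportionally to their memory.

    mem_gb: list of per-GPU memory in GB, e.g. [16, 32, 32].
    Returns per-GPU batch sizes that sum to total_batch."""
    total_mem = sum(mem_gb)
    shares = [total_batch * m // total_mem for m in mem_gb]
    # hand the integer remainder to the largest cards first
    rest = total_batch - sum(shares)
    for i in sorted(range(len(mem_gb)), key=lambda i: -mem_gb[i])[:rest]:
        shares[i] += 1
    return shares

per_gpu = split_batch(10, [16, 32, 32])  # e.g. [2, 4, 4]
```

The small card then never runs out of memory while the big cards stay busy.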
closed
2023-05-11T05:38:40Z
2023-05-23T22:02:33Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/308
[ "stale" ]
heshuguo
6
geopandas/geopandas
pandas
2,566
ENH: Improve error message for attempting to write multiple geometry columns using `to_file`
xref #2565

```python
In [16]: gdf = gpd.GeoDataFrame({"a":[1,2]}, geometry=[Point(1,1), Point(1,2)])

In [17]: gdf['geom2'] = gdf.geometry

In [18]: gdf.to_file("tmp.gpkg")
-----------------------------------
File ...\venv\lib\site-packages\geopandas\io\file.py:596, in infer_schema.<locals>.convert_type(column, in_type)
    594     out_type = types[str(in_type)]
    595 else:
--> 596     out_type = type(np.zeros(1, in_type).item()).__name__
    597 if out_type == "long":
    598     out_type = "int"

TypeError: Cannot interpret '<geopandas.array.GeometryDtype object at 0x0000025FA88908E0>' as a data type
```
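The enhancement boils down to failing early with a readable message before schema inference ever sees the second geometry column. A sketch of that check, with plain Python standing in for the real `df.dtypes` (the helper name and message are hypothetical, not geopandas API):

```python
def check_single_geometry(dtypes):
    """Raise a clear error when a frame has more than one geometry column.

    dtypes: mapping of column name -> dtype name (stand-in for df.dtypes)."""
    geom_cols = [c for c, t in dtypes.items() if t == "geometry"]
    if len(geom_cols) > 1:
        raise ValueError(
            f"GeoDataFrame contains multiple geometry columns ({geom_cols}); "
            "'to_file' supports only one. Drop or convert the extra columns "
            "(e.g. to WKT) before writing."
        )

check_single_geometry({"a": "int64", "geometry": "geometry"})  # passes
```

A user hitting the gpkg example above would then see which columns conflict instead of a numpy dtype error.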
closed
2022-09-25T12:14:04Z
2023-07-16T22:40:57Z
https://github.com/geopandas/geopandas/issues/2566
[ "enhancement", "good first issue" ]
m-richards
8
pytest-dev/pytest-cov
pytest
78
mark/fixture to turn coverage recording off
I don't know how feasible this is, but it'd be nice to have a mark and/or contextmanager-fixture to turn coverage collection off:

```python
@pytest.mark.coverage_disabled
def test_foo():
    # do some integration test or whatever
    pass

def test_bar(coverage):
    with coverage.disabled():
        pass
```
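For what it's worth, pytest-cov did later grow a `no_cover` marker/fixture along these lines (check its docs for the exact spelling). The underlying mechanics are simple enough to sketch with coverage.py's own API — this `suspend_coverage` helper is a hypothetical illustration, assuming coverage.py's `Coverage.current()`, `stop()`, and `start()` methods:

```python
from contextlib import contextmanager

try:
    import coverage  # coverage.py; optional for this sketch
except ImportError:
    coverage = None

@contextmanager
def suspend_coverage():
    """Pause coverage collection for the enclosed block, if any is active."""
    cov = None
    if coverage is not None and hasattr(coverage.Coverage, "current"):
        cov = coverage.Coverage.current()  # the running instance, or None
    if cov is None:
        yield  # no active coverage: behave as a no-op
        return
    cov.stop()
    try:
        yield
    finally:
        cov.start()
```

A pytest plugin would wrap marked tests in exactly this kind of stop/start pair.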
closed
2015-08-12T15:47:05Z
2017-11-24T22:34:13Z
https://github.com/pytest-dev/pytest-cov/issues/78
[]
The-Compiler
12
pallets-eco/flask-sqlalchemy
flask
451
order_by descending
Hello, I have the following model:

```
class Server(db.Model):
    serverId = db.Column(db.Integer, primary_key=True)
    serverName = db.Column(db.String(20))
    serverIP = db.Column(db.String(20))
    serverNote = db.Column(db.String(200))
    serverStatus = db.Column(db.String(10))
    serverBlNr = db.Column(db.Integer)
    lastChecked = db.Column(db.DateTime(), default=datetime.utcnow, onupdate=datetime.utcnow)

    def __init__(self, Name, IP, Note, Status, BlackNumber):
        self.serverName = Name
        self.serverIP = IP
        self.serverNote = Note
        self.serverStatus = Status
        self.serverBlNr = BlackNumber
```

And I've written the following route:

```
@app.route('/api/servers/', methods=['GET'])
def api_get_servers():
    server_list = Server.query.order_by(Server.lastChecked).limit(10)
    data = {}
    for srv in server_list:
        data[srv.serverId] = {
            'serverName': srv.serverName,
            'serverIP': srv.serverIP,
            'serverStatus': srv.serverStatus,
            'serverNote': srv.serverNote,
            'lastChecked': srv.lastChecked,
        }
    return jsonify({'servers': data})
```

Is this the correct way to get the oldest 10 results?

```
server_list = Server.query.order_by(Server.lastChecked).limit(10)
```
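For the question being asked: yes — in SQLAlchemy the sort direction is explicit, and ascending is the default, so `order_by(Server.lastChecked)` (equivalently `.asc()`) with `.limit(10)` returns the ten *oldest* rows; use `Server.lastChecked.desc()` for the ten newest. A plain-Python analogue of both queries, with a list of dicts standing in for `Server` rows:

```python
from datetime import datetime, timedelta

# fake "rows", one per day, standing in for Server records
rows = [{"id": i, "lastChecked": datetime(2016, 1, 1) + timedelta(days=i)}
        for i in range(25)]

# order_by(Server.lastChecked).limit(10)        -> ascending: oldest first
oldest_10 = sorted(rows, key=lambda r: r["lastChecked"])[:10]

# order_by(Server.lastChecked.desc()).limit(10) -> descending: newest first
newest_10 = sorted(rows, key=lambda r: r["lastChecked"], reverse=True)[:10]
```

`reverse=True` plays the role of `.desc()` here.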
closed
2016-12-12T11:58:07Z
2016-12-13T01:46:04Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/451
[]
tuwid
4
scikit-learn-contrib/metric-learn
scikit-learn
127
tests failing due to scikit-learn update
Some tests are failing in master due to the scikit-learn v0.20.0 update. Two kinds of tests are failing:

- Tests that use the iris dataset: this is probably because the iris dataset changed in scikit-learn v0.20, cf. http://scikit-learn.org/stable/whats_new.html#sklearn-datasets For now maybe we can do a quick fix to pass the tests, but in the future we should perhaps replace these tests that check hard-coded numerical values?

- Tests using sklearn's check_estimator: they fail for the check `check_fit2d_1sample`, which was improved in v0.20 by checking the actual error message that is returned: see the diff between lines 594-595 of v0.19.0 and lines 843-845 of v0.20 [here](https://github.com/scikit-learn/scikit-learn/compare/0.19.0...0.20.0) in Files changed, `sklearn/utils/estimator_checks.py` (the git diff didn't really work here, so these lines are not aligned). They can be fixed by setting `ensure_min_samples=2` in the checks done at fit, but we should probably think more about the minimum number of samples each algorithm needs (scikit-learn only tests that this error is thrown for 1 sample, but maybe an algorithm won't work for 2 samples either and would return the wrong error message as before, which is not ideal...)
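The second bullet amounts to validating the sample count up front with a message in the shape sklearn's checker greps for. A stdlib sketch of that validation (the helper and the exact wording are illustrative — sklearn's real entry point is `check_array(X, ensure_min_samples=2)`):

```python
def check_min_samples(X, min_samples=2, estimator="NCA"):
    """Mimic sklearn's ensure_min_samples check: fail fast with a message
    following sklearn's 'Found array with N sample(s)...' convention."""
    n = len(X)
    if n < min_samples:
        raise ValueError(
            f"Found array with {n} sample(s) while a minimum of "
            f"{min_samples} is required by {estimator}."
        )
    return X

check_min_samples([[0.0, 1.0], [1.0, 0.0]])  # two samples: fine
```

Each metric learner would pick its own `min_samples` rather than relying on the generic 1-sample check.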
closed
2018-10-10T13:24:00Z
2019-09-03T08:06:01Z
https://github.com/scikit-learn-contrib/metric-learn/issues/127
[]
wdevazelhes
1
dynaconf/dynaconf
fastapi
662
[bug] Lazy validation fails with TypeError: __call__() takes 2 positional arguments but 3 were given
**Describe the bug**
Tried to introduce lazy validators as described in https://www.dynaconf.com/validation/#computed-values, i.e.

```
from dynaconf.utils.parse_conf import empty, Lazy

Validator("FOO", default=Lazy(empty, formatter=my_function))
```

First bug (documentation): the above fails with `ImportError: cannot import name 'empty' from 'dynaconf.utils.parse_conf'` — `empty` now seems to live in `dynaconf.utils.functional`.

Second bug: the lazy validator fails with `TypeError: __call__() takes 2 positional arguments but 3 were given`.

**To Reproduce**
Steps to reproduce the behavior:

```
from dynaconf.utils.parse_conf import Lazy
from dynaconf.utils.functional import empty
from dynaconf import Dynaconf, Validator

def lazy_foobar(s, v):
    return "foobar"

s = Dynaconf(
    validators=[Validator("FOOBAR", default=Lazy(empty, formatter=lazy_foobar))],
)
print(s.FOOBAR)
```

Stack trace:

```
print(s.FOOBAR)
  File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\base.py", line 113, in __getattr__
    self._setup()
  File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\base.py", line 164, in _setup
    settings_module=settings_module, **self._kwargs
  File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\base.py", line 236, in __init__
    only=self._validate_only, exclude=self._validate_exclude
  File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\validator.py", line 417, in validate
    validator.validate(self.settings, only=only, exclude=exclude)
  File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\validator.py", line 198, in validate
    settings, settings.current_env, only=only, exclude=exclude
  File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\validator.py", line 227, in _validate_items
    if callable(self.default)
TypeError: __call__() takes 2 positional arguments but 3 were given
```

**Expected behavior**
Work as documented in https://www.dynaconf.com/validation/#computed-values

**Environment (please complete the following information):**
- OS: Windows 10 Pro
- Dynaconf Version: 3.1.7 (also 3.1.4)
- Frameworks in use: N/A

**Additional context**
Add any other context about the problem here.
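The `TypeError` arity mismatch suggests dynaconf invokes the default as `default(settings, validator)` while the `Lazy` object's `__call__` accepted only one argument. To illustrate the intended mechanics of a computed/lazy default — this `LazyDefault` class is a hypothetical stand-in, not dynaconf's actual `Lazy`:

```python
class LazyDefault:
    """Stand-in for a lazy default: hold a formatter, evaluate at access.

    The two-argument call (settings, validator) mirrors the documented
    formatter signature `my_function(settings, validator)`."""

    def __init__(self, formatter):
        self.formatter = formatter

    def __call__(self, settings, validator):
        return self.formatter(settings, validator)

def lazy_foobar(settings, validator):
    return "foobar"

value = LazyDefault(lazy_foobar)(settings={}, validator=None)
```

If `__call__` took only `(self, settings)`, the framework's two-argument call would produce exactly the reported `TypeError`.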
closed
2021-10-04T13:54:00Z
2021-10-30T08:52:50Z
https://github.com/dynaconf/dynaconf/issues/662
[ "bug", "hacktoberfest" ]
yahman72
7
RobertCraigie/prisma-client-py
asyncio
63
Improve experience working with Json fields
## Problem

Currently the following code will generate a slew of type errors.

```py
user = await client.user.find_unique(where={'id': 'abc'})
print(user.extra['pets'][0]['name'])
```

While I do think that naively working with JSON objects should raise type errors, it should be easy to cast the data to the expected types. For example, the above code written in a type safe manner would look like this:

```py
class Extra(TypedDict, total=False):
    pets: List['Pet']

class Pet(TypedDict):
    name: str

user = await client.user.find_unique(where={'id': 'abc'})
extra = cast(Extra, user.extra)
pets = extra.get('pets')
if pets:
    print(pets[0]['name'])
```

However there are a few problems with this:

- There is no guarantee that the data at runtime will match the types which nullifies lots of the benefits of static type checking
- Types are not persisted, i.e. the whole dance to cast the type will have to be done every time

## Suggested solution

Once #59 is implemented, this would be easier, for example, this _should_ work:

```py
class User(prisma.models.User):
    extra: Extra

user = await User.prisma().find_unique(where={'id': 'abc'})
pets = user.extra.get('pets')
if pets:
    print(pets[0]['name'])
```

And for client based access, #24 would make this easier, for example:

```py
user = await client.user.find_unique(where={'id': 'abc'})
extra = prisma.validate(Extra, user.extra)
pets = extra.get('pets')
if pets:
    print(pets[0]['name'])
```

The purpose of this issue is for tracking the aforementioned issues and documentation.
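The first bullet is worth making concrete: `cast()` is purely a static-analysis hint and verifies nothing at runtime, which is exactly the gap a `prisma.validate`-style helper would fill. A stdlib-only sketch of such a runtime check (the `validate_pets` helper is hypothetical, not prisma API):

```python
from typing import Any, Dict, List

def validate_pets(data: Any) -> List[Dict[str, str]]:
    """Runtime-check the shape that the TypedDicts above only promise
    statically: {'pets': [{'name': str}, ...]}."""
    if not isinstance(data, dict):
        raise TypeError("extra must be a JSON object")
    pets = data.get("pets", [])
    if not all(isinstance(p, dict) and isinstance(p.get("name"), str)
               for p in pets):
        raise TypeError("pets must be a list of {'name': str} objects")
    return pets

pets = validate_pets({"pets": [{"name": "Fido"}]})
```

A real `validate` would derive these checks from the TypedDict (or a pydantic model) rather than hand-writing them.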
open
2021-09-07T18:15:12Z
2022-11-20T19:58:13Z
https://github.com/RobertCraigie/prisma-client-py/issues/63
[ "kind/improvement", "topic: types", "priority/medium", "level/unknown" ]
RobertCraigie
2
jofpin/trape
flask
54
500 Internal Server Error
Linux Version: `Linux Galaxy 4.17.0-kali1-amd64 #1 SMP Debian 4.17.8-1kali1 (2018-07-24) x86_64 GNU/Linux`
Python Version: `Python 2.7.15`
Pip Version: `pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)`
Error:

    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
        response = self.full_dispatch_request()
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
        rv = self.dispatch_request()
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/root/Desktop/trape/core/victim.py", line 96, in redirectVictim
        html = victim_inject_code(opener.open(url).read(), 'vscript')
      File "/usr/lib/python2.7/urllib2.py", line 421, in open
        protocol = req.get_type()
      File "/usr/lib/python2.7/urllib2.py", line 283, in get_type
        raise ValueError, "unknown url type: %s" % self.__original
    ValueError: unknown url type: www.facebook.com
    [2018-09-19 11:49:48,855] ERROR in app: Exception on /redv [GET]
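The root cause in the traceback is that `urllib2` (and Python 3's `urllib`) refuses a URL with no scheme: `www.facebook.com` is parsed as a path, not a host. The usual workaround is to prepend a scheme before calling `opener.open()` — a small sketch of that guard (the helper name is illustrative):

```python
from urllib.parse import urlparse

def ensure_scheme(url, default="http"):
    """Prepend a scheme when one is missing, so urllib accepts the URL."""
    if not urlparse(url).scheme:
        return f"{default}://{url}"
    return url
```

Applying this to the `url` passed to `opener.open()` in `victim.py` would avoid the `ValueError`.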
closed
2018-09-19T15:55:00Z
2018-09-19T16:26:04Z
https://github.com/jofpin/trape/issues/54
[]
V3rB0se
1
babysor/MockingBird
deep-learning
161
Can this support multi-GPU training?
I'm not very familiar with PyTorch...
open
2021-10-20T09:27:16Z
2021-11-06T14:44:55Z
https://github.com/babysor/MockingBird/issues/161
[]
atiyit
3
Kludex/mangum
asyncio
322
mangum.io Domain Expired
I know this might not be the right place for this, but the docs site domain has expired and the site is down. edit: this implies that this project may have been abandoned. For others looking for the doc site and finding this go here https://github.com/jordaneremieff/mangum/blob/main/docs/index.md
closed
2024-05-14T16:06:38Z
2024-09-23T03:17:50Z
https://github.com/Kludex/mangum/issues/322
[]
jglien
10
abhiTronix/vidgear
dash
40
WriteGear Bare-Minimum example (Non-Compression) not working
## Description

1. I followed the demo here: https://github.com/abhiTronix/vidgear/wiki/Non-Compression-Mode:-OpenCV#1-writegear-bare-minimum-examplenon-compression-mode
2. Ran the code
3. The following error showed:

```
Compression Mode is disabled, Activating OpenCV In-built Writer!
InputFrame => Height:360 Width:640 Channels:1
FILE_PATH: /******/Output.mp4, FOURCC = 1196444237, FPS = 30.0, WIDTH = 640, HEIGHT = 360, BACKEND =
OpenCV: FFMPEG: tag 0x47504a4d/'MJPG' is not supported with codec id 7 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Warning: RGBA and 16-bit grayscale video frames are not supported by OpenCV yet, switch to `compression_mode` to use them!
Traceback (most recent call last):
  File "cam_demo.py", line 31, in <module>
    writer.write(gray)
  File "/Users/*****/lib/python3.7/site-packages/vidgear/gears/writegear.py", line 221, in write
    raise ValueError('All frames in a video should have same size')
ValueError: All frames in a video should have same size
```

### Acknowledgment

- [x] A brief but descriptive Title of your issue
- [x] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
- [x] I have read the [FAQ](https://github.com/abhiTronix/vidgear/wiki/FAQ-&-Troubleshooting).
- [x] I have read the [Wiki](https://github.com/abhiTronix/vidgear/wiki#vidgear).
- [x] I have read the [Contributing Guidelines](https://github.com/abhiTronix/vidgear/blob/master/contributing.md).

### Environment

* VidGear version: 0.1.5
* Branch: PyPi
* Python version: 3.7.3
* pip version: 19.1.1
* Operating System and version: macOS 10.14.3

### Expected Behavior

Write the frame to the file.

### Actual Behavior

Frames are in different sizes.

### Possible Fix

Could WriteGear specify the output frame size?

### Steps to reproduce

See description.
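The error means the writer saw frames whose shapes disagree — a grayscale frame is `(H, W)` while a color frame is `(H, W, 3)`, which is exactly what happens when `cv2.cvtColor(..., cv2.COLOR_BGR2GRAY)` output is mixed with color frames (the usual fix is converting back with `cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)` before writing). A pure-Python sketch of the consistency check, with tuples standing in for `ndarray.shape`:

```python
def check_frame_shapes(frames):
    """Every frame written to one video must share a single (H, W[, C]) shape;
    mixing (360, 640) grayscale with (360, 640, 3) color frames triggers
    'All frames in a video should have same size'."""
    shapes = {tuple(getattr(f, "shape", f)) for f in frames}
    if len(shapes) > 1:
        raise ValueError(f"inconsistent frame shapes: {sorted(shapes)}")
    return shapes.pop()
```

Running such a check before `writer.write()` makes the failure obvious at the first bad frame.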
closed
2019-07-27T19:49:44Z
2019-07-30T07:13:44Z
https://github.com/abhiTronix/vidgear/issues/40
[ "INVALID :stop_sign:" ]
iflyingboots
2
xzkostyan/clickhouse-sqlalchemy
sqlalchemy
26
Database addition error with superset
Unable to add a new database; it shows the error below. <img width="1188" alt="screen shot 2018-08-16 at 12 28 30 pm" src="https://user-images.githubusercontent.com/11755543/44193272-eddd6d00-a14f-11e8-8662-8eb86abc74f9.png"> There is no exception thrown on the server, and no other error response.
closed
2018-08-16T07:00:02Z
2022-06-13T10:54:06Z
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/26
[]
hemantjadon
5
deepspeedai/DeepSpeed
deep-learning
6,985
[BUG] Invalidate trace cache warning
**Describe the bug**
During training I receive the following warning multiple times per epoch:

`Invalidate trace cache @ step 1 and module 1: cache has only 1 modules`

This is a bit of an odd message. I have done some digging, and these trace warnings seem to be a common issue people report here, but none like this one. Is there any input from the developers about what could be at the root of it? I spent quite some time trying to figure it out, but couldn't really. Any help would be appreciated!

Also, it would be really nice if there were an option to suppress this warning in case it's not relevant; the way it's coded, this is quite difficult.

**ds_report output**

```
[2025-01-30 13:15:38,204] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /cluster/home/michaes/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
 [WARNING]  FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
 [WARNING]  gds requires the dev libaio .so object and headers but these were not found.
 [WARNING]  gds: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
 [WARNING]  using untested triton version (3.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/cluster/home/michaes/.miniforge/envs/hyperion/lib/python3.11/site-packages/torch']
torch version .................... 2.5.0+cu121
deepspeed install path ........... ['/cluster/home/michaes/.miniforge/envs/hyperion/lib/python3.11/site-packages/deepspeed']
deepspeed info ................... 0.16.3, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 250.73 GB
```
closed
2025-01-30T12:29:36Z
2025-02-18T20:49:54Z
https://github.com/deepspeedai/DeepSpeed/issues/6985
[ "bug", "training" ]
leachim
1
coqui-ai/TTS
deep-learning
3,754
[Bug] Any way to run this as docker-compose?
### Describe the bug

Anyway to run this as docker-compose ?

### To Reproduce

Anyway to run this as docker-compose ?

### Expected behavior

Anyway to run this as docker-compose ?

### Logs

```shell
Anyway to run this as docker-compose ?
```

### Environment

```shell
Anyway to run this as docker-compose ?
```

### Additional context

Anyway to run this as docker-compose ?
closed
2024-05-21T14:01:47Z
2024-07-18T23:47:03Z
https://github.com/coqui-ai/TTS/issues/3754
[ "bug", "wontfix" ]
PeterTucker
3
newpanjing/simpleui
django
52
Problems encountered while using Django 2.2.3
**Bug description**
A brief description of the bugs encountered:

1. The add and delete pages do not render correctly in Google Chrome.
2. After adding content, you have to click "back" multiple times to return to the corresponding list page.
3. Using SimpleListFilter in list_filter raises:
   Exception: object has no attribute 'field'
   Exception Location: \venv\lib\site-packages\simpleui\templatetags\simpletags.py in load_dates, line 40

**Environment**
1. Operating system:
2. Python version: 3.7.3
3. Django version: 2.2.3
4. SimpleUI version: 2.1

**Other notes**
closed
2019-05-22T09:51:01Z
2019-05-23T07:39:37Z
https://github.com/newpanjing/simpleui/issues/52
[ "bug" ]
yanlianhanlin
4
plotly/dash-core-components
dash
327
styling tabs via css classes
Hi Plotly Team,

I can't get styling of the Tabs component via CSS classes to function properly. Without knowing much about it, a guess could be that it has to do with inconsistent naming of the `selected_className` property in the Tabs and Tab components. I placed the following CSS file in the assets folder:

```css
.tab-style {
    width: inherit;
    border: none;
    background: white;
    padding-top: 0;
    padding-bottom: 0;
    height: 42px;
}

.selected-tab-style {
    width: inherit;
    box-shadow: none;
    border-left: none;
    border-right: none;
    border-top: none;
    border-bottom: 2px #004A96 solid;
    background: white;
    padding-top: 0;
    padding-bottom: 0;
    height: 42px;
}
```

and used the following app to reproduce the behavior:

```python
import dash
import dash_core_components as dcc
import dash_html_components as html

TAB_STYLE = {
    'width': 'inherit',
    'border': 'none',
    'boxShadow': 'inset 0px -1px 0px 0px lightgrey',
    'background': 'white',
    'paddingTop': 0,
    'paddingBottom': 0,
    'height': '42px',
}

SELECTED_STYLE = {
    'width': 'inherit',
    'boxShadow': 'none',
    'borderLeft': 'none',
    'borderRight': 'none',
    'borderTop': 'none',
    'borderBottom': '2px #004A96 solid',
    'background': 'white',
    'paddingTop': 0,
    'paddingBottom': 0,
    'height': '42px',
}

app = dash.Dash()

app.layout = html.Div([
    dcc.Tabs(
        id='tabs-1',
        value='tab-1',
        children=[
            dcc.Tab(
                label='Tab 1',
                value='tab-1',
                style=TAB_STYLE,
                selected_style=SELECTED_STYLE,
            ),
            dcc.Tab(
                label='Tab 2',
                value='tab-2',
                style=TAB_STYLE,
                selected_style=SELECTED_STYLE,
            ),
        ]
    ),
    html.Br(),
    dcc.Tabs(
        id='tabs-2',
        value='tab-1',
        children=[
            dcc.Tab(
                label='Tab 1',
                value='tab-1',
                className='tab-style',
                selected_className='selected-tab-style',
            ),
            dcc.Tab(
                label='Tab 2',
                value='tab-2',
                className='tab-style',
                selected_className='selected-tab-style',
            ),
        ]
    )
])

app.css.config.serve_locally = True
app.scripts.config.serve_locally = True

if __name__ == '__main__':
    app.run_server(debug=True)
```

Any insight on this is greatly appreciated.
closed
2018-10-14T18:12:05Z
2022-10-10T10:10:00Z
https://github.com/plotly/dash-core-components/issues/327
[]
roeap
4
biolab/orange3
numpy
6,642
OCR (optical character recognition) in images
**What's your use case?** I would like to extract text from images/PDF documents. **What's your proposed solution?** An OCR module in the image add-on to extract text. Some modules exist in Python for that. There is some documentation here (sadly, in French: https://www.aranacorp.com/fr/reconnaissance-de-texte-avec-python/), and what I understand is that you could use pytesseract. **Are there any alternative solutions?** No
closed
2023-11-19T09:03:37Z
2023-11-26T18:51:23Z
https://github.com/biolab/orange3/issues/6642
[]
simonaubertbd
2
ultralytics/yolov5
pytorch
13,332
WARNING ⚠️ NMS time limit 0.340s exceeded
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.

### YOLOv5 Component

_No response_

### Bug

Hi YOLO community. I'm running training on my CPU and I have this problem. Note that I've already checked the similar previous issues, and I found `time_limit = 0.1 + 0.02 * bs  # seconds to quit after`. I applied it, but the issue is still here:

    raidhani@raidhani-All-Series:~/catkin_ws/src/yolov5$ python3 train.py --img 640 --batch 6 --epochs 100 --data /home/raidhani/catkin_ws/src/data/data.yaml --weights yolov5s.pt
    train: weights=yolov5s.pt, cfg=, data=/home/raidhani/catkin_ws/src/data/data.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=100, batch_size=6, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data/hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
    github: up to date with https://github.com/ultralytics/yolov5 ✅
    YOLOv5 🚀 v7.0-368-gb163ff8d Python-3.8.10 torch-1.11.0+cpu CPU
    hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
    Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet
    TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
    Overriding model.yaml nc=80 with nc=10

         from  n  params   module                                arguments
    0    -1    1  3520     models.common.Conv                    [3, 32, 6, 2, 2]
    1    -1    1  18560    models.common.Conv                    [32, 64, 3, 2]
    2    -1    1  18816    models.common.C3                      [64, 64, 1]
    3    -1    1  73984    models.common.Conv                    [64, 128, 3, 2]
    4    -1    2  115712   models.common.C3                      [128, 128, 2]
    5    -1    1  295424   models.common.Conv                    [128, 256, 3, 2]
    6    -1    3  625152   models.common.C3                      [256, 256, 3]
    7    -1    1  1180672  models.common.Conv                    [256, 512, 3, 2]
    8    -1    1  1182720  models.common.C3                      [512, 512, 1]
    9    -1    1  656896   models.common.SPPF                    [512, 512, 5]
    10   -1    1  131584   models.common.Conv                    [512, 256, 1, 1]
    11   -1    1  0        torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
    12   [-1, 6]  1  0     models.common.Concat                  [1]
    13   -1    1  361984   models.common.C3                      [512, 256, 1, False]
    14   -1    1  33024    models.common.Conv                    [256, 128, 1, 1]
    15   -1    1  0        torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
    16   [-1, 4]  1  0     models.common.Concat                  [1]
    17   -1    1  90880    models.common.C3                      [256, 128, 1, False]
    18   -1    1  147712   models.common.Conv                    [128, 128, 3, 2]
    19   [-1, 14]  1  0    models.common.Concat                  [1]
    20   -1    1  296448   models.common.C3                      [256, 256, 1, False]
    21   -1    1  590336   models.common.Conv                    [256, 256, 3, 2]
    22   [-1, 10]  1  0    models.common.Concat                  [1]
    23   -1    1  1182720  models.common.C3                      [512, 512, 1, False]
    24   [17, 20, 23]  1  40455  models.yolo.Detect  [10, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

    Model summary: 214 layers, 7046599 parameters, 7046599 gradients, 16.0 GFLOPs
    Transferred 343/349 items from yolov5s.pt
    optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.000515625), 60 bias
    train: Scanning /home/raidhani/catkin_ws/src/data/train/labels.cache... 1008 images, 120 backgrounds, 0 corrupt: 100%
    val: Scanning /home/raidhani/catkin_ws/src/data/valid/labels.cache... 230 images, 31 backgrounds, 0 corrupt: 100%
    AutoAnchor: 4.51 anchors/target, 0.997 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
    Plotting labels to runs/train/exp11/labels.jpg...
    Image sizes 640 train, 640 val
    Using 6 dataloader workers
    Logging results to runs/train/exp11
    Starting training for 100 epochs...

    Epoch  GPU_mem  box_loss  obj_loss  cls_loss  Instances  Size
     0/99      0G    0.1032   0.06545   0.05506         64   640: 100%|██████████| 169/169 [07:32<00:00, 2.68s/it
    Class  Images  Instances  P  R  mAP50  mAP50-95:   0%|  | 0/20 [00:00<?, ?it/s
    WARNING ⚠️ NMS time limit 0.340s exceeded
    Class  Images  Instances  P  R  mAP50  mAP50-95:   5%|▌ | 1/20 [00:01<00:29,
    WARNING ⚠️ NMS time limit 0.340s exceeded
    Class  Images  Instances  P  R  mAP50  mAP50-95:  10%|█ | 2/20 [00:03<00:27,
    WARNING ⚠️ NMS time limit 0.340s exceeded
    Class  Images  Instances  P  R  mAP50  mAP50-95:  15%|█▌ | 3/20 [00:04<00:26,
    WARNING ⚠️ NMS time limit 0.340s exceeded
    Class  Images  Instances  P  R  mAP50  mAP50-95:  20%|██ | 4/20 [00:06<00:24,
    WARNING ⚠️ NMS time limit 0.340s exceeded
    Class  Images  Instances  P  R  mAP50  mAP50-95:  25%|██▌ | 5/20 [00:07<00:24,
    Class  Images  Instances  P  R  mAP50  mAP50-95:  25%|██▌ | 5/20 [00:08<00:24,
    Traceback (most recent call last):
      File "train.py", line 986, in <module>

### Environment

YOLOv5 🚀 v7.0-368-gb163ff8d Python-3.8.10 torch-1.11.0+cpu CPU

### Minimal Reproducible Example

`python3 train.py --img 640 --batch 6 --epochs 100 --data /home/raidhani/catkin_ws/src/data/data.yaml --weights yolov5s.pt`

### Additional

_No response_

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
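The 0.340 s in the warning is consistent with the quoted formula if validation runs at twice the training batch size (train.py appears to launch validation with a doubled batch, so `bs = 12` here). Checking the arithmetic:

```python
def nms_time_limit(batch_size, base=0.1, per_image=0.02):
    """Reproduce the tweaked limit quoted above:
    time_limit = 0.1 + 0.02 * bs (seconds to quit after)."""
    return base + per_image * batch_size

# --batch 6 during training; validation at 2x gives bs = 12,
# which matches the 0.340 s in the warning.
limit = nms_time_limit(12)
```

So on CPU the fix is either raising `base`/`per_image` further, lowering the batch size, or accepting that NMS simply takes longer than the GPU-tuned budget.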
open
2024-09-25T01:24:00Z
2024-11-09T14:46:48Z
https://github.com/ultralytics/yolov5/issues/13332
[ "bug" ]
haniraid
2
influxdata/influxdb-client-python
jupyter
379
query_data_frame returns object data type for all columns
__Steps to reproduce:__
1. Transfer data types (numeric, object, etc.) to a dataframe

__Expected behavior:__
Returning the data types of the data pulled from the database with __query_api__ as they should be.

__Actual behavior:__
When I use the following query with the __query_data_frame__ module in the __query_api__, all columns except _time are returned as objects, even though the data stored in the database is not string.

```console
query_api = influxdb_client.query_api()
bucket, measurement, start, stop = "test", "test", "0", "now()"
query = 'import \"influxdata/influxdb/schema\"' \
        f' from(bucket:\"{bucket}\")' \
        f' |> range(start: {start}, stop: {stop})' \
        f' |> filter(fn: (r) => r._measurement == \"{measurement}\")' \
        f' |> drop(columns: ["_start", "_stop","_measurement"])' \
        f' |> schema.fieldsAsCols()'
result = query_api.query_data_frame(query=query)
print(result.dtypes)

_time          datetime64[ns, UTC]
attribute_1                 object
attribute_2                 object
attribute_3                 object
```

When I transform the data with the code below, I can see that the data is numeric, not object. The data saved in the database is as follows:

```console
result = json.loads(result.to_json(orient='records'))
print(result)

[{
  "_time": 1638959688782,
  "attribute_1": 1,
  "attribute_2": 2,
  "attribute_3": "test_message",
}]
```

__Specifications:__
- Client Version: 1.23.0
- InfluxDB Version: 2.2.1
- Platform: Windows 10 & Python 3.8.11 (running on Docker)
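As a workaround until the client infers dtypes itself, the object columns can be coerced after the fact — in pandas terms something like `df.convert_dtypes()` or a per-column `pandas.to_numeric`. A stdlib sketch of the same best-effort coercion idea (the helper is illustrative, not client API):

```python
def coerce(value):
    """Best-effort numeric coercion: try int, then float, else keep as-is."""
    if not isinstance(value, str):
        return value
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

row = {"attribute_1": "1", "attribute_2": "2.5", "attribute_3": "test_message"}
typed = {k: coerce(v) for k, v in row.items()}
```

Applied column-wise to the returned frame, this restores the numeric dtypes the database actually stored.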
closed
2021-12-08T08:12:25Z
2021-12-14T13:31:37Z
https://github.com/influxdata/influxdb-client-python/issues/379
[ "bug" ]
buraketmen
1
fastapi-admin/fastapi-admin
fastapi
5
No model with name 'User' registered in app 'diff_models'
After installing (from the dev branch) and setting up fastapi-admin, I attempted to run `aerich migrate`, which failed with the error: ``` tortoise.exceptions.ConfigurationError: No model with name 'User' registered in app 'diff_models'. ``` But I didn't register any app named `diff_models`, and I don't see it in the code for this repo... so I'm pretty stumped as to how to fix this. If you could please point me in the right direction, that would be amazing.
closed
2020-07-01T19:57:47Z
2020-07-02T14:33:16Z
https://github.com/fastapi-admin/fastapi-admin/issues/5
[]
mosheduminer
4
vitalik/django-ninja
pydantic
428
properly name functions
**Is your feature request related to a problem? Please describe.** I am using django-prometheus for statistics. The view calls counters are bundled for function names and thus, all my api calls are in the name of `GET /api-1.0.0:ninja.operation._sync_view` I would prefer to have the original function's name or the api-name **Describe the solution you'd like** https://github.com/vitalik/django-ninja/blob/3d745e7ad7b3cf4d92644143a9e342bdbc986273/ninja/operation.py#L314-L322 we could set `func.__module__` and `func.__name__` here
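A hedged sketch of the suggested fix: copy the operation function's identity onto the internal view, so tools that key on `__module__`/`__name__` (like django-prometheus) report the real endpoint. The helper and stand-in function names here are hypothetical:

```python
import functools

def adopt_identity(view, operation_func):
    """Give the wrapper view the wrapped operation's module/name/qualname."""
    functools.update_wrapper(
        view,
        operation_func,
        assigned=("__module__", "__name__", "__qualname__"),
        updated=(),
    )
    return view

def my_endpoint():   # stands in for the user's API operation function
    ...

def _sync_view():    # stands in for ninja.operation._sync_view
    ...

view = adopt_identity(_sync_view, my_endpoint)
```

After this, metrics would be labeled with `my_endpoint` rather than `_sync_view`.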
open
2022-04-22T06:04:48Z
2024-08-19T23:45:07Z
https://github.com/vitalik/django-ninja/issues/428
[ "help wanted" ]
hiaselhans
6
plotly/dash
data-visualization
3,003
Temporary failure in name resolution
This occurs in the latest Dash, 2.18.1. Calling `app.run_server(host='0.0.0.0')` did nothing. See the older issue https://github.com/plotly/dash/issues/1480: under conda, Dash expects the HOST environment variable to be set, and passing the `host` parameter does not override it.
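Based on the behavior described (the HOST environment variable taking precedence over the `host` parameter), a hedged workaround sketch is to export the variable from Python before starting the server:

```python
# workaround sketch: set HOST before the server starts, since the reported
# behavior is that the environment variable wins over the host= parameter
import os

os.environ["HOST"] = "0.0.0.0"
# app.run(host="0.0.0.0")  # then start the app as usual
```

This is a workaround under the assumption in the linked issue, not a documented Dash API.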
open
2024-09-14T22:49:33Z
2024-10-01T14:31:18Z
https://github.com/plotly/dash/issues/3003
[ "regression", "bug", "P1" ]
summa-code
5
ray-project/ray
data-science
51,497
CI test windows://python/ray/tests:test_basic is consistently_failing
CI test **windows://python/ray/tests:test_basic** is consistently_failing. Recent failures: - https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1 - https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1 - https://buildkite.com/ray-project/postmerge/builds/8932#01959723-61ec-4376-b407-ff595262d8dc DataCaseName-windows://python/ray/tests:test_basic-END Managed by OSS Test Policy
closed
2025-03-19T00:05:56Z
2025-03-19T21:52:20Z
https://github.com/ray-project/ray/issues/51497
[ "bug", "triage", "core", "flaky-tracker", "ray-test-bot", "ci-test", "weekly-release-blocker", "stability" ]
can-anyscale
3
FlareSolverr/FlareSolverr
api
1,118
[1337x] (testing) Exception (1337x): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.
### Have you checked our README? - [X] I have checked the README ### Have you followed our Troubleshooting? - [X] I have followed your Troubleshooting ### Is there already an issue for your problem? - [X] I have checked older issues, open and closed ### Have you checked the discussions? - [X] I have read the Discussions ### Environment ```markdown - FlareSolverr version:3.3.13 - Last working FlareSolverr version: - Operating system: Ubuntu Server 22.04 (Linux-5.15.0-100-generic-x86_64-with-glibc2.35) - Are you using Docker: no - FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 - Are you using a VPN: no - Are you using a Proxy: no - Are you using Captcha Solver: no - If using captcha solver, which one: - URL to test this issue: https://x1337x.eu/ ``` ### Description Flaresolverr throws an error when testing for "https://x1337x.eu/" in Jackett: Torrent[CORE] & iDope also throw timeout errors. TorrentQQ works as expected. `Error: Error solving the challenge. Timeout after 55.0 seconds` ### Logged Error Messages ```text INFO ReqId 140435798140480 Incoming request => POST /v1 body: {'maxTimeout': 55000, 'cmd': 'request.get', 'url': 'https://x1337x.eu/cat/Movies/time/desc/1/'} DEBUG ReqId 140435798140480 Launching web browser... DEBUG ReqId 140435798140480 Started executable: `/var/lib/deluge/.local/share/undetected_chromedriver/chromedriver` in a child process with pid: 3082135 DEBUG ReqId 140435798140480 New instance of webdriver has been created to perform the request DEBUG ReqId 140435431355968 Navigating to... https://x1337x.eu/cat/Movies/time/desc/1/ INFO ReqId 140435431355968 Challenge detected. Title found: Just a moment... DEBUG ReqId 140435431355968 Waiting for title (attempt 1): Just a moment... DEBUG ReqId 140435431355968 Timeout waiting for selector DEBUG ReqId 140435431355968 Try to find the Cloudflare verify checkbox... 
DEBUG ReqId 140435431355968 Cloudflare verify checkbox not found on the page. DEBUG ReqId 140435431355968 Try to find the Cloudflare 'Verify you are human' button... DEBUG ReqId 140435431355968 The Cloudflare 'Verify you are human' button not found on the page. ERROR ReqId 140435798140480 Error: Error solving the challenge. Timeout after 55.0 seconds. DEBUG ReqId 140435798140480 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 55.0 seconds.', 'startTimestamp': 1710100791025, 'endTimestamp': 1710100846760, 'version': '3.3.13'} INFO ReqId 140435798140480 Response in 55.735 s INFO ReqId 140435798140480 127.0.0.1 POST http://localhost:8191/v1 500 Internal Server Error ``` ### Screenshots _No response_
closed
2024-03-10T20:01:59Z
2024-03-11T06:34:16Z
https://github.com/FlareSolverr/FlareSolverr/issues/1118
[ "duplicate" ]
fcmircea
6
minimaxir/textgenrnn
tensorflow
177
CUDA Error during training constantly
I am trying to use textgenrnn, but it trains for about an hour and then fails with [this](https://pastebin.com/sDysPnX0). I tried shrinking my training data from 50000 lines to 25000 lines, but it made no difference. My GPU (GTX 1070) is on the latest driver and is not broken, other than the HDMI port being mangled, which I don't think would affect CUDA. I don't know of any other logs that could help point out the problem. I'm using Python 3.7.6, CUDA 10.1, driver 422.50, Windows 10 1909.
open
2020-03-03T23:48:32Z
2020-03-03T23:48:32Z
https://github.com/minimaxir/textgenrnn/issues/177
[]
guyman70718
0
robinhood/faust
asyncio
739
Wrong time zone parameter name in user guide for crontab
https://faust.readthedocs.io/en/latest/userguide/tasks.html#guide-tasks says that the parameter name to specify the crontab timezone is `tz`. The API reference says that it is `timezone`.
open
2021-10-14T18:00:17Z
2021-10-14T18:00:17Z
https://github.com/robinhood/faust/issues/739
[]
llamarble
0
nvbn/thefuck
python
464
Please rename types.py
I believe all software should be installed with the system package manager, so I've tried to create an ebuild for my Gentoo system to install thefuck instead of the suggested `pip install`. The normal way to do this is to use the eclass named distutils-r1, which, in general, executes `setup.py`. But this call fails with the trace: ``` python2_7: running distutils-r1_run_phase distutils-r1_python_compile /usr/bin/python2.7 setup.py build Traceback (most recent call last): File "/usr/lib64/python2.7/site.py", line 62, in <module> import os File "/usr/lib64/python2.7/os.py", line 49, in <module> import posixpath as path File "/usr/lib64/python2.7/posixpath.py", line 17, in <module> import warnings File "/usr/lib64/python2.7/warnings.py", line 8, in <module> import types File "types.py", line 2, in <module> from subprocess import Popen, PIPE File "/usr/lib64/python2.7/subprocess.py", line 430, in <module> import pickle File "/usr/lib64/python2.7/pickle.py", line 30, in <module> from copy_reg import dispatch_table File "/usr/lib64/python2.7/copy_reg.py", line 7, in <module> from types import ClassType as _ClassType ImportError: cannot import name ClassType ``` This happens because Python tries to import `ClassType` from the `types.py` in your package instead of `/usr/lib64/python2.7/types.py`. So please rename it; I think fucktypes.py will be fine in this case. Shadowing standard library names isn't good.
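The shadowing mechanism behind the traceback can be demonstrated in isolation: a file with a stdlib module's name, placed early on `sys.path`, wins the import. This is a hedged minimal repro (the helper name is hypothetical) that restores the interpreter state afterwards:

```python
import importlib
import os
import sys
import tempfile

def shadows_stdlib(module_name):
    """Return True if a same-named file placed first on sys.path wins the
    import over the standard-library module of that name."""
    tmp = tempfile.mkdtemp()
    with open(os.path.join(tmp, module_name + ".py"), "w") as fh:
        fh.write("SHADOW = True\n")
    saved = sys.modules.pop(module_name, None)  # force a fresh import
    sys.path.insert(0, tmp)
    try:
        importlib.invalidate_caches()
        mod = importlib.import_module(module_name)
        return getattr(mod, "SHADOW", False)
    finally:
        # put everything back so later imports see the real module
        sys.path.remove(tmp)
        if saved is not None:
            sys.modules[module_name] = saved
        else:
            sys.modules.pop(module_name, None)
        importlib.invalidate_caches()
```

This is exactly why the package-local `types.py` gets imported by `/usr/lib64/python2.7/warnings.py` during the build.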
closed
2016-02-21T02:52:31Z
2016-02-22T19:20:53Z
https://github.com/nvbn/thefuck/issues/464
[]
kapsh
2
ultralytics/yolov5
deep-learning
12,917
Image not found error
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question I want to custom train yolov6-seg, but I keep getting an error that the image file was not found. I set all paths to absolute. I'm running it on google colab, I've mounted the drive, and I've checked that the image files exist in the image path, so what could be the problem? Could this be because my dataset is not in a subfolder of the YOLOv6 folder? I noticed that 'Train_custom_data.md' says it should be in the YOLOv6 folder. > This is train code. ``` !python tools/train.py --batch 32 --conf /content/YOLOv6/configs/yolov6s_seg.py --epochs 50 --img-size 416 --data /content/drive/MyDrive/[DILab_data]/Computer_Vision/Fire_detection/FST1/FST1/data.yaml --device 0 ``` > This is data.yaml file. ``` train: /content/drive/MyDrive/[DILab_data]/Computer_Vision/Fire_detection/FST1/FST1/train/images val: /content/drive/MyDrive/[DILab_data]/Computer_Vision/Fire_detection/FST1/FST1/valid/images test: /content/drive/MyDrive/[DILab_data]/Computer_Vision/Fire_detection/FST1/FST1/test/images nc: 2 names: ['fire', 'smoke'] ``` ![v6_1](https://github.com/meituan/YOLOv6/assets/80905105/615e9a52-e4ff-43de-b6a4-e45ebf0dcef4) ![v6_2](https://github.com/meituan/YOLOv6/assets/80905105/2a4b2c49-5852-4023-863b-838fa3348e7b) ### Additional _No response_
closed
2024-04-14T12:50:21Z
2024-05-26T00:23:58Z
https://github.com/ultralytics/yolov5/issues/12917
[ "question", "Stale" ]
Cho-Hong-Seok
2
Skyvern-AI/skyvern
api
1,603
Feature Request: Dashboard Authentication
Please allow users to enable authentication on the dashboard (port 8080). This way, hosting on home network is more secure. Hosting on a public server is currently insanity without dashboard authentication and without tunneling services like Cloudflare where you can glue authentication on top of it.
open
2025-01-21T11:29:56Z
2025-02-25T19:52:59Z
https://github.com/Skyvern-AI/skyvern/issues/1603
[ "help wanted" ]
vincentcox
3
anselal/antminer-monitor
dash
76
errors after 20+ ASICs
In the last release I'm encountering errors after adding more than 20 miners. I tried it on several farms, mostly with D3s and S9s, and got the same result. `File "C:\Python27\lib\site-packages\flask\app.py", line 1997, in __call__ return self.wsgi_app(environ, start_response) File "C:\Python27\lib\site-packages\flask\app.py", line 1985, in wsgi_app response = self.handle_exception(e) File "C:\Python27\lib\site-packages\flask\app.py", line 1540, in handle_exception reraise(exc_type, exc_value, tb) File "C:\Python27\lib\site-packages\flask\app.py", line 1982, in wsgi_app response = self.full_dispatch_request() File "C:\Python27\lib\site-packages\flask\app.py", line 1614, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Python27\lib\site-packages\flask\app.py", line 1517, in handle_user_exception reraise(exc_type, exc_value, tb) File "C:\Python27\lib\site-packages\flask\app.py", line 1612, in full_dispatch_request rv = self.dispatch_request() File "C:\Python27\lib\site-packages\flask\app.py", line 1598, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "C:\antminer-monitor-master\app\views\antminer.py", line 58, in miners worker = miner_pools['POOLS'][0]['User'] KeyError: 'POOLS'` Also, I came up with some ideas that would be awesome to have; some of them are already listed here, some are not: a login screen; email notifications, with a way to choose which errors trigger an email. Some of my S9s are more than a year old, so they have a bunch of damaged chips; it would be great to choose which errors to display/send and which not. I also saw the idea of a network-scanning option to identify miners; this could also be done just by adding an option to add an IP range, for example 10.98.64.20-59, then ticking which models are in that range (for example S9s) and adding them. Lastly, thank you for your awesome work. If you ever need an ASIC field for testing, feel free to contact me.
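A hedged defensive sketch for the `KeyError: 'POOLS'` above: the miner's API reply can lack the `'POOLS'` key (for example when a miner is unreachable or answers with an error), so the view could fall back instead of indexing blindly. The helper name is hypothetical, not the project's actual code:

```python
def worker_from_pools(miner_pools, default="unknown"):
    """Safely extract the first pool's worker name from a miner API reply,
    tolerating a missing/empty 'POOLS' key or a None reply."""
    pools = (miner_pools or {}).get("POOLS") or []
    if not pools:
        return default
    return pools[0].get("User", default)
```

With this, a farm of 20+ miners keeps rendering even if one of them returns a malformed reply.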
closed
2018-03-01T07:46:25Z
2018-03-05T09:50:33Z
https://github.com/anselal/antminer-monitor/issues/76
[ ":dancing_men: duplicate", ":octocat: help wanted" ]
shotaatosh
4
CTFd/CTFd
flask
2,297
Add .ruff_cache to .gitignore and .dockerignore
Now that we've switched to ruff we should make sure to ignore the ruff cache files and such in our builds (e.g. `.gitignore` and `.dockerignore`)
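A hedged sketch of the one-line additions; the commands below are idempotent (they skip the append if the entry is already present):

```shell
# add the ruff cache directory to both ignore files, once
for f in .gitignore .dockerignore; do
  grep -qxF ".ruff_cache/" "$f" 2>/dev/null || echo ".ruff_cache/" >> "$f"
done
```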
open
2023-05-02T21:33:07Z
2023-05-02T21:33:07Z
https://github.com/CTFd/CTFd/issues/2297
[ "easy" ]
ColdHeat
0
horovod/horovod
deep-learning
3,413
RayExecutor does not work with Horovod Settings
The documentation of RayExecutor says `You can use a standard Horovod Settings object` for the `settings` parameter, but this fails with: ``` Traceback (most recent call last): File "bug.py", line 9, in <module> executor.start() File "/home/enrico/Work/git/articles/venv/lib/python3.8/site-packages/horovod/ray/runner.py", line 284, in start return self._maybe_call_ray(self.driver.start, **kwargs_) File "/home/enrico/Work/git/articles/venv/lib/python3.8/site-packages/horovod/ray/runner.py", line 360, in _maybe_call_ray return driver_func(**kwargs) File "/home/enrico/Work/git/articles/venv/lib/python3.8/site-packages/horovod/ray/runner.py", line 452, in start self.workers, node_workers = self.strategy.create_workers() File "/home/enrico/Work/git/articles/venv/lib/python3.8/site-packages/horovod/ray/strategy.py", line 174, in create_workers pg_timeout=self.settings.placement_group_timeout_s) AttributeError: 'Settings' object has no attribute 'placement_group_timeout_s' ``` Here is some code reproducing the issue: ```python from horovod.ray import RayExecutor from horovod.runner.common.util.settings import Settings import ray ray.init() settings = Settings() executor = RayExecutor(settings, num_workers=1) executor.start() ``` As the error says, the Horovod `Settings` class does not contain the `placement_group_timeout_s` attribute.
open
2022-02-20T21:09:04Z
2023-02-13T11:13:00Z
https://github.com/horovod/horovod/issues/3413
[ "bug", "ray" ]
EnricoMi
1
allenai/allennlp
pytorch
4,997
RoBERTa on SuperGLUE's 'Winograd Schema Challenge' task
WSC is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the WSC data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard). This can be formulated as a classification task, using the [`TransformerClassificationTT`](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/models/transformer_classification_tt.py) model, analogous to the IMDB model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/Imdb/training_config/tango/imdb.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/tango/imdb.py#L13) from IMDB, and adapt them to your needs.
open
2021-02-18T23:46:36Z
2021-08-27T21:58:01Z
https://github.com/allenai/allennlp/issues/4997
[ "Contributions welcome", "Models", "easy" ]
dirkgr
0
pallets/flask
python
5,209
After initialization/before run hook
I'm writing a Flask extension with non trivial setup and the need to do consistency checks before allowing it to run. I have not found a hook/signal which would tell me that Flask is ready to run, before any request has arrived, so that I could plug these checks. Such a hook would be appreciated. Waiting for the first request is too late to report a configuration error.
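Until such a hook exists, a hedged workaround sketch: run the consistency checks exactly once, on the first request, via a thread-safe once-guard registered with `app.before_request`. This is still "too late" in the sense the issue describes, but it at least fails fast and only once. The class name is hypothetical:

```python
import threading

class RunOnce:
    """Call `check` exactly once, on the first invocation, thread-safely."""
    def __init__(self, check):
        self._check = check
        self._done = False
        self._lock = threading.Lock()

    def __call__(self):
        if not self._done:
            with self._lock:
                if not self._done:       # double-checked under the lock
                    self._check()
                    self._done = True

# usage sketch with a Flask app:
# app.before_request(RunOnce(my_consistency_checks))
```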
closed
2023-07-24T06:11:24Z
2023-08-08T00:05:55Z
https://github.com/pallets/flask/issues/5209
[]
zx80
11
neuml/txtai
nlp
53
All methods should operate on batches
For search, similarity, extractive qa and labels, all methods should operate on batches for the best performance. - Extractive QA already supports this. - Search, similarity and labels should work with batches. Separate methods (if necessary) can be retained to provide existing functionality for a single record.
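The batching pattern described above can be sketched generically: chunk the input, call the batch-capable method per chunk, and concatenate. This is a hedged illustration, not txtai's implementation:

```python
def run_batched(fn, items, batch_size=32):
    """Apply a batch-capable fn over items in fixed-size chunks and
    concatenate the per-batch results into one list."""
    results = []
    for start in range(0, len(items), batch_size):
        results.extend(fn(items[start:start + batch_size]))
    return results
```

A single-record method then falls out for free as `run_batched(fn, [record])[0]`, which is how the existing single-record API could be retained.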
closed
2021-01-07T02:23:01Z
2021-05-13T15:05:40Z
https://github.com/neuml/txtai/issues/53
[]
davidmezzetti
0
dynaconf/dynaconf
fastapi
1,186
[bug] setup.py deprecation on pip 25.0 - move to pyproject/uv
```bash DEPRECATION: Legacy editable install of dynaconf==3.3.0.dev0 from file:///src/dynaconf (setup.py develop) is deprecated. pip 25.0 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11457 ```
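A hedged sketch of the direction the deprecation message suggests: declare a PEP 517 build backend so editable installs go through setuptools >= 64 instead of the legacy `setup.py develop` path. This is a minimal fragment with assumed fields, not the project's actual packaging metadata:

```toml
# minimal pyproject.toml sketch (assumed fields)
[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "dynaconf"
dynamic = ["version"]
```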
open
2024-10-11T12:38:41Z
2024-10-11T12:38:42Z
https://github.com/dynaconf/dynaconf/issues/1186
[ "bug", "HIGH" ]
rochacbruno
0
pandas-dev/pandas
python
60,737
BUG: The `.to_sql()` of `dtype` argument does not strictly ensure the column name and datatype
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python from sqlalchemy import types dtypezzz = { 'date':types.Date(), 'bus_plates':types.Text(), 'driver_names':types.Text(), 'behavior':types.Text() } df_bus.to_sql('mytable', con=engine, schema='myschema', if_exists='append', index=False, dtype=dtypezzz) # df_bus have columns of bus_id and bus_plates ``` ### Issue Description I spent time reading articles on how to ensure that the dataframe's data types match the table's data types when using the `.to_sql()` method. Most of them said that we can use the `dtype` argument of the method and pass in a dict of column names and data types built with SQLAlchemy's `from sqlalchemy import types`. However, I tested this to see if the method returns an error when passed a dataframe with totally different column names and data types. My expectation was that it should return an error, but instead of failing due to a wrong column name or data type, it allowed the dataframe to be inserted into the database table. This is very bad, as I cannot ensure data integrity within the table, and because this method silently replaces and alters data types, it can lead to many issues, like a wrongly inserted dataframe or a wrong data type inserted without any useful error.
### Expected Behavior Expected behavior should be the `to_sql()` with `dtype` argument used will return error if dataframe has different column name/datatype to ensure the data integrity within the database table (this is possible in SQL as of we declare the datatype during the table creation) ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.10.16 python-bits : 64 OS : Linux OS-release : 5.15.167.4-microsoft-standard-WSL2 Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024 machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : C.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.2.3 numpy : 2.2.2 pytz : 2024.2 dateutil : 2.9.0.post0 pip : 23.0.1 Cython : None sphinx : None IPython : 8.31.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : 2.9.10 pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : 2.0.37 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2024.2 qtpy : None pyqt5 : None </details>
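Since `to_sql()`'s `dtype` mapping does not enforce the schema against the frame, a hedged workaround is to validate the columns yourself before inserting. The helper name and expected-column list are hypothetical:

```python
def validate_schema(df, expected_columns):
    """Fail loudly before to_sql() if the frame's columns do not match the
    intended table schema, since the dtype mapping alone won't catch it."""
    missing = set(expected_columns) - set(df.columns)
    extra = set(df.columns) - set(expected_columns)
    if missing or extra:
        raise ValueError(
            f"schema mismatch: missing={sorted(missing)}, extra={sorted(extra)}"
        )
```

Calling `validate_schema(df_bus, list(dtypezzz))` before `df_bus.to_sql(...)` would have raised on the `bus_id` column instead of silently inserting it.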
open
2025-01-20T05:26:05Z
2025-01-27T21:40:51Z
https://github.com/pandas-dev/pandas/issues/60737
[ "Bug", "IO SQL", "Closing Candidate" ]
ammarsaf
1
gunthercox/ChatterBot
machine-learning
1,878
Edits on documentation made for university student project
For a university school project I will be making some edits to your documentation which aim to make the writing more technical. Please add edits if you would like :)
closed
2019-11-26T03:57:36Z
2020-08-29T20:38:33Z
https://github.com/gunthercox/ChatterBot/issues/1878
[]
compSciKai
3
d2l-ai/d2l-en
computer-vision
1,989
Implementing Pretraining BERT
I forgot to tune `num_workers`, so `train_bert()` ran for an extended time. I simply set `num_workers` to `0` and completed training in less than 1 minute on a single RTX 3090 GPU.
open
2021-12-11T13:03:09Z
2021-12-11T13:03:54Z
https://github.com/d2l-ai/d2l-en/issues/1989
[]
bjohn22
1
roboflow/supervision
deep-learning
809
No module name 'supervision', installed supervision==0.18.0 and imported supervision as sv
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report. ### Bug Traceback (most recent call last): File "/home/shivakrishnakarnati/Documents/Programming/ROS/ros_supervision_obj/install/object_det/lib/object_det/object_det_node", line 33, in <module> sys.exit(load_entry_point('object-det==0.0.0', 'console_scripts', 'object_det_node')()) File "/home/shivakrishnakarnati/Documents/Programming/ROS/ros_supervision_obj/install/object_det/lib/object_det/object_det_node", line 25, in importlib_load_entry_point return next(matches).load() File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 171, in load module = import_module(match.group('module')) File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/home/shivakrishnakarnati/Documents/Programming/ROS/ros_supervision_obj/install/object_det/lib/python3.10/site-packages/object_det/object_det_node.py", line 19, in <module> import supervision as sv ModuleNotFoundError: No module named 'supervision' [ros2run]: Process exited with failure 1 ### Environment - supervision 0.18.0 ### Minimal Reproducible Example import supervision as sv ### Additional _No response_ ### Are you willing to submit a PR? - [X] Yes I'd like to help by submitting a PR!
closed
2024-01-29T14:29:43Z
2024-01-29T15:21:54Z
https://github.com/roboflow/supervision/issues/809
[ "bug" ]
shivakarnati
7
vitalik/django-ninja
rest-api
305
Is the JWT tool or integrate in progress?
This is very important for the separation of front-end and back-end, and it would also be valuable to provide everyone with a unified verification method.
open
2021-12-23T08:36:18Z
2023-12-07T20:50:22Z
https://github.com/vitalik/django-ninja/issues/305
[]
wu-clan
16
awesto/django-shop
django
794
Wrong usage of Compressor constructor
Hi there. Recently I tried to generate a demo instance of django-shop from the cookiecutter template. It was generated successfully, but when I tried to bring the services up, the web application started throwing errors. I researched the problem, and if I understood correctly, the usage of the Compressor constructor implies passing the positional `resource_kind` arg. It seems the problem was introduced here: https://github.com/awesto/django-shop/commit/807c7c5d1797cb6353795223927cb9dfd0c849e7#diff-59d012f061f377c7836bdb130f0a5ad4R25 Here you can see the Compressor constructor, if I figured it out correctly: https://github.com/django-compressor/django-compressor/blame/2.4/compressor/base.py#L35 The issue seems to reproduce with a certain cookiecutter configuration: with the uwsgi server option, which is what I had selected.
open
2020-02-16T12:52:34Z
2020-02-16T20:05:15Z
https://github.com/awesto/django-shop/issues/794
[ "bug" ]
elipavlov
2
graphistry/pygraphistry
jupyter
362
[FEA] org_name support for File POSTs
dataset uploads pass org_name but afaict file uploads do not ---- Good: Dataset path: https://github.com/graphistry/pygraphistry/blob/bb8dd2483d01f2f9dc7b9e1a640cc803ac569ec2/graphistry/arrow_uploader.py#L247 ```python graphistry.register(..., org_name=x) graphistry.nodes(df, 'n').edges(df2, 's', 'd').plot() ``` --- Missing: When files are made as individual `File` objects: ```python graphistry.register(..., org_name=x) graphistry.nodes(df, 'n').edges(df2, 's', 'd').plot(as_files=True) ``` https://github.com/graphistry/pygraphistry/blob/bb8dd2483d01f2f9dc7b9e1a640cc803ac569ec2/graphistry/ArrowFileUploader.py#L72
closed
2022-06-07T06:03:32Z
2022-08-04T00:52:39Z
https://github.com/graphistry/pygraphistry/issues/362
[ "enhancement", "p2" ]
lmeyerov
2
vastsa/FileCodeBox
fastapi
205
Request: add authorized users
I hope a feature for additional users can be added, rather than having only a single backend administrator; for example, the ability to add authorized users. For instance, I would like to share access with other people while granting them permissions only over their own files, without allowing backend access. These authorized users could upload and share files normally, just like the administrator, but could not view or modify any backend data. Currently, if guest uploads are disabled in the backend, only the administrator can upload files, which makes it a completely private system that cannot be shared with authorized users.
closed
2024-09-09T05:25:49Z
2024-10-09T06:25:29Z
https://github.com/vastsa/FileCodeBox/issues/205
[]
li7355608
1
AUTOMATIC1111/stable-diffusion-webui
deep-learning
16,327
[Bug]: Automatic1111 fails to load weights
### Checklist - [X] The issue exists after disabling all extensions - [X] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [X] The issue exists in the current version of the webui - [X] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? automatic try to load some wights and fail in it :( ### Steps to reproduce the problem 1. Install Automatic 2, Use it ### What should have happened? Press any key and nothing worked :( ### What browsers do you use to access the UI ? Google Chrome ### Sysinfo { "Platform": "Windows-10-10.0.22631-SP0", "Python": "3.10.9", "Version": "v1.10.1", "Commit": "82a973c04367123ae98bd9abdf80d9eda9b910e2", "Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n\tmodified: webui-user.bat\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\tlocalizations/my.json\n\tscripts/detect_extension.py\n\tscripts/postprocessing_caption.py\n\tscripts/postprocessing_create_flipped_copies.py\n\tscripts/postprocessing_focal_crop.py\n\tscripts/postprocessing_split_oversized.py\n\tscripts/processing_autosized_crop.py\n\tuser.css.txt\n\twebui-user ORIGINAL.bat\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")", "Script path": "D:\\StableDiffusion\\sadsadsadsadasw", "Data path": "D:\\StableDiffusion\\sadsadsadsadasw", "Extensions dir": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions", "Checksum": "003ab805b8002f1dfe10149c9030b6f7aa91316f2655442d8a38fa970a068f95", "Commandline": [ "launch.py", "--dump-sysinfo" ], "Torch env info": { "torch_version": "2.1.2+cu121", "is_debug_build": "False", "cuda_compiled_version": "12.1", 
"gcc_version": null, "clang_version": null, "cmake_version": null, "os": "Майкрософт Windows 11 Pro", "libc_version": "N/A", "python_version": "3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)", "python_platform": "Windows-10-10.0.22631-SP0", "is_cuda_available": "True", "cuda_runtime_version": null, "cuda_module_loading": "LAZY", "nvidia_driver_version": "560.70", "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3080", "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.26.2", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "pytorch_optimizer==2.12.0", "torch==2.1.2+cu121", "torchdiffeq==0.2.3", "torchmetrics==1.4.0.post0", "torchsde==0.2.6", "torchvision==0.16.2+cu121" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": [ "Architecture=9", "CurrentClockSpeed=3701", "DeviceID=CPU0", "Family=107", "L2CacheSize=3072", "L2CacheSpeed=", "Manufacturer=AuthenticAMD", "MaxClockSpeed=3701", "Name=AMD Ryzen 5 5600X 6-Core Processor ", "ProcessorType=3", "Revision=8450" ] }, "Exceptions": [], "CPU": { "model": "AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD", "count logical": 12, "count physical": 6 }, "RAM": { "total": "16GB", "used": "8GB", "free": "8GB" }, "Extensions": [ { "name": "a1111-sd-webui-tagcomplete", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\a1111-sd-webui-tagcomplete", "commit": "fe32ad739d60194cde0fd8a4fec217af056226cf", "branch": "main", "remote": "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" }, { "name": "adetailer", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\adetailer", "commit": "a7d961131e879ea8a930034a21a2dee21b173e8c", "branch": "main", "remote": "https://github.com/Bing-su/adetailer.git" }, { "name": "canvas-zoom", "path": 
"D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\canvas-zoom", "commit": "63ca81ef1d2b24e337cddb614a675ad91297f162", "branch": "main", "remote": "https://github.com/richrobber2/canvas-zoom.git" }, { "name": "kohya_ss", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\kohya_ss", "commit": "726ea78657fb5c09dffec22a3979d583b9e84cbf", "branch": "master", "remote": "https://github.com/bmaltais/kohya_ss.git" }, { "name": "sd-canvas-editor", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-canvas-editor", "commit": "e4c9e15f09257b9f22434555bd8ac955f21365ef", "branch": "master", "remote": "https://github.com/jtydhr88/sd-canvas-editor.git" }, { "name": "sd-civitai-browser-plus", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-civitai-browser-plus", "commit": "0b97a482ad4e3c0fec797a797414a0f0eeef08fa", "branch": "main", "remote": "https://github.com/BlafKing/sd-civitai-browser-plus.git" }, { "name": "sd-dynamic-prompts", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-dynamic-prompts", "commit": "de056ff8d80e4ad120e13a90cf200f3383f427c6", "branch": "main", "remote": "https://github.com/adieyal/sd-dynamic-prompts.git" }, { "name": "sd-webui-ar", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-webui-ar", "commit": "3973c86a7a0f9722b5f61dd1a141ba6821f60245", "branch": "main", "remote": "https://github.com/alemelis/sd-webui-ar.git" }, { "name": "sd-webui-aspect-ratio-helper", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-webui-aspect-ratio-helper", "commit": "99fcf9b0a4e3f8c8cac07b12d17b66f12297b828", "branch": "main", "remote": "https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git" }, { "name": "sd-webui-e621-prompt", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-webui-e621-prompt", "commit": "74ce0a50d788abfa42d67b0614f7195d3f71294c", "branch": "main", "remote": "https://github.com/nochnoe/sd-webui-e621-prompt.git" }, { "name": "sd-webui-prompt-all-in-one", "path": 
"D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-webui-prompt-all-in-one", "commit": "d69645e6a6701c5117e0a874d1ef80d5cb5d55cc", "branch": "main", "remote": "https://github.com/Physton/sd-webui-prompt-all-in-one.git" }, { "name": "sd-webui-regional-prompter", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-webui-regional-prompter", "commit": "4802faca6bcc40c4d1033920e8ad9fd7542eca79", "branch": "main", "remote": "https://github.com/hako-mikan/sd-webui-regional-prompter.git" }, { "name": "sd_dreambooth_extension", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd_dreambooth_extension", "commit": "45a12fe5950bf93205b6ef2b7511eb94052a241f", "branch": "main", "remote": "https://github.com/d8ahazard/sd_dreambooth_extension.git" }, { "name": "stable-diffusion-ps-pea", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\stable-diffusion-ps-pea", "commit": "f0ebea7aa5ef715a17aa52af8bcc9605366dedb2", "branch": "main", "remote": "https://github.com/huchenlei/stable-diffusion-ps-pea.git" }, { "name": "stable-diffusion-webui-localization-ru_RU", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\stable-diffusion-webui-localization-ru_RU", "commit": "575a585761809f790943fac12c774e2c8c2c1c62", "branch": "main", "remote": "https://github.com/Northerner1/stable-diffusion-webui-localization-ru_RU.git" }, { "name": "StylePile", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\StylePile", "commit": "85b549a10ff6c5b1604e980cd2107253b088e832", "branch": "main", "remote": "https://github.com/some9000/StylePile.git" }, { "name": "Stylez", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\Stylez", "commit": "f93f662331f0556ad8074b21dfe3653058253ff9", "branch": "main", "remote": "https://github.com/javsezlol1/Stylez.git" }, { "name": "ultimate-upscale-for-automatic1111", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\ultimate-upscale-for-automatic1111", "commit": "2322caa480535b1011a1f9c18126d85ea444f146", 
"branch": "master", "remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git" } ], "Inactive extensions": [ { "name": "sd-webui-photopea-embed", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\sd-webui-photopea-embed", "commit": "99ea83f925f187a959b177318b16f840a3cdc11d", "branch": "main", "remote": "https://github.com/yankooliveira/sd-webui-photopea-embed.git" }, { "name": "stable-diffusion-webui-images-browser", "path": "D:\\StableDiffusion\\sadsadsadsadasw\\extensions\\stable-diffusion-webui-images-browser", "commit": "a42c7a30181636a05815e62426d5eff4d3340529", "branch": "main", "remote": "https://github.com/yfszzx/stable-diffusion-webui-images-browser.git" } ], "Environment": { "COMMANDLINE_ARGS": " --dump-sysinfo", "GRADIO_ANALYTICS_ENABLED": "False" }, "Config": { "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "SWIN_torch_compile": false, "hypertile_enable_unet": false, "hypertile_enable_unet_secondpass": false, "hypertile_max_depth_unet": 3, "hypertile_max_tile_unet": 256, "hypertile_swap_size_unet": 3, "hypertile_enable_vae": false, "hypertile_max_depth_vae": 3, "hypertile_max_tile_vae": 128, "hypertile_swap_size_vae": 3, "tac_tagFile": "e621.csv", "tac_active": true, "tac_activeIn.txt2img": true, "tac_activeIn.img2img": true, "tac_activeIn.negativePrompts": true, "tac_activeIn.thirdParty": true, "tac_activeIn.modelList": "", "tac_activeIn.modelListMode": "Blacklist", "tac_slidingPopup": true, "tac_maxResults": 15.0, "tac_showAllResults": true, "tac_resultStepLength": 100, "tac_delayTime": 100, "tac_useWildcards": true, "tac_sortWildcardResults": true, "tac_wildcardExclusionList": "", "tac_skipWildcardRefresh": false, "tac_useEmbeddings": true, "tac_includeEmbeddingsInNormalResults": false, "tac_useHypernetworks": true, "tac_useLoras": true, "tac_useLycos": true, "tac_useLoraPrefixForLycos": true, "tac_showWikiLinks": true, 
"tac_showExtraNetworkPreviews": true, "tac_modelSortOrder": "Name", "tac_useStyleVars": false, "tac_replaceUnderscores": true, "tac_escapeParentheses": true, "tac_appendComma": true, "tac_appendSpace": true, "tac_alwaysSpaceAtEnd": true, "tac_modelKeywordCompletion": "Never", "tac_modelKeywordLocation": "Start of prompt", "tac_wildcardCompletionMode": "To next folder level", "tac_alias.searchByAlias": true, "tac_alias.onlyShowAlias": false, "tac_translation.translationFile": "None", "tac_translation.oldFormat": false, "tac_translation.searchByTranslation": true, "tac_translation.liveTranslation": false, "tac_extra.extraFile": "extra-quality-tags.csv", "tac_extra.addMode": "Insert before", "tac_chantFile": "demo-chants.json", "tac_keymap": "{\n \"MoveUp\": \"ArrowUp\",\n \"MoveDown\": \"ArrowDown\",\n \"JumpUp\": \"PageUp\",\n \"JumpDown\": \"PageDown\",\n \"JumpToStart\": \"Home\",\n \"JumpToEnd\": \"End\",\n \"ChooseSelected\": \"Enter\",\n \"ChooseFirstOrSelected\": \"Tab\",\n \"Close\": \"Escape\"\n}", "tac_colormap": "{\n \"danbooru\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"indianred\", \"firebrick\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"orange\", \"darkorange\"]\n },\n \"e621\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"gold\", \"goldenrod\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"tomato\", \"darksalmon\"],\n \"6\": [\"red\", \"maroon\"],\n \"7\": [\"whitesmoke\", \"black\"],\n \"8\": [\"seagreen\", \"darkseagreen\"]\n }\n}", "tac_refreshTempFiles": "Refresh TAC temp files", "polotno_api_key": "bHEpG9Rp0Nq9XrLcwFNu", "canvas_editor_default_width": 1024, "canvas_editor_default_height": 1024, "use_aria2": true, "disable_dns": false, "show_log": false, "split_aria2": 64, "aria2_flags": "", "unpack_zip": false, "save_api_info": false, "auto_save_all_img": 
false, "custom_api_key": "be317f70aa843485436808957f3a9000", "hide_early_access": true, "use_LORA": false, "dot_subfolders": true, "use_local_html": false, "local_path_in_html": false, "page_header": false, "video_playback": true, "individual_meta_btn": true, "model_desc_to_json": true, "civitai_not_found_print": true, "civitai_send_to_browser": false, "image_location": "", "sub_image_location": true, "save_to_custom": false, "custom_civitai_proxy": "", "cabundle_path_proxy": "", "disable_sll_proxy": false, "insert_sub_1": false, "insert_sub_2": false, "insert_sub_3": false, "insert_sub_4": false, "insert_sub_5": false, "insert_sub_6": false, "insert_sub_7": false, "insert_sub_8": false, "insert_sub_9": false, "insert_sub_10": false, "insert_sub_11": false, "insert_sub_12": false, "insert_sub_13": false, "insert_sub_14": false, "Checkpoint_subfolder": "None", "LORA_LoCon_subfolder": "None", "TextualInversion_subfolder": "None", "Poses_subfolder": "None", "Controlnet_subfolder": "None", "Hypernetwork_subfolder": "None", "MotionModule_subfolder": "None", "SWINIR_upscale_subfolder": "None", "REALESRGAN_upscale_subfolder": "None", "GFPGAN_upscale_subfolder": "None", "BSRGAN_upscale_subfolder": "None", "ESRGAN_upscale_subfolder": "None", "VAE_subfolder": "None", "AestheticGradient_subfolder": "None", "Wildcards_subfolder": "None", "Workflows_subfolder": "None", "Other_subfolder": "None", "arh_javascript_aspect_ratio_show": true, "arh_javascript_aspect_ratio": "1:1, 3:2, 4:3, 5:4, 16:9", "arh_ui_javascript_selection_method": "Aspect Ratios Dropdown", "arh_hide_accordion_by_default": true, "arh_expand_by_default": false, "arh_ui_component_order_key": "MaxDimensionScaler, MinDimensionScaler, PredefinedAspectRatioButtons, PredefinedPercentageButtons", "arh_show_max_width_or_height": false, "arh_max_width_or_height": 1024.0, "arh_show_min_width_or_height": false, "arh_min_width_or_height": 1024.0, "arh_show_predefined_aspect_ratios": false, 
"arh_predefined_aspect_ratio_use_max_dim": false, "arh_predefined_aspect_ratios": "1:1, 4:3, 16:9, 9:16, 21:9", "arh_show_predefined_percentages": false, "arh_predefined_percentages": "25, 50, 75, 125, 150, 175, 200", "arh_predefined_percentages_display_key": "Incremental/decremental percentage (-50%, +50%)", "e621_prompt_username": "", "e621_prompt_api_key": "", "e621_prompt_user_agent": "sd-webui-e621-prompt (nochnoe)", "e621_prompt_use_proxy": false, "e621_prompt_proxy_url": "", "e621_prompt_excluded_tags": "comic, watermark, text, sign, patreon_logo, internal, censored, censored_genitalia, censored_penis, censored_pussy, censored_text, censored_anus, multiple_poses, multiple_images, dialogue, speech_bubble, english_text, dialogue_box, subtitled, thought_bubble, cutaway, conditional_dnp", "e621_prompt_appended_tags": "", "e621_prompt_replace_underscores": true, "e621_prompt_replace_underscores_in_appended": true, "e621_prompt_artist_prefix": "", "e621_prompt_meta_prefix": "", "e621_prompt_species_prefix": "", "e621_prompt_character_prefix": "", "e621_prompt_lore_prefix": "", "e621_prompt_general_prefix": "", "e621_prompt_copyright_prefix": "", "e621_prompt_invalid_prefix": "", "e621_prompt_rating_prefix": "rating:", "e621_prompt_rating_format": "short", "image_browser_active_tabs": "txt2img, img2img, txt2img-grids, img2img-grids, Extras, Favorites, Others, All, Maintenance", "image_browser_hidden_components": [], "image_browser_with_subdirs": true, "image_browser_preload": false, "image_browser_copy_image": false, "image_browser_delete_message": true, "image_browser_txt_files": true, "image_browser_debug_level": "0 - none", "image_browser_delete_recycle": true, "image_browser_scan_exif": true, "image_browser_mod_shift": false, "image_browser_mod_ctrl_shift": false, "image_browser_ranking_pnginfo": false, "image_browser_page_columns": 6, "image_browser_page_rows": 6, "image_browser_pages_perload": 20, "image_browser_height_auto": false, 
"image_browser_use_thumbnail": false, "image_browser_thumbnail_size": 200, "image_browser_thumbnail_crop": false, "image_browser_swipe": false, "image_browser_img_tooltips": true, "image_browser_show_progress": true, "image_browser_info_add": false, "image_browser_video_pos": "Above", "image_browser_video_x": 640, "image_browser_video_y": 640, "sd_model_checkpoint": "BstaberX.safetensors [73cac1edef]", "sd_checkpoint_hash": "73cac1edefe2f7e81d64b126b212f6c96e006824bcfbdd27ca96e2599e40750f", "outdir_samples": "", "outdir_txt2img_samples": "outputs\\txt2img-images", "outdir_img2img_samples": "outputs\\img2img-images", "outdir_extras_samples": "outputs\\extras-images", "outdir_grids": "", "outdir_txt2img_grids": "outputs\\txt2img-grids", "outdir_img2img_grids": "outputs\\img2img-grids", "outdir_save": "log\\images", "outdir_init_images": "outputs\\init-images", "samples_save": true, "samples_format": "png", "samples_filename_pattern": "", "save_images_add_number": true, "save_images_replace_action": "Replace", "grid_save": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "grid_prevent_empty_spots": false, "grid_zip_filename_pattern": "", "n_rows": -1, "font": "", "grid_text_active_color": "#000000", "grid_text_inactive_color": "#999999", "grid_background_color": "#ffffff", "save_images_before_face_restoration": false, "save_images_before_highres_fix": false, "save_images_before_color_correction": false, "save_mask": false, "save_mask_composite": false, "jpeg_quality": 80, "webp_lossless": false, "export_for_4chan": true, "img_downscale_threshold": 4.0, "target_side_length": 4000.0, "img_max_size_mp": 200.0, "use_original_name_batch": true, "use_upscaler_name_as_suffix": false, "save_selected_only": true, "save_init_img": false, "temp_dir": "", "clean_temp_dir_at_start": false, "save_incomplete_images": false, "notification_audio": true, "notification_volume": 100, "save_to_dirs": true, "grid_save_to_dirs": true, 
"use_save_to_dirs_for_ui": false, "directories_filename_pattern": "[date]", "directories_max_prompt_words": 8, "auto_backcompat": true, "use_old_emphasis_implementation": false, "use_old_karras_scheduler_sigmas": false, "no_dpmpp_sde_batch_determinism": false, "use_old_hires_fix_width_height": false, "dont_fix_second_order_samplers_schedule": false, "hires_fix_use_firstpass_conds": false, "use_old_scheduling": false, "use_downcasted_alpha_bar": true, "refiner_switch_by_sample_steps": false, "lora_functional": false, "extra_networks_show_hidden_directories": true, "extra_networks_dir_button_function": false, "extra_networks_hidden_models": "When searched", "extra_networks_default_multiplier": 1, "extra_networks_card_width": 0.0, "extra_networks_card_height": 0.0, "extra_networks_card_text_scale": 1, "extra_networks_card_show_desc": true, "extra_networks_card_description_is_html": false, "extra_networks_card_order_field": "Path", "extra_networks_card_order": "Ascending", "extra_networks_tree_view_style": "Dirs", "extra_networks_tree_view_default_enabled": true, "extra_networks_tree_view_default_width": 180.0, "extra_networks_add_text_separator": " ", "ui_extra_networks_tab_reorder": "", "textual_inversion_print_at_load": false, "textual_inversion_add_hashes_to_infotext": true, "sd_hypernetwork": "None", "sd_lora": "None", "lora_preferred_name": "Alias from file", "lora_add_hashes_to_infotext": true, "lora_show_all": false, "lora_hide_unknown_for_versions": [], "lora_in_memory_limit": 0, "lora_not_found_warning_console": false, "lora_not_found_gradio_warning": false, "cross_attention_optimization": "Automatic", "s_min_uncond": 0, "token_merging_ratio": 0, "token_merging_ratio_img2img": 0, "token_merging_ratio_hr": 0, "pad_cond_uncond": false, "pad_cond_uncond_v0": false, "persistent_cond_cache": true, "batch_cond_uncond": true, "fp8_storage": "Disable", "cache_fp16_weight": false, "hide_samplers": [], "eta_ddim": 0, "eta_ancestral": 1, "ddim_discretize": "uniform", 
"s_churn": 0, "s_tmin": 0, "s_tmax": 0, "s_noise": 1, "sigma_min": 0.0, "sigma_max": 0.0, "rho": 0.0, "eta_noise_seed_delta": 0, "always_discard_next_to_last_sigma": false, "sgm_noise_multiplier": false, "uni_pc_variant": "bh1", "uni_pc_skip_type": "time_uniform", "uni_pc_order": 3, "uni_pc_lower_order_final": true, "sd_noise_schedule": "Default", "sd_checkpoints_limit": 1, "sd_checkpoints_keep_in_cpu": true, "sd_checkpoint_cache": 0, "sd_unet": "Automatic", "enable_quantization": false, "emphasis": "Original", "enable_batch_seeds": true, "comma_padding_backtrack": 20, "CLIP_stop_at_last_layers": 1, "upcast_attn": false, "randn_source": "NV", "tiling": false, "hires_fix_refiner_pass": "second pass", "enable_prompt_comments": true, "sdxl_crop_top": 0.0, "sdxl_crop_left": 0.0, "sdxl_refiner_low_aesthetic_score": 2.5, "sdxl_refiner_high_aesthetic_score": 6.0, "sd_vae_checkpoint_cache": 0, "sd_vae": "None", "sd_vae_overrides_per_model_preferences": true, "auto_vae_precision_bfloat16": false, "auto_vae_precision": false, "sd_vae_encode_method": "Full", "sd_vae_decode_method": "Full", "inpainting_mask_weight": 1, "initial_noise_multiplier": 1, "img2img_extra_noise": 0, "img2img_color_correction": false, "img2img_fix_steps": false, "img2img_background_color": "#ffffff", "img2img_editor_height": 720, "img2img_sketch_default_brush_color": "#ffffff", "img2img_inpaint_mask_brush_color": "#ffffff", "img2img_inpaint_sketch_default_brush_color": "#ffffff", "return_mask": false, "return_mask_composite": false, "img2img_batch_show_results_limit": 32, "overlay_inpaint": true, "return_grid": true, "do_not_show_images": false, "js_modal_lightbox": true, "js_modal_lightbox_initially_zoomed": true, "js_modal_lightbox_gamepad": false, "js_modal_lightbox_gamepad_repeat": 250.0, "sd_webui_modal_lightbox_icon_opacity": 1, "sd_webui_modal_lightbox_toolbar_opacity": 0.9, "gallery_height": "", "open_dir_button_choice": "Subdirectory", "enable_pnginfo": true, "save_txt": false, 
"add_model_name_to_info": true, "add_model_hash_to_info": true, "add_vae_name_to_info": true, "add_vae_hash_to_info": true, "add_user_name_to_info": false, "add_version_to_infotext": true, "disable_weights_auto_swap": true, "infotext_skip_pasting": [], "infotext_styles": "Apply if any", "show_progressbar": true, "live_previews_enable": true, "live_previews_image_format": "webp", "show_progress_grid": true, "show_progress_every_n_steps": 1, "show_progress_type": "Full", "live_preview_allow_lowvram_full": false, "live_preview_content": "Combined", "live_preview_refresh_period": 1000.0, "live_preview_fast_interrupt": false, "js_live_preview_in_modal_lightbox": false, "keyedit_precision_attention": 0.1, "keyedit_precision_extra": 0.05, "keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ", "keyedit_delimiters_whitespace": [ "Tab", "Carriage Return", "Line Feed" ], "keyedit_move": true, "disable_token_counters": false, "include_styles_into_token_counters": true, "extra_options_txt2img": [], "extra_options_img2img": [], "extra_options_cols": 1, "extra_options_accordion": false, "compact_prompt_box": false, "samplers_in_dropdown": true, "dimensions_and_batch_together": true, "sd_checkpoint_dropdown_use_short": false, "hires_fix_show_sampler": true, "hires_fix_show_prompts": false, "txt2img_settings_accordion": false, "img2img_settings_accordion": false, "interrupt_after_current": true, "localization": "ru_RU", "quicksettings_list": [ "sd_model_checkpoint", "CLIP_stop_at_last_layers", "sd_vae", "sd_lora", "use_LORA", "tac_activeIn.negativePrompts" ], "ui_tab_order": [], "hidden_tabs": [], "ui_reorder_list": [], "gradio_theme": "Default", "gradio_themes_cache": true, "show_progress_in_title": true, "send_seed": true, "send_size": true, "enable_reloading_ui_scripts": false, "api_enable_requests": true, "api_forbid_local_requests": true, "api_useragent": "", "prioritized_callbacks_app_started": [], "prioritized_callbacks_model_loaded": [], "prioritized_callbacks_ui_tabs": [], 
"prioritized_callbacks_ui_settings": [], "prioritized_callbacks_before_image_saved": [], "prioritized_callbacks_infotext_pasted": [], "prioritized_callbacks_script_unloaded": [], "prioritized_callbacks_before_ui": [], "prioritized_callbacks_list_optimizers": [], "prioritized_callbacks_before_token_counter": [], "prioritized_callbacks_script_before_process": [], "prioritized_callbacks_script_process": [], "prioritized_callbacks_script_post_sample": [], "prioritized_callbacks_script_on_mask_blend": [], "prioritized_callbacks_script_postprocess_maskoverlay": [], "prioritized_callbacks_script_after_component": [], "auto_launch_browser": "Local", "enable_console_prompts": false, "show_warnings": true, "show_gradio_deprecation_warnings": true, "memmon_poll_rate": 8, "samples_log_stdout": false, "multiple_tqdm": true, "enable_upscale_progressbar": true, "print_hypernet_extra": false, "list_hidden_files": true, "disable_mmap_load_safetensors": false, "hide_ldm_prints": true, "dump_stacks_on_signal": false, "face_restoration": false, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "face_restoration_unload": false, "postprocessing_enable_in_main_ui": [], "postprocessing_disable_in_extras": [], "postprocessing_operation_order": [], "upscaling_max_images_in_cache": 5, "postprocessing_existing_caption_action": "Ignore", "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "realesrgan_enabled_models": [ "R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B" ], "dat_enabled_models": [ "DAT x2", "DAT x3", "DAT x4" ], "DAT_tile": 192, "DAT_tile_overlap": 8, "set_scale_by_when_changing_upscaler": false, "unload_models_when_training": false, "pin_memory": false, "save_optimizer_state": false, "save_training_settings_to_txt": true, "dataset_filename_word_regex": "", "dataset_filename_join_string": " ", "training_image_repeats_per_epoch": 1, "training_write_csv_every": 500.0, "training_xattention_optimizations": false, "training_enable_tensorboard": false, 
"training_tensorboard_save_images": false, "training_tensorboard_flush_every": 120.0, "canvas_hotkey_shrink_brush": "Q", "canvas_hotkey_grow_brush": "W", "canvas_disabled_functions": [ "Overlap" ], "canvas_hotkey_zoom": "Shift", "canvas_hotkey_adjust": "Ctrl", "canvas_hotkey_move": "F", "canvas_hotkey_fullscreen": "S", "canvas_hotkey_reset": "R", "canvas_hotkey_overlap": "O", "canvas_show_tooltip": true, "canvas_auto_expand": true, "canvas_blur_prompt": false, "canvas_zoom_undo_extra_key": "Ctrl", "canvas_zoom_hotkey_undo": "Z", "canvas_zoom_inc_brush_size": "]", "canvas_zoom_dec_brush_size": "[", "canvas_zoom_hotkey_open_colorpanel": "Q", "canvas_zoom_hotkey_pin_colorpanel": "T", "canvas_zoom_hotkey_dropper": "A", "canvas_zoom_hotkey_fill": "X", "canvas_zoom_hotkey_transparency": "C", "canvas_zoom_hide_btn": true, "canvas_zoom_mask_clear": true, "canvas_zoom_enable_integration": true, "canvas_zoom_brush_size": 200, "canvas_zoom_brush_size_change": 5, "canvas_zoom_transparency_level": 70, "canvas_zoom_brush_opacity": false, "canvas_zoom_inpaint_label": true, "canvas_zoom_inpaint_warning": true, "canvas_zoom_inpaint_change_btn_color": false, "canvas_zoom_inpaint_btn_color": "#C33227", "canvas_zoom_brush_outline": false, "canvas_zoom_add_buttons": false, "canvas_zoom_draw_staight_lines": false, "canvas_zoom_inpaint_brushcolor": "#000000", "canvas_zoom_disabled_functions": [ "Overlap" ], "interrogate_keep_models_in_memory": false, "interrogate_return_ranks": false, "interrogate_clip_num_beams": 1, "interrogate_clip_min_length": 24, "interrogate_clip_max_length": 48, "interrogate_clip_dict_limit": 1500.0, "interrogate_clip_skip_categories": [], "interrogate_deepbooru_score_threshold": 0.5, "deepbooru_sort_alpha": true, "deepbooru_use_spaces": true, "deepbooru_escape": true, "deepbooru_filter_tags": "", "disabled_extensions": [ "sd-webui-photopea-embed", "stable-diffusion-webui-images-browser" ], "disable_all_extensions": "none", "tac_frequencySort": true, 
"tac_frequencyFunction": "Logarithmic (weak)", "tac_frequencyMinCount": 3, "tac_frequencyMaxAge": 30, "tac_frequencyRecommendCap": 10, "tac_frequencyIncludeAlias": false, "dp_ignore_whitespace": false, "dp_write_raw_template": false, "dp_write_prompts_to_file": false, "dp_parser_variant_start": "{", "dp_parser_variant_end": "}", "dp_parser_wildcard_wrap": "__", "dp_limit_jinja_prompts": false, "dp_auto_purge_cache": false, "dp_wildcard_manager_no_dedupe": false, "dp_wildcard_manager_no_sort": false, "dp_wildcard_manager_shuffle": false, "dp_magicprompt_default_model": "Gustavosta/MagicPrompt-Stable-Diffusion", "dp_magicprompt_batch_size": 1, "images_history_preload": false, "images_record_paths": true, "images_delete_message": true, "images_history_page_columns": 6, "images_history_page_rows": 6, "images_history_pages_perload": 20, "LORA_subfolder": "None", "LoCon_subfolder": "None", "DoRA_subfolder": "None", "ad_max_models": 4, "ad_extra_models_dir": "", "ad_save_previews": false, "ad_save_images_before": false, "ad_only_seleted_scripts": true, "ad_script_names": "dynamic_prompting,dynamic_thresholding,lora_block_weight,negpip,wildcard_recursive,wildcards", "ad_bbox_sortby": "None", "ad_same_seed_for_each_tap": false, "regprp_debug": false, "regprp_hidepmask": false, "prioritized_callbacks_after_component": [], "prioritized_callbacks_script_before_process_batch": [], "prioritized_callbacks_script_process_batch": [], "prioritized_callbacks_script_postprocess": [], "prioritized_callbacks_script_postprocess_image": [], "ad_same_seed_for_each_tab": false }, "Startup": null, "Packages": [ "absl-py==2.1.0", "accelerate==0.21.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohappyeyeballs==2.3.4", "aiohttp==3.10.0", "aiosignal==1.3.1", "aliyun-python-sdk-alimt==3.2.0", "aliyun-python-sdk-core==2.13.10", "altair==5.3.0", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.2.0", "beautifulsoup4==4.12.3", "bitsandbytes==0.43.0", 
"blendmodes==2022", "boto3==1.34.152", "botocore==1.34.152", "cachetools==5.4.0", "certifi==2024.7.4", "cffi==1.16.0", "chardet==5.2.0", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a", "colorama==0.4.6", "contourpy==1.2.1", "cryptography==43.0.0", "cycler==0.12.1", "dadaptation==3.2", "deprecation==2.1.0", "diffusers==0.29.2", "dill==0.3.8", "discord-webhook==1.3.0", "diskcache==5.6.3", "distro==1.9.0", "dynamicprompts==0.31.0", "einops==0.4.1", "exceptiongroup==1.2.2", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.4.0", "filelock==3.15.4", "filterpy==1.4.5", "flatbuffers==24.3.25", "fonttools==4.53.1", "frozenlist==1.4.1", "fsspec==2024.6.1", "ftfy==6.2.0", "gitdb==4.0.11", "GitPython==3.1.43", "google-auth==2.32.0", "google-auth-oauthlib==1.0.0", "gradio==3.41.2", "gradio_client==0.5.0", "grpcio==1.65.4", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.24.5", "idna==3.7", "imageio==2.34.2", "importlib_metadata==8.2.0", "importlib_resources==6.4.0", "inflection==0.5.1", "jax==0.4.31", "jaxlib==0.4.31", "Jinja2==3.1.4", "jmespath==0.10.0", "jsonmerge==1.8.0", "jsonschema==4.23.0", "jsonschema-specifications==2023.12.1", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy_loader==0.4", "lightning-utilities==0.11.6", "llvmlite==0.43.0", "lxml==5.2.2", "Markdown==3.6", "markdown-it-py==3.0.0", "MarkupSafe==2.1.5", "matplotlib==3.9.1", "mdurl==0.1.2", "mediapipe==0.10.14", "ml-dtypes==0.4.0", "mpmath==1.3.0", "multidict==6.0.5", "multiprocess==0.70.16", "networkx==3.3", "numba==0.60.0", "numpy==1.26.2", "oauthlib==3.2.2", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "openai==1.38.0", "opencv-contrib-python==4.10.0.84", "opencv-python==4.10.0.84", "opt-einsum==3.3.0", "orjson==3.10.6", "packaging==24.1", "pandas==2.2.2", "pathos==0.3.2", "piexif==1.1.3", 
"Pillow==9.5.0", "pillow-avif-plugin==1.4.3", "pip==24.2", "pox==0.3.4", "ppft==1.7.6.8", "protobuf==3.20.0", "psutil==5.9.5", "py-cpuinfo==9.0.0", "pyasn1==0.6.0", "pyasn1_modules==0.4.0", "pycparser==2.22", "pydantic==1.10.17", "pydub==0.25.1", "PyExecJS==1.5.1", "Pygments==2.18.0", "pyparsing==3.1.2", "PySocks==1.7.1", "python-dateutil==2.9.0.post0", "python-multipart==0.0.9", "pytorch-lightning==1.9.4", "pytorch_optimizer==2.12.0", "pytz==2024.1", "PyWavelets==1.6.0", "PyYAML==6.0.1", "referencing==0.35.1", "regex==2024.7.24", "requests==2.32.3", "requests-oauthlib==2.0.0", "resize-right==0.0.2", "rich==13.7.1", "rpds-py==0.19.1", "rsa==4.9", "s3transfer==0.10.2", "safetensors==0.4.2", "scikit-image==0.21.0", "scipy==1.14.0", "seaborn==0.13.2", "semantic-version==2.10.0", "Send2Trash==1.8.2", "sentencepiece==0.2.0", "setuptools==69.5.1", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.1", "sounddevice==0.4.7", "soupsieve==2.5", "spandrel==0.3.4", "spandrel_extra_arches==0.1.1", "starlette==0.26.1", "sympy==1.13.1", "tensorboard==2.13.0", "tensorboard-data-server==0.7.2", "tifffile==2024.7.24", "timm==1.0.8", "tokenizers==0.13.3", "tomesd==0.1.3", "toolz==0.12.1", "torch==2.1.2+cu121", "torchdiffeq==0.2.3", "torchmetrics==1.4.0.post0", "torchsde==0.2.6", "torchvision==0.16.2+cu121", "tqdm==4.66.4", "trampoline==0.1.2", "transformers==4.30.2", "typing_extensions==4.12.2", "tzdata==2024.1", "ultralytics==8.2.71", "ultralytics-thop==2.0.0", "urllib3==2.2.2", "uvicorn==0.30.5", "wcwidth==0.2.13", "websockets==11.0.3", "Werkzeug==3.0.3", "wheel==0.43.0", "xformers==0.0.23.post1", "yarl==1.9.4", "zipp==3.19.2", "ZipUnicode==1.1.1" ] } ### Console logs ```Shell D:\StableDiffusion\stable-diffusion-webui>git pull Already up to date. 
venv "D:\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] Version: v1.10.1 Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2 Launching Web UI with arguments: --autolaunch --theme=dark --force-enable-xformers --xformers --enable-insecure-extension-access --disable-safe-unpickle --api --listen --no-half-vae Loading weights [f5fc545f18] from D:\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion\bb95FurryMix_v130.safetensors Press any key to continue . . . ``` ### Additional information _No response_
open
2024-08-04T17:54:01Z
2024-08-04T17:54:01Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16327
[ "bug-report" ]
K1LL3RPUNCH
0
huggingface/datasets
machine-learning
6,864
Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub
### Describe the bug The dataset `rewardsignal/reddit_writing_prompts` is missing in Huggingface Hub. ### Steps to reproduce the bug ``` from datasets import load_dataset prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]') ``` ### Expected behavior DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed ### Environment info Nothing to do with versions
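The error above can be made less opaque by catching it explicitly and pointing at the Hub API endpoint that `load_dataset` resolves the repo against. A minimal defensive sketch — `hub_dataset_url` and `load_or_explain` are hypothetical helper names, and `DatasetNotFoundError` is importable from `datasets.exceptions` only in recent `datasets` releases:

```python
def hub_dataset_url(repo_id: str) -> str:
    """Build the Hub API URL where a dataset repo would be visible if it exists."""
    return f"https://huggingface.co/api/datasets/{repo_id}"


def load_or_explain(repo_id: str, **kwargs):
    """Wrap load_dataset so a missing/gated repo produces an actionable message."""
    from datasets import load_dataset
    from datasets.exceptions import DatasetNotFoundError

    try:
        return load_dataset(repo_id, **kwargs)
    except DatasetNotFoundError:
        raise SystemExit(
            f"'{repo_id}' was deleted, renamed, or gated. Check "
            f"{hub_dataset_url(repo_id)} in a browser; if the repo is private, "
            "pass token=... with an account that has access."
        )
```

Whether this particular repo was deleted or made private cannot be determined from the client side; opening the API URL in a browser distinguishes a 404 (gone) from a 401/403 (gated).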
closed
2024-05-03T06:03:30Z
2024-05-06T06:36:42Z
https://github.com/huggingface/datasets/issues/6864
[]
vinodrajendran001
1
deepspeedai/DeepSpeed
pytorch
6,618
[BUG] Long sequence parallelism (Ulysses) got error list index out of range
**Describe the bug** test ulysses got error list index out of range **To Reproduce** test ulysses with [test_ulysses.py](https://github.com/microsoft/DeepSpeedExamples/blob/uly-hf/post_training/sequence_parallelism/test_ulysses.py) torchrun --nproc_per_node=8 test_ulysses.py **Expected behavior** work fine **ds_report output** ``` collect2: error: ld returned 1 exit status gds .................... [NO] ....... [NO] transformer_inference .. [NO] ....... [OKAY] inference_core_ops ..... [NO] ....... [OKAY] cutlass_ops ............ [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] ragged_device_ops ...... [NO] ....... [OKAY] ragged_ops ............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3 [WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/opt/conda/lib/python3.10/site-packages/torch'] torch version .................... 2.3.0+cu121 deepspeed install path ........... ['/opt/conda/lib/python3.10/site-packages/deepspeed'] deepspeed info ................... 0.15.2, unknown, unknown torch cuda version ............... 12.1 torch hip version ................ None nvcc version ..................... 12.1 deepspeed wheel compiled w. ...... torch 2.3, cuda 12.1 shared memory (/dev/shm) size .... 
100.00 GB ``` **Screenshots** got error: list index out of range ``` [rank6]: Traceback (most recent call last): [rank6]: File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 166, in <module> [rank6]: get_loss(model, data_loader, DS_CONFIG) [rank6]: File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 112, in get_loss [rank6]: model, _, _, _ = deepspeed.initialize(model=model, [rank6]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py", line 193, in initialize [rank6]: engine = DeepSpeedEngine(args=args, [rank6]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 269, in __init__ [rank6]: self._configure_distributed_model(model) [rank6]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1188, in _configure_distributed_model [rank6]: self.data_parallel_group = groups._get_data_parallel_group() [rank6]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/utils/groups.py", line 405, in _get_data_parallel_group [rank6]: return mesh_device.get_group(mesh_dim="data_parallel") [rank6]: File "/opt/conda/lib/python3.10/site-packages/torch/distributed/device_mesh.py", line 423, in get_group [rank6]: _find_pg_by_ranks_and_tag(*self._dim_group_infos[mesh_dim][:2]) [rank6]: IndexError: list index out of range [rank7]: Traceback (most recent call last): [rank7]: File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 166, in <module> [rank7]: get_loss(model, data_loader, DS_CONFIG) [rank7]: File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 112, in get_loss [rank7]: model, _, _, _ = deepspeed.initialize(model=model, [rank7]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py", line 193, in initialize [rank7]: engine = DeepSpeedEngine(args=args, [rank7]: File 
"/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 269, in __init__ [rank7]: self._configure_distributed_model(model) [rank7]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1188, in _configure_distributed_model [rank7]: self.data_parallel_group = groups._get_data_parallel_group() [rank7]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/utils/groups.py", line 405, in _get_data_parallel_group [rank7]: return mesh_device.get_group(mesh_dim="data_parallel") [rank7]: File "/opt/conda/lib/python3.10/site-packages/torch/distributed/device_mesh.py", line 423, in get_group [rank7]: _find_pg_by_ranks_and_tag(*self._dim_group_infos[mesh_dim][:2]) [rank7]: IndexError: list index out of range ``` **Launcher context** Are you launching your experiment with the `deepspeed` launcher, MPI, or something else?
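For what it's worth, a minimal standalone sketch (hypothetical numbers mirroring the repro command, not anything read from DeepSpeed internals) of how the mesh dimensions work out here: with 8 ranks and a sequence-parallel degree of 8, the data-parallel dimension degenerates to size 1, and a degenerate `data_parallel` mesh dimension seems worth checking against the `DeviceMesh` DeepSpeed builds, given that the failure is exactly in the `data_parallel` group lookup:

```python
# Hypothetical sketch: how a 2-D device mesh for Ulysses-style sequence
# parallelism is typically shaped. Values mirror the repro command
# (torchrun --nproc_per_node=8), not anything read from DeepSpeed itself.
world_size = 8
sequence_parallel_size = 8
data_parallel_size = world_size // sequence_parallel_size

mesh_shape = (data_parallel_size, sequence_parallel_size)
mesh_dim_names = ("data_parallel", "sequence_parallel")

print(mesh_shape)  # (1, 8) -- the data_parallel dimension collapses to 1
```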
closed
2024-10-10T03:41:30Z
2024-10-14T02:21:57Z
https://github.com/deepspeedai/DeepSpeed/issues/6618
[ "bug", "training" ]
Lzhang-hub
2
scanapi/scanapi
rest-api
267
New feature: import a Postman collection to scanapi format
A good improvement to this project would be the ability to import a Postman collection into the scanapi format!
closed
2020-09-10T02:54:34Z
2022-05-04T19:11:42Z
https://github.com/scanapi/scanapi/issues/267
[ "Feature" ]
flap
8
ray-project/ray
machine-learning
51,495
CI test windows://python/ray/tests:test_advanced_3 is consistently_failing
CI test **windows://python/ray/tests:test_advanced_3** is consistently_failing. Recent failures: - https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1 - https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1 DataCaseName-windows://python/ray/tests:test_advanced_3-END Managed by OSS Test Policy
closed
2025-03-19T00:05:28Z
2025-03-19T21:52:04Z
https://github.com/ray-project/ray/issues/51495
[ "bug", "triage", "core", "flaky-tracker", "ray-test-bot", "ci-test", "weekly-release-blocker", "stability" ]
can-anyscale
3
CorentinJ/Real-Time-Voice-Cloning
tensorflow
558
encoder training does not create saved_models folder
Hi, I'm training the encoder on m-ailabs dataset for some time and would expect to find a saved_models folder in my encoder folder. But there is no folder so far. Is this a known problem? Training the synthesizer creates such a directory as expected. Thanks for your help guys!
closed
2020-10-15T12:12:40Z
2020-10-15T12:37:04Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/558
[]
padmalcom
1
mars-project/mars
pandas
2,758
make mars type inference optional
<!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Is your feature request related to a problem? Please describe.** Mars type inferrence will generate mock data, then feed the data into user provided function, if the user function is time-consuming which take minutes or hours, there will be much time wasting in type inference, and the real computing didn't even happen which is unacceptable in some cases. **Describe the solution you'd like** It would be great if we can make dtypes lazy and type inferrence optional. If so, the expression call cost would be minimal.
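A minimal sketch of the lazy-inference idea in plain Python (generic code, not the Mars internals): wrap the dtypes in an object that only runs the expensive mock-data inference when first accessed, so building the expression costs nothing:

```python
class LazyDtypes:
    """Defer an expensive dtype-inference call until the value is first needed."""

    def __init__(self, infer_fn):
        self._infer_fn = infer_fn
        self._cached = None
        self._done = False

    @property
    def value(self):
        if not self._done:
            # e.g. running the user function on mock data would happen here
            self._cached = self._infer_fn()
            self._done = True
        return self._cached


calls = []
lazy = LazyDtypes(lambda: calls.append(1) or "float64")
# Building the expression graph does not trigger inference...
assert calls == []
# ...only accessing the dtype does, and only once.
assert lazy.value == "float64" and lazy.value == "float64"
print(len(calls))  # 1
```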
closed
2022-02-25T10:40:07Z
2022-02-28T06:15:22Z
https://github.com/mars-project/mars/issues/2758
[ "type: enhancement", "mod: dataframe" ]
chaokunyang
0
pydantic/logfire
fastapi
59
Extra message displayed in `openai` instrumentation
### Description first off, really excited about `logfire`! 👏 --- this issue is purely cosmetic and **disclaimer** I could be confused about the expected behavior but I seem to consistently get an extra message displayed in the human readable trace details. Maybe this is meant to be a placeholder for the `Assistant` response? ![Screenshot 2024-04-30 at 7 22 30 PM](https://github.com/pydantic/logfire/assets/31014960/15370dfa-0173-417b-955e-67d4a670004f) however the messages in the arguments section agrees with my expectations ![image](https://github.com/pydantic/logfire/assets/31014960/d9df133a-38c9-48d2-a61e-eafb7fa2a1ae)
closed
2024-05-01T00:25:06Z
2024-05-01T19:10:58Z
https://github.com/pydantic/logfire/issues/59
[ "Platform Bug", "OpenAI" ]
zzstoatzz
2
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
1,035
pix2pix
I used defects from two types of steel plates as the input dataset for pix2pix, but the output quality is very poor. Could you help me understand why?
open
2020-05-20T03:18:54Z
2020-05-20T03:18:54Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1035
[]
Yu700
0
home-assistant/core
python
140,414
core 2025.3.2: after update selectable output precision of a sensor doesn't work anymore
### The problem Normally I can select a precision on a template sensor: Standard (0.12345678), 0, 0.1, ..., 0.123456. After the update this doesn't work anymore. Only the standard works, despite what I set. ### What version of Home Assistant Core has the issue? core-2025.3.2 ### What was the last working version of Home Assistant Core? core-2025.3.1 ### What type of installation are you running? Home Assistant OS ### Integration causing the issue _No response_ ### Link to integration documentation on our website _No response_ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt ``` ### Additional information _No response_
closed
2025-03-11T20:56:26Z
2025-03-11T21:35:58Z
https://github.com/home-assistant/core/issues/140414
[]
peter9091
1
quokkaproject/quokka
flask
114
Add support MediaElement.js Player
Add support for the MediaElement.js player: mp4, flv, webm, ogg https://github.com/johndyer/mediaelement http://mediaelementjs.com/ https://pypi.python.org/pypi/collective.mediaelementjs/0.4.1
closed
2013-12-08T01:22:32Z
2015-07-16T02:56:33Z
https://github.com/quokkaproject/quokka/issues/114
[]
jniltinho
1
hyperspy/hyperspy
data-visualization
2,956
s.map and ragged kwarg: different behavior between lazy and non-lazy
There is a different behavior between lazy and non-lazy `dtype=object` kwargs when using s.map. Essentially, for a non-lazy signal kwarg, the array is passed as an `object`, while for a lazy signal it is passed as the "inner" array, not the object.

------------

Example with a lazy object kwarg:

```python
import numpy as np
import hyperspy.api as hs

def afunction(vector, avector):
    print(avector.dtype)
    vector_new = np.zeros(2)
    vector_new[0] = vector[0] + avector[0]
    vector_new[1] = vector[1] + avector[1]
    return vector_new

data = np.empty((4, 2), dtype=object)
for iy, ix in np.ndindex(data.shape):
    data[iy, ix] = np.array([5, 10])

s = hs.signals.BaseSignal(data, ragged=True)
s_ref = s.inav[0, 0].as_lazy()
s_out = s.map(afunction, avector=s_ref, inplace=False, lazy_output=False)
```

Output from the `print(avector.dtype)`: `int64`

--------

Example with a non-lazy object kwarg:

```python
def afunction(vector, avector):
    print(avector.dtype)
    vector_new = np.zeros(2)
    vector_new[0] = vector[0] + avector[0]
    vector_new[1] = vector[1] + avector[1]
    return vector_new

data = np.empty((4, 2), dtype=object)
for iy, ix in np.ndindex(data.shape):
    data[iy, ix] = np.array([5, 10])

s = hs.signals.BaseSignal(data, ragged=True)
s_ref = s.inav[0, 0]  # .as_lazy()
s_out = s.map(afunction, avector=s_ref, inplace=False, lazy_output=False)
```

Output from the `print(avector.dtype)`: `object`

----------

This creates problems, as seen for example in https://github.com/pyxem/pyxem/issues/851, since the input function expects vectors, not an object.

------------

I'm not certain what the best solution is, but I think the most important thing is that the `map` function works the same for both `lazy` and `non-lazy` kwarg signals.
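The object-vs-inner-dtype distinction at issue can be shown with plain NumPy, independent of HyperSpy (a standalone sketch):

```python
import numpy as np

# A ragged-style container: the outer array holds generic Python objects,
# and each element is itself a NumPy array with its own inner dtype.
data = np.empty(4, dtype=object)
for i in range(4):
    data[i] = np.array([5, 10])

print(data.dtype)     # object (what the non-lazy kwarg path sees)
print(data[0].dtype)  # the inner integer dtype (what the lazy path sees)
```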
open
2022-06-10T19:28:06Z
2022-06-13T13:18:31Z
https://github.com/hyperspy/hyperspy/issues/2956
[ "type: bug" ]
magnunor
3
tortoise/tortoise-orm
asyncio
996
Function cannot use relations in the scope of annotate
**Describe the bug** Function cannot use relations in the scope of annotate; it raises the error

```
tortoise.exceptions.FieldError: There is no non-virtual field points__value1 on Model User
```

---

**To Reproduce**

```py
from tortoise import Tortoise, fields, run_async
from tortoise.expressions import F
from tortoise.functions import Coalesce, Sum
from tortoise.models import Model


class User(Model):
    id = fields.IntField(pk=True)
    name = fields.TextField()
    points: fields.ReverseRelation["Point"]


class Point(Model):
    id = fields.IntField(pk=True)
    value1 = fields.TextField()
    value2 = fields.TextField()
    user = fields.ForeignKeyField("models.User", related_name="points")


async def run():
    await Tortoise.init(db_url="sqlite://:memory:", modules={"models": ["__main__"]})
    await Tortoise.generate_schemas()

    user = await User.create(name="user1")
    await Point.create(user_id=user.id, value1=2, value2=5)
    await Point.create(user_id=user.id, value1=3, value2=10)

    res = (
        await User.all()
        .limit(2)
        .offset(0)
        .annotate(
            value_total=Coalesce(
                Sum(F('points__value1') * F('points__value2')),
                0,
            )
        )
        .values('id', 'name', 'value_total')
    )
    print(res)


if __name__ == "__main__":
    run_async(run())
```

---

**Expected behavior**

```sql
SELECT `user`.`id`, `user`.`name`, COALESCE(SUM(`user`.`value1` * `user`.`value2`), 0) `value_total`
FROM `users`
```

---

**Additional context** Tortoise 0.17.8

---

**Temporary solution**

```py
Sum(
-    F('points__value1') * F('points__value2')
+    RawSQL('points.value1 * points.value2')
),
```

If the F expression is changed to RawSQL it works, but then the _filter of Sum is invalid...
closed
2021-12-02T17:55:24Z
2024-11-14T08:29:39Z
https://github.com/tortoise/tortoise-orm/issues/996
[]
maou-shonen
1
pennersr/django-allauth
django
3,978
Django responds very slowly when user is signing up and first logging in
I'm using django-allauth to handle user signup/login. The current setup works, but the response from the server is incredibly slow (>5s; other types of responses are usually <500ms). In base.py (I used cookiecutter-django to set everything up), the relevant settings are:

```python
LOGIN_REDIRECT_URL = "myapp:home"
LOGIN_URL = "account_login"
ACCOUNT_SIGNUP_REDIRECT_URL = reverse_lazy("myapp:accept-terms-of-service")
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = "mandatory"
ACCOUNT_ADAPTER = "project.users.adapters.AccountAdapter"
```

In adapters.py:

```python
class AccountAdapter(DefaultAccountAdapter):
    def is_open_for_signup(self, request: HttpRequest) -> bool:
        return getattr(settings, "ACCOUNT_ALLOW_REGISTRATION", True)

    def send_mail(self, template_prefix, email, context):
        mailing_thread = threading.Thread(
            target=super().send_mail,
            args=(template_prefix, email, context),
        )
        mailing_thread.start()
```

In views.py:

```python
class CustomLoginView(LoginView):
    def form_valid(self, form):
        response = super().form_valid(form)
        user_id = self.request.user.id
        if not UserProfile.objects.filter(user_id=user_id).exists():
            return redirect("myapp:accept-terms-of-service")
        return response


def home(request):
    return render(request, "myapp/home.html")
```

The idea is to send a verification email to the user when he signs up, and when he first signs in, he needs to go through the process of accepting the legal documents. The responses to two requests are very slow:

1. When the user is signing up, there is a significant delay before the server responds, though ultimately the user receives the correct email.
2. When the user first logs in, there is a significant delay before the accept-terms page is shown.

I have tried both django-celery and threads to send the email, but the delay doesn't change at all (BTW all of them work, just very slowly). So it's probably not a blocking problem. 
I am not sure if this is related to how boto3 is configured because it seems that AWS SES is ultimately sending the right email. But the following appears in the terminal each time when a user signs up: > DEBUG 2024-07-17 22:31:32,245 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/boto3/data/s3/2006-03-01/resources-1.json DEBUG 2024-07-17 22:31:32,248 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/botocore/data/endpoints.json DEBUG 2024-07-17 22:31:32,270 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/botocore/data/sdk-default-configuration.json DEBUG 2024-07-17 22:31:32,287 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/botocore/data/s3/2006-03-01/service-2.json.gz DEBUG 2024-07-17 22:31:32,321 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/botocore/data/s3/2006-03-01/endpoint-rule-set-1.json.gz DEBUG 2024-07-17 22:31:32,337 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/botocore/data/partitions.json DEBUG 2024-07-17 22:31:32,345 configprovider 32086 139956360050368 Looking for endpoint for s3 via: environment_service DEBUG 2024-07-17 22:31:32,345 configprovider 32086 139956360050368 Looking for endpoint for s3 via: environment_global DEBUG 2024-07-17 22:31:32,345 configprovider 32086 139956360050368 Looking for endpoint for s3 via: config_service DEBUG 2024-07-17 22:31:32,345 configprovider 32086 139956360050368 Looking for endpoint for s3 via: config_global DEBUG 2024-07-17 22:31:32,346 configprovider 32086 139956360050368 No configured endpoint found. 
DEBUG 2024-07-17 22:31:32,355 loaders 32086 139956360050368 Loading JSON file: /home/admin/.pyenv/versions/3.11.9/envs/django/lib/python3.11/site-packages/botocore/data/_retry.json DEBUG 2024-07-17 22:31:32,357 factory 32086 139956360050368 Loading s3:s3 DEBUG 2024-07-17 22:31:32,358 factory 32086 139956360050368 Loading s3:Bucket DEBUG 2024-07-17 22:31:32,358 model 32086 139956360050368 Renaming Bucket attribute name
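Since the boto3 debug lines show the service model JSON being loaded during the request, one direction worth testing (a generic sketch of the pattern; `get_email_client` is a hypothetical name, not a django-allauth or boto3 API) is constructing the SES client once and reusing it, so the JSON model parsing cost is paid only on the first call:

```python
from functools import lru_cache

creation_count = 0


@lru_cache(maxsize=1)
def get_email_client():
    # In the real adapter this would return boto3.client("ses"); a stub is
    # used here so the construct-once caching behaviour itself is visible.
    global creation_count
    creation_count += 1
    return object()


client_a = get_email_client()
client_b = get_email_client()
assert client_a is client_b  # the client is constructed exactly once
print(creation_count)  # 1
```

Whether this removes the whole 5s delay would need profiling; the boto3 model-loading shown in the logs is only one candidate cost.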
closed
2024-07-17T23:36:06Z
2024-07-19T19:39:27Z
https://github.com/pennersr/django-allauth/issues/3978
[]
kjnez
4
apache/airflow
automation
47,502
AIP-38 | Serialize connection types to database
### Body As a pre-requisite of #47501, the providers manager must serialize all connection types into a database table (including the extra JSON form field structure) in order to be served by the API server to clients. This is a preparation for un-bundling the providers manager from the API server. ### Committer - [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
open
2025-03-07T14:56:11Z
2025-03-21T22:25:07Z
https://github.com/apache/airflow/issues/47502
[ "area:providers", "area:serialization", "area:API", "kind:meta" ]
jscheffl
1
syrupy-project/syrupy
pytest
517
Codecov not working in syrupy repo
closed
2021-06-09T17:23:49Z
2021-06-20T01:38:15Z
https://github.com/syrupy-project/syrupy/issues/517
[ "infrastructure", "released" ]
noahnu
2
tensorflow/tensor2tensor
deep-learning
1,236
Something about your paper
Hi, if this model is used for seq2seq tasks such as translation, what is the "output embedding" in Figure 1? We can't feed the whole target sentence to the decoder the way we feed the source sentence to the encoder. Is it like an LSTM, where we get the first translated word and feed it back to the decoder as the output embedding, so that the output embedding matrix becomes larger and larger? Thanks!
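A toy sketch of the autoregressive loop being asked about (plain Python, not tensor2tensor code): at inference time the decoder input at each step is the sequence produced so far, starting from a start symbol, so the decoder-side input does grow by one token per step:

```python
def toy_decoder_step(decoded_so_far):
    # Stand-in for the real decoder: a real model would attend over the
    # encoder output and the tokens decoded so far; here we just return
    # the step index to make the growing-input shape visible.
    return len(decoded_so_far)


START, MAX_LEN = -1, 4
decoded = [START]
for _ in range(MAX_LEN):
    next_token = toy_decoder_step(decoded)
    decoded.append(next_token)  # feed back as next step's "output embedding" input

print(decoded)  # [-1, 1, 2, 3, 4]
```

During training, by contrast, the whole target sentence is fed at once with a causal mask, which is what the "output embedding (shifted right)" input in the figure refers to.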
closed
2018-11-19T07:42:47Z
2018-11-20T05:38:05Z
https://github.com/tensorflow/tensor2tensor/issues/1236
[]
mfxss
1
home-assistant/core
python
141,172
[Overkiz] - Battery unknown for all equipments
### The problem I see that all my Equipment in the Overkiz integration has unknown value for Battery sensor. ### What version of Home Assistant Core has the issue? core-2025.3.4 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue Overkiz ### Link to integration documentation on our website _No response_ ### Diagnostics information See example in the diagnostic file. ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt ``` ### Additional information [overkiz-c3870f84f152a0449c3a9448c042aa20-Capteur de fumée Étage-015a934295c501564fa0aa50a13a8cba (1).json](https://github.com/user-attachments/files/19407689/overkiz-c3870f84f152a0449c3a9448c042aa20-Capteur.de.fumee.Etage-015a934295c501564fa0aa50a13a8cba.1.json)
open
2025-03-23T07:59:46Z
2025-03-23T08:39:06Z
https://github.com/home-assistant/core/issues/141172
[ "integration: overkiz" ]
alsmaison
1
marimo-team/marimo
data-science
3,916
Upstreaming `marimo-agents` and agent cells
### Description I've built [marimo-agents](https://github.com/riyavsinha/marimo-agents) as an extension of marimo which enables plain text cell inputs to any "agent" (which can be a langchain/langgraph agent, or any other function that responds to a prompt). Would love to see if we can integrate this back into Marimo. My use cases have been for a biological research agent, which can fetch data, register datasources, and output visualizations and syntheses for users within the notebook format: <img width="1051" alt="Image" src="https://github.com/user-attachments/assets/dae465a6-085b-4fed-976c-80c0e2c07ac1" /> <img width="1644" alt="Image" src="https://github.com/user-attachments/assets/38771a1f-dcb9-4d87-85ee-fd0c5399fb44" /> ### Suggested solution Begin discussion on what a broader api could look like in marimo, for eventual upstreaming if possible ### Alternative _No response_ ### Additional context _No response_
open
2025-02-25T21:17:10Z
2025-02-25T21:25:58Z
https://github.com/marimo-team/marimo/issues/3916
[ "enhancement" ]
riyavsinha
0
harry0703/MoneyPrinterTurbo
automation
404
How to make better use of multiple cores for video merging?
I noticed it uses only 1-2% of my CPU (24 cores / 48 threads), and for a 1-minute video, it takes 30 minutes, with 27-28 minutes spent just on the last 2 stages where the videos are merged using ffmpeg. How can I make it use more cores? Which part of the code do I need to modify? Additionally, are there any settings that can be modified to speed it up further?
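For reference, a hedged sketch of the knobs to look for (these are generic ffmpeg/libx264 options; where exactly MoneyPrinterTurbo builds its ffmpeg invocation would need to be found in its video-processing code): the encoder thread count and preset are usually what govern CPU utilisation during the merge:

```python
# Generic ffmpeg command line for a multi-threaded x264 encode.
# "-threads 0" lets ffmpeg pick a thread count from the available cores;
# "-preset" trades output size for encoding speed.
cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",
    "-c:v", "libx264",
    "-preset", "ultrafast",
    "-threads", "0",
    "output.mp4",
]
print(" ".join(cmd))
```

If the project goes through MoviePy, `write_videofile` also exposes `threads=` and `preset=` parameters that are forwarded to ffmpeg.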
closed
2024-06-09T15:48:46Z
2024-06-11T03:43:06Z
https://github.com/harry0703/MoneyPrinterTurbo/issues/404
[]
UserItaly
1
GibbsConsulting/django-plotly-dash
plotly
203
Performance questions
Hey, great work with this, it might be a lifesaver for me. I just wanted to know if you see much of a performance hit inside the dash app once it's inside django, just in terms of how fast responses are and so on. And related to this: I have an ever-growing set of datasets. In terms of performance, would you imagine there would be an issue if each dataset was served up in a unique Dash app? Meaning, there could be hundreds of different dash apps. Can you foresee any issues if I were to go that way?
closed
2019-11-26T15:14:25Z
2019-11-27T13:46:06Z
https://github.com/GibbsConsulting/django-plotly-dash/issues/203
[]
interrogator
2
plotly/dash-html-components
dash
161
[BUG] html.ObjectEl does not accept the data keyword
Hi, I'm using Dash with the following configuration:

```
dash-core-components==1.5.0
dash-html-components==1.0.1
dash-renderer==1.2.0
```

The issue is that html.ObjectEl() does not accept the **data** keyword, only a "data-*" wildcard. Indeed, the following Python code raises an exception:

```
html.ObjectEl("your browser doesn’t support the object tag", data="/a_link")
```

Exception:

```
TypeError: Unexpected keyword argument 'data'
Allowed arguments: accessKey, aria-*, children, className, contentEditable, contextMenu, data-*, dir, draggable, form, height, hidden, id, key, lang, loading_state, n_clicks, n_clicks_timestamp, name, role, spellCheck, style, tabIndex, title, type, useMap, width
```

Instead, the expected behaviour should be the following HTML:

```
<object data="/a_link">
  Your browser doesn’t support the object tag.
</object>
```

Calling the constructor with the wildcard works, but the produced HTML is different from the expected one:

```
html.ObjectEl("your browser doesn’t support the object tag", **{'data-test': "/a_link"})
```

returns the HTML:

```
<object data-test="/a_link">
  Your browser doesn’t support the object tag.
</object>
```
closed
2020-05-20T13:44:36Z
2021-01-30T22:59:29Z
https://github.com/plotly/dash-html-components/issues/161
[ "dash-type-bug" ]
RiccardoNizzolo
3
stanfordnlp/stanza
nlp
494
[QUESTION] Not able to run on NCBI dataset
I have changed the NER_DATA_DIR in the config.sh file to point to data/ner, and I have converted the NCBI datasets into JSON files stored in the data/ner folder under the root stanza directory, but it says the file cannot be found. For training on the NCBI disease dataset, what preparations need to be done? (The NCBI train/test/val files were TSV files.)
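For the TSV-to-JSON half of this, here is a generic sketch of grouping BIO-style TSV lines into sentences (the exact JSON layout Stanza's NER training expects should be verified against the scripts in `stanza/utils/datasets`; the `{"text": ..., "ner": ...}` shape below is an assumption):

```python
def read_bio_tsv(text):
    """Group token<TAB>tag lines into sentences, splitting on blank lines."""
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.split("\t")
        current.append({"text": token, "ner": tag})
    if current:
        sentences.append(current)
    return sentences


sample = (
    "colorectal\tB-Disease\n"
    "cancer\tI-Disease\n"
    ".\tO\n"
    "\n"
    "A\tO\n"
    "second\tO\n"
    "sentence\tO\n"
)
sents = read_bio_tsv(sample)
print(len(sents))  # 2
```

The result could then be dumped with `json.dump` into the train/dev/test file names the prepare scripts look for.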
closed
2020-10-22T00:11:26Z
2021-01-05T18:21:38Z
https://github.com/stanfordnlp/stanza/issues/494
[ "question", "stale" ]
shankyemcee
3
blacklanternsecurity/bbot
automation
1,719
Trufflehog scans files it has already scanned via folders when unstructured is enabled
**Describe the bug** I've lost the logs, but unstructured has a function to unpack folders and re-raise them as files. When testing it out using `bbot -t target -m github_org, git_clone, trufflehog, unstructured`, trufflehog accepts the folders that are emitted by git_clone and scans them using git as intended. But unstructured also accepts these folders and re-raises them as files, which trufflehog then scans again. I could probably add a filter to the trufflehog module that keeps a record of all the paths that have been scanned and skips any new event whose path falls within one that has already been scanned.
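A minimal sketch of that filter idea (generic Python, not the bbot module API): remember every scanned path and skip any new event whose path equals, or is nested inside, one already scanned:

```python
from pathlib import PurePosixPath


class ScannedPathFilter:
    """Skip paths equal to, or nested inside, an already-scanned path."""

    def __init__(self):
        self._scanned = []

    def should_scan(self, path):
        p = PurePosixPath(path)
        for seen in self._scanned:
            if p == seen or seen in p.parents:
                return False
        self._scanned.append(p)
        return True


f = ScannedPathFilter()
assert f.should_scan("/tmp/repo")           # folder scanned via git
assert not f.should_scan("/tmp/repo/a.py")  # file inside it: skip
assert f.should_scan("/tmp/other/b.py")     # unrelated path: scan
```

In practice this would plug into the module's event filter so the file events unstructured re-raises are rejected before a second trufflehog run.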
closed
2024-08-29T08:07:11Z
2024-08-29T18:44:51Z
https://github.com/blacklanternsecurity/bbot/issues/1719
[ "bug" ]
domwhewell-sage
0
KaiyangZhou/deep-person-reid
computer-vision
367
Evaluation Problem
Sorry if the question is too naive, I'm new to re-identification problems. First, thank you for the great documentation and framework. I would like to reproduce the evaluation on the Market1501 and DukeMTMC datasets, but the CMC and mAP results don't match the values in the model tables at https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.html To do the evaluation I downloaded the osnet_x1_0 model trained on market1501 as an example; the file name is osnet_x1_0_market_256x128_amsgrad_ep150_stp60_lr0.0015_b64_fb10_softmax_labelsmooth_flip.pth. I save the features extracted with the model with the following code:

```
model = models.build_model(name='osnet_x1_0', num_classes=1000)
load_pretrained_weights(model, 'osnet_x1_0_market_256x128_amsgrad_ep150_stp60_lr0.0015_b64_fb10_softmax_labelsmooth_flip.pth')
model.cuda()
model.eval()

global_path = 'Market-1501-v15.09.15'
save_path = 'Market-1501-v15.09.15_OSNet_Market1501'
path_query = 'query'
path_gallery = 'bounding_box_test'
folders = [path_query, path_gallery]

for folder in folders:
    print(folder)
    names_img = sorted([f for f in os.listdir(global_path + '/' + folder)])
    for i, name_img in enumerate(names_img):
        img = cv2.imread(global_path + '/' + folder + '/' + name_img)
        out = np.transpose(cv2.resize(img, (256, 128)), (2, 1, 0))
        out = np.expand_dims(out, axis=0)
        out = torch.from_numpy(out)
        out = out.float().cuda()
        features = model(out)
        features_cpu = features.cpu()
        features_cpu = features_cpu.detach().numpy()[0]
        name_image, ext = name_img.split('.')
        np.save(save_path + '/' + folder + '/' + name_image + '.npy', features_cpu)
```

Once I have the features saved, I run this code to obtain the evaluation:

```
global_path = 'Market-1501-v15.09.15_OSNet_Market1501'
gallery_path = 'bounding_box_test'
query_path = 'query'

gallery_names = sorted([f for f in os.listdir(global_path + '/' + gallery_path)])
gallery = []
g_id = []
g_cam = []
for name in gallery_names:
    img = np.load(global_path + '/' + gallery_path + '/' + name)
    gallery.append(img)
    if name[0:2] == '-1':
        g_id.append(-1)
        g_cam.append(int(name[4]))
    else:
        g_id.append(int(name[0:4]))
        g_cam.append(int(name[6]))
g_id = np.array(g_id)
g_cam = np.array(g_cam)
gallery = torch.from_numpy(np.array(gallery))

queries_names = sorted([f for f in os.listdir(global_path + '/' + query_path)])
queries = []
q_id = []
q_cam = []
for name in queries_names:
    img = np.load(global_path + '/' + query_path + '/' + name)
    q_id.append(int(name[0:4]))
    q_cam.append(int(name[6]))
    queries.append(img)
queries = torch.from_numpy(np.array(queries))
q_id = np.array(q_id)
q_cam = np.array(q_cam)

dist = metrics.distance.cosine_distance(queries, gallery)
dist = dist.numpy()
cmc, mAP = metrics.rank.evaluate_rank(dist, q_id, g_id, q_cam, g_cam, use_metric_cuhk03=False)
print('cmc', cmc)
print('AP', mAP)
```

Maybe I'm loading the model in a wrong way, or obtaining the features with a wrong function?
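One preprocessing detail worth double-checking (a standalone NumPy sketch mirroring only the shapes, not the torchreid API): `cv2.resize` takes `(width, height)`, and the HWC-to-CHW conversion is `transpose(2, 0, 1)`, so `cv2.resize(img, (256, 128))` followed by `transpose(..., (2, 1, 0))` would not produce the `(3, 256, 128)` tensor a 256x128 model expects; the usual ImageNet mean/std normalisation is also missing from the snippet:

```python
import numpy as np

img = np.zeros((300, 100, 3), dtype=np.uint8)      # OpenCV images are H x W x C
# cv2.resize(img, (W, H)) -> here (128, 256) for a 256x128 model; simulate its output:
resized = np.zeros((256, 128, 3), dtype=np.uint8)  # H=256, W=128 after the resize
chw = np.transpose(resized, (2, 0, 1))             # HWC -> CHW, not (2, 1, 0)
print(chw.shape)  # (3, 256, 128)
```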
closed
2020-08-24T20:10:40Z
2020-08-25T16:20:33Z
https://github.com/KaiyangZhou/deep-person-reid/issues/367
[]
saracasao
3
tox-dev/tox
automation
3,068
Odd output from `run-parallel`
## Issue I have Tox config with `env_list = py39-django{32,42}-{drf312,drf314}`, i.e. four environments in total. When I run `tox run-parallel` the end of the output appears like the following: ```console py39-django32-drf312: OK ✔ in 1 minute 37.31 seconds py39-django32-drf314: OK ✔ in 1 minute 37.41 seconds py39-django42-drf314: OK ✔ in 1 minute 41.22 seconds py39-django32-drf312: OK (97.31 seconds) py39-django32-drf314: OK (97.41 seconds) py39-django42-drf312: OK (101.45 seconds) py39-django42-drf314: OK (101.22 seconds) congratulations :) (101.58 seconds) ``` Note that three of the environments are listed twice but one is only listed once. I'm not sure why any are listed twice, or why I'm getting two reports from each environment, or why there are two different runtimes. ## Environment Provide at least: - OS: Docker image `python:3.9-slim` based on Debian Bullseye, running on Mac OS via Docker Desktop. <details open> <summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary> ```console Package Version ------------- ------- cachetools 5.3.1 chardet 5.1.0 colorama 0.4.6 distlib 0.3.6 filelock 3.12.2 packaging 23.1 pip 23.1.2 platformdirs 3.8.0 pluggy 1.2.0 pyproject-api 1.5.2 setuptools 68.0.0 tox 4.6.3 virtualenv 20.23.1 wheel 0.40.0 ``` </details> ## Output of running tox <details open> <summary>Output of <code>tox -rvv run-parallel --notest</code> (this only happens when running in parallel and I added `--notest` as the command output isn't relevant, this issue happens regardless)</summary> ```console ⠹ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 21359 D install pip from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django32-drf312: 21363 D install wheel from wheel 
/usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django32-drf312: 21366 D install setuptools from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-67.8.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django32-drf312: 21374 D build install image for pip-23.1.2-py3-none-any.whl to /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any [virtualenv/seed/embed/via_app_data/pip_install/base.py:47] py39-django32-drf312: 21377 D build install image for setuptools-67.8.0-py3-none-any.whl to /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any [virtualenv/seed/embed/via_app_data/pip_install/base.py:47] py39-django32-drf312: 21391 D build install image for wheel-0.40.0-py3-none-any.whl to /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any [virtualenv/seed/embed/via_app_data/pip_install/base.py:47] ⠦ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 21759 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 21775 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] ⠧ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 21926 D copy directory 
/root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:40] ⠏ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf314: 22075 D install pip from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django32-drf314: 22081 D install setuptools from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-67.8.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django32-drf314: 22082 D install wheel from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] ⠋ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf314: 22128 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 22193 D install pip from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django32-drf314: 22194 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 22195 D install setuptools from wheel 
/usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-67.8.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django42-drf312: 22199 D install wheel from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django42-drf312: 22245 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] ⠙ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf312: 22275 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 22304 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 22356 D generated console scripts wheel-3.9 wheel wheel3.9 wheel3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py39-django42-drf312: 22370 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:40] ⠼ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf314: 22624 D generated console scripts wheel wheel3.9 wheel3 wheel-3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠴ [4/4] py39-django32-drf312 | py39-django32-drf314 | 
py39-django42-drf312 | py39-django42-drf314py39-django42-drf312: 22688 D generated console scripts wheel wheel3.9 wheel3 wheel-3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠧ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf314: 22888 D install pip from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django42-drf314: 22889 D install wheel from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django42-drf314: 22894 D install setuptools from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-67.8.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py39-django42-drf314: 22906 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 22908 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 22941 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/pkg_resources to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 22941 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to 
/usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 22966 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/pkg_resources to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] ⠇ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf312: 22996 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/pkg_resources to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 23075 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/pkg_resources to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] ⠋ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf314: 23256 D generated console scripts wheel3 wheel-3.9 wheel wheel3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠹ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 23423 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/_distutils_hack to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 23454 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.virtualenv to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/setuptools-67.8.0.virtualenv [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 23466 D copy directory 
/root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.dist-info to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/setuptools-67.8.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 23477 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/_distutils_hack to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 23487 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/_distutils_hack to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 23495 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.virtualenv to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/setuptools-67.8.0.virtualenv [virtualenv/util/path/_sync.py:40] ⠸ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf314: 23509 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.dist-info to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/setuptools-67.8.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 23520 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.virtualenv to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/setuptools-67.8.0.virtualenv [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 23530 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.dist-info to 
/usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/setuptools-67.8.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 23543 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/distutils-precedence.pth to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 23558 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 23585 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/distutils-precedence.pth to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 23586 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/_distutils_hack to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 23596 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 23601 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/distutils-precedence.pth to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] ⠼ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf312: 23613 D copy directory 
/root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 23617 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.virtualenv to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/setuptools-67.8.0.virtualenv [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 23627 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.dist-info to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/setuptools-67.8.0.dist-info [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 23687 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/distutils-precedence.pth to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 23699 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:40] ⠧ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 25047 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:40] ⠇ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf312: 25091 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/pip 
[virtualenv/util/path/_sync.py:40] py39-django32-drf314: 25138 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:40] ⠏ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf314: 25210 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:40] ⠹ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 25565 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠸ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf312: 25688 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠼ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf314: 25773 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠴ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf314: 25838 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py39-django32-drf314: 27844 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] ⠼ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf314: 27854 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to 
/usr/app/.tox/py39-django32-drf314/lib/python3.9/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py39-django32-drf314: 27909 D generated console scripts pip3 pip pip3.9 pip-3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠇ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django42-drf314: 28299 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 28302 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 28305 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /usr/app/.tox/py39-django42-drf312/lib/python3.9/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py39-django42-drf314: 28307 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /usr/app/.tox/py39-django42-drf314/lib/python3.9/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py39-django42-drf312: 28336 D generated console scripts pip3 pip pip3.9 pip-3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py39-django42-drf314: 28347 D generated console scripts pip pip-3.9 pip3.9 pip3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠏ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314py39-django32-drf312: 28417 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to 
/usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 28417 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /usr/app/.tox/py39-django32-drf312/lib/python3.9/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py39-django32-drf312: 28424 D generated console scripts pip3.9 pip3 pip pip-3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠙ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314.pkg: 78245 D install setuptools from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-67.8.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] .pkg: 78246 D install wheel from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] .pkg: 78248 D install pip from wheel /usr/local/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] .pkg: 78260 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /usr/app/.tox/.pkg/lib/python3.9/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] .pkg: 78262 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /usr/app/.tox/.pkg/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:40] .pkg: 78264 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/pkg_resources to /usr/app/.tox/.pkg/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] .pkg: 78267 D copy directory 
/root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /usr/app/.tox/.pkg/lib/python3.9/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] .pkg: 78280 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /usr/app/.tox/.pkg/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:40] ⠹ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | py39-django42-drf314.pkg: 78358 D generated console scripts wheel3 wheel-3.9 wheel wheel3.9 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] .pkg: 78370 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/_distutils_hack to /usr/app/.tox/.pkg/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] .pkg: 78372 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.virtualenv to /usr/app/.tox/.pkg/lib/python3.9/site-packages/setuptools-67.8.0.virtualenv [virtualenv/util/path/_sync.py:40] .pkg: 78374 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools-67.8.0.dist-info to /usr/app/.tox/.pkg/lib/python3.9/site-packages/setuptools-67.8.0.dist-info [virtualenv/util/path/_sync.py:40] .pkg: 78380 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/distutils-precedence.pth to /usr/app/.tox/.pkg/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] .pkg: 78381 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.8.0-py3-none-any/setuptools to /usr/app/.tox/.pkg/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:40] ⠼ [4/4] py39-django32-drf312 | py39-django32-drf314 | py39-django42-drf312 | 
py39-django42-drf314
.pkg: 78537 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43]
.pkg: 78578 D copy /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /usr/app/.tox/.pkg/lib/python3.9/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40]
.pkg: 78578 D copy directory /root/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /usr/app/.tox/.pkg/lib/python3.9/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40]
.pkg: 78580 D generated console scripts pip pip3.9 pip-3.9 pip3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43]
py39-django32-drf312: OK ✔ in 1 minute 37.31 seconds
py39-django32-drf314: OK ✔ in 1 minute 37.41 seconds
py39-django42-drf314: OK ✔ in 1 minute 41.22 seconds
py39-django32-drf312: OK (97.31 seconds)
py39-django32-drf314: OK (97.41 seconds)
py39-django42-drf312: OK (101.45 seconds)
py39-django42-drf314: OK (101.22 seconds)
congratulations :) (101.58 seconds)
```

</details>

## Minimal example

The `tox.ini` below is the file generated by `tox quickstart` with some dummy env factors added:

```ini
[tox]
env_list =
    py311-{a,b,c,d}
minversion = 4.6.3

[testenv]
description = run the tests with pytest
package = wheel
wheel_build_env = .pkg
deps =
    pytest>=6
commands =
    pytest {tty:--color=yes} {posargs}
```

Running `tox -rvv run-parallel --notest` in this context gives the following output:

```console
⠏ [4/4] py311-a | py311-b | py311-c | py311-d
py311-c: 1280 D got embed update of distribution %s from ('setuptools', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/setuptools.json')) [virtualenv/app_data/via_disk_folder.py:131]
py311-c: 1280 D got embed update of distribution %s from ('pip', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json'))
[virtualenv/app_data/via_disk_folder.py:131] py311-c: 1280 D got embed update of distribution %s from ('wheel', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/wheel.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-c: 1480 D using periodically updated wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49] py311-c: 1481 D install pip from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-c: 1481 D install setuptools from wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-c: 1484 D install wheel from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-b: 1485 D got embed update of distribution %s from ('setuptools', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/setuptools.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-b: 1486 D using periodically updated wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49] py311-a: 1487 D got embed update of distribution %s from ('pip', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-b: 1488 D got embed update of distribution %s from ('pip', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) 
[virtualenv/app_data/via_disk_folder.py:131] py311-a: 1488 D got embed update of distribution %s from ('setuptools', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/setuptools.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-b: 1488 D got embed update of distribution %s from ('wheel', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/wheel.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-d: 1489 D got embed update of distribution %s from ('wheel', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/wheel.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-a: 1489 D using periodically updated wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49] py311-b: 1489 D install setuptools from wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-b: 1490 D install pip from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-b: 1490 D install wheel from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-d: 1491 D got embed update of distribution %s from ('pip', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-a: 1491 D got embed update of distribution %s from ('wheel', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/wheel.json')) 
[virtualenv/app_data/via_disk_folder.py:131] py311-d: 1491 D got embed update of distribution %s from ('setuptools', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/setuptools.json')) [virtualenv/app_data/via_disk_folder.py:131] py311-a: 1492 D install pip from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-a: 1492 D install setuptools from wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-d: 1493 D using periodically updated wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49] py311-a: 1493 D install wheel from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-d: 1494 D install wheel from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-d: 1494 D install pip from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-c: 1495 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] py311-d: 1495 D install 
setuptools from wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] py311-c: 1495 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py311-b: 1498 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py311-c: 1498 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/setuptools-68.0.0.dist-info [virtualenv/util/path/_sync.py:40] py311-b: 1498 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] py311-b: 1499 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/setuptools-68.0.0.dist-info [virtualenv/util/path/_sync.py:40] ⠋ [4/4] py311-a | py311-b | py311-c | py311-dpy311-c: 1500 D copy directory 
/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py311-a: 1501 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py311-b: 1502 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py311-a: 1502 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/setuptools-68.0.0.dist-info [virtualenv/util/path/_sync.py:40] py311-a: 1503 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] py311-d: 1504 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] py311-d: 1505 
D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] py311-d: 1505 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/setuptools-68.0.0.dist-info [virtualenv/util/path/_sync.py:40] py311-a: 1506 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py311-d: 1508 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] py311-c: 1529 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/distutils-precedence.pth to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] py311-a: 1530 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/distutils-precedence.pth to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/distutils-precedence.pth 
[virtualenv/util/path/_sync.py:40] py311-c: 1532 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] py311-d: 1534 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/distutils-precedence.pth to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] py311-b: 1535 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/distutils-precedence.pth to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] py311-b: 1535 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] py311-a: 1535 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] py311-d: 1536 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/setuptools 
[virtualenv/util/path/_sync.py:40] py311-c: 1538 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] py311-b: 1539 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] py311-d: 1539 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] py311-a: 1544 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] ⠙ [4/4] py311-a | py311-b | py311-c | py311-dpy311-d: 1611 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-c: 1612 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/wheel-0.40.0.virtualenv 
[virtualenv/util/path/_sync.py:40] py311-b: 1612 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-a: 1615 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-b: 1617 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py311-d: 1619 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py311-a: 1621 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py311-c: 1622 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to 
/private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] py311-a: 1649 D generated console scripts wheel wheel-3.11 wheel3.11 wheel3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-b: 1651 D generated console scripts wheel wheel-3.11 wheel3.11 wheel3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-d: 1651 D generated console scripts wheel-3.11 wheel wheel3.11 wheel3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-c: 1653 D generated console scripts wheel-3.11 wheel3.11 wheel3 wheel [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠼ [4/4] py311-a | py311-b | py311-c | py311-dpy311-d: 2012 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/pkg_resources to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] py311-a: 2012 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/pkg_resources to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] py311-b: 2012 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/pkg_resources to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] py311-c: 2013 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/pkg_resources to 
/private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] ⠴ [4/4] py311-a | py311-b | py311-c | py311-dpy311-b: 2115 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/_distutils_hack to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py311-c: 2117 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/_distutils_hack to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py311-d: 2119 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/_distutils_hack to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] py311-a: 2121 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/_distutils_hack to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] ⠦ [4/4] py311-a | py311-b | py311-c | py311-dpy311-b: 2126 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-b/lib/python3.11/site-packages/setuptools-68.0.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-d: 2127 D copy 
/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-d/lib/python3.11/site-packages/setuptools-68.0.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-c: 2128 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-c/lib/python3.11/site-packages/setuptools-68.0.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-d: 2130 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-b: 2131 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-c: 2132 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-a: 2132 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/py311-a/lib/python3.11/site-packages/setuptools-68.0.0.virtualenv [virtualenv/util/path/_sync.py:40] py311-a: 2134 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠏ [4/4] py311-a | py311-b | py311-c | py311-dpy311-a: 2523 D generated console scripts pip-3.11 pip pip3.11 pip3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-b: 2525 D generated console scripts pip-3.11 pip pip3.11 pip3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-c: 2525 D generated console scripts pip pip3.11 pip3 pip-3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-d: 2526 D generated console scripts pip-3.11 pip pip3.11 pip3 
[virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠴ [4/4] py311-a | py311-b | py311-c | py311-d.pkg: 5169 D got embed update of distribution %s from ('wheel', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/wheel.json')) [virtualenv/app_data/via_disk_folder.py:131] .pkg: 5170 D got embed update of distribution %s from ('setuptools', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/setuptools.json')) [virtualenv/app_data/via_disk_folder.py:131] .pkg: 5171 D using periodically updated wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49] .pkg: 5172 D got embed update of distribution %s from ('pip', PosixPath('/Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] .pkg: 5173 D install wheel from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.40.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] .pkg: 5173 D install setuptools from wheel /Users/allanlewis/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] .pkg: 5174 D install pip from wheel /Users/allanlewis/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.1.2-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] .pkg: 5177 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/setuptools-68.0.0.dist-info [virtualenv/util/path/_sync.py:40] .pkg: 
5177 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] .pkg: 5178 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/pip-23.1.2.virtualenv [virtualenv/util/path/_sync.py:40] .pkg: 5180 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip-23.1.2.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/pip-23.1.2.dist-info [virtualenv/util/path/_sync.py:40] .pkg: 5185 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/distutils-precedence.pth to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] .pkg: 5186 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] .pkg: 5187 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.1.2-py3-none-any/pip to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] .pkg: 5201 D copy /Users/allanlewis/Library/Application 
Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/wheel-0.40.0.virtualenv [virtualenv/util/path/_sync.py:40] .pkg: 5201 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.40.0-py3-none-any/wheel-0.40.0.dist-info to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/wheel-0.40.0.dist-info [virtualenv/util/path/_sync.py:40] .pkg: 5208 D generated console scripts wheel wheel3.11 wheel-3.11 wheel3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠦ [4/4] py311-a | py311-b | py311-c | py311-d.pkg: 5294 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/pkg_resources to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] .pkg: 5313 D copy directory /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/_distutils_hack to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] .pkg: 5315 D copy /Users/allanlewis/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.virtualenv to /private/var/folders/fw/0l2mp3hj7d95tycrtjj38mxh0000gn/T/tmp.ykxWv5784n/.tox/.pkg/lib/python3.11/site-packages/setuptools-68.0.0.virtualenv [virtualenv/util/path/_sync.py:40] .pkg: 5317 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] ⠧ [4/4] py311-a | py311-b | py311-c | py311-d.pkg: 5435 D generated console scripts pip pip3.11 pip3 
pip-3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] py311-d: OK ✔ in 8.62 seconds py311-c: OK ✔ in 8.62 seconds py311-b: OK ✔ in 8.63 seconds py311-a: OK (8.70 seconds) py311-b: OK (8.63 seconds) py311-c: OK (8.62 seconds) py311-d: OK (8.62 seconds) congratulations :) (8.82 seconds) ```
closed
2023-07-19T13:24:01Z
2023-07-19T13:44:00Z
https://github.com/tox-dev/tox/issues/3068
[]
allanlewis
3
benbusby/whoogle-search
flask
1,031
[FEATURE] Replace "!4" bang with 4plebs archive search
Hi, the "!4" bang is some fitness website that isn't up anymore; it just 404s. I think it would be extremely useful if you replaced it with the 4plebs all-board search, as it has a huge archive of over 10TB, with millions and millions of posts and images. To do this, all you would need to do is use this URL: https://archive.4plebs.org/_/search/text/Ubuntu%20Linux/ and replace **"Ubuntu%20Linux"** with the search query. Thanks
closed
2023-07-14T12:45:01Z
2023-09-13T21:53:41Z
https://github.com/benbusby/whoogle-search/issues/1031
[ "enhancement" ]
MayaKitten
2
torrvision/crayon
data-visualization
41
How can I use crayon on my own PC? (not a server or client machine)
I have a computer with a GPU and run my programs on it. I don't have separate server or client machines, just one PC. How can I install crayon in this situation?
open
2017-08-09T13:41:50Z
2018-01-08T12:48:14Z
https://github.com/torrvision/crayon/issues/41
[]
squirrel233
4
thtrieu/darkflow
tensorflow
647
issues with yolo on raspberry pi
Hello all, I have issues with Darkflow on the Raspberry Pi 2 Model B+ running Raspbian Stretch OS. All dependencies are installed (tensorflow 1.3.1, numpy, opencv 3.4.0). The error appears after running the command `./flow --imgdir sample_img --model cfg/tiny-yolo-voc.cfg --load tiny-yolo-voc.weights --threshold 0`. The error message is: ```Traceback (most recent call last): File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call return fn(*args) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1297, in _run_fn self._extend_graph() File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1358, in _extend_graph self._session, graph_def.SerializeToString(), status) File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__ next(self.gen) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'Switch' with these attrs. 
Registered devices: [CPU], Registered kernels: device='CPU'; T in [DT_FLOAT] device='CPU'; T in [DT_INT32] device='GPU'; T in [DT_STRING] device='GPU'; T in [DT_BOOL] device='GPU'; T in [DT_INT32] device='GPU'; T in [DT_FLOAT] [[Node: 20-convolutional_3/cond/Switch = Switch[T=DT_BOOL](20-convolutional/is_training, 20-convolutional/is_training)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./flow", line 6, in <module> cliHandler(sys.argv) File "/home/pi/darkflow/darkflow/cli.py", line 26, in cliHandler tfnet = TFNet(FLAGS) File "/home/pi/darkflow/darkflow/net/build.py", line 76, in __init__ self.setup_meta_ops() File "/home/pi/darkflow/darkflow/net/build.py", line 146, in setup_meta_ops self.sess.run(tf.global_variables_initializer()) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run run_metadata_ptr) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1124, in _run feed_dict_tensor, options, run_metadata) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run options, run_metadata) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'Switch' with these attrs. 
Registered devices: [CPU], Registered kernels: device='CPU'; T in [DT_FLOAT] device='CPU'; T in [DT_INT32] device='GPU'; T in [DT_STRING] device='GPU'; T in [DT_BOOL] device='GPU'; T in [DT_INT32] device='GPU'; T in [DT_FLOAT] [[Node: 20-convolutional_3/cond/Switch = Switch[T=DT_BOOL](20-convolutional/is_training, 20-convolutional/is_training)]] Caused by op '20-convolutional_3/cond/Switch', defined at: File "./flow", line 6, in <module> cliHandler(sys.argv) File "/home/pi/darkflow/darkflow/cli.py", line 26, in cliHandler tfnet = TFNet(FLAGS) File "/home/pi/darkflow/darkflow/net/build.py", line 75, in __init__ self.build_forward() File "/home/pi/darkflow/darkflow/net/build.py", line 115, in build_forward state = op_create(*args) File "/home/pi/darkflow/darkflow/net/ops/__init__.py", line 27, in op_create return op_types[layer_type](*args) File "/home/pi/darkflow/darkflow/net/ops/baseop.py", line 42, in __init__ self.forward() File "/home/pi/darkflow/darkflow/net/ops/convolution.py", line 73, in forward temp = self.batchnorm(self.lay, temp) File "/home/pi/darkflow/darkflow/net/ops/convolution.py", line 90, in batchnorm return slim.batch_norm(inp, **args) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args return func(*args, **current_args) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 785, in batch_norm moving_vars_fn) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/contrib/layers/python/layers/utils.py", line 212, in smart_cond return control_flow_ops.cond(pred, fn1, fn2, name) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 296, in new_func return func(*args, **kwargs) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1808, in cond p_2, p_1 = switch(pred, pred) File 
"/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 302, in switch return gen_control_flow_ops._switch(data, pred, name=name) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/ops/gen_control_flow_ops.py", line 354, in _switch result = _op_def_lib.apply_op("Switch", data=data, pred=pred, name=name) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op op_def=op_def) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op original_op=self._default_original_op, op_def=op_def) File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__ self._traceback = self._graph._extract_stack() # pylint: disable=protected-access InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'Switch' with these attrs. Registered devices: [CPU], Registered kernels: device='CPU'; T in [DT_FLOAT] device='CPU'; T in [DT_INT32] device='GPU'; T in [DT_STRING] device='GPU'; T in [DT_BOOL] device='GPU'; T in [DT_INT32] device='GPU'; T in [DT_FLOAT] [[Node: 20-convolutional_3/cond/Switch = Switch[T=DT_BOOL](20-convolutional/is_training, 20-convolutional/is_training)]] ``` Is this a hardware constraint? I have no idea what's wrong and how to resolve this. Could you help?
closed
2018-03-21T10:48:23Z
2018-04-17T05:55:42Z
https://github.com/thtrieu/darkflow/issues/647
[]
idhamhalim
2
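The error in the issue above boils down to a registry lookup failure: per the "Registered kernels:" list in the traceback, the Pi's CPU-only TensorFlow build registers a `Switch` kernel only for float and int32, while batch-norm's `cond()` needs `Switch` on a boolean. A toy model of that lookup, purely illustrative (the set contents mirror the traceback, nothing here is TensorFlow API):

```python
# Toy model of TensorFlow's kernel-registry lookup that produces the
# "No OpKernel was registered" error. The (device, dtype) pairs below are
# copied from the "Registered kernels:" list printed in the traceback.
REGISTERED_SWITCH_KERNELS = {
    ("CPU", "DT_FLOAT"),
    ("CPU", "DT_INT32"),
    ("GPU", "DT_STRING"),
    ("GPU", "DT_BOOL"),
    ("GPU", "DT_INT32"),
    ("GPU", "DT_FLOAT"),
}

def switch_kernel_available(device: str, dtype: str) -> bool:
    """Return True if this build registered a Switch kernel for the pair."""
    return (device, dtype) in REGISTERED_SWITCH_KERNELS

# batch_norm's cond() requires Switch with T=DT_BOOL; the Pi exposes only
# CPU devices, so the lookup fails and TensorFlow raises
# InvalidArgumentError, exactly as shown in the traceback.
```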
holoviz/panel
matplotlib
6,871
panel 1.4.3 regression: `KeyError: 'content'` in jupyterlab
### ALL software version info - python 3.11 - jupyterlab 4.2.1 - A virtualenv with the following packages: <details> <summary><b>requirements.txt</b></summary> ``` anyio==4.4.0 ; python_version >= "3.11" and python_version < "4.0" appnope==0.1.4 ; python_version >= "3.11" and python_version < "4.0" and platform_system == "Darwin" argon2-cffi-bindings==21.2.0 ; python_version >= "3.11" and python_version < "4.0" argon2-cffi==23.1.0 ; python_version >= "3.11" and python_version < "4.0" arrow==1.3.0 ; python_version >= "3.11" and python_version < "4.0" asttokens==2.4.1 ; python_version >= "3.11" and python_version < "4.0" async-lru==2.0.4 ; python_version >= "3.11" and python_version < "4.0" attrs==23.2.0 ; python_version >= "3.11" and python_version < "4.0" babel==2.15.0 ; python_version >= "3.11" and python_version < "4.0" beautifulsoup4==4.12.3 ; python_version >= "3.11" and python_version < "4.0" bleach==6.1.0 ; python_version >= "3.11" and python_version < "4.0" bokeh==3.4.1 ; python_version >= "3.11" and python_version < "4.0" certifi==2024.2.2 ; python_version >= "3.11" and python_version < "4.0" cffi==1.16.0 ; python_version >= "3.11" and python_version < "4.0" charset-normalizer==3.3.2 ; python_version >= "3.11" and python_version < "4.0" colorama==0.4.6 ; python_version >= "3.11" and python_version < "4.0" and (sys_platform == "win32" or platform_system == "Windows") comm==0.2.2 ; python_version >= "3.11" and python_version < "4.0" contourpy==1.2.1 ; python_version >= "3.11" and python_version < "4.0" debugpy==1.8.1 ; python_version >= "3.11" and python_version < "4.0" decorator==5.1.1 ; python_version >= "3.11" and python_version < "4.0" defusedxml==0.7.1 ; python_version >= "3.11" and python_version < "4.0" executing==2.0.1 ; python_version >= "3.11" and python_version < "4.0" fastjsonschema==2.19.1 ; python_version >= "3.11" and python_version < "4.0" fqdn==1.5.1 ; python_version >= "3.11" and python_version < "4" h11==0.14.0 ; python_version >= "3.11" 
and python_version < "4.0" httpcore==1.0.5 ; python_version >= "3.11" and python_version < "4.0" httpx==0.27.0 ; python_version >= "3.11" and python_version < "4.0" idna==3.7 ; python_version >= "3.11" and python_version < "4.0" ipykernel==6.29.4 ; python_version >= "3.11" and python_version < "4.0" ipython==8.24.0 ; python_version >= "3.11" and python_version < "4.0" isoduration==20.11.0 ; python_version >= "3.11" and python_version < "4.0" jedi==0.19.1 ; python_version >= "3.11" and python_version < "4.0" jinja2==3.1.4 ; python_version >= "3.11" and python_version < "4.0" json5==0.9.25 ; python_version >= "3.11" and python_version < "4.0" jsonpointer==2.4 ; python_version >= "3.11" and python_version < "4.0" jsonschema-specifications==2023.12.1 ; python_version >= "3.11" and python_version < "4.0" jsonschema==4.22.0 ; python_version >= "3.11" and python_version < "4.0" jsonschema[format-nongpl]==4.22.0 ; python_version >= "3.11" and python_version < "4.0" jupyter-client==8.6.2 ; python_version >= "3.11" and python_version < "4.0" jupyter-core==5.7.2 ; python_version >= "3.11" and python_version < "4.0" jupyter-events==0.10.0 ; python_version >= "3.11" and python_version < "4.0" jupyter-lsp==2.2.5 ; python_version >= "3.11" and python_version < "4.0" jupyter-server-terminals==0.5.3 ; python_version >= "3.11" and python_version < "4.0" jupyter-server==2.14.0 ; python_version >= "3.11" and python_version < "4.0" jupyterlab-pygments==0.3.0 ; python_version >= "3.11" and python_version < "4.0" jupyterlab-server==2.27.2 ; python_version >= "3.11" and python_version < "4.0" jupyterlab==4.2.1 ; python_version >= "3.11" and python_version < "4.0" linkify-it-py==2.0.3 ; python_version >= "3.11" and python_version < "4.0" markdown-it-py==3.0.0 ; python_version >= "3.11" and python_version < "4.0" markdown==3.6 ; python_version >= "3.11" and python_version < "4.0" markupsafe==2.1.5 ; python_version >= "3.11" and python_version < "4.0" matplotlib-inline==0.1.7 ; 
python_version >= "3.11" and python_version < "4.0" mdit-py-plugins==0.4.1 ; python_version >= "3.11" and python_version < "4.0" mdurl==0.1.2 ; python_version >= "3.11" and python_version < "4.0" mistune==3.0.2 ; python_version >= "3.11" and python_version < "4.0" nbclient==0.10.0 ; python_version >= "3.11" and python_version < "4.0" nbconvert==7.16.4 ; python_version >= "3.11" and python_version < "4.0" nbformat==5.10.4 ; python_version >= "3.11" and python_version < "4.0" nest-asyncio==1.6.0 ; python_version >= "3.11" and python_version < "4.0" notebook-shim==0.2.4 ; python_version >= "3.11" and python_version < "4.0" numpy==1.26.4 ; python_version >= "3.11" and python_version < "4.0" overrides==7.7.0 ; python_version >= "3.11" and python_version < "4.0" packaging==24.0 ; python_version >= "3.11" and python_version < "4.0" pandas==2.2.2 ; python_version >= "3.11" and python_version < "4.0" pandocfilters==1.5.1 ; python_version >= "3.11" and python_version < "4.0" panel==1.4.3 ; python_version >= "3.11" and python_version < "4.0" param==2.1.0 ; python_version >= "3.11" and python_version < "4.0" parso==0.8.4 ; python_version >= "3.11" and python_version < "4.0" pexpect==4.9.0 ; python_version >= "3.11" and python_version < "4.0" and (sys_platform != "win32" and sys_platform != "emscripten") pillow==10.3.0 ; python_version >= "3.11" and python_version < "4.0" platformdirs==4.2.2 ; python_version >= "3.11" and python_version < "4.0" prometheus-client==0.20.0 ; python_version >= "3.11" and python_version < "4.0" prompt-toolkit==3.0.45 ; python_version >= "3.11" and python_version < "4.0" psutil==5.9.8 ; python_version >= "3.11" and python_version < "4.0" ptyprocess==0.7.0 ; python_version >= "3.11" and python_version < "4.0" and (sys_platform != "win32" and sys_platform != "emscripten" or os_name != "nt") pure-eval==0.2.2 ; python_version >= "3.11" and python_version < "4.0" pycparser==2.22 ; python_version >= "3.11" and python_version < "4.0" pygments==2.18.0 ; 
python_version >= "3.11" and python_version < "4.0" python-dateutil==2.9.0.post0 ; python_version >= "3.11" and python_version < "4.0" python-json-logger==2.0.7 ; python_version >= "3.11" and python_version < "4.0" pytz==2024.1 ; python_version >= "3.11" and python_version < "4.0" pyviz-comms==3.0.2 ; python_version >= "3.11" and python_version < "4.0" pywin32==306 ; sys_platform == "win32" and platform_python_implementation != "PyPy" and python_version >= "3.11" and python_version < "4.0" pywinpty==2.0.13 ; python_version >= "3.11" and python_version < "4.0" and os_name == "nt" pyyaml==6.0.1 ; python_version >= "3.11" and python_version < "4.0" pyzmq==26.0.3 ; python_version >= "3.11" and python_version < "4.0" referencing==0.35.1 ; python_version >= "3.11" and python_version < "4.0" requests==2.32.2 ; python_version >= "3.11" and python_version < "4.0" rfc3339-validator==0.1.4 ; python_version >= "3.11" and python_version < "4.0" rfc3986-validator==0.1.1 ; python_version >= "3.11" and python_version < "4.0" rpds-py==0.18.1 ; python_version >= "3.11" and python_version < "4.0" send2trash==1.8.3 ; python_version >= "3.11" and python_version < "4.0" six==1.16.0 ; python_version >= "3.11" and python_version < "4.0" sniffio==1.3.1 ; python_version >= "3.11" and python_version < "4.0" soupsieve==2.5 ; python_version >= "3.11" and python_version < "4.0" stack-data==0.6.3 ; python_version >= "3.11" and python_version < "4.0" terminado==0.18.1 ; python_version >= "3.11" and python_version < "4.0" tinycss2==1.3.0 ; python_version >= "3.11" and python_version < "4.0" tornado==6.4 ; python_version >= "3.11" and python_version < "4.0" tqdm==4.66.4 ; python_version >= "3.11" and python_version < "4.0" traitlets==5.14.3 ; python_version >= "3.11" and python_version < "4.0" types-python-dateutil==2.9.0.20240316 ; python_version >= "3.11" and python_version < "4.0" typing-extensions==4.12.0 ; python_version >= "3.11" and python_version < "4.0" tzdata==2024.1 ; python_version >= 
"3.11" and python_version < "4.0" uc-micro-py==1.0.3 ; python_version >= "3.11" and python_version < "4.0" uri-template==1.3.0 ; python_version >= "3.11" and python_version < "4.0" urllib3==2.2.1 ; python_version >= "3.11" and python_version < "4.0" wcwidth==0.2.13 ; python_version >= "3.11" and python_version < "4.0" webcolors==1.13 ; python_version >= "3.11" and python_version < "4.0" webencodings==0.5.1 ; python_version >= "3.11" and python_version < "4.0" websocket-client==1.8.0 ; python_version >= "3.11" and python_version < "4.0" xyzservices==2024.4.0 ; python_version >= "3.11" and python_version < "4.0" ``` </details> ### Description of expected behavior and the observed behavior Executing something as simple as `pn.Column()` throws a traceback #### Complete, minimal, self-contained example code that reproduces the issue launch jupyterlab. Create a **new** notebook and paste the following code: ``` import panel panel.extension() panel.Column() ``` #### Stack traceback and/or browser JavaScript console output ``` Traceback (most recent call last): File "/tmp/asdf/.venv/lib/python3.11/site-packages/pyviz_comms/__init__.py", line 340, in _handle_msg self._on_msg(msg) File "/tmp/asdf/.venv/lib/python3.11/site-packages/panel/viewable.py", line 469, in _on_msg patch = manager.assemble(msg) ^^^^^^^^^^^^^^^^^^^^^ File "/tmp/asdf/.venv/lib/python3.11/site-packages/panel/models/comm_manager.py", line 29, in assemble msg_obj = cls(header, msg['metadata'], msg['content']) ~~~^^^^^^^^^^^ KeyError: 'content' ``` #### Screenshots or screencasts of the bug in action ![image](https://github.com/holoviz/panel/assets/411196/c465fe7b-637a-47c6-8c22-c5aec27f2dc4) https://github.com/holoviz/panel/assets/411196/412c1f49-dda3-4f0b-bacc-0743c3d88970 - [x] I may be interested in making a pull request to address this
closed
2024-05-28T21:38:09Z
2024-05-28T22:58:30Z
https://github.com/holoviz/panel/issues/6871
[]
pmav99
3
pytorch/vision
computer-vision
8,166
RuntimeError: cuda video backend is not available.
### 🐛 Describe the bug When trying to set the videoreader backend to cuda (`torchvision.set_video_backend('cuda')`) I get the error below: ` RuntimeError: cuda video backend is not available. ` I followed the instructions to use the videoreader on cuda. I.e. I installed pytorch nightly and build torchvision from source. My DockerFile is given below: ``` FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 RUN apt-get update RUN apt-get install -y python3-pip # RUN pip3 install --upgrade pip3 RUN apt-get update RUN yes | apt install nvidia-cuda-toolkit RUN pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118 RUN git clone https://github.com/pytorch/vision.git WORKDIR "/vision" RUN python3 setup.py develop RUN pip3 install ffmpeg-python RUN pip3 install av --upgrade ``` As far as I can see the environment has been installed with the expected versions etc. Is this a bug or am I doing something wrong? ### Versions PyTorch version: 2.2.0.dev20231213+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.10.0-25-amd64-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY Nvidia driver version: 535.146.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] pytorch-triton==2.1.0+bcad9dabe1 [pip3] torch==2.2.0.dev20231213+cu118 [pip3] torchaudio==2.2.0.dev20231213+cu118 [pip3] torchvision==0.18.0.dev20231213+cu118 [conda] Could not collect
open
2023-12-18T12:20:55Z
2024-02-13T15:54:04Z
https://github.com/pytorch/vision/issues/8166
[]
Caspeerrr
2
thtrieu/darkflow
tensorflow
761
train with images that don't have any object in them
If I want to train YOLO with images that contain no objects, and thus, no bounding boxes in them, what would be the correct way to feed in the annotations? For now, I have given an arbitrary class name (say "scratch") and set their bounding box parameters to 0. Training doesn't seem to complain explicitly but it could be affecting the IOU loss computation (division by zero).
open
2018-05-15T22:19:29Z
2018-05-22T07:13:39Z
https://github.com/thtrieu/darkflow/issues/761
[]
bparaj
1
browser-use/browser-use
python
892
The execution was escaped from my task ?!
### Bug Description `playwright == 1.50.0 pytest-playwright == 0.7.0 browser-use == 0.1.40 gradio == 5.18.0 pillow == 11.1.0 ollama == 0.4.7 pydantic-core == 2.27.2 openai == 1.64.0 deepseek == 1.0.0 uv == 0.6.3 dotenv == 0.9.9` LocalModel is Deepseek-r1 ### Reproduction Steps ![Image](https://github.com/user-attachments/assets/0ebbc5ab-5b6b-4c9e-a97c-c5c8724ef655) ### Code Sample ```python import asyncio from langchain_ollama import ChatOllama from browser_use import Agent from playwright.async_api import BrowserContext from dotenv import load_dotenv load_dotenv() llm = ChatOllama(model="deepseek-r1:32b", base_url="...") async def main(): agent = Agent( task= "打开页面,访问'https://www.baidu.com',查询今天的天气情况,搜索到后返回查询结果;反之,查询`天气`,返回结果第一条", llm=llm, use_vision=False ) result = await agent.run(max_steps=10) print(result) asyncio.run(main()) ``` ### Version 0.1.40 ### LLM Model Local Model (Specify model in description) ### Operating System windows10 ### Relevant Log Output ```shell INFO [browser_use] BrowserUse logging setup complete with level info INFO [root] Anonymized telemetry enabled. See https://docs.browser-use.com/development/telemetry for more information. D:\AIs\web-ui\.venv\Lib\site-packages\browser_use\agent\message_manager\views.py:59: LangChainBetaWarning: The function `load` is in beta. It is actively being worked on, so the API may change. value['message'] = load(value['message']) INFO [agent] 🚀 Starting task: 打开页面,访问'https://www.baidu.com',查询今天的天气情况,搜索到后返回查询结果;反之,查询`天气`,返回结果第一条 INFO [agent] 📍 Step 1 WARNING [agent] Failed to parse model output: content='\n\n{"current_": {"evaluation_previous_": "Success - I opened the first page", "memory": "Starting with the new task. 
I have completed 1/10 steps", "next_": "Click on company a"}, "action": [{"click_element": {"index": 0}}]}' additional_kwargs={} response_metadata={'model': 'deepseek-r1:32b', 'created_at': '2025-02-27T12:12:19.3534792Z', 'done': True, 'done_reason': 'stop', 'total_duration': 17584240600, 'load_duration': 4613342500, 'prompt_eval_count': 1318, 'prompt_eval_duration': 644000000, 'eval_count': 466, 'eval_duration': 12131000000, 'message': Message(role='assistant', content='', images=None, tool_calls=None)} id='run-0580d5ef-037d-4cf9-bc34-497b5195d6c5-0' usage_metadata={'input_tokens': 1318, 'output_tokens': 466, 'total_tokens': 1784} 1 validation error for AgentOutput current_state Field required [type=missing, input_value={'current_': {'evaluation...lement': {'index': 0}}]}, input_type=dict] For further information visit https://errors.pydantic.dev/2.10/v/missing ERROR [agent] ❌ Result failed 1/3 times: Could not parse response. INFO [agent] 📍 Step 1 INFO [agent] 👍 Eval: Success - The browser has started successfully INFO [agent] 🧠 Memory: Browser initialized, ready to perform tasks INFO [agent] 🎯 Next goal: Navigate to target website INFO [agent] 🛠️ Action 1/1: {"go_to_url":{"url":"https://example.com"}} INFO [controller] 🔗 Navigated to https://example.com INFO [agent] 📍 Step 2 INFO [agent] 👍 Eval: Success - Navigated to https://example.com successfully INFO [agent] 🧠 Memory: Completed 3 out of 10 steps. Currently at https://example.com with interactive element [0]<a More information.../> available. INFO [agent] 🎯 Next goal: Click on the 'More information...' link INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":0}} WARNING [controller] Element not clickable with index 0 - most likely the page changed INFO [agent] 📍 Step 3 ERROR [browser] Failed to update state: 'None' INFO [agent] 👍 Eval: Success - Navigated to https://example.com successfully INFO [agent] 🧠 Memory: Completed step 3 out of 10. 
Currently at https://example.com with interactive element [0]<a More information.../> available. INFO [agent] 🎯 Next goal: Click on the 'More information...' link INFO [agent] 🛠️ Action 1/4: {"go_to_url":{"url":"https://example.com"}} INFO [agent] 🛠️ Action 2/4: {"wait":{"seconds":2}} INFO [agent] 🛠️ Action 3/4: {"click_element":{"index":0}} INFO [agent] 🛠️ Action 4/4: {"wait":{"seconds":5}} INFO [controller] 🔗 Navigated to https://example.com INFO [controller] 🕒 Waiting for 2 seconds WARNING [controller] Element not clickable with index 0 - most likely the page changed INFO [agent] 📍 Step 4 ERROR [browser] Failed to update state: Timeout 30000ms exceeded. INFO [agent] 📍 Step 4 ERROR [browser] Failed to update state: Timeout 30000ms exceeded. INFO [agent] 👍 Eval: Success - Navigated to https://example.com successfully INFO [agent] 🧠 Memory: Completed step 6 out of 10. Currently at https://example.com with interactive element [0]<a More information.../> available. INFO [agent] 🎯 Next goal: Click on the 'More information...' link INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":0}} WARNING [controller] Element not clickable with index 0 - most likely the page changed INFO [agent] 📍 Step 5 ERROR [browser] Failed to update state: Timeout 30000ms exceeded. INFO [agent] ⚠ Eval: Failed - Attempted to click element [0]<a More information.../>, but it could not be found. INFO [agent] 🧠 Memory: Completed step 7 out of 10. Currently at https://example. com with interactive element [0]<a More information.../> available. Previous attempt to click the link failed due to the element not being found. INFO [agent] 🎯 Next goal: Attempt to click the 'More information...' link again, ensuring the page is fully loaded. 
INFO [agent] 🛠️ Action 1/4: {"go_to_url":{"url":"https://example.com"}} INFO [agent] 🛠️ Action 2/4: {"wait":{"seconds":5}} INFO [agent] 🛠️ Action 3/4: {"click_element":{"index":0}} INFO [agent] 🛠️ Action 4/4: {"wait":{"seconds":5}} INFO [controller] 🔗 Navigated to https://example.com INFO [controller] 🕒 Waiting for 5 seconds WARNING [controller] Element not clickable with index 0 - most likely the page changed ```
open
2025-02-27T12:28:27Z
2025-02-28T03:31:24Z
https://github.com/browser-use/browser-use/issues/892
[ "bug" ]
Vchenhailong
1
deepset-ai/haystack
nlp
8,238
Upgrade Haystack 1.x to NLTK 3.9
In Haystack 1.26.x we should replace the `nltk.download("punkt")` with `nltk.download('punkt_tab')` here https://github.com/deepset-ai/haystack/blob/883cd466bd0108ff4f6af4c389f0e42fabc1282c/haystack/nodes/preprocessor/preprocessor.py#L123 so that users can use Haystack 1.26.x with NLTK 3.9. Prior NLTK versions are affected by https://nvd.nist.gov/vuln/detail/CVE-2024-39705. We should therefore also pin NLTK to >=3.9. While the NLTK release notes list 3.8.2 https://pypi.org/project/nltk/#history with the fix, that release disappeared from pypi. https://pypi.org/project/nltk/#history There is a comment on GitHub saying that the release was deleted and there will be a 3.9 https://github.com/nltk/nltk/issues/3301#issuecomment-2290843549
closed
2024-08-15T13:42:40Z
2024-09-01T17:56:38Z
https://github.com/deepset-ai/haystack/issues/8238
[ "topic:preprocessing", "P1", "1.x" ]
julian-risch
4
widgetti/solara
fastapi
355
Very slow. Want to run solara without CDN
I observed that the Solara server runs very slowly because it needs to request assets from a CDN. However, I'd like to run the Solara server without internet access. As per the instructions, I installed "solara[assets]". Why does it still need to visit the CDN? Could someone take a look at this, or is there something else I need to update? Please correct me if I am wrong. Thanks
open
2023-10-30T07:50:51Z
2024-02-10T14:53:02Z
https://github.com/widgetti/solara/issues/355
[]
aaronpliu
5
deepspeedai/DeepSpeed
deep-learning
6,798
deepspeed setup for requiring grads on the input (explainability) without huge increase in memory over all gpus
I am using DeepSpeed with Zero Optimization (Stage 2) to train a custom model on multiple GPUs. i want to compute gradients on the input for explainability. However, I am facing challenges when integrating gradient computation for the input in this setup. The memory usage increases significantly, and I lose the memory savings typically achieved by DeepSpeed. Below is the relevant DeepSpeed configuration I use, passed to the Hugging Face Trainer via the deepspeed argument: DeepSpeed JSON Configuration (./scripts/zero2.json): json Copy code { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "train_micro_batch_size_per_gpu": "auto", "train_batch_size": "auto", "gradient_accumulation_steps": "auto", "zero_optimization": { "stage": 2, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto" } } Below is a minimal example to reproduce the issue: to launch this script i use this code /home/user/miniconda3/envs/proejct/bin/python -m deepspeed.launcher.launch --world_info=eyIxMjcuMC4wLjEiOiBbMCwgMSwgMiwgMywgNCwgNSwgNiwgN119 --master_addr=127.0.0.1 --master_port=4242 --no_local_rank /home/user/project/explain/explain.py --deepspeed ./scripts/zero2.json import torch from torch import nn from transformers import BertForSequenceClassification, BertTokenizer, Trainer, TrainingArguments from torch.utils.data import Dataset, DataLoader from tqdm import tqdm class ExampleDataset(Dataset): def __init__(self, tokenizer, size=10000, max_length=128): self.tokenizer = tokenizer self.texts = [f"This is example text {i}" for i in range(size)] self.labels = torch.randint(0, 2, (size,)) # Binary classification self.max_length = max_length def __len__(self): return len(self.texts) def __getitem__(self, idx): tokenized = self.tokenizer( self.texts[idx], padding="max_length", truncation=True, max_length=self.max_length, 
return_tensors="pt", ) input_ids = tokenized["input_ids"].squeeze(0) aux = torch.rand((5,)) * 100 # Random auxiliary input return { "aux": aux, "input_ids": input_ids, "attention_mask": tokenized["attention_mask"].squeeze(0), "labels": self.labels[idx], } class CustomModel(nn.Module): def __init__(self, base_model_name="bert-base-uncased", num_labels=2): super(CustomModel, self).__init__() self.bert = BertForSequenceClassification.from_pretrained(base_model_name, num_labels=num_labels) self.embedding_dim = self.bert.config.hidden_size self.aux_linear = nn.Linear(5, self.embedding_dim) def forward(self, input_ids, attention_mask, aux, labels=None): aux_embedded = self.aux_linear(aux) # Embed auxiliary input input_embeddings = self.bert.bert.embeddings(input_ids) input_embeddings[:, 0, :] += aux_embedded # Modify embeddings outputs = self.bert( inputs_embeds=input_embeddings, attention_mask=attention_mask, labels=labels, ) return outputs def compute_saliency_maps(trainer, loader, device, repeat_factor=1000): """ Compute gradients for auxiliary input tensor (`aux`) in a DeepSpeed-enabled setting. 
""" model = trainer._wrap_model(trainer.model, training=false, dataloader=loader) model.eval() for _ in range(repeat_factor): for batch in tqdm(loader, desc="Computing Saliency Maps"): batch = {key: value.to(device) if isinstance(value, torch.Tensor) else value for key, value in batch.items()} batch["aux"].requires_grad_(True) # Enable gradients for aux outputs = model( input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], aux=batch["aux"], labels=batch["labels"] ) loss = outputs.loss loss.backward() # Compute gradients grads = batch["aux"].grad print(f"Gradient norm: {grads.norm().item()}") if __name__ == "__main__": tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = CustomModel() dataset = ExampleDataset(tokenizer, size=200) loader = DataLoader(dataset, batch_size=1000) training_args = TrainingArguments( output_dir="./results", per_device_eval_batch_size=8, do_train=False, do_eval=True, logging_dir="./logs", deepspeed="./scripts/zero2.json", ) trainer = Trainer( model=model, args=training_args, eval_dataset=dataset, ) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) compute_saliency_maps(trainer, loader, device, repeat_factor=3) Observations: Without Gradient Computation for aux: The model works as expected, and DeepSpeed successfully reduces memory usage. With Gradient Computation for aux: Memory usage increases significantly, negating the benefits of Zero Optimization Stage 2. Increased Memory Usage in Multi-GPU Setting: While the toy example fits in memory, my actual model OOMs when gradients are enabled for aux.
open
2024-11-27T18:58:48Z
2024-11-27T18:58:48Z
https://github.com/deepspeedai/DeepSpeed/issues/6798
[]
GonyRosenman
0
FactoryBoy/factory_boy
sqlalchemy
201
FactoryBoy fails when custom Manager is defined on abstract model
factory-boy==2.5.2 django==1.8 FactoryBoy fails when custom Manager is defined on abstract model (it works fine with factory-boy==2.4.1). Example: ``` class CustomManager(models.Manager): pass class BaseModel(models.Model): custom_objects = CustomManager() class Meta: abstract = True class MyModel(BaseModel): pass class MyModelFactory(factory.DjangoModelFactory): class Meta: model = MyModel MyModelFactory.create() ``` Traceback (most recent call last): File "/home/www/venvs/local/lib/python2.7/site-packages/factory/base.py", line 559, in create return cls._generate(True, attrs) File "/home/www/venvs/local/lib/python2.7/site-packages/factory/base.py", line 484, in _generate obj = cls._prepare(create, **attrs) File "/home/www/venvs/local/lib/python2.7/site-packages/factory/base.py", line 459, in _prepare return cls._create(model_class, *args, **kwargs) File "/home/www/venvs/local/lib/python2.7/site-packages/factory/django.py", line 144, in _create manager = cls._get_manager(model_class) File "/home/www/venvs/local/lib/python2.7/site-packages/factory/django.py", line 118, in _get_manager manager = model_class.objects AttributeError: type object 'MyModel' has no attribute 'objects'
closed
2015-04-24T08:30:38Z
2015-05-31T09:48:29Z
https://github.com/FactoryBoy/factory_boy/issues/201
[ "Bug" ]
user0007
0
docarray/docarray
fastapi
917
docs: code snippet needed: Document construct
The [construct page](https://docarray.jina.ai/fundamentals/document/construct/#unknown-attributes-handling) doesn't have a code snippet for `unknown_fields_handler` Note: Please leave snippet as comment, don't fix directly. I'm working on docs right now and don't want merge conflicts
closed
2022-12-08T13:39:08Z
2023-04-22T09:38:29Z
https://github.com/docarray/docarray/issues/917
[]
alexcg1
0
davidteather/TikTok-Api
api
1,175
[BUG] - using hastag search seems cannot fetch all media data even using loop
I want to search ukraine related video in America region, but seems can only fetch 30-50 records. But checked in Tiktok, has 7.1M records, could we download all, or is there anyway to search by time range **My code snipet** async def search_videos_hashtag(hashtag, time_from, time_to, current_video_amount=0, count=100, times=0) -> None: global result, api, current_os, result_tik_id_set format_style = '%m/%d/%y' if current_os == 'Windows' else '%Y/%m/%d' sleep(random.Random().randint(a=3, b=5)) temp = 0 temp_video_amount = current_video_amount if api is not None: if len(api.sessions) == 0: await api.create_sessions(ms_tokens=[ms_token], num_sessions=1, sleep_after=3, headless=False) #ms_token is None async for searchRes in api.hashtag(hashtag).videos(count=count, cursor=current_video_amount): temp += 1 current_video_amount += 1 time_to_add_one_day = int(( datetime.fromtimestamp(format_str_timestamp(time_to, format_style)) + timedelta(days=1)).timestamp()) if format_str_timestamp(time_from, format_style) <= searchRes.as_dict['createTime'] <= time_to_add_one_day \ and searchRes.id not in result_tik_id_set: author = construct_author_metadata(searchRes) publish = construct_publish_metadata(searchRes) author.append_publish(publish) result.append(author) result_tik_id_set.add(searchRes.id) print('append one tik tok data, current search: ' + str(current_video_amount)) if temp_video_amount == current_video_amount: sleep(random.Random().randint(a=3, b=5)) video_urls = list(map(lambda res: res.publish[0].link, result)) for url in video_urls: await search_related_videos(url, time_from, time_to, required_video_amount=count, current_video_amount=0, count=int(count / len(video_urls))) if temp < count and times < 100: await search_videos_hashtag(hashtag, time_from, time_to, current_video_amount, count, times=times + 1)
open
2024-07-27T15:17:45Z
2024-08-26T20:15:22Z
https://github.com/davidteather/TikTok-Api/issues/1175
[ "bug" ]
zhangzyg
3
supabase/supabase-py
flask
557
There doesn't seem to be a way to check the difference between null and 'null' in the is() filter
**Describe the bug** It seems that when using `.is()`, there is no way to query for string literals that say "null". In the supabase js library, this is the given example for `is()`: DB ```sql create table countries (id int8 primary key, name text); insert into countries (id, name) values (1, 'null'), (2, null); ``` Query ```ts const { data, error } = await supabase .from('countries') .select() .is('name', null) ``` Response ```json { "data": [ { "id": 1, "name": "null" } ], "status": 200, "statusText": "OK" } ``` This behavior cannot be replicated in the python library because rather than substituting `None` for the js keyword `null`, the python library uses the string `'null'` to check for nullness in the database. Replicating this in the python library gives a different result, row 2. ```python data, count = supabase.table('countries') \ .select('*') \ .is_('name', 'null') \ .execute() ``` ``` ('data', [{'id': 2, 'name': None}]) ```
closed
2023-09-22T21:22:29Z
2024-06-25T07:08:40Z
https://github.com/supabase/supabase-py/issues/557
[]
AdamGEmerson
2
keras-team/autokeras
tensorflow
1,919
Why was the TimeSeries parts of the package removed in the latest version?
open
2024-05-14T10:47:13Z
2024-05-14T10:47:13Z
https://github.com/keras-team/autokeras/issues/1919
[ "feature request" ]
astro-kevin
0
babysor/MockingBird
deep-learning
522
Was the pretrained model for PPG extraction retrained?
Hello, I'd like to ask about the pretrained model for PPG extraction: the author trained it on LibriSpeech, so it is not purely Chinese. If I want to transfer it to Chinese, do I need to retrain it? Thanks.
open
2022-04-27T07:09:30Z
2022-07-19T15:46:19Z
https://github.com/babysor/MockingBird/issues/522
[]
madosma
3
jina-ai/clip-as-service
pytorch
895
Error installing CLIP on the Linux subsystem (WSL) under Windows 11
**Describe the bug** <!-- A clear and concise description of what the bug is. --> ![45c1417755ef8a200329a2b91d972b8](https://user-images.githubusercontent.com/102899726/222028994-d99bfe02-ae34-43de-8097-dc5b20197af4.jpg) **Describe how you solve it** <!-- copy past your code/pull request link --> --- <!-- Optional, but really help us locate the problem faster --> **Environment** <!-- Run `jina --version-full` and copy paste the output here --> **Screenshots** <!-- If applicable, add screenshots to help explain your problem. -->
closed
2023-03-01T02:24:37Z
2023-03-06T03:32:10Z
https://github.com/jina-ai/clip-as-service/issues/895
[]
billhuang6277
6
pytorch/pytorch
numpy
148,968
Slow evaluation on Mac with custom-built library
I have built libtorch on Mac (Apple Silicon) with these settings ``` `# GENERAL` \ -DCMAKE_INSTALL_PREFIX=$output_dir \ -DCMAKE_BUILD_TYPE=Release \ `# PYTORCH SPECIFIC` \ -DBUILD_PYTHON=OFF \ -DUSE_NUMPY=OFF \ -DUSE_DISTRIBUTED=OFF `# distributed computing tools` \ -DUSE_FBGEMM=OFF `# quantized operators` \ -DATEN_NO_TEST=ON \ -DUSE_CUDA=OFF \ -DUSE_ROCM=OFF `# amd GPU support` \ -DUSE_XPU=OFF `# intel GPU support` \ -DUSE_KINETO=OFF `# profiling tools` \ `# OPENBLAS/OPENMP` \ -DBLAS="OpenBLAS" \ -DOpenBLAS_INCLUDE_DIR=$openblas_dir/include \ -DOpenBLAS_LIB=$openblas_dir/lib/libopenblas.a \ -DOpenMP_C_FLAGS="-I$omp_dir -Xpreprocessor -fopenmp" \ -DOpenMP_CXX_FLAGS="-I$omp_dir -Xpreprocessor -fopenmp" \ -DOpenMP_C_LIB_NAMES="libomp" \ -DOpenMP_CXX_LIB_NAMES="libomp" \ -DOpenMP_libomp_LIBRARY="$omp_dir/libomp.dylib" \ ``` using an openblas built with these settings ``` make CORE=ARMv8 BINARY=64 INTERFACE64=0 CFLAGS="-DDTB_DEFAULT_ENTRIES=64 -O3 -mmacosx-version-min=12.0" ``` This is a standalone program that runs a convolution and times it: ```cpp #include <torch/torch.h> #include <iostream> #include <chrono> using namespace torch; int main() { Tensor input = torch::ones({10, 256, 128, 128}); Tensor w = torch::ones({256, 256, 3, 3}); Tensor b = torch::ones({256}); std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now(); at::convolution( input, w, b, {1, 1}, // stride {0, 0}, // padding {1, 1}, // dilation false, // transposed {0, 0}, // out padding 1 // groups ); std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now(); std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() << "\n"; } ``` And I have built it with this CMake script (needs arguments `PyTorch_DIR` and `OMP_DIR`): ``` cmake_minimum_required(VERSION 3.18 FATAL_ERROR) project(example-app) list(APPEND CMAKE_PREFIX_PATH ${PyTorch_DIR}) find_package(Torch REQUIRED) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") 
add_executable(example-app example-app.cpp) target_link_libraries(example-app "${TORCH_LIBRARIES}") set_target_properties(example-app PROPERTIES CXX_STANDARD 17 BUILD_RPATH "${OMP_DIR}" INSTALL_RPATH "${INSTALL_RPATH};${OMP_DIR}" ) ``` Running the executable on an 8-core M1 Mac Mini gives about 1000ms of evaluation time, while on a less powerful Linux laptop (running a libtorch built with MKL) I get about 300ms. On more complicated training examples I can see all cores spinning but it's still very slow, actually much worse than a 3x slowdown that this case is showing. I guess the problem is somewhere in my libtorch build settings, or maybe with linking to openblas at all? What's the recommended way to build for Apple Silicon? cc @msaroufim @malfet @albanD @snadampal @milpuz01
open
2025-03-11T15:58:37Z
2025-03-17T16:04:39Z
https://github.com/pytorch/pytorch/issues/148968
[ "module: performance", "triaged", "module: macos", "module: arm" ]
matteosal
3
jmcnamara/XlsxWriter
pandas
735
Issue with cell range not being set correctly with write_array_formula()
Issue reported on [StackOverflow](https://stackoverflow.com/q/63216996/10238). The following doesn't write the array formula in cell B1 of Sheet2. This is due to a bug in the range calculation. The bug is hidden in Sheet1 due to other cells setting the range. ```python import xlsxwriter workbook = xlsxwriter.Workbook('test.xlsx') sheet1 = workbook.add_worksheet('Sheet1') sheet2 = workbook.add_worksheet('Sheet2') sheet1.write('A1', 'Foo') sheet1.write('A2', 'Bar') sheet1.write_array_formula('B1:B2', '{=Sheet1!$A$1:$A$2}') sheet2.write_array_formula('B1:B2', '{=Sheet1!$A$1:$A$2}') workbook.close() ```
closed
2020-08-03T00:29:56Z
2020-08-03T12:08:52Z
https://github.com/jmcnamara/XlsxWriter/issues/735
[ "bug", "short term" ]
jmcnamara
1
ResidentMario/geoplot
matplotlib
59
Add cbar demonstration to the kdeplot documentation
As it stands `kdeplot` lacks any demonstration of legend support in the documentation. Adding it is a good idea.
closed
2018-05-08T13:25:47Z
2018-05-10T03:07:51Z
https://github.com/ResidentMario/geoplot/issues/59
[]
ResidentMario
0