| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
nteract/testbook | pytest | 152 | ModuleNotFoundError while using @testbook('....ipynb', execute=True) | Hello
In my notebook, I import a variable from a .py file.
While the notebook runs fine when executed manually, it fails when executed via @testbook.

```python
#mini_config.py
schema_name = 'ds_handson_sanfeu'
```

```python
#test_mini_loader.py
from testbook import testbook
@testbook('./src/mini_loader.ipynb', execute=True)
def test_check_country_spit(tb):
    assert True
```
The test fails with this error log:
```
./tests/test_mini_loader.py::test_check_module Failed: [undefined]nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
------------------
from mini_config import schema_name
------------------
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[1], line 1
----> 1 from mini_config import schema_name
ModuleNotFoundError: No module named 'mini_config'
ModuleNotFoundError: No module named 'mini_config'
args = (), kwargs = {}
@functools.wraps(func)
def wrapper(*args, **kwargs): # pragma: no cover
with self.client.setup_kernel():
> self._prepare()
.venv\lib\site-packages\testbook\testbook.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv\lib\site-packages\testbook\testbook.py:46: in _prepare
self.client.execute()
.venv\lib\site-packages\testbook\client.py:147: in execute
super().execute_cell(cell, index)
.venv\lib\site-packages\jupyter_core\utils\__init__.py:166: in wrapped
return loop.run_until_complete(inner)
C:\Python39_64\lib\asyncio\base_events.py:642: in run_until_complete
return future.result()
.venv\lib\site-packages\nbclient\client.py:1021: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <testbook.client.TestbookNotebookClient object at 0x000001D10D95A3A0>
cell = {'cell_type': 'code', 'execution_count': 1, 'id': '149206e7-45bc-46df-9799-a3cd321ccc7d', 'metadata': {'tags': [], 'ex...1b[1;31mModuleNotFoundError\x1b[0m: No module named 'mini_config'"]}], 'source': 'from mini_config import schema_name'}
cell_index = 0
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '3eb7cbaf-...e, 'engine': '3eb7cbaf-c9fa-4c23-ab69-14121e5bb42c', 'started': '2023-01-06T17:40:58.354058Z', 'status': 'error'}, ...}
async def _check_raise_for_error(
self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:
if exec_reply is None:
return None
exec_reply_content = exec_reply['content']
if exec_reply_content['status'] != 'error':
return None
cell_allows_errors = (not self.force_raise_errors) and (
self.allow_errors
or exec_reply_content.get('ename') in self.allow_error_names
or "raises-exception" in cell.metadata.get("tags", [])
)
await run_hook(
self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
)
if not cell_allows_errors:
> raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E from mini_config import schema_name
E ------------------
E
E   ---------------------------------------------------------------------------
E   ModuleNotFoundError                       Traceback (most recent call last)
E   Cell In[1], line 1
E   ----> 1 from mini_config import schema_name
E
E   ModuleNotFoundError: No module named 'mini_config'
E   ModuleNotFoundError: No module named 'mini_config'
.venv\lib\site-packages\nbclient\client.py:915: CellExecutionError
```
I am working with Python 3.9.6, in a venv created directly in the project directory.
My environment is
```
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
beautifulsoup4==4.11.1
bleach==5.0.1
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==2.1.1
colorama==0.4.6
comm==0.1.2
debugpy==1.6.5
decorator==5.1.1
defusedxml==0.7.1
entrypoints==0.4
exceptiongroup==1.1.0
executing==1.2.0
fastjsonschema==2.16.2
fqdn==1.5.1
idna==3.4
importlib-metadata==6.0.0
iniconfig==1.1.1
ipykernel==6.19.4
ipython==8.8.0
ipython-genutils==0.2.0
ipywidgets==8.0.4
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
json5==0.9.11
jsonpointer==2.3
jsonschema==4.17.3
jupyter==1.0.0
jupyter-console==6.4.4
jupyter-events==0.5.0
jupyter_client==7.4.8
jupyter_core==5.1.2
jupyter_server==2.0.6
jupyter_server_terminals==0.4.3
jupyterlab==3.5.2
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.5
jupyterlab_server==2.18.0
MarkupSafe==2.1.1
matplotlib-inline==0.1.6
mistune==2.0.4
nbclassic==0.4.8
nbclient==0.7.2
nbconvert==7.2.7
nbformat==5.7.1
nest-asyncio==1.5.6
notebook==6.5.2
notebook_shim==0.2.2
packaging==22.0
pandocfilters==1.5.0
parso==0.8.3
pickleshare==0.7.5
platformdirs==2.6.2
pluggy==1.0.0
prometheus-client==0.15.0
prompt-toolkit==3.0.36
psutil==5.9.4
pure-eval==0.2.2
pycparser==2.21
Pygments==2.14.0
pyrsistent==0.19.3
pytest==7.2.0
python-dateutil==2.8.2
python-json-logger==2.0.4
pytz==2022.7
pywin32==305
pywinpty==2.0.10
PyYAML==6.0
pyzmq==24.0.1
qtconsole==5.4.0
QtPy==2.3.0
requests==2.28.1
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
Send2Trash==1.8.0
six==1.16.0
sniffio==1.3.0
soupsieve==2.3.2.post1
stack-data==0.6.2
terminado==0.17.1
testbook==0.4.2
tinycss2==1.2.1
tomli==2.0.1
tornado==6.2
traitlets==5.8.0
uri-template==1.2.0
urllib3==1.26.13
wcwidth==0.2.5
webcolors==1.12
webencodings==0.5.1
websocket-client==1.4.2
widgetsnbextension==4.0.5
zipp==3.11.0
```
I tried adding an `__init__.py` and playing with `sys.path`, but without success.
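For context, the `sys.path` attempt looked roughly like this (a sketch, not the exact code; the `./src` location of `mini_config.py` is assumed). The idea is to make the notebook's directory importable before the kernel runs `from mini_config import schema_name`:

```python
# Sketch of the sys.path workaround mentioned above; the directory that
# holds mini_config.py is assumed to be ./src.  Run in the first notebook cell.
import os
import sys

notebook_dir = os.path.abspath("./src")  # assumed location of mini_config.py
if notebook_dir not in sys.path:
    sys.path.insert(0, notebook_dir)
```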
Sorry if I missed something obvious. Can you help?
PS: Thanks a lot for this module, which will probably change the life of my coworkers soon. | open | 2023-01-06T17:48:54Z | 2024-07-17T14:54:01Z | https://github.com/nteract/testbook/issues/152 | [] | sanfeu | 2 |
deeppavlov/DeepPavlov | nlp | 1,036 | How to train custom NER model for text paragraphs? | Getting errors related to token size if I pass a paragraph instead of a sentence.
I have increased max_seq_length in the file bert_preprocessor.py. After that, I get the error below.
InvalidArgumentError: assertion failed: [] [Condition x <= y did not hold element-wise:x (bert/embeddings/strided_slice_3:0) = ] [580] [y (bert/embeddings/assert_less_equal/y:0) = ] [512]
[[{{node bert/embeddings/assert_less_equal/Assert/Assert}}]] | closed | 2019-10-09T07:14:56Z | 2020-10-14T11:53:10Z | https://github.com/deeppavlov/DeepPavlov/issues/1036 | [] | DLNSimha | 4 |
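For anyone hitting the same assertion: the `y = 512` above is BERT's positional-embedding limit, so raising `max_seq_length` past 512 cannot work without retraining. A common workaround is to chunk paragraphs to at most 512 tokens before tagging; a rough sketch (the period-based splitting and whitespace token count are simplifications, not DeepPavlov API):

```python
# Sketch: chunk a paragraph to at most `max_tokens` rough tokens before NER,
# since BERT positional embeddings cap sequences at 512.  The period-based
# sentence split and whitespace token count are simplifications.
def split_into_chunks(paragraph, max_tokens=512):
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())  # rough token count
        if current and count + n > max_tokens:
            chunks.append(". ".join(current) + ".")
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks
```

Each chunk can then be passed to the NER model separately and the results concatenated.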
STVIR/pysot | computer-vision | 7 | Evaluating the performance of multiple trackers | When eval.py loads multiple trackers:
trackers = glob(os.path.join(args.tracker_path, args.dataset, args.tracker_prefix+'*'))
trackers = [x.split('/')[-1] for x in trackers]
This syntax seems problematic; shouldn't it be changed to
trackers = args.tracker_prefix.split(" ") | closed | 2019-05-15T03:08:15Z | 2019-06-06T12:28:30Z | https://github.com/STVIR/pysot/issues/7 | [] | kongbia | 1 |
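For reference, a separator-safe variant that supports several space-separated prefixes might look like this (argument names are assumed from the snippet above, not taken from the actual eval.py; `os.path.basename` avoids the `split('/')` problem on Windows):

```python
# Sketch: accept several space-separated prefixes and discover result
# directories with glob.  os.path.basename is separator-safe, unlike
# x.split('/')[-1] on Windows paths.
import os
from glob import glob

def find_trackers(tracker_path, dataset, tracker_prefix):
    trackers = []
    for prefix in tracker_prefix.split(" "):
        matches = glob(os.path.join(tracker_path, dataset, prefix + "*"))
        trackers.extend(os.path.basename(m) for m in matches)
    return sorted(set(trackers))
```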
gradio-app/gradio | deep-learning | 10,497 | reload mode doesn't work correctly using Google Colab | ### Describe the bug
I use the latest gradio module, following the official sample, but the style is not applied and it does not work properly.
[gradio's official sample]
https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?hl=ja#scrollTo=TgV_xIPvUoEY
Here is the reproduction.
https://colab.research.google.com/drive/1EzhLxVnEwGH6RIVwTFYAoha5y_oLAI_F?usp=sharing
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
!pip install gradio==5.14.0
```
```python
%load_ext gradio
```
```python
import gradio as gr
```
```python
%%blocks
def greet(name):
return "Hello " + name + "!"
with gr.Blocks() as demo:
name = gr.Textbox(label="Nameaa")
output = gr.Textbox(label="Output Box")
greet_btn = gr.Button("Greet")
greet_btn.click(fn=greet, inputs=name, outputs=output, api_name="greet")
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.14.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.2
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.4
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | closed | 2025-02-04T00:28:27Z | 2025-02-07T05:42:25Z | https://github.com/gradio-app/gradio/issues/10497 | [
"bug"
] | shin1103 | 2 |
mage-ai/mage-ai | data-science | 4,997 | Pipeline level concurrency - Env Variable settings request | We were having an issue where a developer was crashing the web server because he was running 60+ blocks concurrently. We used a pipeline-level concurrency of 10, so that the 60+ blocks in his pipeline don't all run concurrently and only 10 of them run at any given point. We did this with "queue_config:concurrency:" in the metadata.yml file, which resolved the issue. If that could be enforced/locked down at the platform level, without devs having the option to edit it, that would be great. Exposing it as either an environment variable or a Workspace option in the settings would work. | closed | 2024-04-26T20:28:55Z | 2024-05-10T22:12:06Z | https://github.com/mage-ai/mage-ai/issues/4997 | [
"enhancement"
] | Arthidon | 0 |
huggingface/datasets | tensorflow | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into training and testing sets. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 })
```
2. Push it to huggingface
```python
dataset.push_to_hub(dataset_name)
```
3. On the hugging face dataset repo, the dataset then appears to be splited:

4. Indeed, when loading the dataset from this repo, the dataset is split into training and testing sets.
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True)
dataset
```
output:
```
IterableDatasetDict({
    train: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 2
    }),
    test: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 1
    })
})
### Expected behavior
The dataset should not be split, as no split was requested.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | closed | 2024-05-22T23:52:15Z | 2024-05-23T00:07:53Z | https://github.com/huggingface/datasets/issues/6916 | [] | jetlime | 0 |
microsoft/nni | tensorflow | 4,805 | Pytorch Demo: all trails failed! "cmd.exe : python: can't open file 'model.py'" | **Describe the issue**:
I'm a beginner. When I run the PyTorch demo, **all trials fail**,

and the trial error says "cmd.exe : python: can't open file 'model.py': [Errno 2] No such file or directory", while stdout is empty

**Environment**:
- NNI version: 2.6.1
- Training service (local|remote|pai|aml|etc): local
- Client OS: windows10
- Server OS (for remote mode only):
- Python version: 3.8
- PyTorch/TensorFlow version: 1.10.2 cuda
- Is conda/virtualenv/venv used?: use conda
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
-
- Search space:
**Log message**:
- nnimanager.log:
- [2022-04-25 20:43:26] INFO (NNIDataStore) Datastore initialization done
[2022-04-25 20:43:26] INFO (RestServer) RestServer start
[2022-04-25 20:43:26] WARNING (NNITensorboardManager) Tensorboard may not installed, if you want to use tensorboard, please check if tensorboard installed.
[2022-04-25 20:43:26] INFO (RestServer) RestServer base port is 8080
[2022-04-25 20:43:26] INFO (main) Rest server listening on: http://0.0.0.0:8080
[2022-04-25 20:43:29] INFO (NNIManager) Starting experiment: zwyfcrl7
[2022-04-25 20:43:29] INFO (NNIManager) Setup training service...
[2022-04-25 20:43:30] INFO (LocalTrainingService) Construct local machine training service.
[2022-04-25 20:43:30] INFO (NNIManager) Setup tuner...
[2022-04-25 20:43:30] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2022-04-25 20:43:30] INFO (NNIManager) Add event listeners
[2022-04-25 20:43:30] INFO (LocalTrainingService) Run local machine training service.
[2022-04-25 20:43:30] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2022-04-25 20:43:30] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.0008303904360771582, "momentum": 0.6181306569552493}, "parameter_index": 0}
[2022-04-25 20:43:30] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.0003695263964422363, "momentum": 0.4149188862339417}, "parameter_index": 0}
[2022-04-25 20:43:35] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.0008303904360771582, "momentum": 0.6181306569552493}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:43:35] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 1,
hyperParameters: {
value: '{"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.0003695263964422363, "momentum": 0.4149188862339417}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:43:45] INFO (NNIManager) Trial job bSGpz status changed from WAITING to FAILED
[2022-04-25 20:43:45] INFO (NNIManager) Trial job x7sMq status changed from WAITING to FAILED
[2022-04-25 20:43:45] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.001089900485927162, "momentum": 0.008550750115060124}, "parameter_index": 0}
[2022-04-25 20:43:45] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.0012066180002456614, "momentum": 0.3589205910925304}, "parameter_index": 0}
[2022-04-25 20:43:50] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 2,
hyperParameters: {
value: '{"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.001089900485927162, "momentum": 0.008550750115060124}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:43:50] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 3,
hyperParameters: {
value: '{"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.0012066180002456614, "momentum": 0.3589205910925304}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:43:55] INFO (NNIManager) Trial job CWUBB status changed from WAITING to FAILED
[2022-04-25 20:43:55] INFO (NNIManager) Trial job X19kT status changed from WAITING to FAILED
[2022-04-25 20:43:55] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 4, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.000891197701762958, "momentum": 0.9183177099002631}, "parameter_index": 0}
[2022-04-25 20:43:55] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 5, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.00017781986380464154, "momentum": 0.5223018577419882}, "parameter_index": 0}
[2022-04-25 20:44:00] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 4,
hyperParameters: {
value: '{"parameter_id": 4, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.000891197701762958, "momentum": 0.9183177099002631}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:44:00] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 5,
hyperParameters: {
value: '{"parameter_id": 5, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.00017781986380464154, "momentum": 0.5223018577419882}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:44:06] INFO (NNIManager) Trial job rsS3c status changed from WAITING to FAILED
[2022-04-25 20:44:06] INFO (NNIManager) Trial job Wj2oy status changed from WAITING to FAILED
[2022-04-25 20:44:06] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 6, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.0001790518130213302, "momentum": 0.358621599855963}, "parameter_index": 0}
[2022-04-25 20:44:06] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 7, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.0016370084611949665, "momentum": 0.03431602566387759}, "parameter_index": 0}
[2022-04-25 20:44:11] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 6,
hyperParameters: {
value: '{"parameter_id": 6, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.0001790518130213302, "momentum": 0.358621599855963}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:44:11] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 7,
hyperParameters: {
value: '{"parameter_id": 7, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.0016370084611949665, "momentum": 0.03431602566387759}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:44:16] INFO (NNIManager) Trial job hVzMt status changed from WAITING to FAILED
[2022-04-25 20:44:16] INFO (NNIManager) Trial job UHK5I status changed from WAITING to FAILED
[2022-04-25 20:44:16] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 8, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.002296909478824145, "momentum": 0.16713995462925002}, "parameter_index": 0}
[2022-04-25 20:44:16] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 9, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.000847715846557005, "momentum": 0.4711180433456926}, "parameter_index": 0}
[2022-04-25 20:44:21] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 8,
hyperParameters: {
value: '{"parameter_id": 8, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.002296909478824145, "momentum": 0.16713995462925002}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:44:21] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 9,
hyperParameters: {
value: '{"parameter_id": 9, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.000847715846557005, "momentum": 0.4711180433456926}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-04-25 20:44:26] INFO (NNIManager) Trial job ZQZzn status changed from WAITING to FAILED
[2022-04-25 20:44:26] INFO (NNIManager) Trial job R0nfO status changed from WAITING to FAILED
[2022-04-25 20:44:26] INFO (NNIManager) Change NNIManager status from: RUNNING to: NO_MORE_TRIAL
[2022-04-25 20:44:26] INFO (NNIManager) Change NNIManager status from: NO_MORE_TRIAL to: DONE
[2022-04-25 20:44:26] INFO (NNIManager) Experiment done.
[2022-04-25 20:44:32] INFO (NNIManager) Change NNIManager status from: DONE to: STOPPING
[2022-04-25 20:44:32] INFO (NNIManager) Stopping experiment, cleaning up ...
[2022-04-25 20:44:33] INFO (LocalTrainingService) Stopping local machine training service...
[2022-04-25 20:44:33] INFO (NNIManager) Change NNIManager status from: STOPPING to: STOPPED
[2022-04-25 20:44:33] INFO (NNIManager) Experiment stopped.
[2022-04-25 20:44:33] INFO (NNITensorboardManager) Forced stopping all tensorboard task.
[2022-04-25 20:44:33] INFO (NNITensorboardManager) All tensorboard task stopped.
[2022-04-25 20:44:33] INFO (NNITensorboardManager) Tensorboard manager stopped.
[2022-04-25 20:45:19] INFO (NNIDataStore) Datastore initialization done
[2022-04-25 20:45:19] INFO (RestServer) RestServer start
[2022-04-25 20:45:19] WARNING (NNITensorboardManager) Tensorboard may not installed, if you want to use tensorboard, please check if tensorboard installed.
[2022-04-25 20:45:19] INFO (RestServer) RestServer base port is 8080
[2022-04-25 20:45:19] INFO (main) Rest server listening on: http://0.0.0.0:8080
[2022-04-25 20:45:23] INFO (NNIManager) Resuming experiment: zwyfcrl7
[2022-04-25 20:45:23] INFO (NNIManager) Setup training service...
[2022-04-25 20:45:23] INFO (LocalTrainingService) Construct local machine training service.
[2022-04-25 20:45:23] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: VIEWED
[2022-04-25 20:46:18] ERROR (NNIRestHandler) Error: File not found: C:\Users\88305\nni-experiments\zwyfcrl7\trials\bSGpz\trial.log
at LocalTrainingService.getTrialFile (F:\software\anaconda3\lib\site-packages\nni_node\training_service\local\localTrainingService.js:146:19)
at NNIManager.getTrialFile (F:\software\anaconda3\lib\site-packages\nni_node\core\nnimanager.js:351:37)
at F:\software\anaconda3\lib\site-packages\nni_node\rest_server\restHandler.js:287:29
at Layer.handle [as handle_request] (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at next (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\route.js:137:13)
at Route.dispatch (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\route.js:112:3)
at Layer.handle [as handle_request] (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\index.js:281:22
at param (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\index.js:360:14)
at param (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\index.js:371:14)
[2022-04-25 21:05:14] ERROR (NNIRestHandler) Error: File not found: C:\Users\88305\nni-experiments\zwyfcrl7\trials\bSGpz\trial.log
at LocalTrainingService.getTrialFile (F:\software\anaconda3\lib\site-packages\nni_node\training_service\local\localTrainingService.js:146:19)
at NNIManager.getTrialFile (F:\software\anaconda3\lib\site-packages\nni_node\core\nnimanager.js:351:37)
at F:\software\anaconda3\lib\site-packages\nni_node\rest_server\restHandler.js:287:29
at Layer.handle [as handle_request] (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at next (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\route.js:137:13)
at Route.dispatch (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\route.js:112:3)
at Layer.handle [as handle_request] (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\index.js:281:22
at param (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\index.js:360:14)
at param (F:\software\anaconda3\lib\site-packages\nni_node\node_modules\express\lib\router\index.js:371:14)
- dispatcher.log:
- [2022-04-25 20:43:25] INFO (nni.experiment/MainThread) Creating experiment, Experiment ID: [36mzwyfcrl7[0m
[2022-04-25 20:43:25] INFO (nni.experiment/MainThread) Starting web server...
[2022-04-25 20:43:27] INFO (nni.experiment/MainThread) Setting up...
[2022-04-25 20:43:30] INFO (nni.experiment/MainThread) Web UI URLs: [36mhttp://192.168.199.132:8080 http://169.254.226.180:8080 http://127.0.0.1:8080[0m
[2022-04-25 20:43:30] INFO (nni.tuner.tpe/MainThread) Using random seed 28781843
[2022-04-25 20:43:30] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2022-04-25 20:44:30] INFO (nni.experiment/MainThread) Stopping experiment, please wait...
[2022-04-25 20:44:32] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher exiting...
[2022-04-25 20:44:33] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher terminiated
[2022-04-25 20:45:19] INFO (nni.experiment/MainThread) Creating experiment, Experiment ID: [36mzwyfcrl7[0m
[2022-04-25 20:45:19] INFO (nni.experiment/MainThread) Starting web server...
[2022-04-25 20:45:21] INFO (nni.experiment/MainThread) Setting up...
[2022-04-25 20:45:23] INFO (nni.experiment/MainThread) Web UI URLs: [36mhttp://192.168.199.132:8080 http://169.254.226.180:8080 http://127.0.0.1:8080[0m
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/Nnictl.md#nnictl%20log%20stdout
-->
**How to reproduce it?**: | closed | 2022-04-25T13:08:31Z | 2022-04-26T04:51:06Z | https://github.com/microsoft/nni/issues/4805 | [] | xixilllll | 2 |
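The `can't open file 'model.py'` message suggests each trial's working directory does not contain `model.py`; in NNI this is typically governed by the experiment's `trialCodeDirectory` setting. A small sanity check along these lines may help (a sketch; the file name and config key are assumed from the log, not verified against this setup):

```python
# Sanity check: fail fast with a clear message when the trial code directory
# does not actually contain the training script.
import os

def check_trial_code_dir(code_dir, script="model.py"):
    path = os.path.join(code_dir, script)
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"{script} not found in {code_dir}; point NNI's trialCodeDirectory "
            "at the folder that contains it"
        )
    return path
```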
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,008 | New straightforward way to manage notebooks on Kubernetes | Hello,
I've already posted on Jupyter discourse https://discourse.jupyter.org/t/a-new-and-simple-way-to-manage-notebooks-on-kubernetes/17655, but I didn't get the feedback I was expecting :)
I'd like to draw your attention to [notebook-on-kube](https://github.com/machine424/notebook-on-kube), a new tool I open sourced that provides a new way of managing notebooks with the bare minimum of extra/new code, re-using existing Kubernetes features (RBAC, Helm, ingress-nginx etc.)
I talk more about my approach in the README https://github.com/machine424/notebook-on-kube#how-and-why.
Give it a try and let me know what you think about it.
| closed | 2023-01-30T19:54:23Z | 2023-01-30T21:09:10Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3008 | [
"enhancement"
] | machine424 | 4 |
supabase/supabase-py | flask | 69 | Which version is most up-to-date? | **Describe the bug**
pip by default now installs version 0.1.2, but the newest version listed is 0.0.3.
https://pypi.org/project/supabase/#history

**To Reproduce**
Steps to reproduce the behavior:
1. `pip install supabase`
**Expected behavior**
It should install latest version
**Screenshots**
Above
**Desktop (please complete the following information):**
n/a
**Smartphone (please complete the following information):**
n/a
**Additional context**
n/a | closed | 2021-10-17T00:59:24Z | 2022-02-09T17:32:35Z | https://github.com/supabase/supabase-py/issues/69 | [] | karolzlot | 5 |
noirbizarre/flask-restplus | flask | 792 | How to protect swagger-UI ? | My default path for swagger-UI is `/api/doc`. So, anyone who hit this url in browser, they can easily access this swagger-UI !
So, how can I protect this specific URL only? Please suggest all possible ways. | closed | 2020-03-30T21:32:45Z | 2020-04-09T18:07:55Z | https://github.com/noirbizarre/flask-restplus/issues/792 | [] | shivangpatel | 2 |
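One possible approach (a sketch in plain Flask; the `/api/doc` route below merely stands in for the one flask-restplus registers, and the credentials are illustrative) is to gate the docs path with HTTP Basic auth in a `before_request` hook:

```python
# Sketch: protect the Swagger UI path with HTTP Basic auth before any
# request is handled.  Any auth check can replace `authorized`.
from flask import Flask, Response, request

app = Flask(__name__)

def authorized(auth):
    return (auth is not None
            and auth.username == "admin" and auth.password == "secret")

@app.before_request
def protect_docs():
    if request.path.startswith("/api/doc") and not authorized(request.authorization):
        return Response("Unauthorized", 401,
                        {"WWW-Authenticate": 'Basic realm="docs"'})

@app.route("/api/doc")
def docs():  # stand-in for the route flask-restplus registers
    return "swagger ui"
```

In a real setup the Swagger UI's static assets would need guarding as well.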
modoboa/modoboa | django | 3,040 | Updating user fails | # Impacted versions
maybe related to #3037 as it is exactly the same error message
# Steps to reproduce
# Current behavior
When updating a user with about 50 aliases, the update seems to go through on new-admin, although the server responds with:
```
Status Code: 400
Response Payload: {"aliases":["An alias with this name already exists."]}
```
# Expected behavior
1. propagate errors on the frontend
2. don't throw this error
You can get more in depth infos via DM on Discord as I don't want to leak sensitive information
| closed | 2023-08-06T12:14:20Z | 2023-08-28T12:26:37Z | https://github.com/modoboa/modoboa/issues/3040 | [] | dorsax | 0 |
donnemartin/system-design-primer | python | 366 | Print is fixed with one language. | When I try to print the README file, the print preview shows the Chinese version instead of the displayed language | closed | 2020-02-19T08:49:08Z | 2020-07-04T16:00:39Z | https://github.com/donnemartin/system-design-primer/issues/366 | [
"question"
] | shireefadel | 1 |
keras-team/keras | tensorflow | 20,251 | Allow to pass **kwargs to optimizers.get | https://github.com/keras-team/keras/blob/f6c4ac55692c132cd16211f4877fac6dbeead749/keras/src/optimizers/__init__.py#L72-L97
When dynamically getting an optimizer by using tf.keras.optimizers.get(<OPT_NAME>), it would be extremely useful if one could also pass extra arguments to the function, so that the optimizer gets initialized properly. See below a test example of the behavior I would like to see:
```python
optimizer_name = 'adam'
opt_params = {'learning_rate': 3e-3, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': True}
import tensorflow as tf
opt = tf.keras.optimizers.get(optimizer_name, **opt_params)
assert(opt.learning_rate == opt_params['learning_rate']), "Opt learning rate not being correctly initialized"
``` | closed | 2024-09-11T20:21:18Z | 2024-09-11T22:31:30Z | https://github.com/keras-team/keras/issues/20251 | [
"type:feature",
"keras-team-review-pending"
] | manuelblancovalentin | 1 |
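Until `**kwargs` support exists, one pattern that already works with the `get` code shown above is passing a `{'class_name', 'config'}` identifier dict, which `get` forwards to `deserialize` (a sketch; behavior assumed from the snippet at the top of this issue):

```python
# Sketch: pass a serialized identifier instead of extra kwargs --
# optimizers.get already forwards dict identifiers to deserialize().
import tensorflow as tf

opt_params = {"learning_rate": 3e-3, "beta_1": 0.9, "beta_2": 0.999,
              "epsilon": 1e-07, "amsgrad": True}
opt = tf.keras.optimizers.get({"class_name": "Adam", "config": opt_params})
```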
modelscope/modelscope | nlp | 754 | modelscope - WARNING - Download interval is too small | I am trying to download Qwen/Qwen-VL and getting this error
Here is the code:
```
# imports needed by this snippet
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

DEFAULT_CKPT_PATH = 'Qwen/Qwen-VL'

def _load_model_tokenizer(args):
    tokenizer = AutoTokenizer.from_pretrained(
        args.checkpoint_path, trust_remote_code=True, resume_download=True,
    )

    if args.cpu_only:
        device_map = "cpu"
    else:
        device_map = "cuda"

    model = AutoModelForCausalLM.from_pretrained(
        args.checkpoint_path,
        device_map=device_map,
        trust_remote_code=True,
        resume_download=True,
    ).eval()
    model.generation_config = GenerationConfig.from_pretrained(
        args.checkpoint_path, trust_remote_code=True, resume_download=True,
    )
    return model, tokenizer
```
Here is the error:
```
(venv) G:\Qwen-VL\Qwen-VL>python web_demo_mm.py
2024-02-06 05:58:10,509 - modelscope - INFO - PyTorch version 2.2.0+cu121 Found.
2024-02-06 05:58:10,510 - modelscope - INFO - Loading ast index from C:\Users\King\.cache\modelscope\ast_indexer
2024-02-06 05:58:10,511 - modelscope - INFO - No valid ast index found from C:\Users\King\.cache\modelscope\ast_indexer, generating ast index from prebuilt!
2024-02-06 05:58:10,565 - modelscope - INFO - Loading done! Current index file version is 1.12.0, with md5 3c01d591cf6305a9cce371917cc97dc4 and a total number of 964 components indexed
2024-02-06 05:58:13,237 - modelscope - WARNING - Model revision not specified, use revision: v1.0.3
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1.16k/1.16k [00:00<?, ?B/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1.13k/1.13k [00:00<?, ?B/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2.04k/2.04k [00:00<?, ?B/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████| 485k/485k [00:01<00:00, 362kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████| 242k/242k [00:01<00:00, 205kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2.10k/2.10k [00:00<?, ?B/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 5.96k/5.96k [00:00<00:00, 110kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 6.83k/6.83k [00:00<00:00, 126kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 6.03k/6.03k [00:00<00:00, 111kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 5.26k/5.26k [00:00<?, ?B/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 7.02k/7.02k [00:00<00:00, 129kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 218/218 [00:00<?, ?B/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 6.74k/6.74k [00:00<00:00, 123kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████| 107k/107k [00:00<00:00, 120kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 43.2k/43.2k [00:00<00:00, 89.5kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 2.44M/2.44M [00:01<00:00, 1.66MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 14.5k/14.5k [00:00<00:00, 52.1kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 92.4k/92.4k [00:00<00:00, 148kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 23.6k/23.6k [00:00<00:00, 91.5kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 10.0M/10.0M [00:02<00:00, 4.44MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 21.0k/21.0k [00:00<00:00, 62.8kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 173/173 [00:00<?, ?B/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████| 14.2k/14.2k [00:00<00:00, 51.3kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 8.31k/8.31k [00:00<00:00, 153kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████| 10.9k/10.9k [00:00<00:00, 184kB/s]
2024-02-06 05:58:55,988 - modelscope - WARNING - Download interval is too small, use local cache
Traceback (most recent call last):
File "G:\Qwen-VL\Qwen-VL\web_demo_mm.py", line 246, in <module>
main()
File "G:\Qwen-VL\Qwen-VL\web_demo_mm.py", line 240, in main
model, tokenizer = _load_model_tokenizer(args)
File "G:\Qwen-VL\Qwen-VL\web_demo_mm.py", line 55, in _load_model_tokenizer
model = AutoModelForCausalLM.from_pretrained(
File "G:\Qwen-VL\Qwen-VL\venv\lib\site-packages\modelscope\utils\hf_util.py", line 111, in from_pretrained
module_obj = module_class.from_pretrained(model_dir, *model_args,
File "G:\Qwen-VL\Qwen-VL\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
return model_class.from_pretrained(
File "G:\Qwen-VL\Qwen-VL\venv\lib\site-packages\modelscope\utils\hf_util.py", line 74, in from_pretrained
return ori_from_pretrained(cls, model_dir, *model_args, **kwargs)
File "G:\Qwen-VL\Qwen-VL\venv\lib\site-packages\transformers\modeling_utils.py", line 3338, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:\Users\King\.cache\modelscope\hub\Qwen\Qwen-VL.
```
Here is the pip freeze output:
```
(venv) G:\Qwen-VL\Qwen-VL>pip freeze
absl-py==2.1.0
accelerate==0.26.1
addict==2.4.0
aiofiles==23.2.1
aiohttp==3.9.3
aiosignal==1.3.1
aliyun-python-sdk-core==2.14.0
aliyun-python-sdk-kms==2.16.2
altair==5.2.0
annotated-types==0.6.0
anyio==4.2.0
async-timeout==4.0.3
attrs==23.2.0
bitsandbytes @ https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
cachetools==5.3.2
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
contourpy==1.2.0
crcmod==1.7
cryptography==42.0.2
cycler==0.12.1
datasets==2.16.1
deepspeed @ https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/deepspeed-0.11.2_cuda121-cp310-cp310-win_amd64.whl
dill==0.3.7
einops==0.7.0
exceptiongroup==1.2.0
fastapi==0.109.2
ffmpy==0.3.1
filelock==3.13.1
fonttools==4.47.2
frozenlist==1.4.1
fsspec==2023.10.0
gast==0.5.4
google-auth==2.27.0
google-auth-oauthlib==1.2.0
gradio==4.16.0
gradio_client==0.8.1
grpcio==1.60.1
h11==0.14.0
hjson==3.1.0
httpcore==1.0.2
httpx==0.26.0
huggingface-hub==0.20.3
idna==3.6
importlib-metadata==7.0.1
importlib-resources==6.1.1
Jinja2==3.1.3
jmespath==0.10.0
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
kiwisolver==1.4.5
Markdown==3.5.2
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.8.2
mdurl==0.1.2
modelscope==1.12.0
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.15
networkx==3.2.1
ninja==1.11.1.1
numpy==1.26.4
oauthlib==3.2.2
orjson==3.9.13
oss2==2.18.4
packaging==23.2
pandas==2.2.0
pillow==10.2.0
platformdirs==4.2.0
protobuf==4.25.2
psutil==5.9.8
py-cpuinfo==9.0.0
pyarrow==15.0.0
pyarrow-hotfix==0.6
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.21
pycryptodome==3.20.0
pydantic==2.6.1
pydantic_core==2.16.2
pydub==0.25.1
Pygments==2.17.2
pynvml==11.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
python-multipart==0.0.7
pytz==2024.1
PyYAML==6.0.1
referencing==0.33.0
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rich==13.7.0
rpds-py==0.17.1
rsa==4.9
ruff==0.2.1
safetensors==0.4.2
scipy==1.12.0
semantic-version==2.10.0
shellingham==1.5.4
simplejson==3.19.2
six==1.16.0
sniffio==1.3.0
sortedcontainers==2.4.0
starlette==0.36.3
sympy==1.12
tensorboard==2.15.1
tensorboard-data-server==0.7.2
tiktoken==0.5.2
tokenizers==0.15.1
tomli==2.0.1
tomlkit==0.12.0
toolz==0.12.1
torch==2.2.0+cu121
torchaudio==2.2.0+cu121
torchvision==0.17.0+cu121
tqdm==4.66.1
transformers==4.37.2
transformers-stream-generator==0.0.4
triton @ https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/triton-2.1.0-cp310-cp310-win_amd64.whl
typer==0.9.0
typing_extensions==4.9.0
tzdata==2023.4
urllib3==2.2.0
uvicorn==0.27.0.post1
websockets==11.0.3
Werkzeug==3.0.1
xformers==0.0.24
xxhash==3.4.1
yapf==0.40.2
yarl==1.9.4
zipp==3.17.0
(venv) G:\Qwen-VL\Qwen-VL>
``` | closed | 2024-02-06T03:01:11Z | 2024-03-04T02:53:11Z | https://github.com/modelscope/modelscope/issues/754 | [] | FurkanGozukara | 3 |
axnsan12/drf-yasg | django | 745 | Serve swagger static docs using generated spec file | Hi, I was wondering whether I am able to generate static docs inside my project using the spec JSON file that is generated by this module. Or do I have to use an external module like swagger-codegen to achieve this? | open | 2021-10-04T10:06:12Z | 2025-03-07T12:11:13Z | https://github.com/axnsan12/drf-yasg/issues/745 | [
"triage"
] | quangtudng | 0 |
GibbsConsulting/django-plotly-dash | plotly | 215 | Enable use in Django 3.0 and later | Django 3.0 has been released, and contains some changes that are not yet compatible with `django-plotly-dash`.
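One of those changes is the clickjacking default: Django 3.0 switched the `X_FRAME_OPTIONS` setting's default from `SAMEORIGIN` to `DENY`, so views are no longer embeddable in iframes out of the box. A hedged settings-level workaround (weigh it against your own security requirements):

```python
# settings.py -- restore the pre-Django-3.0 behaviour so same-origin
# iframes can embed views again (review your clickjacking exposure first)
X_FRAME_OPTIONS = "SAMEORIGIN"
```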
For starters, the `X_FRAME_OPTIONS` setting now defaults to `DENY`, which prevents the serving of content into iframes. This is probably the cause of #214 | closed | 2019-12-11T13:26:19Z | 2021-07-19T22:08:39Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/215 | [
"enhancement"
] | GibbsConsulting | 11 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 398 | Add explanation of what happens to lone query labels in AccuracyCalculator | If a query label doesn't appear in the reference set, then it's impossible for that label to have non-zero accuracy. Zero accuracy for this label doesn't indicate anything about the quality of the embedding space, so it is excluded from the calculation. | closed | 2021-12-11T20:36:24Z | 2021-12-28T04:28:19Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/398 | [
"documentation",
"fixed in dev branch"
] | KevinMusgrave | 0 |
Avaiga/taipy | data-visualization | 2,459 | [🐛 BUG] Issue with width of selectors when inline=True | ### What went wrong? 🤔
The width of the selector is not equal to 120px even if specified.

We also see that the two boxes are not vertically aligned.
### Expected Behavior
The width should be 120px on the app.
### Steps to Reproduce Issue
```python
from taipy import Gui
import taipy.gui.builder as tgb
first_name = ""
statuses = ["Active", "Inactive", "Pending"]
status = "Active"
with tgb.Page() as page:
tgb.input(
"{first_name}",
label="First Name",
width="120px",
inline=True,
)
tgb.selector(
"{status}",
lov=statuses,
dropdown=True,
label="Status",
width="120px",
inline=True,
)
if __name__ == "__main__":
gui = Gui(page)
gui.run(title="Taipy Layout Test", debug=True)
```

### Version of Taipy
4.0.2
### Additional Context
```bash
```
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2025-02-24T11:16:09Z | 2025-03-13T17:54:41Z | https://github.com/Avaiga/taipy/issues/2459 | [
"🖰 GUI",
"💥Malfunction",
"🆘 Help wanted",
"🟨 Priority: Medium"
] | FlorianJacta | 0 |
iperov/DeepFaceLive | machine-learning | 168 | hi,is there a startup parameter description file here? | hi, when i use "python main.py run DeepfaceLive --xx xx" to startup DeepFaceLive, I don't know what startup parameters there are,so is there a startup parameter description file here? | closed | 2023-05-30T14:37:20Z | 2023-05-30T14:46:20Z | https://github.com/iperov/DeepFaceLive/issues/168 | [] | zhaojigang | 1 |
Kanaries/pygwalker | pandas | 526 | [BUG] pygwalker bug report - interactive panel doesn't show up | **Describe the bug**
A clear and concise description of what the bug is.
```python
import numpy as np
import pandas as pd
import pygwalker as pyg
df = pd.read_csv('data.csv')
chart = pyg.walk(df)
```
After installing and invoking pygwalker, the interactive panel doesn't show up, showing the following error message:
[Open Browser Console for more detailed log - Double click to close this message]
Failed to load model class 'BoxModel' from module '@jupyter-widgets/controls'
Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is
at f.loadClass (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.fe2572ece3b7955c89bb.js?v=fe2572ece3b7955c89bb:1:75054)
at f.loadModelClass (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.0a90bd910629a565bb7e.js?v=0a90bd910629a565bb7e:1:10728)
at f._make_model (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.0a90bd910629a565bb7e.js?v=0a90bd910629a565bb7e:1:7516)
at f.new_model (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.0a90bd910629a565bb7e.js?v=0a90bd910629a565bb7e:1:5136)
at f.handle_comm_open (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.0a90bd910629a565bb7e.js?v=0a90bd910629a565bb7e:1:3893)
at _handleCommOpen (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.fe2572ece3b7955c89bb.js?v=fe2572ece3b7955c89bb:1:73470)
at v._handleCommOpen (http://localhost:8888/static/notebook/3676.bundle.js:1:30808)
at async v._handleMessage (http://localhost:8888/static/notebook/3676.bundle.js:1:32702)
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
To open the interactive graphic interface
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Versions**
- pygwalker version: latest
- python version: 3.9.19
- browser: edge latest
- jupyter: 1.0.0
- jupyterlab: 4.0.7
- jupyterlab-widgets: 1.1.7
- ipywidgets; 7.7.5
**Additional context**
Add any other context about the problem here.
Problem started when updated to latest Anaconda version
| closed | 2024-04-12T19:00:05Z | 2024-04-26T11:10:01Z | https://github.com/Kanaries/pygwalker/issues/526 | [
"bug"
] | PabloJMoreno | 2 |
wandb/wandb | data-science | 8,953 | [Bug]: parameter sweep fails when run using the %run command in ipython | ### Describe the bug
<!--- Describe your issue here --->
The following code works fine:
```
import pytorch_lightning as pl
import torch.nn.functional as F
import torch
from pytorch_lightning.loggers import WandbLogger
import wandb
from torch.utils.data import DataLoader,Dataset
import os
import torch
from torch.utils.data import Dataset
class MyDataset(Dataset):
def __init__(self, data, labels):
self.data = data
self.labels = labels
def __len__(self):
return len(self.data)
def __getitem__(self, index):
x = self.data[index]
y = self.labels[index]
return x, y
class MyDataModule(pl.LightningDataModule):
def __init__(self, config):
super().__init__()
N=10000
x = torch.unsqueeze(torch.linspace(-1, 1, N), dim=1)
y = x.pow(2) + config.noise*torch.rand(x.size())
self.my_dataset = MyDataset(x,y)
print(config.noise)
def train_dataloader(self):
return DataLoader(self.my_dataset, batch_size=100,shuffle=True, num_workers=2)
def val_dataloader(self):
return DataLoader(self.my_dataset,batch_size=100,shuffle=False)
class Net(torch.nn.Module):
def __init__(self, n_feature, n_hidden, n_output):
super(Net, self).__init__()
self.hidden = torch.nn.Linear(n_feature, n_hidden)
self.predict = torch.nn.Linear(n_hidden, n_output)
def forward(self, x):
x= F.relu(self.hidden(x))
x = self.predict(x)
return x
class MyLightningModule(pl.LightningModule):
def __init__(self, config): #n_hidden, lr):
super().__init__()
self.net = Net(n_feature=1, n_hidden=config.n_hidden, n_output=1)
self.lr = config.lr
def training_step(self, batch, batch_idx):
# training_step defines the train loop.
x, y = batch
y_hat = self.net(x)
loss = F.mse_loss(y_hat, x)
self.log('training_loss',loss,on_step=True, on_epoch=True,prog_bar=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.net(x)
loss = F.mse_loss(y_hat, x)
self.log('validation_loss',loss)
return loss
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.parameters(), lr=self.lr)
return optimizer
def train_model():
wandb.init(project="sweep")
config=wandb.config
wandb_logger = WandbLogger()
data = MyDataModule(config)
module = MyLightningModule(config)
wandb_logger.watch(module.net)
trainer = pl.Trainer(accelerator='gpu', devices=1, max_epochs=10,
default_root_dir="./lightning-example", logger=wandb_logger)
trainer.fit(module, data)
if __name__ == '__main__':
sweep_config = {
'method': 'random',
'name': 'first_sweep',
'metric': {
'goal': 'minimize',
'name': 'validation_loss'
},
'parameters': {
'n_hidden': {'values': [2,3,5,10]},
'lr': {'max': 1.0, 'min': 0.0001},
'noise': {'max': 1.0, 'min': 0.}
}
}
sweep_id=wandb.sweep(sweep_config, project="test_sweep")
wandb.agent(sweep_id=sweep_id, function=train_model, count=5)
```
However, changing the last lines from
```
if __name__ == '__main__':
sweep_config = {
'method': 'random',
'name': 'first_sweep',
'metric': {
'goal': 'minimize',
'name': 'validation_loss'
},
'parameters': {
'n_hidden': {'values': [2,3,5,10]},
'lr': {'max': 1.0, 'min': 0.0001},
'noise': {'max': 1.0, 'min': 0.}
}
}
sweep_id=wandb.sweep(sweep_config, project="test_sweep")
wandb.agent(sweep_id=sweep_id, function=train_model, count=5)
```
to
```
sweep_config = {
'method': 'random',
'name': 'first_sweep',
'metric': {
'goal': 'minimize',
'name': 'validation_loss'
},
'parameters': {
'n_hidden': {'values': [2,3,5,10]},
'lr': {'max': 1.0, 'min': 0.0001},
'noise': {'max': 1.0, 'min': 0.}
}
}
sweep_id=wandb.sweep(sweep_config, project="test_sweep")
wandb.agent(sweep_id=sweep_id, function=train_model, count=5)
```
when the script is run in IPython with the command `%run myscript.py` results in the following error:
```
wandb: ERROR Run -------- errored:
wandb: ERROR Traceback (most recent call last):
wandb: ERROR File "---/lib/python3.12/site-packages/wandb/agents/pyagent.py", line 300, in _run_job
wandb: ERROR wandb.sdk.wandb_setup._setup(_reset=True)
wandb: ERROR File "---/lib/python3.12/site-packages/wandb/sdk/wandb_setup.py", line 323, in _setup
wandb: ERROR teardown()
wandb: ERROR File "---/lib/python3.12/site-packages/wandb/sdk/wandb_setup.py", line 405, in teardown
wandb: ERROR setup_instance._teardown(exit_code=exit_code)
wandb: ERROR File "---/lib/python3.12/site-packages/wandb/sdk/wandb_setup.py", line 276, in _teardown
wandb: ERROR internal_exit_code = self._connection.teardown(exit_code or 0)
wandb: ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wandb: ERROR File "---/lib/python3.12/site-packages/wandb/sdk/lib/service_connection.py", line 197, in teardown
wandb: ERROR raise WandbServiceNotOwnedError(
wandb: ERROR wandb.sdk.lib.service_connection.WandbServiceNotOwnedError: Cannot tear down service started by different process
```
Of course there is the solution just mentioned, but I don't understand this behavior. Python 3.12.7, IPython 8.29.0, wandb 0.18.7 | open | 2024-11-27T04:06:15Z | 2024-12-18T17:42:42Z | https://github.com/wandb/wandb/issues/8953 | [
"ty:bug",
"c:sweeps"
] | lrast | 5 |
autokey/autokey | automation | 709 | Changes to autokey.log | ### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Which Linux distribution did you use?
N/A
### Which AutoKey GUI did you use?
_No response_
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
The AutoKey log lives in the AutoKey configuration directory tree where it is primarily unnoticed.
The existence of this log needs to be justified. I have used AutoKey for decades and have never used this log. Why is it there? What purpose does it serve?
If I need a trace, I run one.
If there is no good answer to this, the log should be eliminated.
This log grows forever. If it is good for something, it needs to have logrotate applied to it so the current version is relatively small and fresh and older versions are periodically removed.
### Can the issue be reproduced?
_No response_
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
This log should be used or eliminated. If used, then its size should be limited via log rotation and something should be added to our wiki describing the log and how to use it.
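If the log is kept, a logrotate drop-in along these lines would bound its size (the path below is an assumption, adjust it to wherever the AutoKey configuration directory actually lives; `copytruncate` matters because AutoKey keeps the file open while running):

```conf
# /etc/logrotate.d/autokey   (log path is an assumption)
/home/*/.config/autokey/autokey.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```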
### What actually happened?
The log is relatively invisible and unused and grows forever wasting megabytes of space.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
The log can be manually deleted or otherwise edited from outside of AutoKey with no consequences (at least while AutoKey isn't actively in the process of writing to it). | open | 2022-06-28T11:25:29Z | 2024-12-23T19:02:30Z | https://github.com/autokey/autokey/issues/709 | [
"enhancement",
"documentation",
"low-priority"
] | josephj11 | 8 |
roboflow/supervision | deep-learning | 1,641 | DetectionDataset merge fails when class name contains capital letter | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Hello, thanks for this great library! I'm facing an issue while trying to merge 2 datasets when any of the class names contain a capital letter.
Error:
```
ValueError: Class Animal not found in target classes. source_classes must be a subset of target_classes.
```
The issue stems from the `merge_class_lists` function at https://github.com/roboflow/supervision/blob/37cacec70443a2c28ea6642f6bc54e6c5151c111/supervision/dataset/utils.py#L53
where the class names are converted to lower-case, but `build_class_index_mapping` keeps the class names as they are. For my use case, I was able to get around it by removing the lower-case conversion.
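A stripped-down, pure-Python reproduction of that mismatch (simplified stand-ins for the two helpers, not the library's actual code):

```python
def merge_class_lists(class_lists):
    # Simplified: merges class names, lowercasing them as described above.
    unique = set()
    for class_list in class_lists:
        for name in class_list:
            unique.add(name.lower())
    return sorted(unique)

def build_class_index_mapping(source_classes, target_classes):
    # Simplified: case-sensitive lookup, mirroring the reported behaviour.
    mapping = {}
    for i, name in enumerate(source_classes):
        if name not in target_classes:
            raise ValueError(
                f"Class {name} not found in target classes. "
                "source_classes must be a subset of target_classes."
            )
        mapping[i] = target_classes.index(name)
    return mapping

merged = merge_class_lists([["Animal"], ["person"]])  # -> ['animal', 'person']
try:
    build_class_index_mapping(["Animal"], merged)     # 'Animal' != 'animal'
    error = None
except ValueError as exc:
    error = str(exc)
```

Lowercasing in one helper but not the other makes `"Animal"` fail the membership check even though `"animal"` is present, which matches the reported error.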
### Environment
- Supervision 0.24.0
- OS: Windows 10
- Python 3.10.14
### Minimal Reproducible Example
Example: I downloaded 2 roboflow datasets - https://universe.roboflow.com/cvlab-6un5p/cv-lab-kpdek and https://universe.roboflow.com/padidala-indhu-e1dhl/animals-gzsxr and tried to merge them
```python
import supervision as sv
def main():
ds1 = sv.DetectionDataset.from_coco("data/CV-LAB.v1i.coco/train", "data/CV-LAB.v1i.coco/train/_annotations.coco.json")
ds2 = sv.DetectionDataset.from_coco("data/Animals.v1i.coco/train", "data/Animals.v1i.coco/train/_annotations.coco.json")
sv.DetectionDataset.merge([ds1, ds2])
if __name__ == '__main__':
main()
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-11-01T08:49:54Z | 2024-11-02T06:16:37Z | https://github.com/roboflow/supervision/issues/1641 | [
"bug"
] | Suhas-G | 5 |
pywinauto/pywinauto | automation | 1,285 | pywinauto cannot get the latest identifier when working in a dynamic application | ## Expected Behavior
When working in a single application and navigating to a different tab inside it, pywinauto should locate the identifier correctly.
## Actual Behavior
When navigating to a different tab inside the application, pywinauto cannot locate the latest identifier; it seems to always save the previous identifier, or the identifier from the first time.
| open | 2023-02-28T21:58:18Z | 2024-03-12T10:29:25Z | https://github.com/pywinauto/pywinauto/issues/1285 | [] | xinglin2016 | 1 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 234 | After scrolling the page with SendKeys('{PageDown}') or DragDrop, the controls below cannot be handled; the error "Can not move cursor. TextControl's BoundingRectangle is (0,0,0,0)[0x0]. SearchProperties: {Name: 'XXX', ControlType: TextControl}" is reported | pyWindow = auto.WindowControl(searchDepth=1, Name='aaa')
pyWindow.SendKeys('{PageDown}')
or this: pyWindow.GroupControl().DragDrop(464, 500, 464, 680, moveSpeed=5, waitTime=1)
pyWindow.TextControl(Name='bbb').Click()
After execution, the console reports:
<module> -> Can not move cursor. TextControl's BoundingRectangle is (0,0,0,0)[0x0]. SearchProperties: {Name: 'bbb', ControlType: TextControl} Has anyone encountered a similar problem? | open | 2023-02-06T01:23:14Z | 2024-09-19T07:30:25Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/234 | [] | UshioYu | 4 |
pytest-dev/pytest-mock | pytest | 117 | mocker.patch on function | Hi,
When I want to mock a function in a module, the mock not work if I use this syntax :
from mypkg.mymodule import myfunction
+ myfunction()
but work with this one :
from mypkg import mymodule
+mymodule.myfunction()
Any idea ?
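The usual cause: `from mypkg.mymodule import myfunction` binds the function into the *importing* module's namespace, so the patch has to target the name where it is *used*, not where it is defined. A self-contained sketch with `unittest.mock` (the in-memory `mypkg` package and its `consumer` module below are stand-ins for your real code):

```python
import sys
import types
from unittest import mock

# Build a throwaway package in memory to stand in for a real `mypkg`.
pkg = types.ModuleType("mypkg")
mymodule = types.ModuleType("mypkg.mymodule")
consumer = types.ModuleType("mypkg.consumer")

def myfunction():
    return "real"

mymodule.myfunction = myfunction
consumer.myfunction = myfunction  # effect of `from mypkg.mymodule import myfunction`
pkg.mymodule, pkg.consumer = mymodule, consumer
sys.modules.update({"mypkg": pkg, "mypkg.mymodule": mymodule,
                    "mypkg.consumer": consumer})

# Patching where the function is *defined* does not touch the imported name...
with mock.patch("mypkg.mymodule.myfunction", return_value="fake"):
    assert consumer.myfunction() == "real"

# ...patching where it is *used* does.
with mock.patch("mypkg.consumer.myfunction", return_value="fake"):
    assert consumer.myfunction() == "fake"
```

With pytest-mock the rule is identical: give `mocker.patch` the path of the module that *calls* the function, e.g. `mocker.patch('mypkg.consumer.myfunction')`, where `consumer` stands for whichever of your modules uses it.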
| closed | 2018-06-13T07:33:30Z | 2020-11-06T15:57:33Z | https://github.com/pytest-dev/pytest-mock/issues/117 | [] | ulyssejdv | 3 |
PaddlePaddle/models | computer-vision | 4,761 | Are there any examples of using the I3D or TSN model to extract RGB and optical-flow features from videos? | Are there any examples of using the I3D or TSN model to extract RGB and optical-flow features from videos? | open | 2020-07-20T02:14:17Z | 2024-02-26T05:10:52Z | https://github.com/PaddlePaddle/models/issues/4761 | [] | liu824 | 2 |
amidaware/tacticalrmm | django | 1,592 | Feature Request - Ability to run tasks on first audit | **Is your feature request related to a problem? Please describe.**
It would be nice to be able to set up onboarding tasks for clients, each containing multiple client-specific software install scripts.
**Describe the solution you'd like**
The ability to assign tasks at the client level, and also to assign global tasks and client-level tasks with a "Run on first audit" schedule (i.e., when the agent first checks into the dashboard).
| closed | 2023-08-09T10:17:20Z | 2023-08-09T12:41:12Z | https://github.com/amidaware/tacticalrmm/issues/1592 | [] | mearkats | 1 |
aleju/imgaug | deep-learning | 11 | loading augmenters in python scripts vs in jupyter notebooks | I came across this error just now. When loading the augmenters module in Jupyter notebooks, the following line works
`import augmenters as iaa`
However, the same line fails if used in a script (in the same directory).
Any idea what might be happening? I think it's the (rather notorious) relative import system of Python that's to blame.
PS: I forgot to tell you. I've been testing all of it in Python 3.5, and the import troubles (earlier and now this) are the only problems I have encountered. | open | 2016-12-26T13:10:41Z | 2017-03-29T11:43:15Z | https://github.com/aleju/imgaug/issues/11 | [] | SarthakYadav | 3 |
rougier/from-python-to-numpy | numpy | 40 | In Introduction 2.1 Simple Example - most benefit is not from vectorization | In chapter "2.1 Simple example" you have an example of a "Vectorized approach" which leaves the impression that most performance benefits come from itertools.accumulate(). This is not true - the main speed gain comes from the use of random.choices() instead of random.randint() in the previous sample.
```
>>> import random
>>> from timeit import timeit
>>> # original example
... def random_walk(n):
... position = 0
... walk = [position]
... for i in range(n):
... position += 2*random.randint(0, 1)-1
... walk.append(position)
... return walk
...
>>>
>>> timeit("random_walk(n=10000)", number=100, globals=globals())
1.7758303454734055
>>> # lets use random.choices() instead
... def random_walk_with_choices(n):
... position = 0
... steps = random.choices([1,-1], k=n)
... walk = [position]
... for step in steps:
... position += step
... walk.append(position)
... return walk
...
>>> timeit("random_walk_with_choices(n=10000)", number=100, globals=globals())
0.39364298073223836
>>>
>>> # original random_walk_faster with accumulate
... def random_walk_faster(n=1000):
... from itertools import accumulate
... steps = random.choices([1,-1], k=n)
... return list(accumulate(steps))
...
>>> timeit("random_walk_faster(n=10000)", number=100, globals=globals())
0.264255993680635
```
It is clearly visible that accumulate has only a minor effect on the overall speed (which is mainly driven by itertools' native implementation).
As such, the example is clearly misleading.
Another minor bug with the example: random_walk() returns N+1 elements while random_walk_faster() returns N elements.
    >>> len(random_walk(1000))
    1001
    >>> len(random_walk_faster(1000))
    1000
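For reference, the fully vectorized version that the book builds toward drops the Python-level loop entirely; np.random.choice replaces random.choices and np.cumsum replaces itertools.accumulate (sketch below; exact timings will of course vary by machine):

```python
import numpy as np

def random_walk_fastest(n=1000):
    steps = np.random.choice([-1, +1], size=n)  # vectorized random.choices
    return np.cumsum(steps)                     # vectorized accumulate

walk = random_walk_fastest(n=10000)
assert len(walk) == 10000  # also returns N elements, not N+1
```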
| closed | 2017-01-10T12:39:14Z | 2017-01-11T09:10:03Z | https://github.com/rougier/from-python-to-numpy/issues/40 | [] | reidfaiv | 1 |
NVIDIA/pix2pixHD | computer-vision | 185 | Bug in NLayerDiscriminator: padw ceil instead of floor | Hey, I believe there's a minor bug in NLayerDiscriminator: padding is set as `padw = int(np.ceil((kw-1.0)/2))`, but it should be floor, not ceil (assuming you want to have same padding, similar to pix2pix).
As a result, the output of NLayerDiscriminator is a tiny bit (5 pixels or so) bigger than pix2pix patchgan, but MSEloss/BCEloss will average over all outputs anyways, so I guess it shouldn't matter much performance-wise.
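The size difference is easy to check with the standard convolution output-size formula `out = (n + 2p - k) // s + 1`, using the discriminator's usual `kw = 4`, stride 2 (minimal sketch):

```python
import math

def conv_out(n, k, s, p):
    # Standard convolution output-size formula.
    return (n + 2 * p - k) // s + 1

kw, stride, n = 4, 2, 256
pad_ceil = math.ceil((kw - 1) / 2)   # 2 -- what the current code computes
pad_floor = (kw - 1) // 2            # 1 -- "same"-style padding as in pix2pix

print(conv_out(n, kw, stride, pad_ceil))   # 129
print(conv_out(n, kw, stride, pad_floor))  # 128
```

So with ceil padding each layer's output is one pixel larger per spatial dimension than the pix2pix-style floor padding, which compounds over the stacked layers.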
EDIT: fixed it and made a pull request in case you want to change it.
Best regards,
Felix | open | 2020-03-08T21:19:40Z | 2020-03-08T21:30:39Z | https://github.com/NVIDIA/pix2pixHD/issues/185 | [] | fa9r | 0 |
sammchardy/python-binance | api | 1,530 | ERROR Unknown exception (sent 1000 (OK); then received 1000 (OK)) | Not really a bug, but every time a socket connection is closed, this message is logged. I have set up a 3rd-party monitoring service that monitors my logs, and as a result it generates a ton of notifications because of this "error" message.
Maybe it's better to treat this as a normal message instead of an error?
Or, if you think I am not using the socket correctly, please let me know; my usage of the socket is like this:
```python
# bm is BinanceSocketManager
async with bm.futures_user_socket() as socket:
while True:
res = await socket.recv()
finished = do_something(res)
if finished:
break
```
Thanks. | closed | 2024-12-26T10:38:28Z | 2024-12-30T10:46:15Z | https://github.com/sammchardy/python-binance/issues/1530 | [
"bug"
] | tsunamilx | 0 |
mwaskom/seaborn | data-visualization | 3,830 | Controlling zorder per group in scatterplot | I'm running into a situation where it would be nice to be able to set or override the zorder for different groups within a scatterplot, and it doesn't seem like there's an obvious way to do this (short of overlaying multiple scatterplots by hand and then adjusting all the elements after).
For example, I have a scatterplot with three distinct groups (call them A, B, and C), which I'm currently mapping to hue. It so happens that groups B and C are tightly clustered, while group A is both larger and more dispersed. As a result, the A group tends to visually crowd out the B and C groups. I can fudge this with transparency, but it's not a terribly satisfying solution.
If it was possible to set the zorder for each group, it would be easy to have the B and C groups draw above the A group and prevent crowding without relying on transparency. I tried to do this in a postprocessing step, but it appears that all scatterplot points are put into one collection object, so there's no direct way to update the zorder by group.
For right now, I can hack around this by sorting the dataframe prior to calling `scatterplot()` to produce my preferred draw order. I don't think this is guaranteed behavior though, so it doesn't strike me as a stable or recommended approach.
I expect that exposing zorder here might entail quite a bit of complexity under the hood - it seems like matplotlib pathcollections only allow zorder at the collection level, not individual elements. That probably means that the collection would need to be broken into multiple collections, and if zorder is mapped to a field with high cardinality (or continuous values) that could get unwieldy.
Still, it seems like it could be useful to provide some way to influence the draw order, so I figured I'd raise the issue here. | open | 2025-03-07T19:00:45Z | 2025-03-10T15:40:02Z | https://github.com/mwaskom/seaborn/issues/3830 | [] | bmcfee | 2 |
TheKevJames/coveralls-python | pytest | 34 | Remove extra dependencies to speed up installation | ## Installation takes ~10 seconds per build
Due to all coveralls-python's dependencies, installation takes around 10 seconds. Probably most of that time is spent on compiling PyYAML, only for parsing a config file most users will not be using. Also, installing `requests` (great lib!) for doing a [single post](https://github.com/coagulant/coveralls-python/blob/master/coveralls/api.py#L82) only seems somewhat overkill. Reducing the external dependencies allows for a faster install, and speeds up all builds.
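For that single POST, a stdlib-only sketch with `urllib.request` would be enough in principle (illustrative only: the real Coveralls upload is a multipart form, and the endpoint and payload below are simplified assumptions):

```python
import json
import urllib.request

def build_post(url, payload):
    # Build a JSON POST with the stdlib only -- no `requests` dependency.
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}, method="POST"
    )

# No network call is made here; urlopen(req) would actually send it.
req = build_post("https://coveralls.io/api/v1/jobs", {"repo_token": "..."})
assert req.get_method() == "POST"
```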
```
$ time pip install coveralls
Downloading/unpacking coveralls
Downloading coveralls-0.3.zip
Running setup.py egg_info for package coveralls
Downloading/unpacking PyYAML>=3.10 (from coveralls)
Downloading PyYAML-3.10.tar.gz (241kB): 241kB downloaded
Running setup.py egg_info for package PyYAML
Downloading/unpacking docopt>=0.6.1 (from coveralls)
Downloading docopt-0.6.1.tar.gz
Running setup.py egg_info for package docopt
Downloading/unpacking coverage>=3.6 (from coveralls)
Downloading coverage-3.7.tar.gz (283kB): 283kB downloaded
Running setup.py egg_info for package coverage
warning: no previously-included files matching '*.pyc' found anywhere in distribution
Downloading/unpacking requests>=1.0.0 (from coveralls)
Downloading requests-2.0.1.tar.gz (412kB): 412kB downloaded
Running setup.py egg_info for package requests
Downloading/unpacking sh>=1.08 (from coveralls)
Downloading sh-1.09.tar.gz
Running setup.py egg_info for package sh
Installing collected packages: coveralls, PyYAML, docopt, coverage, requests, sh
Running setup.py install for coveralls
Installing coveralls script to /home/travis/virtualenv/python2.6/bin
Running setup.py install for PyYAML
checking if libyaml is compilable
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c build/temp.linux-x86_64-2.6/check_libyaml.c -o build/temp.linux-x86_64-2.6/check_libyaml.o
checking if libyaml is linkable
gcc -pthread build/temp.linux-x86_64-2.6/check_libyaml.o -lyaml -o build/temp.linux-x86_64-2.6/check_libyaml
building '_yaml' extension
(...)
Running setup.py install for docopt
Running setup.py install for coverage
building 'coverage.tracer' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c coverage/tracer.c -o build/temp.linux-x86_64-2.6/coverage/tracer.o
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions build/temp.linux-x86_64-2.6/coverage/tracer.o -o build/lib.linux-x86_64-2.6/coverage/tracer.so
warning: no previously-included files matching '*.pyc' found anywhere in distribution
Installing coverage2 script to /home/travis/virtualenv/python2.6/bin
Installing coverage-2.6 script to /home/travis/virtualenv/python2.6/bin
Installing coverage script to /home/travis/virtualenv/python2.6/bin
Running setup.py install for requests
Running setup.py install for sh
Successfully installed coveralls PyYAML docopt coverage requests sh
Cleaning up...
real 0m10.539s
user 0m8.336s
sys 0m0.781s
```
Using `--use-mirrors`, the result is even worse:
```
$ time pip install coveralls --use-mirrors
real 1m44.662s
user 0m10.684s
sys 0m1.183s
```
| closed | 2013-11-19T08:13:35Z | 2015-05-16T19:29:40Z | https://github.com/TheKevJames/coveralls-python/issues/34 | [] | Bouke | 11 |
django-import-export/django-import-export | django | 1,973 | Feature proposal - import and export management commands | I believe it would be useful to have Django management commands to import and export data from the command line.
This could also be useful for automating repeated imports and exports.
Below is the proposed syntax with invocation examples.
---
### import_data command
#### Syntax
```bash
manage.py import_data [options] <resource> <import_file_name>
```
* `<resource>` - Resource class or Model class as a dotted path, e.g. `mymodule.resources.MyResource` or `auth.User`
* `<import_file_name>` - file to import
Options:
* `--format=FORMAT` - import/export format; guessed from the MIME type if empty
* `--dry-run` - Dry run
* `--raise-errors` - Raise errors
* `--no-raise-errors` - Do not raise errors
#### Examples
Import data from file into `auth.User` model using default model resource:
```bash
python manage.py import_data auth.User users.csv
```
Import data from file using custom model resource, raising errors:
```bash
python manage.py import_data --raise-errors helper.MyUserResource users.csv
```
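The `<resource>` dotted-path argument could be resolved along these lines (a sketch; the helper name and the Django app-registry fallback are assumptions, not part of the proposal):

```python
import importlib

def resolve_resource(dotted_path):
    """Return a Resource/Model class for 'pkg.module.Class', else None."""
    module_path, _, attr = dotted_path.rpartition(".")
    if module_path:
        try:
            return getattr(importlib.import_module(module_path), attr)
        except (ImportError, AttributeError):
            pass
    # A real command would fall back to the app registry here, e.g.
    # django.apps.apps.get_model(module_path, attr) for "auth.User".
    return None
```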
### export_data command
#### Syntax
```bash
manage.py export_data [options] <format> <resource>
```
* `<format>` - export format
* `<resource>` - Resource class or Model class as a dotted path, e.g. `mymodule.resources.MyResource` or `auth.User`
#### Example
Export data from `auth.User` model in CSV format to standard output.
```bash
python manage.py export_data CSV auth.User
```
---
Please take a moment to read through the details and share any feedback, suggestions, or concerns you might have.
| closed | 2024-10-17T15:58:33Z | 2024-11-19T06:53:42Z | https://github.com/django-import-export/django-import-export/issues/1973 | [] | bmihelac | 13 |
nolar/kopf | asyncio | 555 | Some spec elements received empty on handler when CR applied | ## Long story short
I have a CRD which has several fields with different data types. When I apply a CR for this CRD, the kopf create handler receives an empty **groups** field in the spec; the same goes for `body['spec']`.
## Description
When a CR is applied for the CRD, the **groups** field that exists in the spec is received as an empty dict. No validation errors occur while creating the CR, so I believe the groups data structure is fine. Below is the output for the spec received, in which **groups** is an empty dict:
```
Spec provided:{'groups': {}, 'namespace': 'testhosting', 'status': 'Initial', 'version': '1.0.0'}
```
<details><summary>The code snippet to reproduce the issue</summary>
**Here's the CRD:**
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: hosting.myproduct.com
spec:
group: myproduct.com
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
apiVersion:
type: string
spec:
type: object
properties:
status:
type: string
namespace:
type: string
version:
type: string
groups:
type: object
items:
type: array
items:
type: string
scope: Namespaced
names:
kind: Hosting
listKind: HostingList
plural: hostings
shortNames:
- hst
- hsts
singular: hosting
```
**Here's the CR:**
```yaml
apiVersion: hosting.myproduct.com/v1
kind: Hosting
metadata:
name: testhosting
spec:
status: 'Initial'
namespace: testhosting
version: 1.0.0
groups:
devops: ["random.user@email.com", "another.user@email.com"]
engineering: ["x.user@email.com", "y.user@email.com"]
admin: ["admin@email.com", "admin.1@email.com"]
```
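One thing worth noting while reproducing: in structural OpenAPI v3 schemas, `items` only applies to arrays, and object properties without a declared schema get pruned by the API server. A map of string lists like `groups` would typically be declared with `additionalProperties`; a sketch of the presumed intent (not the schema from this report):

```yaml
groups:
  type: object
  additionalProperties:
    type: array
    items:
      type: string
```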
Handler:
```python
import kopf
@kopf.on.create('myproduct.com', 'v1', 'hostings')
def create_fn(spec, body, name, logger, **kwargs):
spec_name = name
spec_ns = spec['namespace']
spec_groups = spec['groups']
spec_body = spec['body']
logger.info(f"Spec provided:{spec}") # groups field is empty
logger.info(f"Name:{name}")
logger.info(f"Namespace:{spec_ns}")
logger.info(f"Spec Groups:{spec_groups}") # empty
logger.info(f"Body:{spec_body}") # empty
```
</details>
<details><summary>The exact command to reproduce the issue</summary>
```bash
kubectl apply -f crd.yaml
kopf run hostings.py --verbose
kubectl apply -f cr.yaml
```
</details>
<details><summary>The full output of the command that failed</summary>
No error is produced; however, **body** is empty and **spec** is missing the **groups** field data, as shown below:
**Spec provided:{'groups': {}, 'namespace': 'someproduct', 'status': 'Initial', 'version': '1.0.0'}**
</details>
## Environment
* Kopf version: kopf, version 0.27
* Kubernetes version: 1.18
* Python version: 3.8.5
* OS/platform: MacOS
<details><summary>Python packages installed</summary>
```
aiohttp==3.6.2
aiojobs==0.2.2
async-timeout==3.0.1
attrs==20.1.0
avionix==0.3.1
boto3==1.14.54
botocore==1.17.54
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
docutils==0.15.2
gitdb==4.0.5
GitPython==3.1.7
google-auth==1.21.0
grpcio==1.31.0
grpcio-tools==1.31.0
idna==2.10
iso8601==0.1.12
jmespath==0.10.0
kopf==0.27
kubernetes==11.0.0
multidict==4.7.6
oauthlib==3.1.0
pip==20.1.1
protobuf==3.13.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyhelm==2.14.5
pykube-ng==20.7.2
python-dateutil==2.8.1
PyYAML==5.3.1
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
s3transfer==0.3.3
setuptools==49.2.0
six==1.15.0
smmap==3.0.4
supermutes==0.2.5
typing-extensions==3.7.4.3
urllib3==1.25.10
websocket-client==0.57.0
wheel==0.34.2
yarl==1.5.1
```
</details>
| open | 2020-09-22T07:38:22Z | 2020-09-24T14:01:34Z | https://github.com/nolar/kopf/issues/555 | [
"bug"
] | neocorp | 2 |
huggingface/transformers | tensorflow | 36,674 | NotImplementedError: aten::_log_softmax_backward_data with SparseCUDA backend | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.17
- Python version: 3.12.3
- Huggingface_hub version: 0.26.3
- Safetensors version: 0.4.5
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: 0.15.4
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA Graphics Device
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
`labels` is a sparse COO tensor.
```python
class NDPTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
input_ids = inputs.pop("input_ids")
attention_mask = inputs.pop("attention_mask")
cnt_list = inputs.pop(
"cnt_list"
)
labels= inputs.pop("label")
result = model(
input_ids=input_ids,
attention_mask=attention_mask,
)
model_logits = result.logits # bsz x seqlen x dim
ce_loss = CrossEntropyLoss(ignore_index=-100)
loss = ce_loss(model_logits, labels)
if return_outputs:
return loss, {"logits": model_logits}
else:
return loss
```
```bash
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/ruanjunhao/ndp/new_version/train.py", line 107, in <module>
trainer.train()
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/transformers/trainer.py", line 3740, in training_step
self.accelerator.backward(loss, **kwargs)
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/accelerate/accelerator.py", line 2329, in backward
loss.backward(**kwargs)
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'aten::_log_softmax_backward_data' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_log_softmax_backward_data' is only available for these backends: [CPU, CUDA, HIP, MPS, IPU, XPU, HPU, VE, MTIA, PrivateUse1, PrivateUse2, PrivateUse3, Meta, FPGA, MAIA, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedMTIA, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, QuantizedMeta, CustomRNGKeyId, MkldnnCPU, SparseCsrCPU, SparseCsrCUDA, SparseCsrHIP, SparseCsrMPS, SparseCsrIPU, SparseCsrXPU, SparseCsrHPU, SparseCsrVE, SparseCsrMTIA, SparseCsrPrivateUse1, SparseCsrPrivateUse2, SparseCsrPrivateUse3, SparseCsrMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
However, PyTorch seems to support backpropagation of sparse tensors, such as:
```python
import torch
import torch.nn.functional as F
from torch.nn import CrossEntropyLoss
indices = torch.tensor([[0, 1, 2], [0, 1, 2]])
values = torch.tensor([1.0, 2.0, 3.0])
size = torch.Size([3, 3])
sparse_tensor = torch.sparse_coo_tensor(indices, values, size)
target = torch.tensor([0, 1, 2])
logits = torch.randn((3,3),requires_grad=True)
loss_func = CrossEntropyLoss(reduction='sum')
loss = loss_func(logits, target)
loss.backward()
print(loss)
```
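For what it's worth, a dense-path workaround sketch (my addition, not from the report, assuming the targets can be densified): convert the sparse targets with `to_dense()` before the loss, since it is log_softmax's backward that lacks a SparseCUDA kernel:

```python
import torch
from torch.nn import CrossEntropyLoss

indices = torch.tensor([[0, 1, 2]])
values = torch.tensor([0, 1, 2])          # class indices
sparse_labels = torch.sparse_coo_tensor(indices, values, (3,))

logits = torch.randn(3, 3, requires_grad=True)

# Densify only the targets; logits stay on the usual dense autograd path.
loss = CrossEntropyLoss()(logits, sparse_labels.to_dense())
loss.backward()
```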
### Expected behavior
If I set a breakpoint using pdb in the trainer, calling `loss.backward()` works fine. It seems that something in the trainer is causing an issue and breaking everything. | closed | 2025-03-12T14:59:33Z | 2025-03-14T05:30:16Z | https://github.com/huggingface/transformers/issues/36674 | [
"bug"
] | rangehow | 5 |
FactoryBoy/factory_boy | sqlalchemy | 834 | How does factory boy know of what type a field is? | I'm trying to make a tweak to factory boy so that it resolves properly pydantic models, but I'm not able to find a code responsible for getting the types for Django or SQLAlchemy models & generating proper values for these parameters.
Where is the code that's responsible for this? | closed | 2021-01-15T13:32:07Z | 2021-01-15T16:04:50Z | https://github.com/FactoryBoy/factory_boy/issues/834 | [] | mdczaplicki | 1 |
gradio-app/gradio | data-visualization | 10,023 | gr.Plot does not work with matplotlib plots properly anymore | ### Describe the bug
Hey Gradio Team,
I just wanted to let you know that the latest gradio version does not seem to be working properly with Matplotlib plots/figures.
Previous gradio versions (e.g. 5.5) seem to work fine.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
test_plot = gr.Plot()
test_btn = gr.Button("Test", variant="primary")
    test_btn.click(test_fn, inputs=[], outputs=[test_plot])
demo.launch()
```
I use the following code to create a Matplotlib plot:
```python
import matplotlib.pyplot as plt

def test_fn():
fig, ax = plt.subplots()
plt.close(fig)
return fig
```
==================================================================
I managed to make it work like so:
I used gr.Image instead of gr.Plot
and for the matplotlib part:
```python
import io

import matplotlib.pyplot as plt
from PIL import Image

def test_fn():
    fig, ax = plt.subplots()
out_fig = io.BytesIO()
fig.savefig(out_fig, format='png')
out_fig.seek(0)
img = Image.open(out_fig)
return img
```
So there is some kind of issue between the latest gradio and matplotlib when it comes to gr.Plot.
Please fix it. I would really appreciate it.
Thank you.
Sincerely,
Alex
### Screenshot
_No response_
### Logs
```shell
======================================================================
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/blocks.py", line 2025, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1831, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/components/plot.py", line 138, in postprocess
out_y = processing_utils.encode_plot_to_base64(value, self.format)
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/processing_utils.py", line 158, in encode_plot_to_base64
plt.savefig(output_bytes, format=fmt)
File "/usr/lib/python3/dist-packages/matplotlib/figure.py", line 3019, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/usr/lib/python3/dist-packages/matplotlib/backend_bases.py", line 2259, in print_figure
canvas = self._get_output_canvas(backend, format)
File "/usr/lib/python3/dist-packages/matplotlib/backend_bases.py", line 2188, in _get_output_canvas
raise ValueError(
ValueError: Format 'webp' is not supported (supported formats: eps, jpeg, jpg, pdf, pgf, png, ps, raw, rgba, svg, svgz, tif, tiff)
```
### System Info
```shell
Matplotlib version: 3.5.1
========================================================
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.0.3
markupsafe: 2.0.1
numpy: 1.21.5
orjson: 3.10.11
packaging: 21.3
pandas: 1.3.5
pillow: 9.0.1
pydantic: 2.10.1
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 5.4.1
ruff: 0.8.0
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.13.1
typing-extensions: 4.12.2
urllib3: 1.26.5
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.3.1
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 21.3
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2024-11-23T00:41:31Z | 2024-11-29T08:48:30Z | https://github.com/gradio-app/gradio/issues/10023 | [
"bug",
"Regression"
] | asigalov61 | 1 |
HumanSignal/labelImg | deep-learning | 554 | Installation via pip - problem on windows | While installing on windows with python3 via pip I encountered the following error during initial execution of labelImg:
from PyQt5.QtGui import *
ImportError: DLL load failed: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
<--- Insert any number of errors here while I was trying to figure out the problem -->
This is solved by downloading and installing the following: https://github.com/winpython/winpython
- **OS:** --> windows10
- **PyQt version:** --> latest installed by pip (5.14 I think)
| open | 2020-02-14T23:59:22Z | 2020-02-14T23:59:22Z | https://github.com/HumanSignal/labelImg/issues/554 | [] | hephaestus9 | 0 |
PeterL1n/RobustVideoMatting | computer-vision | 146 | How to remove the "flame" in the picture? | In the picture, there is a "flame" effect around the character; how can I remove it?

| closed | 2022-03-07T12:09:50Z | 2022-03-09T03:12:46Z | https://github.com/PeterL1n/RobustVideoMatting/issues/146 | [] | Alvazz | 1 |
samuelcolvin/dirty-equals | pytest | 83 | Makefile command for testing in local development | I think there should be Makefile command which runs tests with `--update-examples` and without coverage for speed, and this would be the recommended workflow when iterating and testing in local development.
In fact, I think that's what `make test` should be, and the current `test` command (i.e. with coverage and without updating examples) could be renamed to `test:slow` or `ci` or something to indicate that it's the slower version that you run less often to double check things. | closed | 2023-09-20T11:32:18Z | 2024-08-12T21:29:55Z | https://github.com/samuelcolvin/dirty-equals/issues/83 | [] | alexmojaki | 1 |
littlecodersh/ItChat | api | 159 | Recognizing and handling WeChat GIF images | Currently, when itchat receives a GIF image sent directly through WeChat, it is recognized as 'MsgType': 49, 'Type': 'Sharing', 'AppMsgType': 8, i.e. a SHARING-type message, and cannot be received and saved normally.
Modify the produce_msg function in messages.py by adding the following under the `elif m['MsgType'] == 49: # sharing` branch:
```python
elif m['AppMsgType'] == 8:
    download_fn = get_download_fn(core,
        '%s/webwxgetmsgimg' % core.loginInfo['url'], m['NewMsgId'])
    msg = {
        'Type': 'Picture',
        'FileName': '%s.%s' % (time.strftime('%y%m%d-%H%M%S', time.localtime()), 'gif'),
        'Text': download_fn, }
```
This correctly recognizes and handles received GIF images.
| closed | 2016-11-23T10:48:20Z | 2016-11-27T05:26:26Z | https://github.com/littlecodersh/ItChat/issues/159 | [
"bug"
] | 6bigfire | 2 |
saulpw/visidata | pandas | 2,562 | Update to 3.1: circular import error | **Small description**
I have installed it both via pip and pipx, and I get an "ImportError: cannot import name 'GuideSheet' from partially initialized module 'visidata'" error.
**Steps to reproduce**
I have installed vd in two ways:
- pip3 install visidata
- pipx install visidata
Then I run `vd` and I get:
```
Traceback (most recent call last):
File "/home/aborruso/.local/bin/vd", line 3, in <module>
import visidata.main
File "/home/aborruso/.local/pipx/venvs/visidata/lib/python3.11/site-packages/visidata/__init__.py", line 157, in <module>
importFeatures()
File "/home/aborruso/.local/pipx/venvs/visidata/lib/python3.11/site-packages/visidata/__init__.py", line 139, in importFeatures
vd.importSubmodules('visidata.features')
File "/home/aborruso/.local/pipx/venvs/visidata/lib/python3.11/site-packages/visidata/settings.py", line 508, in importSubmodules
vd.importModule(pkgname + '.' + module.name)
File "/home/aborruso/.local/pipx/venvs/visidata/lib/python3.11/site-packages/visidata/settings.py", line 491, in importModule
r = importlib.import_module(pkgname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aborruso/.local/pipx/venvs/visidata/lib/python3.11/site-packages/visidata/features/errors_guide.py", line 1, in <module>
from visidata import GuideSheet, vd
ImportError: cannot import name 'GuideSheet' from partially initialized module 'visidata' (most likely due to a circular import) (/home/aborruso/.local/pipx/venvs/visidata/lib/python3.11/site-packages/visidata/__init__.py)
```
**Expected result**
**Actual result with screenshot**
**Configuration**
- Does this issue reproduce without any plugins or configuration (using the `-N` CLI flag)?
Yes
- Does this issue reproduce with either the [latest release](https://www.visidata.org/releases/), or with the [develop branch](https://www.visidata.org/install/#update-visidata-from-an-existing-installation)?
yes
**Additional context**
- What platform and version are you using (Linux, MacOS, Windows)?
Debian 12
- Which version of Python?
Python 3.11.2
| closed | 2024-10-14T17:35:45Z | 2024-10-14T21:41:03Z | https://github.com/saulpw/visidata/issues/2562 | [
"bug",
"fixed"
] | aborruso | 12 |
huggingface/text-generation-inference | nlp | 2,946 | Serverless Inference API OpenAI /v1/chat/completions route broken | ### System Info
Trying to access the serverless inference endpoints using the OpenAI compatible route leads to status 400.
```
Invalid URL: missing field `name`
```
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
Here is a curl command to run inference; it requires your HF_TOKEN to be set up.
```sh
curl https://api-inference.huggingface.co/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $HF_TOKEN" \
-d '{
"model": "meta-llama/Llama-3.3-70B-Instruct",
"messages": [
{
"role": "user",
"content": "Write a short poem."
}
]
}'
```
### Expected behavior
This endpoint should be "openai" compatible. | open | 2025-01-23T15:20:27Z | 2025-01-24T02:25:39Z | https://github.com/huggingface/text-generation-inference/issues/2946 | [] | pelikhan | 1 |
desec-io/desec-stack | rest-api | 164 | Introduce Consistent Status Codes for Validation Errors | Currently, if validation of subname or type fails in Django, we send status 404 (see URL conf). On the other hand, if validation fails in pdns, we send 422.
Unfortunately, I do not have a subname handy that will pass the URL conf but fail pdns and hence exhibit the inconsistency corner case.
| closed | 2019-04-18T08:30:15Z | 2024-10-07T16:54:05Z | https://github.com/desec-io/desec-stack/issues/164 | [] | nils-wisiol | 2 |
tensorpack/tensorpack | tensorflow | 817 | problems training resnet-dorefa | 1. running "python alexnet_dorefa.py --dorefa 8,8,8 --data Imagenet"
tensorflow 1.13.0 (docker), cuda 8.0, cudnn 6, anaconda 2
tensorpack version:
```
>>> import tensorpack
/root/anaconda2/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
>>> print(tensorpack.__version__)
0.8.6
```
I modified the data augmentation code in imagenet_utils.py like this, and the other code remains the same as the original:
```
def fbresnet_augmentor(isTrain):
"""
Augmentor used in fb.resnet.torch, for BGR images in range [0,255].
"""
if isTrain:
augmentors = [
imgaug.ResizeShortestEdge(256, cv2.INTER_CUBIC),
imgaug.RandomCrop((224,224)),
#GoogleNetResize(),
# It's OK to remove these augs if your CPU is not fast enough.
# Removing brightness/contrast/saturation does not have a significant effect on accuracy.
# Removing lighting leads to a tiny drop in accuracy.
imgaug.Flip(horiz=True),
]
else:
augmentors = [
imgaug.ResizeShortestEdge(256, cv2.INTER_CUBIC),
imgaug.CenterCrop((224, 224)),
]
return augmentors
```
and the log looks like this (still very high error after many epochs):
[log.log](https://github.com/tensorpack/tensorpack/files/2178724/log.log)
| closed | 2018-07-10T03:23:27Z | 2019-04-10T16:27:08Z | https://github.com/tensorpack/tensorpack/issues/817 | [
"examples"
] | brisker | 7 |
ets-labs/python-dependency-injector | asyncio | 759 | DependencyContainer: await or not your deps? | Hey, I'm curious: is there a way to "fix" whether DependencyContainer.dependencies require await to be created or not?
I find it extremely irritating that if you change some deps, components using them may start (or stop) requiring creation via await, breaking all the previous call sites that did (or did not) await them.
Is there a way to fix that behavior, so that when using a DependencyContainer in an asyncio-based project, I always await all of my components during creation, regardless of their dependencies?
rio-labs/rio | data-visualization | 81 | Content of `MultilineTextInput` Not Scrollable on Safari | ### Describe the bug
The `MultilineTextInput` component is not scrollable when the content exceeds the visible area on Safari. This issue prevents users from being able to view or edit the full content.
### Expected Behavior
The `MultilineTextInput` should be scrollable, allowing users to view and edit all content.
### Actual Behavior
The `MultilineTextInput` is not scrollable, and the overflow content is not accessible.
### Steps to Reproduce
1. Open Safari.
2. Navigate to the page containing the `MultilineTextInput` component.
3. Enter content that exceeds the visible area of the `MultilineTextInput`.
### Screenshots/Videos
_No response_
### Operating System
MacOS
### What browsers are you seeing the problem on?
Safari
### Browser version
_No response_
### What device are you using?
Desktop, Mobile
### Related Issues:
#82 #83 | closed | 2024-07-07T12:35:47Z | 2024-08-11T16:28:27Z | https://github.com/rio-labs/rio/issues/81 | [
"bug",
"layout rework"
] | Sn3llius | 2 |
docarray/docarray | pydantic | 1,866 | HNSWLib Indexer cannot knn query in subindex | ### Initial Checks
- [X] I have read and followed [the docs](https://docs.docarray.org/) and still think this is a bug
### Description
Searching vectors in a subindex fails in the HNSWLib index store:
```bash
Traceback (most recent call last):
File "/Users/oytuntez/motaword/jina-documents/clip-deploy.py", line 72, in <module>
di.find_subindex(td, 'texts', 'embedding', 1)
File "/Users/oytuntez/motaword/docarray/docarray/index/abstract.py", line 532, in find_subindex
sub_docs, scores = self._find_subdocs(
File "/Users/oytuntez/motaword/docarray/docarray/index/abstract.py", line 1189, in _find_subdocs
return self._subindices[fields[0]].find(
File "/Users/oytuntez/motaword/docarray/docarray/index/abstract.py", line 503, in find
docs, scores = self._find(
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 332, in _find
docs, scores = self._find_batched(
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 324, in _find_batched
return self._search_and_filter(
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 690, in _search_and_filter
result_das = [
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 691, in <listcomp>
self._get_docs_sqlite_hashed_id(
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 555, in _get_docs_sqlite_hashed_id
docs_unsorted = self._get_docs_sqlite_unsorted(hashed_ids)
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 541, in _get_docs_sqlite_unsorted
docs.append(self._doc_from_bytes(data_bytes, reconstruct_embeddings, out))
File "/Users/oytuntez/motaword/docarray/docarray/index/backends/hnswlib.py", line 596, in _doc_from_bytes
schema_cls._get_field_annotation(k)
AttributeError: 'NoneType' object has no attribute '_to_node_protobuf'
```
Here is the case to replicate the issue:
```python
import numpy as np
import os
from docarray.index import HnswDocumentIndex
os.environ['JINA_LOG_LEVEL'] = 'DEBUG'
from docarray import DocList, BaseDoc
from docarray.documents.text import TextDoc
from docarray.documents.image import ImageDoc
class QuoteFile(BaseDoc):
quote_file_id: int = None
texts: DocList[TextDoc] = None
images: DocList[ImageDoc] = None
class SearchResult(BaseDoc):
results: DocList[QuoteFile] = None
hnsw_config_quote_files = HnswDocumentIndex.DBConfig(
    index_name='quote_files',
    work_dir=os.path.abspath('./.cache'),
    default_column_config={
np.ndarray: {
'dim': 512,
'index': True,
'space': 'l2', # 'l2', 'ip', 'cosine'
'max_elements': 50000,
'ef_construction': 200,
'ef': 100,
'M': 16,
'allow_replace_deleted': True,
'num_threads': 1,
},
'TEXT': {},
'INTEGER': {},
None: {},
},)
di = HnswDocumentIndex[QuoteFile](hnsw_config_quote_files)
td = TextDoc(text="Hello World")
td.id = "test-text"
td.embedding = np.zeros(512)
qf = QuoteFile()
qf.id = "test"
qf.quote_file_id = 109
qf.texts = DocList[TextDoc]([td])
di.index(qf)
sqf = QuoteFile()
sqf.id = "test"
sqff = di.filter({'quote_file_id': {'$eq': 109}})
print(sqff[0].summary())
td = TextDoc(text="Hello World")
td.id = "test-text"
td.embedding = np.zeros(512)
di.find_subindex(td, 'texts', 'embedding', 1)
```
### Example Code
_No response_
### Python, DocArray & OS Version
```Text
jina==3.23.2
docarray=latest and our minor fork at motaword/docarray
```
### Affected Components
- [X] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [ ] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [X] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | closed | 2024-02-15T18:40:11Z | 2024-02-16T10:44:41Z | https://github.com/docarray/docarray/issues/1866 | [] | oytuntez | 8 |
deepinsight/insightface | pytorch | 2,722 | ModuleNotFoundError: No module named 'insightface' | hi,
I have been trying to solve this problem all day.
I installed insightface under
appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages
but ComfyUI's Python version is 3.11. Could this be the problem?
If so, how can I solve it? (I am a beginner.)
thx! | open | 2025-01-24T22:08:25Z | 2025-02-08T13:48:30Z | https://github.com/deepinsight/insightface/issues/2722 | [] | carlmoss22 | 1 |
ansible/awx | django | 15,416 | AWX Office Hours - August 13th 2024 | # AWX Office Hours
## Proposed agenda based on topics
## What
After a successful Contributor Summit in October 2023, one piece of feedback we got was to host a regular time for the Automation Controller (AWX) team to be available to folks in the AWX community, so we are happy to announce a new regular video meeting.
This kind of feedback loop is vital to the success of AWX and the AWX team wants to make it as easy as possible for you - our community - to get involved.
## Where & When
Our next meeting will be held on Tuesday, August 13th, 2024 at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)
* [Google Meet](https://meet.google.com/vyk-dfow-cfi)
* Via Phone PIN: 842522378 [Guide](https://support.google.com/meet/answer/9518557)
This meeting is held once a month, on the second Tuesday of the month, at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)
## How
Add one topic per comment in this GitHub issue
If you don't have a GitHub account, jump on [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) on Matrix and we can add the topic for you
## Talk with us
As well as the monthly video meeting, you can join the community (including the development team) on Matrix chat.
* Matrix: [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) (recommended)
* libera.chat IRC: `#ansible-awx` (If you are already setup on IRC)
The Matrix & IRC channels are bridged, you'll just have a better experience on Matrix
## Links
[AWX YouTube Channel](https://www.youtube.com/@ansible-awx)
[Previous Meeting](#15319)
[Meeting recording]()
Next Meeting
See you soon!
| closed | 2024-08-01T19:52:20Z | 2024-09-10T15:06:08Z | https://github.com/ansible/awx/issues/15416 | [
"needs_triage"
] | thedoubl3j | 1 |
graphql-python/graphene-django | graphql | 743 | How to add description to fields? | It is connected to #208, to [this comment](https://github.com/graphql-python/graphene-django/issues/208#issuecomment-326203930)
Let's say I have this model:
```python
class Country(models.Model):
code = models.CharField(max_length=5, null=True)
name = models.CharField(max_length=100, null=True)
```
and in schema i have
```python
class CountryNode(DjangoObjectType):
class Meta:
model = Country
description = "This is a Country Node"
```
I can add a description for CountryNode itself, but how to do it for Country `code` and `name`? | closed | 2019-08-12T02:13:23Z | 2019-11-28T10:41:50Z | https://github.com/graphql-python/graphene-django/issues/743 | [
"wontfix"
] | Dawidpol | 4 |
vitalik/django-ninja | pydantic | 294 | Is this project production ready ? | Sorry if the question has already been asked but it's been a year since I've last checked this project and I'm still hesitating between django ninja and DRF for a work project since I already have experience whith the later. Any area where it's lacking comparing to DRF that I should be aware of ?
Thanks in advance | closed | 2021-12-02T14:53:18Z | 2021-12-02T16:46:42Z | https://github.com/vitalik/django-ninja/issues/294 | [] | StitiFatah | 2 |
ultralytics/yolov5 | pytorch | 12,936 | Similar Dataloader in yolov5 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello friends,
Imagine I have two datasets with the same images (same scene) but different domains (e.g., clear weather and foggy images). Is it possible in YOLOv5 to create two Dataloaders with similar augmentation methods? I mean, every time the Data Loader returns images based on the number of batches, I want these images to be similar in all aspects just have different domains. I've checked _seed=opt.seed_ in **_create_dataloader_** it seems it is for pytorch Dataloader not for **_LoadImagesAndLabels_** because augmentaiton methods apply in this class.
thanks
### Additional
_No response_ | closed | 2024-04-17T10:59:22Z | 2024-04-19T17:58:51Z | https://github.com/ultralytics/yolov5/issues/12936 | [
"question"
] | BehdadSDP | 3 |
jpadilla/django-rest-framework-jwt | django | 377 | When JWT_AUTH_COOKIE is set to True, unable to refresh and validate | Because in serializers.py
```
.....
class RefreshJSONWebTokenSerializer(VerificationBaseSerializer):
    """
    Refresh an access token.
    """
    def validate(self, attrs):
        token = attrs['token']
        ....
```
It only reads the token from `request.data`. | open | 2017-09-21T23:34:47Z | 2019-11-10T21:45:49Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/377 | [] | c0dezli | 3 |
sammchardy/python-binance | api | 872 | Is ws_interval being used in class DepthCacheManager(BaseDepthCacheManager) | I am testing the DepthCacheManager with
`dcm = DepthCacheManager(client = client, symbol = 'ADABUSD', ws_interval=100)`
Updates, however, still arrive once per second instead of every 100 ms.
I do not see self._ws_interval being used for anything in the code as of now. Is it just a placeholder variable? | open | 2021-05-23T15:48:16Z | 2021-06-01T12:05:57Z | https://github.com/sammchardy/python-binance/issues/872 | [] | blaze1st | 2 |
dmlc/gluon-nlp | numpy | 1,173 | BERT lr scheduler | We can move the lr scheduling logic https://github.com/dmlc/gluon-nlp/blob/master/scripts/bert/finetune_classifier.py#L565-L571 for BERT to a LRScheduler API implementing the `mxnet.lr_scheduler.LRScheduler` API | closed | 2020-02-24T07:42:15Z | 2020-07-14T05:12:05Z | https://github.com/dmlc/gluon-nlp/issues/1173 | [
"enhancement",
"help wanted"
] | eric-haibin-lin | 1 |
inducer/pudb | pytest | 573 | "Falling back to custom shell" message printed in internal shell console | When the custom shell is not installed, pudb automatically fallsback to the classic Python shell. But the message for this is printed in the internal shell console. This is useless because this isn't shown when the shell is active. It should be printed to the terminal. I think this used to be the case, so this likely was changed at some point. | closed | 2022-11-17T22:49:21Z | 2022-11-20T00:46:02Z | https://github.com/inducer/pudb/issues/573 | [
"Bug"
] | asmeurer | 2 |
nvbn/thefuck | python | 1,268 | Getting `nologin git push` when pushing to a git branch without upstream | <!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
### The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.32 using Python 3.9.4 and ZSH 5.8
### Your system (Debian 7, ArchLinux, Windows, etc.):
macOS Monterey version 12.0.1
### How to reproduce the bug:
This might also be related to my git configuration, so I'm mostly posting to see if someone else has this issue and how they fixed it. If this happened to every git user, it would definitely have an easy-to-find solution.
To reproduce, create a branch in any git project, add a commit, push the branch to origin. You'll get a message like this:
```
fatal: The current branch foobranch has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin foobranch
```
Use the command `fuck`, I get the following message:
```
nologin git push [enter/↑/↓/ctrl+c]
```
Hitting enter results in the message "This account is currently not available."
### The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
same output
### If the bug only appears with a specific application, the output of that application and its version:
git --version: git version 2.24.3 (Apple Git-128)
### Anything else you think is relevant:
Happy to answer any question to help find what's going on.
<!-- It's only with enough information that we can do something to fix the problem. -->
| open | 2022-01-18T20:03:02Z | 2024-10-02T09:51:33Z | https://github.com/nvbn/thefuck/issues/1268 | [] | dgrcode | 4 |
2noise/ChatTTS | python | 142 | TypeError: load_models() got an unexpected keyword argument 'sourc | 2024-05-31 22:00:21,251 - modelscope - INFO - PyTorch version 2.2.1+cu118 Found.
2024-05-31 22:00:21,253 - modelscope - INFO - Loading ast index from C:\Users\WongJ\.cache\modelscope\ast_indexer
2024-05-31 22:00:21,398 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 1dc625ae51c37a4717218d26c94fe70f and a total number of 976 components indexed
Traceback (most recent call last):
File "e:\pyfile\ChatTTS-main\ChatTTS\test01.py", line 30, in <module>
chat.load_models(
TypeError: load_models() got an unexpected keyword argument 'source'
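A generic way to debug this kind of `TypeError` (not ChatTTS-specific; the `load_models` stand-in below is hypothetical) is to inspect which keyword arguments the installed version of a function actually accepts:

```python
import inspect

def load_models(local_path=None, compile=False):
    # Stand-in for the installed API; in practice you would import the real
    # object from the installed package instead of defining one here.
    pass

accepted = set(inspect.signature(load_models).parameters)
has_source = "source" in accepted
```

If `'source'` is missing from the set, the installed package simply predates (or postdates) the API that the example code was written against, and pinning a matching package version is the usual fix.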
How can I solve this? Thank you!!! | closed | 2024-05-31T14:01:23Z | 2024-08-03T04:01:26Z | https://github.com/2noise/ChatTTS/issues/142 | [
"stale"
] | YangKeAng | 4 |
airtai/faststream | asyncio | 2,122 | docs: replace f-string in logger usage | Some our documentation examples uses f-strings – eg https://faststream.airt.ai/latest/getting-started/serialization/examples/#__codelineno-11-23
We should replacte them to follow official logging usage recomendations `logger.log("%s", "message")`
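As a sketch of what the corrected examples could look like (the logger name below is made up), the point is that %-style arguments are only interpolated when a record is actually emitted, unlike f-strings, which are always evaluated:

```python
import logging

class ListHandler(logging.Handler):
    """Collects formatted messages so the behaviour is easy to inspect."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        # getMessage() applies the %-style args lazily, at emit time
        self.messages.append(record.getMessage())

logger = logging.getLogger("faststream-docs-example")  # hypothetical name
logger.setLevel(logging.INFO)
logger.propagate = False
handler = ListHandler()
logger.addHandler(handler)

user = "alice"
logger.info("user %s connected", user)  # preferred: deferred interpolation
logger.debug("debug for %s", user)      # below INFO: never formatted at all
```

The same pattern applies to `logger.debug`, `logger.warning`, and the other level methods.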
This is not related to the framework sources; it concerns the documentation examples only! | closed | 2025-03-15T11:47:58Z | 2025-03-17T16:26:41Z | https://github.com/airtai/faststream/issues/2122 | [
"documentation",
"good first issue"
] | Lancetnik | 2 |
sammchardy/python-binance | api | 624 | API v1 vs v3 - deprecating or upgrading for unsigned endpoints | Opening a separate issue for this as discussed on PR #622.
The issue is that ...
```
client.stream_get_listen_key()
client.stream_keepalive(listenKey)
client.stream_close(listenKey)
```
... are implemented on v1. As their `signed` is `False`, they pick up v1. Endpoints where `signed` is `True` use the correct v3 API endpoints.
As @mfiro noted, the issue shouldn't be solved by changing signed to True because this only should be used when a method needs a signature (those which have HMAC SHA256 next to their endpoint address).
_Originally posted by @mfiro in https://github.com/sammchardy/python-binance/issues/622#issuecomment-734411129_
Opening to work out the best approach for dealing with this issue. | open | 2020-12-01T19:16:07Z | 2023-12-21T16:38:43Z | https://github.com/sammchardy/python-binance/issues/624 | [] | ttamg | 8 |
axnsan12/drf-yasg | rest-api | 211 | Put model into $ref with @swagger_auto_schema | I'm trying to manually define a model multiple times but my generator create one model per request instead of reusing the same object, so I want to put it into ref definition to be reused.
I can't manage to do it, here is what I've tried:
```
definitions = openapi.ReferenceResolver(openapi.SCHEMA_DEFINITIONS)
definitions.set('ReceiptRequest', openapi.Schema(type=openapi.TYPE_OBJECT,
properties={
'method': openapi.Schema(type=openapi.TYPE_STRING)},
required=['method']
)
)
RECEIPT_REQUEST_SCHEMA = openapi.SchemaRef(definitions, "ReceiptRequest")
```
This is not working and gives me:
```
assert real_scope and real_scope in self._objects, "invalid scope %s" % scope
AssertionError: invalid scope None
```
If I don't do `definitions.set` then I get
```
assert name in self._objects[scope], "#/%s/%s is not defined" % (scope, name)
AssertionError: #/definitions/ReceiptRequest is not defined
```
How can I achieve this, please? | closed | 2018-09-14T08:31:09Z | 2021-02-13T14:43:20Z | https://github.com/axnsan12/drf-yasg/issues/211 | [] | jaumard | 5 |
keras-team/keras | deep-learning | 20,193 | Use Keras to load dataset | How can I use keras with pytorch backend to load my custom dataset. I also want to use the data augmentations that are available in the preprocessing layers. My model is written in pytorch.
Can anyone guide me about this? | closed | 2024-09-01T20:57:18Z | 2024-09-18T19:55:32Z | https://github.com/keras-team/keras/issues/20193 | [] | jawi289o | 3 |
pytorch/vision | machine-learning | 8,878 | `sigma` argument of `ElasticTransform()` should completely avoid negative values, giving error and the doc should have the explanation. | ### 📚 The doc issue
`sigma` argument of [ElasticTransform()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ElasticTransform.html) doesn't like negative values as I show in [this issue](https://github.com/pytorch/vision/issues/8877).
And, setting `0` and `-100` to `sigma` argument of [ElasticTransform()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ElasticTransform.html) gets the same results as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import ElasticTransform
my_data = OxfordIIITPet(
root="data"
)
import matplotlib.pyplot as plt
def show_images(data, main_title=None, a=50, s=5, f=0):
plt.figure(figsize=(10, 5))
plt.suptitle(t=main_title, y=0.8, fontsize=14)
for i, (im, _) in zip(range(1, 6), data):
plt.subplot(1, 5, i)
et = ElasticTransform(alpha=a, sigma=s, fill=f) # Here
plt.imshow(X=et(im)) # Here
plt.xticks(ticks=[])
plt.yticks(ticks=[])
plt.tight_layout()
plt.show()
show_images(data=my_data, main_title="sigma0_data", s=0) # Here
show_images(data=my_data, main_title="sigma-100", s=-100) # Here
```


### Suggest a potential alternative/fix
So, the `sigma` argument should reject negative values with an error, and [the doc](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ElasticTransform.html) should explain this. | open | 2025-01-24T02:50:52Z | 2025-02-19T13:34:53Z | https://github.com/pytorch/vision/issues/8878 | [] | hyperkai | 1 |
gee-community/geemap | streamlit | 1,927 | Release geemap | Test | closed | 2024-02-29T21:38:05Z | 2024-03-15T13:57:08Z | https://github.com/gee-community/geemap/issues/1927 | [
"release"
] | jdbcode | 1 |
randyzwitch/streamlit-folium | streamlit | 13 | Make available with anaconda (conda - forge) | Hi there, this is more of a request than an issue, but would it be possible for this package to be made available through conda forge?
[conda forge anaconda channel](https://conda-forge.org/#page-top)
:) | closed | 2021-02-04T14:57:40Z | 2021-02-10T19:27:53Z | https://github.com/randyzwitch/streamlit-folium/issues/13 | [] | saguerraty | 4 |
jupyter/nbviewer | jupyter | 938 | 404 Not found | **Describe the bug**
Jupyter nbviewer is showing: `404 : Not Found`
**To Reproduce**
Steps to reproduce the behavior:
1. Open the line in github```'https://github.com/WeijieChen-MacroAnalyst/Linear_Algebra_With_Python/blob/master/Chapter%208%20-%20Vector%20Space%20and%20Subspace.ipynb'```
2. Copy the link and paste into ```'https://nbviewer.jupyter.org/'```
3. Then nbviewer shows ```404 : Not Found Remote HTTP 404: Chapter 8 - Vector Space and Subspace.ipynb not found among 10 files```
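One possible factor (purely a guess, not a confirmed diagnosis): the notebook filename contains percent-encoded spaces, so any encoding/decoding mismatch between the pasted URL and the GitHub API lookup would make the file appear missing. The round-trip, for reference, looks like this:

```python
from urllib.parse import quote, unquote

encoded = "Chapter%208%20-%20Vector%20Space%20and%20Subspace.ipynb"
decoded = unquote(encoded)          # decode %20 back to spaces
reencoded = quote(decoded)          # spaces become %20 again; letters/-/. stay as-is
```

If the name were decoded twice or encoded twice anywhere in the chain, the lookup string would no longer match the file list GitHub returns.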
**Expected behavior**
Supposed to be opened by nbviewer.
**Desktop (please complete the following information):**
- OS: [Windows 7]
- Browser [Chrome]
- Version [Version 80.0.3987.132 (Official Build) (64-bit)]
**Additional context**
Add any other context about the problem here.
| open | 2020-06-09T11:58:00Z | 2021-07-15T15:47:00Z | https://github.com/jupyter/nbviewer/issues/938 | [] | weijie-chen | 9 |
KaiyangZhou/deep-person-reid | computer-vision | 407 | "torchreid\metrics\rank_cylib\rank_cy.pyx" didn't work in Windows----ValueError: Buffer dtype mismatch, expected 'long' but got 'long long' | When I try to evaluate the model effect:
----------------------------------------------------
##### Evaluating market1501 (source) #####
Extracting features from query set ...
Done, obtained 3368-by-2048 matrix
Extracting features from gallery set ...
Done, obtained 15913-by-2048 matrix
Speed: 0.0258 sec/batch
Computing distance matrix with metric=euclidean ...
Computing CMC and mAP ...
----------------------------------------------------
Traceback (most recent call last):
File "induction.py", line 67, in <module>
main()
File "induction.py", line 62, in main
test_only=False
File "d:\pycharm\ha-cnn_reid\deep-person-reid\torchreid\engine\engine.py", line 207, in run
ranks=ranks
File "d:\pycharm\ha-cnn_reid\deep-person-reid\torchreid\engine\engine.py", line 335, in test
rerank=rerank
File "D:\anaconda_home\envs\general\lib\site-packages\torch\autograd\grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "d:\pycharm\ha-cnn_reid\deep-person-reid\torchreid\engine\engine.py", line 413, in _evaluate
use_metric_cuhk03=use_metric_cuhk03
File "d:\pycharm\ha-cnn_reid\deep-person-reid\torchreid\metrics\rank.py", line 201, in evaluate_rank
use_metric_cuhk03
File "torchreid\metrics\rank_cylib\rank_cy.pyx", line 25, in torchreid.metrics.rank_cylib.rank_cy.evaluate_cy
File "torchreid\metrics\rank_cylib\rank_cy.pyx", line 33, in torchreid.metrics.rank_cylib.rank_cy.evaluate_cy
ValueError: Buffer dtype mismatch, expected 'long' but got 'long long'
So, how should I change this file to fit the correct datatype?
cpdef eval_market1501_cy(float[:,:] distmat, long[:] q_pids, long[:]g_pids,
long[:]q_camids, long[:]g_camids, long max_rank):
--->
cpdef eval_market1501_cy(float[:,:] distmat, longlong[:] q_pids, longlong[:]g_pids,
longlong[:]q_camids, longlong[:]g_camids, longlong max_rank):
It's this right? | closed | 2021-01-20T07:11:56Z | 2021-01-20T13:59:55Z | https://github.com/KaiyangZhou/deep-person-reid/issues/407 | [] | tomFoxxxx | 1 |
psf/requests | python | 5,943 | urllib3 v1.26.7 break Session Object with Proxy | The urllib3 library has released `1.26.7` and the changes have caused some additional issues with proxies and SSL certificates.
#### Version with issue
requests (latest)
urllib3 - 1.26.7 (latest)
#### Problem
Any port forwarding proxy breaks with code change.
#### Code to Reproduce
To reproduce:
1. Open fiddler
2. Run the following
```python
import requests
s = requests.Session()
s.verify=False
s.trust_env=True
try:
s.get("https://httpbin.org/get?a=1&b=2").json()
except Exception as e:
print(e)
raise e
```
You get the following error:
```
requests.exceptions.SSLError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /get?a=1&b=2 (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)')))
```
So the type-name mismatch that happened with `1.26.6` is gone; now this error is being bubbled up instead.
Now let's specify the proxy for the session:
```
import urllib.request
s.proxies = urllib.request.getproxies()
s.get("https://httpbin.org/get?a=1&b=2").json()
```
This results in the following:
```
requests.exceptions.SSLError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /get?a=1&b=2 (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)')))
```
Same issue.
Outside of a session:
```
proxy = urllib.request.getproxies()
requests.get("https://httpbin.org/get?a=1&b=2", proxies=proxy, verify=False).json()
```
Result:
```
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\envs\pro-dev\lib\site-packages\requests\api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "C:\envs\pro-dev\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\envs\pro-dev\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\envs\pro-dev\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\envs\pro-dev\lib\site-packages\requests\adapters.py", line 573, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /get?a=1&b=2 (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)')))
```
#### Solution:
TBD
| closed | 2021-09-23T10:25:37Z | 2021-12-23T14:00:19Z | https://github.com/psf/requests/issues/5943 | [] | achapkowski | 20 |
plotly/dash-bio | dash | 285 | In Circos demo app, under "Table", user can pick colours... in hexadecimal notation. | So, under "Table", it doesn't look user-friendly to me to offer color options in hexadecimal notation...

When we hover the graph, color is given in RGB notation, so it's not even consistent.
_Originally posted by @mkcor in https://github.com/plotly/dash-bio/pull/279#issuecomment-478039011_ | closed | 2019-03-30T19:10:44Z | 2021-05-04T20:27:48Z | https://github.com/plotly/dash-bio/issues/285 | [
"nice-to-have"
] | mkcor | 0 |
thtrieu/darkflow | tensorflow | 1,121 | what are the parameters in the [region] section? | anchors = 0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828
I am guessing anchors are the aspect ratios of the anchor boxes?
bias_match=1
?
classes=1
number of classes to look for
coords=4
?
num=5
?
softmax=1
?
jitter=.2
?
rescore=0
?
object_scale=5
?
noobject_scale=1
?
class_scale=1
?
coord_scale=1
?
absolute=1
?
thresh = .6
default threshold when inferencing?
random=1
?
| open | 2020-01-09T21:51:47Z | 2021-08-06T10:26:18Z | https://github.com/thtrieu/darkflow/issues/1121 | [] | skywo1f | 1 |
sebp/scikit-survival | scikit-learn | 443 | SurvivalTree is handling sample_weight incorrectly | <!--
Before submitting a bug, please make sure the issue hasn't been already
addressed by searching through the past issues.
-->
**Describe the bug**
Weighting samples by passing `sample_weight` to `SurvivalTree.fit()` is not considered.
This is essential for `RandomSurvivalForest` to work correctly, because bootstrap samples for each tree in the ensemble are created by passing `sample_weight` to `SurvivalTree.fit()`. For instance, `sample_weight=[1, 0, 2, 1]` would represent a bootstrap dataset where the first and last sample appear once, the second sample is not part of the sample, and the third sample appears twice.
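In other words (a minimal illustration of the weighting semantics described above, using plain lists rather than the estimator API):

```python
samples = ["s1", "s2", "s3", "s4"]
weights = [1, 0, 2, 1]

# A weight of k is equivalent to the sample appearing k times in the
# bootstrap dataset, so these two representations must train identical trees.
bootstrap = [s for s, w in zip(samples, weights) for _ in range(w)]
```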
**Code Sample to Reproduce the Bug**
<Details>
```python
from sksurv.datasets import load_whas500
from sksurv.preprocessing import OneHotEncoder
from sksurv.tree import SurvivalTree
X, y = load_whas500()
Xt = OneHotEncoder().fit_transform(X)
n_samples = Xt.shape[0]
weights = np.ones(n_samples, dtype=int)
weights[:11] = np.arange(11, dtype=int)
y_array = np.empty((Xt.shape[0], 2), dtype=float)
y_array[:, 0] = y["lenfol"]
y_array[:, 1] = y["fstat"].astype(float)
X_repeat = np.repeat(Xt, weights, axis=0)
y_repeat = np.repeat(y_array, weights, axis=0)
# fit on the full data to create unique_times_ and is_event_time_
t0 = SurvivalTree(random_state=2).fit(Xt, y)
# fit on dataset where samples have been copiied
t1 = SurvivalTree(random_state=2)._fit(
X_repeat, (y_repeat, t0.unique_times_, t0.is_event_time_),
check_input=False
)
# fit on dataset where sample_weight is used
t2 = SurvivalTree(random_state=2)._fit(
Xt.values, (y_array, t0.unique_times_, t0.is_event_time_),
sample_weight=weights.astype(float),
check_input=False
)
value_1 = t1.tree_.value
value_2 = t2.tree_.value
# check that both trees are identical
assert np.allclose(value_1, value_2)
```
The example fails, whereas using a DecisionTreeClassifier for illustration does result in identical trees.
```python
from sklearn.tree import DecisionTreeClassifier
t1 = DecisionTreeClassifier(random_state=2).fit(
X_repeat, y_repeat[:, 1]
)
t2 = DecisionTreeClassifier(random_state=2).fit(
Xt, y["fstat"], sample_weight=weights
)
value_1 = t1.tree_.value
value_2 = t2.tree_.value
print(np.allclose(value_1, value_2))
```
</Details>
**Expected Results**
The trees `t1` and `t2` should be identical.
**Actual Results**
```
ValueError: operands could not be broadcast together with shapes (195,395,2) (187,395,2)
```
**Versions**
Latest version from the master branch. | closed | 2024-03-28T15:35:02Z | 2024-06-29T15:49:24Z | https://github.com/sebp/scikit-survival/issues/443 | [
"bug"
] | sebp | 0 |
tiangolo/full-stack | sqlalchemy | 19 | How to troubleshoot a service when it fails to run [workflow for docker swarm deploy] | Hi!
I'm trying to deploy a cluster using these instructions .
https://github.com/tiangolo/full-stack/blob/master/docker-swarm-cluster-deploy.md
It seem like the DB is not coming online and causing the backend to also fail after many attempts from tenacity.
$ docker service logs db tells me "no such task or service: db", but I can see it is listed with
$ docker service ls, with replicas 0/1
I don't have previous experience with swarm, is there a way to access failed service logs?
I tried following advice like print journalctl, but that doesnt seem very useful in this case. Should I include something extra to output logs?
| closed | 2018-12-27T13:04:35Z | 2019-01-03T11:35:26Z | https://github.com/tiangolo/full-stack/issues/19 | [] | AYEG | 3 |
autogluon/autogluon | data-science | 4,196 | [BUG] GPU training not working with non-default tabular presets | **Bug Report Checklist**
- [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
Fitting the dataset using `num_gpus=1` only works when `presets='medium_quality'`.
This might have been introduced by the recent `ray` version bump #3774
In the ray dashboard I see these events, and the tasks are stuck:
`Warning: The following resource request cannot be scheduled right now: {'CPU': 12.0, 'GPU': 0.5}. This is likely due to all cluster resources being claimed by actors. Consider creating fewer actors or adding more nodes to this Ray cluster.`
I only have 1 gpu, setting `num_gpus` to some value <1 seems to at least let the ray tasks run, albeit this doesn’t really work since `num_gpus>=1` is expected downstream
From what I can tell this issue is present on google colab too, so it should be easily reproducible.
**Expected behavior**
GPU training should work even for more complex tabular model presets
**To Reproduce**
Run [the tabular quickstart example notebook](https://github.com/autogluon/autogluon/blob/master/docs/tutorials/tabular/tabular-quick-start.ipynb) with `predictor = TabularPredictor(label=label).fit(train_data, num_gpus=1, presets='good_quality')` or better presets
**Installed Versions**
`autogluon==1.1.0` | closed | 2024-05-14T09:41:35Z | 2024-05-21T20:57:40Z | https://github.com/autogluon/autogluon/issues/4196 | [
"bug",
"module: tabular",
"Needs Triage",
"priority: 1"
] | Newtoniano | 2 |
albumentations-team/albumentations | machine-learning | 2,435 | [Feature request] Add apply_to_images to Spatter | open | 2025-03-11T01:17:46Z | 2025-03-11T01:17:59Z | https://github.com/albumentations-team/albumentations/issues/2435 | [
"enhancement",
"good first issue"
] | ternaus | 0 | |
scrapy/scrapy | python | 6,691 | DropItem Noise | The `DropItem` exception logs the entire item in the logs. Is there a way to exclude certain large fields from item just for logs or suppress item logging altogether when `DropItem` is raised? The current logging adds too much noise.
```python
def process_item(self, item: Dict, spider: Spider) -> Dict:
_id = item.get("_id", None)
if _id:
if self.db_exists(id=_id):
raise DropItem(f"Already exists [{_id}]")
```
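One avenue worth exploring (a hedged sketch: the class below is illustrative and deliberately written without importing Scrapy so it stands alone; in a real project you would subclass `scrapy.logformatter.LogFormatter` and point the `LOG_FORMATTER` setting at it) is overriding the `dropped` log-format entry so that only the reason is logged, never the item:

```python
import logging

class QuietDropFormatter:
    """Shapes the log entry emitted when DropItem is raised.

    Mirrors the dict a log formatter's dropped() hook is expected to
    return; the item itself is deliberately left out of the message.
    """
    def dropped(self, item, exception, response, spider):
        return {
            "level": logging.INFO,
            "msg": "Dropped item: %(exception)s",
            "args": {"exception": exception},
        }

big_item = {"_id": "abc", "huge_field": "x" * 10_000}
entry = QuietDropFormatter().dropped(big_item, "Already exists [abc]", None, None)
rendered = entry["msg"] % entry["args"]
```

The rendered message then carries only the drop reason, no matter how large the item's fields are.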
 | closed | 2025-02-23T09:51:04Z | 2025-02-23T10:00:23Z | https://github.com/scrapy/scrapy/issues/6691 | [] | Ehsan-U | 1 |
graphql-python/gql | graphql | 325 | Parsing incorrectly wipes out mapping | **Describe the bug**
The parsing logic completely wipes out the parent value if one of the fields is null.
e.g. the value is a list with a type in it with the following definition (removed all the other fields for simplicity)
```
node {
...other fields
from {
address
}
to {
address
}
}
}
```
if I get the following result:
```
[
{'node': {'from': {'address': '0x45b9ad45995577fe'}, 'to': {'address': '0x6394e988297f5ed2'}}},
{'node': {'from': None, 'to': {'address': '0x6394e988297f5ed2'}}},
]
```
the parsed result will end up being:
```
[
{'node': {'from': {'address': '0x45b9ad45995577fe'}, 'to': {'address': '0x6394e988297f5ed2'}}},
None
]
```
**Expected behavior**
Result should be:
```
[
{'node': {'from': {'address': '0x45b9ad45995577fe'}, 'to': {'address': '0x6394e988297f5ed2'}}},
{'node': {'from': None, 'to': {'address': '0x6394e988297f5ed2'}}},
]
```
**System info (please complete the following information):**
- OS: Linux
- Python version: 3.10
- gql version: 3.2.0
- graphql-core version: 3.2.1
| closed | 2022-05-03T13:31:07Z | 2022-05-20T08:36:07Z | https://github.com/graphql-python/gql/issues/325 | [
"type: bug"
] | pvanderlinden | 12 |
coqui-ai/TTS | python | 3,529 | [Bug] GlowTTS / Tacotron2 Training stuck and fail | ### Describe the bug
Hi, I'm trying to train GlowTTS and Tacotron 2 with a dataset in the same format as LJSpeech.
I used the same dataset to train XTTS v2 and it worked, but when I try to train GlowTTS or Tacotron 2 it looks like training gets stuck and then fails with an exception.
This is the dataset: https://huggingface.co/datasets/xjabr/british_old_lady
### To Reproduce
This is the code:
```python
import os
# Trainer: Where the ✨️ happens.
# TrainingArgs: Defines the set of arguments of the Trainer.
from trainer import Trainer, TrainerArgs
# GlowTTSConfig: all model related values for training, validating and testing.
from TTS.tts.configs.glow_tts_config import GlowTTSConfig
# BaseDatasetConfig: defines name, formatter and path of the dataset.
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor
# we use the same path as this script as our training folder.
output_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "run", "glow_tts")
# DEFINE DATASET CONFIG
# Set LJSpeech as our target dataset and define its path.
# You can also use a simple Dict to define the dataset and pass it to your custom formatter.
dataset_config = BaseDatasetConfig(
formatter="ljspeech",
dataset_name="old-lady",
path="data/old-lady",
meta_file_train="metadata.csv",
)
# INITIALIZE THE TRAINING CONFIGURATION
# Configure the model. Every config class inherits the BaseTTSConfig.
config = GlowTTSConfig(
batch_size=16,
eval_batch_size=4,
num_loader_workers=1,
num_eval_loader_workers=1,
run_eval=True,
test_delay_epochs=-1,
epochs=10,
text_cleaner="phoneme_cleaners",
use_phonemes=False,
phoneme_language="en-us",
phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
print_step=25,
print_eval=False,
mixed_precision=True,
output_path=output_path,
datasets=[dataset_config],
)
# INITIALIZE THE AUDIO PROCESSOR
# Audio processor is used for feature extraction and audio I/O.
# It mainly serves to the dataloader and the training loggers.
ap = AudioProcessor.init_from_config(config)
# INITIALIZE THE TOKENIZER
# Tokenizer is used to convert text to sequences of token IDs.
# If characters are not defined in the config, default characters are passed to the config
tokenizer, config = TTSTokenizer.init_from_config(config)
# LOAD DATA SAMPLES
# Each sample is a list of ```[text, audio_file_path, speaker_name]```
# You can define your custom sample loader returning the list of samples.
# Or define your custom formatter and pass it to the `load_tts_samples`.
# Check `TTS.tts.datasets.load_tts_samples` for more details.
train_samples, eval_samples = load_tts_samples(
dataset_config,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size,
)
# INITIALIZE THE MODEL
# Models take a config object and a speaker manager as input
# Config defines the details of the model like the number of layers, the size of the embedding, etc.
# Speaker manager is used by multi-speaker models.
model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
# INITIALIZE THE TRAINER
# Trainer provides a generic API to train all the 🐸TTS models with all its perks like mixed-precision training,
# distributed training, etc.
trainer = Trainer(
TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
# AND... 3,2,1... 🚀
trainer.fit()
```
### Expected behavior
_No response_
### Logs
```shell
root@3726f96612ad:/workspace# python train_glowtts.py
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:45
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
| > Found 227 files in /workspace/data/old-lady
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
> Training Environment:
| > Backend: Torch
| > Mixed precision: True
| > Precision: fp16
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 64
| > Num. of Torch Threads: 32
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=/workspace/run/glow_tts/run-January-19-2024_10+00AM-0000000
> Model has 28597969 parameters
> EPOCH: 0/10
--> /workspace/run/glow_tts/run-January-19-2024_10+00AM-0000000
> DataLoader initialization
| > Tokenizer:
| > add_blank: False
| > use_eos_bos: False
| > use_phonemes: False
| > Number of instances : 225
| > Preprocessing samples
| > Max text length: 102
| > Min text length: 15
| > Avg text length: 56.111111111111114
|
| > Max audio length: 496125
| > Min audio length: 66150
| > Avg audio length: 171205.46666666667
| > Num. instances discarded samples: 0
| > Batch group size: 0.
> TRAINING (2024-01-19 10:00:31)
/usr/local/lib/python3.10/dist-packages/librosa/core/spectrum.py:256: UserWarning: n_fft=1024 is too large for input signal of length=2
warnings.warn(
! Run is removed from /workspace/run/glow_tts/run-January-19-2024_10+00AM-0000000
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/trainer/trainer.py", line 1833, in fit
self._fit()
File "/usr/local/lib/python3.10/dist-packages/trainer/trainer.py", line 1785, in _fit
self.train_epoch()
File "/usr/local/lib/python3.10/dist-packages/trainer/trainer.py", line 1503, in train_epoch
for cur_step, batch in enumerate(self.train_loader):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/usr/local/lib/python3.10/dist-packages/torch/_utils.py", line 694, in reraise
raise exception
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.10/dist-packages/TTS/tts/datasets/dataset.py", line 475, in collate_fn
mel = prepare_tensor(mel, self.outputs_per_step)
File "/usr/local/lib/python3.10/dist-packages/TTS/tts/utils/data.py", line 29, in prepare_tensor
return np.stack([_pad_tensor(x, pad_len) for x in inputs])
File "/usr/local/lib/python3.10/dist-packages/TTS/tts/utils/data.py", line 29, in <listcomp>
return np.stack([_pad_tensor(x, pad_len) for x in inputs])
File "/usr/local/lib/python3.10/dist-packages/TTS/tts/utils/data.py", line 20, in _pad_tensor
assert x.ndim == 2
AssertionError
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 4090"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2+cu121",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
""
],
"processor": "x86_64",
"python": "3.10.12",
"version": "#187-Ubuntu SMP Thu Nov 23 14:52:28 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2024-01-19T10:12:04Z | 2024-03-02T00:44:40Z | https://github.com/coqui-ai/TTS/issues/3529 | [
"bug",
"wontfix"
] | gabrielelanzafamee | 1 |
geopandas/geopandas | pandas | 2,445 | ENH: Support MySQL spatial | Hi, currently there are `to_postgis` and other functions for Postgres; I think it would be great to have this for MySQL too, since it also has spatial support.
The current `to_postgis` does not work with that DB, which is expected of course, but I tested it anyway.
Thx.
| closed | 2022-06-02T19:22:39Z | 2022-06-17T23:53:40Z | https://github.com/geopandas/geopandas/issues/2445 | [
"enhancement"
] | latot | 2 |
pytest-dev/pytest-mock | pytest | 139 | Python 3.8 failures | 6 tests fail on Python 3.8.0a3:
```
$ tox -e py38
GLOB sdist-make: .../pytest-mock/setup.py
py38 inst-nodeps: .../pytest-mock/.tox/dist/pytest-mock-1.10.3.dev1+g540c17b.zip
py38 installed: atomicwrites==1.3.0,attrs==19.1.0,coverage==4.5.3,more-itertools==7.0.0,pluggy==0.9.0,py==1.8.0,pytest==4.3.1,pytest-mock==1.10.3.dev1+g540c17b,six==1.12.0
py38 run-test-pre: PYTHONHASHSEED='858292126'
py38 runtests: commands[0] | coverage run --append --source=pytest_mock.py -m pytest test_pytest_mock.py
============================= test session starts ==============================
platform linux -- Python 3.8.0a3, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
cachedir: .tox/py38/.pytest_cache
rootdir: .../pytest-mock, inifile: tox.ini
plugins: mock-1.10.3.dev1+g540c17b
collected 49 items
test_pytest_mock.py ...................FFFFF......................F.. [100%]
=================================== FAILURES ===================================
_______________ TestMockerStub.test_failure_message_with_no_name _______________
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f14e20>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f14100>
def test_failure_message_with_no_name(self, mocker):
> self.__test_failure_message(mocker)
test_pytest_mock.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f14e20>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f14100>, kwargs = {}
expected_name = 'mock', expected_message = 'Expected call: mock()\nNot called'
stub = <MagicMock spec='function' id='140597129200016'>
exc_info = <ExceptionInfo AssertionError tblen=3>
@py_assert2 = AssertionError('expected call not found.\nExpected: mock()\nActual: not called.')
@py_assert4 = 'expected call not found.\nExpected: mock()\nActual: not called.'
@py_assert6 = False
def __test_failure_message(self, mocker, **kwargs):
expected_name = kwargs.get("name") or "mock"
expected_message = "Expected call: {0}()\nNot called".format(expected_name)
stub = mocker.stub(**kwargs)
with pytest.raises(AssertionError) as exc_info:
stub.assert_called_with()
> assert str(exc_info.value) == expected_message
E AssertionError: assert 'expected cal...: not called.' == 'Expected call...)\nNot called'
E - expected call not found.
E - Expected: mock()
E + Expected call: mock()
E ? +++++
E - Actual: not called.
E + Not called
test_pytest_mock.py:201: AssertionError
_____________ TestMockerStub.test_failure_message_with_name[None] ______________
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f101f0>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f100d0>, name = None
@pytest.mark.parametrize("name", (None, "", "f", "The Castle of aaarrrrggh"))
def test_failure_message_with_name(self, mocker, name):
> self.__test_failure_message(mocker, name=name)
test_pytest_mock.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f101f0>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f100d0>
kwargs = {'name': None}, expected_name = 'mock'
expected_message = 'Expected call: mock()\nNot called'
stub = <MagicMock spec='function' id='140597129184016'>
exc_info = <ExceptionInfo AssertionError tblen=3>
@py_assert2 = AssertionError('expected call not found.\nExpected: mock()\nActual: not called.')
@py_assert4 = 'expected call not found.\nExpected: mock()\nActual: not called.'
@py_assert6 = False
def __test_failure_message(self, mocker, **kwargs):
expected_name = kwargs.get("name") or "mock"
expected_message = "Expected call: {0}()\nNot called".format(expected_name)
stub = mocker.stub(**kwargs)
with pytest.raises(AssertionError) as exc_info:
stub.assert_called_with()
> assert str(exc_info.value) == expected_message
E AssertionError: assert 'expected cal...: not called.' == 'Expected call...)\nNot called'
E - expected call not found.
E - Expected: mock()
E + Expected call: mock()
E ? +++++
E - Actual: not called.
E + Not called
test_pytest_mock.py:201: AssertionError
_______________ TestMockerStub.test_failure_message_with_name[] ________________
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f6aa60>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f6adf0>, name = ''
@pytest.mark.parametrize("name", (None, "", "f", "The Castle of aaarrrrggh"))
def test_failure_message_with_name(self, mocker, name):
> self.__test_failure_message(mocker, name=name)
test_pytest_mock.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f6aa60>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f6adf0>
kwargs = {'name': ''}, expected_name = 'mock'
expected_message = 'Expected call: mock()\nNot called'
stub = <MagicMock spec='function' id='140597129555248'>
exc_info = <ExceptionInfo AssertionError tblen=3>
@py_assert2 = AssertionError('expected call not found.\nExpected: mock()\nActual: not called.')
@py_assert4 = 'expected call not found.\nExpected: mock()\nActual: not called.'
@py_assert6 = False
def __test_failure_message(self, mocker, **kwargs):
expected_name = kwargs.get("name") or "mock"
expected_message = "Expected call: {0}()\nNot called".format(expected_name)
stub = mocker.stub(**kwargs)
with pytest.raises(AssertionError) as exc_info:
stub.assert_called_with()
> assert str(exc_info.value) == expected_message
E AssertionError: assert 'expected cal...: not called.' == 'Expected call...)\nNot called'
E - expected call not found.
E - Expected: mock()
E + Expected call: mock()
E ? +++++
E - Actual: not called.
E + Not called
test_pytest_mock.py:201: AssertionError
_______________ TestMockerStub.test_failure_message_with_name[f] _______________
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f05ee0>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f05130>, name = 'f'
@pytest.mark.parametrize("name", (None, "", "f", "The Castle of aaarrrrggh"))
def test_failure_message_with_name(self, mocker, name):
> self.__test_failure_message(mocker, name=name)
test_pytest_mock.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f05ee0>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f05130>
kwargs = {'name': 'f'}, expected_name = 'f'
expected_message = 'Expected call: f()\nNot called'
stub = <MagicMock name='f' spec='function' id='140597129141408'>
exc_info = <ExceptionInfo AssertionError tblen=3>
@py_assert2 = AssertionError('expected call not found.\nExpected: f()\nActual: not called.')
@py_assert4 = 'expected call not found.\nExpected: f()\nActual: not called.'
@py_assert6 = False
def __test_failure_message(self, mocker, **kwargs):
expected_name = kwargs.get("name") or "mock"
expected_message = "Expected call: {0}()\nNot called".format(expected_name)
stub = mocker.stub(**kwargs)
with pytest.raises(AssertionError) as exc_info:
stub.assert_called_with()
> assert str(exc_info.value) == expected_message
E AssertionError: assert 'expected cal...: not called.' == 'Expected call...)\nNot called'
E - expected call not found.
E - Expected: f()
E + Expected call: f()
E ? +++++
E - Actual: not called.
E + Not called
test_pytest_mock.py:201: AssertionError
___ TestMockerStub.test_failure_message_with_name[The Castle of aaarrrrggh] ____
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f73fa0>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f73b20>
name = 'The Castle of aaarrrrggh'
@pytest.mark.parametrize("name", (None, "", "f", "The Castle of aaarrrrggh"))
def test_failure_message_with_name(self, mocker, name):
> self.__test_failure_message(mocker, name=name)
test_pytest_mock.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_pytest_mock.TestMockerStub object at 0x7fdf51f73fa0>
mocker = <pytest_mock.MockFixture object at 0x7fdf51f73b20>
kwargs = {'name': 'The Castle of aaarrrrggh'}
expected_name = 'The Castle of aaarrrrggh'
expected_message = 'Expected call: The Castle of aaarrrrggh()\nNot called'
stub = <MagicMock name='The Castle of aaarrrrggh' spec='function' id='140597129591536'>
exc_info = <ExceptionInfo AssertionError tblen=3>
@py_assert2 = AssertionError('expected call not found.\nExpected: The Castle of aaarrrrggh()\nActual: not called.')
@py_assert4 = 'expected call not found.\nExpected: The Castle of aaarrrrggh()\nActual: not called.'
@py_assert6 = False
def __test_failure_message(self, mocker, **kwargs):
expected_name = kwargs.get("name") or "mock"
expected_message = "Expected call: {0}()\nNot called".format(expected_name)
stub = mocker.stub(**kwargs)
with pytest.raises(AssertionError) as exc_info:
stub.assert_called_with()
> assert str(exc_info.value) == expected_message
E AssertionError: assert 'expected cal...: not called.' == 'Expected call...)\nNot called'
E - expected call not found.
E - Expected: The Castle of aaarrrrggh()
E + Expected call: The Castle of aaarrrrggh()
E ? +++++
E - Actual: not called.
E + Not called
test_pytest_mock.py:201: AssertionError
_________________________ test_detailed_introspection __________________________
testdir = <Testdir local('/tmp/pytest-of-churchyard/pytest-2/test_detailed_introspection0')>
@pytest.mark.usefixtures("needs_assert_rewrite")
def test_detailed_introspection(testdir):
"""Check that the "mock_use_standalone" is being used.
"""
testdir.makepyfile(
"""
def test(mocker):
m = mocker.Mock()
m('fo')
m.assert_called_once_with('', bar=4)
"""
)
result = testdir.runpytest("-s")
> result.stdout.fnmatch_lines(
[
"*AssertionError: Expected call: mock('', bar=4)*",
"*Actual call: mock('fo')*",
"*pytest introspection follows:*",
"*Args:",
"*assert ('fo',) == ('',)",
"*At index 0 diff: 'fo' != ''*",
"*Use -v to get the full diff*",
"*Kwargs:*",
"*assert {} == {'bar': 4}*",
"*Right contains more items:*",
"*{'bar': 4}*",
"*Use -v to get the full diff*",
]
)
E Failed: nomatch: "*AssertionError: Expected call: mock('', bar=4)*"
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.8.0a3, pytest-4.3.1, py-1.8.0, pluggy-0.9.0'
E and: 'rootdir: /tmp/pytest-of-churchyard/pytest-2/test_detailed_introspection0, inifile:'
E and: 'plugins: mock-1.10.3.dev1+g540c17b'
E and: 'collected 1 item'
E and: ''
E and: 'test_detailed_introspection.py F'
E and: ''
E and: '=================================== FAILURES ==================================='
E and: '_____________________________________ test _____________________________________'
E and: ''
E and: 'mocker = <pytest_mock.MockFixture object at 0x7fdf51e08160>'
E and: ''
E and: ' def test(mocker):'
E and: ' m = mocker.Mock()'
E and: " m('fo')"
E and: "> m.assert_called_once_with('', bar=4)"
E and: 'E AssertionError: expected call not found.'
E and: "E Expected: mock('', bar=4)"
E and: "E Actual: mock('fo')"
E and: 'E '
E and: 'E pytest introspection follows:'
E and: 'E '
E and: 'E Args:'
E and: "E assert ('fo',) == ('',)"
E and: "E At index 0 diff: 'fo' != ''"
E and: 'E Use -v to get the full diff'
E and: 'E Kwargs:'
E and: "E assert {} == {'bar': 4}"
E and: 'E Right contains more items:'
E and: "E {'bar': 4}"
E and: 'E Use -v to get the full diff'
E and: ''
E and: 'test_detailed_introspection.py:4: AssertionError'
E and: '=========================== 1 failed in 0.01 seconds ==========================='
E and: ''
E remains unmatched: "*AssertionError: Expected call: mock('', bar=4)*"
.../pytest-mock/test_pytest_mock.py:588: Failed
----------------------------- Captured stdout call -----------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.0a3, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /tmp/pytest-of-churchyard/pytest-2/test_detailed_introspection0, inifile:
plugins: mock-1.10.3.dev1+g540c17b
collected 1 item
test_detailed_introspection.py F
=================================== FAILURES ===================================
_____________________________________ test _____________________________________
mocker = <pytest_mock.MockFixture object at 0x7fdf51e08160>
def test(mocker):
m = mocker.Mock()
m('fo')
> m.assert_called_once_with('', bar=4)
E AssertionError: expected call not found.
E Expected: mock('', bar=4)
E Actual: mock('fo')
E
E pytest introspection follows:
E
E Args:
E assert ('fo',) == ('',)
E At index 0 diff: 'fo' != ''
E Use -v to get the full diff
E Kwargs:
E assert {} == {'bar': 4}
E Right contains more items:
E {'bar': 4}
E Use -v to get the full diff
test_detailed_introspection.py:4: AssertionError
=========================== 1 failed in 0.01 seconds ===========================
=============================== warnings summary ===============================
test_pytest_mock.py::test_deprecated_mock
.../pytest-mock/pytest_mock.py:179: DeprecationWarning: "mock" fixture has been deprecated, use "mocker" instead
warnings.warn(
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================== short test summary info ============================
FAILED test_pytest_mock.py::TestMockerStub::test_failure_message_with_no_name
FAILED test_pytest_mock.py::TestMockerStub::test_failure_message_with_name[None]
FAILED test_pytest_mock.py::TestMockerStub::test_failure_message_with_name[]
FAILED test_pytest_mock.py::TestMockerStub::test_failure_message_with_name[f]
FAILED test_pytest_mock.py::TestMockerStub::test_failure_message_with_name[The Castle of aaarrrrggh]
FAILED test_pytest_mock.py::test_detailed_introspection
=============== 6 failed, 43 passed, 1 warnings in 1.41 seconds ================
Coverage.py warning: Module pytest_mock.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
ERROR: InvocationError for command '.../pytest-mock/.tox/py38/bin/coverage run --append --source=pytest_mock.py -m pytest test_pytest_mock.py' (exited with code 1)
___________________________________ summary ____________________________________
ERROR: py38: commands failed
``` | closed | 2019-03-29T18:31:25Z | 2019-03-30T13:36:45Z | https://github.com/pytest-dev/pytest-mock/issues/139 | [] | hroncok | 1 |
pytest-dev/pytest-qt | pytest | 268 | Have a complete example to quick started? | can push doc's [Tutorial](https://pytest-qt.readthedocs.io/en/latest/tutorial.html) to github repo?
I think it's too fragmented to affect my reading.
It's best to have a working example. | closed | 2019-07-25T06:35:58Z | 2019-07-25T09:22:53Z | https://github.com/pytest-dev/pytest-qt/issues/268 | [] | 625781186 | 2 |
httpie/cli | api | 1,495 | Display used server IP in verbose mode | ## Checklist
- [X] I've searched for similar feature requests.
---
## Enhancement request
When running `curl` in verbose mode, it will print every IP it tries to connect to, until a connection is established.
```shell
~ ➜ curl -v www.google.com
* Trying [2001:db8::3:8bd0]:80...
* Trying [2001:db8::3:8bd1]:80...
* Connected to www.google.com (2001:db8::3:8bd1) port 80 (#0)
> GET / HTTP/1.1
```
The above example request tells the user that `curl` was trying to connect to both `2001:db8::3:8bd0` and `2001:db8::3:8bd1`, while ultimately a connection was established to `2001:db8::3:8bd1`.
While ideally `httpie` would implement the same output as `curl`, it should at least be able to print the IP it connected to.
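For context, the list of candidate addresses comes straight from DNS resolution; a small stdlib sketch of why there can be several addresses to try (illustrative only, using `localhost` as the host):

```python
import socket

# A hostname can resolve to several IPv4/IPv6 addresses; a client tries
# them in order until one connects -- that's what curl's "* Trying ..."
# lines show before the final "* Connected to ..." line.
addrs = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
candidates = [sockaddr[0] for *_rest, sockaddr in addrs]
print(candidates)  # e.g. ['127.0.0.1'] and/or ['::1']
```

Printing which of these candidates actually won the connection is exactly the information this request asks for.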
---
## Problem it solves
It is very common to have multiple IPv4 and IPv6 addresses within DNS, and exactly which one was used is very relevant when debugging, or in cases where different IPs may return different content.
"enhancement",
"new"
] | SRv6d | 5 |
wkentaro/labelme | computer-vision | 758 | [BUG] CI crashing for Ubuntu,3.7,pyqt5 -- Pyinstaller cannot find module matplotlib | CI actions in the last 3 days have continuously crashed for run Ubuntu-latest,3.7,pyqt5. Pyinstaller cannot seem to locate matplotlib. Possibly matplotlib needs to be installed with an older version? Maybe 3.3.0 or earlier? Can be seen in CI https://github.com/wkentaro/labelme/actions/runs/208870714 and https://github.com/wkentaro/labelme/actions/runs/214657163
Traceback:

```
  File "/usr/share/miniconda/envs/test/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
    exec(bytecode, module.__dict__)
  File "__init__.py", line 26, in <module>
  File "/usr/share/miniconda/envs/test/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
    exec(bytecode, module.__dict__)
  File "testing.py", line 4, in <module>
  File "/usr/share/miniconda/envs/test/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
    exec(bytecode, module.__dict__)
  File "imgviz/__init__.py", line 18, in <module>
  File "/usr/share/miniconda/envs/test/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
    exec(bytecode, module.__dict__)
  File "imgviz/depth.py", line 1, in <module>
ModuleNotFoundError: No module named 'matplotlib'
##[error]Process completed with exit code 255.
```
| closed | 2020-08-19T22:03:16Z | 2021-09-23T15:30:11Z | https://github.com/wkentaro/labelme/issues/758 | [] | jbutle55 | 3 |
dynaconf/dynaconf | flask | 1,257 | Pylance Throws Errors Everytime We Use `settings.get("some_key", "some_default")`. | #### **Description**
Pylance is incorrectly reporting type errors when using **Dynaconf** for configuration management. The code executes correctly, but static analysis flags multiple issues, making development frustrating. Using `# type: ignore` works as a workaround, but it quickly becomes messy across multiple files.
#### **Errors Reported by Pylance**
- **`reportArgumentType`**:
- `"None" cannot be assigned to parameter of type "str"
- **`reportCallIssue`**:
- `Object of type "Box" is not callable`
- `Object of type "BoxList" is not callable`
- `Object of type "DynaBox" is not callable`
- `Object of type "object" is not callable`
- `Object of type "str" is not callable`
- `Object of type "list[list[list[Unknown] | DynaBox | str | Unknown] | DynaBox | str | Unknown]" is not callable`
- `Object of type "None" cannot be called`
#### **Expected Behavior**
Pylance should correctly infer the return type of `settings.get()` and recognize that Dynaconf’s internal objects (`Box`, `DynaBox`, etc.) are not callable unless explicitly defined as such.
#### **Temporary Workaround**
Using `# type: ignore` can suppress these errors, but this is not a clean solution as it quickly clutters the codebase.
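For illustration, one way to keep call sites clean without `# type: ignore` is an explicit `typing.cast`. The sketch below uses a plain dict as a stand-in for the settings object, not the real Dynaconf API:

```python
from typing import cast

# Plain-dict stand-in for a Dynaconf settings object (illustrative only).
settings = {"db_host": "localhost"}

# Pylance mis-infers the return type of settings.get(...) on the real
# object; casting once per call site documents the expected type instead
# of suppressing the checker with `# type: ignore`:
db_host = cast(str, settings.get("db_host", "127.0.0.1"))
```

This still adds noise per call, so a proper fix in the type stubs would be much better.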
#### **Additional Context**
This appears to be an issue with **Pylance’s static type inference** rather than a problem with Dynaconf itself, as there are no runtime issues. | open | 2025-02-17T17:50:04Z | 2025-03-07T17:57:55Z | https://github.com/dynaconf/dynaconf/issues/1257 | [
"help wanted",
"question"
] | DanyaalMajid | 2 |
tflearn/tflearn | tensorflow | 824 | How to clone DNN model object | Hi everyone!
Is there any way of cloning a TFLearn DNN model object? I'm trying to save its best state without using callbacks or saving it to a file, but Python's copy functions (shallow copy and deep copy) don't seem to solve the problem. Any ideas?
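For reference, a toy illustration of why the generic-copy route breaks down. This is a plain-Python stand-in with made-up names; the real DNN additionally wraps a TensorFlow session/graph handle that, as far as I can tell, `copy.deepcopy` cannot duplicate:

```python
import copy

class ToyModel:
    """Stand-in holding only plain Python state (illustrative)."""
    def __init__(self):
        self.weights = [0.1, 0.2]

m = ToyModel()
best = copy.deepcopy(m)  # fine for plain attributes...
m.weights[0] = 0.9       # keep training / mutating the live model
# ...but tflearn's DNN also holds native TF resources, which is where
# shallow/deep copy stops working for me.
```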
With the best regards
| closed | 2017-07-04T11:26:04Z | 2017-07-31T13:51:55Z | https://github.com/tflearn/tflearn/issues/824 | [] | BBarbosa | 2 |
deepset-ai/haystack | nlp | 8,514 | pEBR: A Probabilistic Approach to Embedding Based Retrieval | **Is your feature request related to a problem? Please describe.**
Improving the chunk retrieval.
**Additional context**
Came across an interesting LinkedIn post: https://www.linkedin.com/posts/zainhas_instead-of-always-retrieving-a-fixed-number-activity-7256902090449420289-K9qy
Mentioning the paper "pEBR: A Probabilistic Approach to Embedding Based Retrieval" https://arxiv.org/pdf/2410.19349
"Instead of always retrieving a fixed number of chunks, this new paper proposes retrieving a dynamic number of top_k chunks. Based on how well supported the query is under the cumulative distribution function of the datapoints increase top_k.
Higher data density near query -> retrieve more chunks
Lower data density near query -> retrieve less chunks"
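A toy illustration of the dynamic-cutoff idea (this is just a fixed similarity threshold, not the paper's probabilistic CDF formulation; the numbers are made up):

```python
# Keep every chunk whose similarity clears a threshold, so dense
# neighborhoods return more chunks and sparse ones return fewer,
# instead of always returning a fixed top_k.
def dynamic_top_k(similarities, threshold=0.7):
    ranked = sorted(similarities, reverse=True)
    return [s for s in ranked if s >= threshold]

dense = [0.95, 0.92, 0.90, 0.88, 0.40]   # many well-supported chunks
sparse = [0.75, 0.35, 0.20]              # few well-supported chunks
print(len(dynamic_top_k(dense)), len(dynamic_top_k(sparse)))
```

pEBR's contribution is making that cutoff query-dependent and probabilistic rather than a hand-tuned constant.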
<img width="590" alt="image" src="https://github.com/user-attachments/assets/523310d8-43b3-400b-9d1b-fbfbc9c30c22">
| open | 2024-11-01T12:37:41Z | 2025-03-14T16:49:53Z | https://github.com/deepset-ai/haystack/issues/8514 | [] | git-git-hurrah | 1 |
littlecodersh/ItChat | api | 231 | auto_login runs but no QR code pops up | auto_login runs but no QR code pops up | closed | 2017-02-16T07:53:51Z | 2017-02-17T02:02:30Z | https://github.com/littlecodersh/ItChat/issues/231 | [] | gccdChen | 4 |
ageitgey/face_recognition | machine-learning | 616 | The face_recognition.face_encoding function cause CPU 100% and system halted - Raspberry pi | * face_recognition version: 1.2.3
* Python version: 2.7
* Operating System: Raspbian Stretch
* Device: Raspberry Pi 3 Model B
### Description
I am trying to use the face_recognition model on a Raspberry Pi. I followed the instructions step by step, but I could not run "facerec_on_raspberry_pi.py". It seems that when I run that file, the CPU usage reaches 100%.
The only thing I did differently from the instructions is that I used Python 2 to compile dlib:
```
$ sudo python setup.py install --compiler-flags "-mfpu=neon"
```
I have to, since I need to deploy the function on Amazon Greengrass, which only supports Python 2.7 :(
I have been debugging this for a whole day but still couldn't fix it. Does anyone know the solution? Help!
| open | 2018-09-04T01:34:47Z | 2019-06-08T06:33:38Z | https://github.com/ageitgey/face_recognition/issues/616 | [] | xyG67 | 1 |
d2l-ai/d2l-en | computer-vision | 2,311 | 5x5 convolution padding should be 2 in nin.svg | According to the code, 5x5 convolution padding should be 2 in nin.svg.
https://github.com/d2l-ai/d2l-en/blob/1dce6bdd62cae2f8fc32815d6a33b7f0412fd68b/chapter_convolutional-modern/nin.md?plain=1#L134-L144
 | closed | 2022-09-20T12:10:35Z | 2022-09-21T22:46:52Z | https://github.com/d2l-ai/d2l-en/issues/2311 | [] | zhenjiaguo | 2 |
dmlc/gluon-cv | computer-vision | 1,314 | Inference and memory management | Hi, I have an issue with the file eval_ssd.py (but I think this problem is likely to happen in other eval_***.py files). I have successfully executed the demo files of this repository on an Ubuntu 18.04 VM.
In my project I have to evaluate the output of eval_ssd.py in a "for" loop (redirecting it to a txt file), so I have changed this file a bit to make it callable from my Python file. In particular:
- some default values (as the network, the batch size and so on to call this file without passing arguments)
- line 140
print('Throughput is %f img/sec.'% speed)
to
print('Throughput is %.10f img/sec.'% speed)
- finally, I put lines from 144 to 214 in a new function evaluate().
I don't think these changes matter here.
In my file fps_map.py (attached: [Archivio.zip](https://github.com/dmlc/gluon-cv/files/4667136/Archivio.zip)), apart from importing the needed modules (e.g. `import eval_ssd as ssd` and so on), there is a "for" loop which randomly samples the Pascal VOC 2007 test dataset:
```
test_size = 100
for i in range(1, 1000):
    test_data = np.genfromtxt("/home/fabio/Desktop/VOCtest_06-Nov-2007/VOCdevkit/VOC2007/ImageSets/Main/test.txt", dtype='str')
    print(test_data)
    sampled_test_data = np.random.choice(test_data, test_size, replace=True)
    print(sampled_test_data)
    np.savetxt("/home/fabio/.mxnet/datasets/voc/VOC2007/ImageSets/Main/test.txt", sampled_test_data, fmt="%s")
    ssd.evaluate()
```
which perfectly runs until about 100 iterations; the process is then killed.
So I ran the code while watching the System Monitor and noticed that memory usage skyrockets during execution; between one iteration and the next, the memory does not seem to be released, so it accumulates over time and the process is eventually killed. I also tried calling gc.collect() as the last instruction of the loop, but it doesn't seem to help.
I have also tried to increase system memory up to 4 GB but I cannot go any further.
In brief, in my opinion it seems that this function holds something in memory that accumulates over time, and actually I have no ideas on how to solve this problem.
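One workaround I am considering (my own sketch, not gluon-cv code; `run_isolated` is a name I made up): run each evaluation in a fresh child process, so that any memory the evaluation leaks is returned to the OS when the child exits.

```python
import multiprocessing as mp

def run_isolated(fn, *args):
    """Run fn(*args) in a child process and wait for it; any memory the
    evaluation retains is reclaimed by the OS when the child exits."""
    p = mp.Process(target=fn, args=args)
    p.start()
    p.join()
    if p.exitcode != 0:
        raise RuntimeError(f"evaluation exited with code {p.exitcode}")

# In fps_map.py the loop body would then call:
#     run_isolated(ssd.evaluate)
```

This sidesteps the question of what exactly is being retained, at the cost of reloading the model on every iteration.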
Thank you in advance for your suggestions. | closed | 2020-05-22T09:19:24Z | 2021-05-22T06:40:51Z | https://github.com/dmlc/gluon-cv/issues/1314 | [
"Stale"
] | fabiolb8 | 1 |
pytorch/pytorch | python | 149,258 | Auto-selective activation checkpointing is not optimal for speed (issue with min_cut_rematerialization_partition) | ### 🐛 Describe the bug
I tried the new API described in the [pytorch blog: selective activation checkpointing](https://pytorch.org/blog/activation-checkpointing-techniques/#compile-only-memory-budget-api-new).
I then found that selective activation checkpointing is not optimal for speed.
A minimal reproducer:
```python
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["TORCH_COMPILE_DEBUG"] = "1"

import torch
from torch import nn
import torch.nn.functional as F
import torch._functorch.config

torch._functorch.config.activation_memory_budget = 0.99


class Test1(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer0 = nn.Linear(100, 100, bias=False)
        self.layer1 = nn.Linear(100, 100, bias=False)
        self.norm = nn.RMSNorm(100)

    def forward(self, x):
        x = self.norm(x)
        return self.layer0(self.layer1(x))


class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.embs = nn.Embedding(1000, 100)
        self.layers = nn.ModuleList([Test1() for _ in range(32)])

    def forward(self, x):
        x = self.embs(x)
        for layer in self.layers:
            x = layer(x) + x
        return x.sum()


x = torch.randint(0, 1000, (20,), device="cuda")
model = Test().cuda().bfloat16()
compiled_model = torch.compile(model)
y = compiled_model(x)
y.backward()
```
In the `torch_compile_debug` backward folder, the `fx_graph_readable.py` file shows an unusual series of additions.
```python
class GraphModule(torch.nn.Module):
def forward(self, primals_2: "i64[20]", primals_3: "bf16[100]", primals_6: "bf16[100]", primals_9: "bf16[100]", primals_12: "bf16[100]", primals_15: "bf16[100]", primals_18: "bf16[100]", primals_21: "bf16[100]", primals_24: "bf16[100]", primals_27: "bf16[100]", primals_30: "bf16[100]", primals_33: "bf16[100]", primals_36: "bf16[100]", primals_39: "bf16[100]", primals_42: "bf16[100]", primals_45: "bf16[100]", primals_48: "bf16[100]", primals_51: "bf16[100]", primals_54: "bf16[100]", primals_57: "bf16[100]", primals_60: "bf16[100]", primals_63: "bf16[100]", primals_66: "bf16[100]", primals_69: "bf16[100]", primals_72: "bf16[100]", primals_75: "bf16[100]", primals_78: "bf16[100]", primals_81: "bf16[100]", primals_84: "bf16[100]", primals_87: "bf16[100]", primals_90: "bf16[100]", primals_93: "bf16[100]", primals_96: "bf16[100]", embedding: "bf16[20, 100]", rsqrt: "bf16[20, 1]", mm: "bf16[20, 100]", mm_1: "bf16[20, 100]", rsqrt_1: "bf16[20, 1]", mm_2: "bf16[20, 100]", mm_3: "bf16[20, 100]", rsqrt_2: "bf16[20, 1]", mm_4: "bf16[20, 100]", mm_5: "bf16[20, 100]", rsqrt_3: "bf16[20, 1]", mm_6: "bf16[20, 100]", mm_7: "bf16[20, 100]", rsqrt_4: "bf16[20, 1]", mm_8: "bf16[20, 100]", mm_9: "bf16[20, 100]", rsqrt_5: "bf16[20, 1]", mm_10: "bf16[20, 100]", mm_11: "bf16[20, 100]", rsqrt_6: "bf16[20, 1]", mm_12: "bf16[20, 100]", mm_13: "bf16[20, 100]", rsqrt_7: "bf16[20, 1]", mm_14: "bf16[20, 100]", mm_15: "bf16[20, 100]", rsqrt_8: "bf16[20, 1]", mm_16: "bf16[20, 100]", mm_17: "bf16[20, 100]", rsqrt_9: "bf16[20, 1]", mm_18: "bf16[20, 100]", mm_19: "bf16[20, 100]", rsqrt_10: "bf16[20, 1]", mm_20: "bf16[20, 100]", mm_21: "bf16[20, 100]", rsqrt_11: "bf16[20, 1]", mm_22: "bf16[20, 100]", mm_23: "bf16[20, 100]", rsqrt_12: "bf16[20, 1]", mm_24: "bf16[20, 100]", mm_25: "bf16[20, 100]", rsqrt_13: "bf16[20, 1]", mm_26: "bf16[20, 100]", mm_27: "bf16[20, 100]", rsqrt_14: "bf16[20, 1]", mm_28: "bf16[20, 100]", mm_29: "bf16[20, 100]", rsqrt_15: "bf16[20, 1]", mm_30: "bf16[20, 100]", mm_31: 
"bf16[20, 100]", rsqrt_16: "bf16[20, 1]", mm_32: "bf16[20, 100]", mm_33: "bf16[20, 100]", rsqrt_17: "bf16[20, 1]", mm_34: "bf16[20, 100]", mm_35: "bf16[20, 100]", rsqrt_18: "bf16[20, 1]", mm_36: "bf16[20, 100]", mm_37: "bf16[20, 100]", rsqrt_19: "bf16[20, 1]", mm_38: "bf16[20, 100]", mm_39: "bf16[20, 100]", rsqrt_20: "bf16[20, 1]", mm_40: "bf16[20, 100]", mm_41: "bf16[20, 100]", rsqrt_21: "bf16[20, 1]", mm_42: "bf16[20, 100]", mm_43: "bf16[20, 100]", rsqrt_22: "bf16[20, 1]", mm_44: "bf16[20, 100]", mm_45: "bf16[20, 100]", rsqrt_23: "bf16[20, 1]", mm_46: "bf16[20, 100]", mm_47: "bf16[20, 100]", rsqrt_24: "bf16[20, 1]", mm_48: "bf16[20, 100]", mm_49: "bf16[20, 100]", rsqrt_25: "bf16[20, 1]", mm_50: "bf16[20, 100]", mm_51: "bf16[20, 100]", rsqrt_26: "bf16[20, 1]", mm_52: "bf16[20, 100]", mm_53: "bf16[20, 100]", rsqrt_27: "bf16[20, 1]", mm_54: "bf16[20, 100]", mm_55: "bf16[20, 100]", rsqrt_28: "bf16[20, 1]", mm_56: "bf16[20, 100]", mm_57: "bf16[20, 100]", rsqrt_29: "bf16[20, 1]", mm_58: "bf16[20, 100]", mm_59: "bf16[20, 100]", rsqrt_30: "bf16[20, 1]", mm_60: "bf16[20, 100]", mm_61: "bf16[20, 100]", rsqrt_31: "bf16[20, 1]", mm_62: "bf16[20, 100]", permute_66: "bf16[100, 100]", permute_70: "bf16[100, 100]", permute_74: "bf16[100, 100]", permute_78: "bf16[100, 100]", permute_82: "bf16[100, 100]", permute_86: "bf16[100, 100]", permute_90: "bf16[100, 100]", permute_94: "bf16[100, 100]", permute_98: "bf16[100, 100]", permute_102: "bf16[100, 100]", permute_106: "bf16[100, 100]", permute_110: "bf16[100, 100]", permute_114: "bf16[100, 100]", permute_118: "bf16[100, 100]", permute_122: "bf16[100, 100]", permute_126: "bf16[100, 100]", permute_130: "bf16[100, 100]", permute_134: "bf16[100, 100]", permute_138: "bf16[100, 100]", permute_142: "bf16[100, 100]", permute_146: "bf16[100, 100]", permute_150: "bf16[100, 100]", permute_154: "bf16[100, 100]", permute_158: "bf16[100, 100]", permute_162: "bf16[100, 100]", permute_166: "bf16[100, 100]", permute_170: "bf16[100, 100]", 
permute_174: "bf16[100, 100]", permute_178: "bf16[100, 100]", permute_182: "bf16[100, 100]", permute_186: "bf16[100, 100]", permute_190: "bf16[100, 100]", permute_194: "bf16[100, 100]", permute_198: "bf16[100, 100]", permute_202: "bf16[100, 100]", permute_206: "bf16[100, 100]", permute_210: "bf16[100, 100]", permute_214: "bf16[100, 100]", permute_218: "bf16[100, 100]", permute_222: "bf16[100, 100]", permute_226: "bf16[100, 100]", permute_230: "bf16[100, 100]", permute_234: "bf16[100, 100]", permute_238: "bf16[100, 100]", permute_242: "bf16[100, 100]", permute_246: "bf16[100, 100]", permute_250: "bf16[100, 100]", permute_254: "bf16[100, 100]", permute_258: "bf16[100, 100]", permute_262: "bf16[100, 100]", permute_266: "bf16[100, 100]", permute_270: "bf16[100, 100]", permute_274: "bf16[100, 100]", permute_278: "bf16[100, 100]", permute_282: "bf16[100, 100]", permute_286: "bf16[100, 100]", permute_290: "bf16[100, 100]", permute_294: "bf16[100, 100]", permute_298: "bf16[100, 100]", permute_302: "bf16[100, 100]", permute_306: "bf16[100, 100]", permute_310: "bf16[100, 100]", permute_314: "bf16[100, 100]", permute_318: "bf16[100, 100]", tangents_1: "bf16[]"):
        # File: /tmp/ipykernel_1043308/3460069279.py:38 in forward, code: return x.sum()
        expand: "bf16[20, 100]" = torch.ops.aten.expand.default(tangents_1, [20, 100]); tangents_1 = None

        # File: /tmp/ipykernel_1043308/3460069279.py:24 in forward, code: return self.layer0(self.layer1(x))
        permute_64: "bf16[100, 20]" = torch.ops.aten.permute.default(expand, [1, 0])
        mm_64: "bf16[100, 100]" = torch.ops.aten.mm.default(permute_64, mm_62); permute_64 = mm_62 = None
        permute_65: "bf16[100, 100]" = torch.ops.aten.permute.default(mm_64, [1, 0]); mm_64 = None
        mm_65: "bf16[20, 100]" = torch.ops.aten.mm.default(expand, permute_66); permute_66 = None
        permute_67: "bf16[100, 100]" = torch.ops.aten.permute.default(permute_65, [1, 0]); permute_65 = None
        permute_68: "bf16[100, 20]" = torch.ops.aten.permute.default(mm_65, [1, 0])

        # File: /tmp/ipykernel_1043308/3460069279.py:37 in forward, code: x = layer(x) + x
        add_1: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_1, embedding); mm_1 = None
        add_3: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_3, add_1); mm_3 = None
        add_5: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_5, add_3); mm_5 = None
        add_7: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_7, add_5); mm_7 = None
        add_9: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_9, add_7); mm_9 = None
        add_11: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_11, add_9); mm_11 = None
        add_13: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_13, add_11); mm_13 = None
        add_15: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_15, add_13); mm_15 = None
        add_17: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_17, add_15); mm_17 = None
        add_19: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_19, add_17); mm_19 = None
        add_21: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_21, add_19); mm_21 = None
        add_23: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_23, add_21); mm_23 = None
        add_25: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_25, add_23); mm_25 = None
        add_27: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_27, add_25); mm_27 = None
        add_29: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_29, add_27); mm_29 = None
        add_31: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_31, add_29); mm_31 = None
        add_33: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_33, add_31); mm_33 = None
        add_35: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_35, add_33); mm_35 = None
        add_37: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_37, add_35); mm_37 = None
        add_39: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_39, add_37); mm_39 = None
        add_41: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_41, add_39); mm_41 = None
        add_43: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_43, add_41); mm_43 = None
        add_45: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_45, add_43); mm_45 = None
        add_47: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_47, add_45); mm_47 = None
        add_49: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_49, add_47); mm_49 = None
        add_51: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_51, add_49); mm_51 = None
        add_53: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_53, add_51); mm_53 = None
        add_55: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_55, add_53); mm_55 = None
        add_57: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_57, add_55); mm_57 = None
        add_59: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_59, add_57); mm_59 = None
        add_61: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_61, add_59); mm_61 = None
```
A simple observation reveals that the forward pass has the following residual pattern:
x1 = x0 + y0,
x2 = x1 + y1,
x3 = x2 + y2.
Here, x0, x1, x2, and x3 are all needed for the backward computation.
The optimal approach would therefore be to store x0, x1, x2, and x3 directly.
However, due to an issue in the `min cut` implementation of `torch.compile` (which allows recomputation of non-compute-intensive operations), it instead stores x0, y0, y1, and y2, and recomputes x1, x2, and x3.
Although both choices use the same amount of memory, the latter introduces unnecessary computations (the chain of add_* ops in the backward graph above).
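To make the cost concrete, here is a small pure-Python sketch (my own illustration, not PyTorch code): rebuilding the partial sums x1..xN from x0 and the saved increments costs one extra addition per layer in the backward pass, whereas saving the partial sums directly would use the same number of tensors with zero recomputed additions.

```python
def rebuild_partials(x0, ys):
    """Recompute the partial sums x1..xN from x0 and the saved per-layer
    increments ys, mirroring the add_1..add_61 chain in the backward
    graph above.  Returns the partial sums and the extra-addition count."""
    partials, x, extra_adds = [], x0, 0
    for y in ys:
        x = x + y  # one recomputed addition per layer
        extra_adds += 1
        partials.append(x)
    return partials, extra_adds

# 32 residual layers -> 32 extra additions in the backward pass, even
# though saving the partial sums directly would use the same memory.
partials, extra_adds = rebuild_partials(0, list(range(1, 33)))
```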
### Error logs
_No response_
### Versions
torch 2.5.1+cu124
cc @chauhang @penguinwu @zou3519 @bdhirsh | open | 2025-03-15T15:57:32Z | 2025-03-18T18:17:30Z | https://github.com/pytorch/pytorch/issues/149258 | [
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | efsotr | 1 |
ShishirPatil/gorilla | api | 98 | Gorila | closed | 2023-08-13T08:31:50Z | 2023-08-26T12:20:30Z | https://github.com/ShishirPatil/gorilla/issues/98 | [] | Nafarsami | 2 | |
deezer/spleeter | deep-learning | 52 | ffprobe Error | ## Description
I installed Homebrew and ffmpeg to run the code. It works fine with the provided audio_example, but when I try to use another song from my library I get an ffprobe error. I'm putting this song in the same directory as audio_example.mp3.
## Step to reproduce
1. Installed using
```
git clone https://github.com/Deezer/spleeter
conda env create -f spleeter/conda/spleeter-cpu.yaml
conda activate spleeter-cpu
```
2. Run as `spleeter separate -i spleeter/Home.mp3 -p spleeter:2stems -o output`
3. Got the following error:
```
INFO:spleeter:Loading audio b'spleeter/Home.mp3' from 0.0 to 600.0
WARNING:spleeter:ffprobe error (see stderr output for detail)
```
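As a quick sanity check independent of spleeter (my own diagnostic sketch; `check_ffprobe` is a name I made up), it is possible to test whether ffprobe itself is on PATH and can open the file:

```python
import shutil
import subprocess

def check_ffprobe(path):
    """Report whether ffprobe is on PATH and whether it can open `path`."""
    exe = shutil.which("ffprobe")
    if exe is None:
        return "ffprobe not found on PATH"
    result = subprocess.run([exe, "-v", "error", path],
                            capture_output=True, text=True)
    return result.stderr.strip() or "ffprobe can read the file"

print(check_ffprobe("spleeter/Home.mp3"))
```

If this reports that ffprobe is missing or cannot open the file, the problem lies with the ffmpeg installation or the file path rather than with spleeter.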
## Output
The code works fine with the original audio provided; this is the output:
```
INFO:spleeter:Loading audio b'spleeter/audio_example.mp3' from 0.0 to 600.0
INFO:spleeter:Audio data loaded successfully
INFO:spleeter:File output/audio_example/vocals.wav written
INFO:spleeter:File output/audio_example/accompaniment.wav written
```
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | MacOS |
| Installation type | Conda |
| RAM available | 4 GB |
| Hardware spec | CPU |
## Additional context
| closed | 2019-11-07T17:04:12Z | 2019-11-20T11:45:54Z | https://github.com/deezer/spleeter/issues/52 | [
"bug",
"invalid",
"next release"
] | andrew-alarcon17 | 28 |
miguelgrinberg/microblog | flask | 78 | EditProfileForm validation errors doesn't display on the page. | When I try to change the username to 'susan', which already exists, after submitting the form I get a new form without the username validation error message displayed. | closed | 2018-02-01T11:53:59Z | 2018-06-09T05:54:45Z | https://github.com/miguelgrinberg/microblog/issues/78 | [] | mikhailsidorov | 1 |
mwaskom/seaborn | data-visualization | 3,718 | The way to change the size of title in so.plot.label(title="...") | Hi,
Thank you for the wonderful library.
I wonder whether there is any way to change the size of the title in so.plot.label(title="..."). I looked through your documentation and did not find any information related to this.
Thank you for helping me! | closed | 2024-06-27T02:45:59Z | 2024-06-28T01:18:10Z | https://github.com/mwaskom/seaborn/issues/3718 | [] | ngvananh2508 | 2 |
ResidentMario/missingno | pandas | 119 | Performance considerations | pandas-profiling has been using `missingno` to generate these informative plots for quite some time, and it is a really valuable addition. Now that we're optimizing the computation, it seems that `missingno` is a relative bottleneck. There are two issues we're currently facing: matplotlib is slow for many/large plots and can't be parallelized, and the fact that `missingno` is pandas-specific prevents us from using it with other backends such as Spark.
The package was never intended to be optimized for performance, since that was never necessary before.
Instead of building our own custom version of `missingno`, I would much prefer that we refactor upstream instead, for a multitude of reasons. First, dedicated packages reduce complexity, which is better for the community as a whole. Second, I believe strongly in proper attribution, which comes naturally from the package import.
As far as I can see, decoupling the logic from the plotting would enable using parts of the code for now. @ResidentMario What do you think? | closed | 2020-09-19T16:52:28Z | 2021-07-03T18:52:49Z | https://github.com/ResidentMario/missingno/issues/119 | [] | sbrugman | 1 |
tensorpack/tensorpack | tensorflow | 1,109 | Why can your maskrcnn use 0.01 learning rate, but googleAPI object detection can not? | Is it because of warmup, or something else? | closed | 2019-03-15T04:55:19Z | 2019-03-15T16:40:12Z | https://github.com/tensorpack/tensorpack/issues/1109 | [
"examples"
] | yiyang186 | 1 |
hbldh/bleak | asyncio | 1,445 | Passive scan `or_pattern` is ignored if connected to another device | * bleak version: 0.21.1
* Python version: 3.11.2
* Operating System: Debian Bookworm 64-bit (Raspberry Pi)
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.66
### Description
I need to run a passive scan to retrieve advertising packets from a specific device while simultaneously connecting to another device. When I connect to the other device, the passive scan starts returning advertisement data that does not match the provided `or_pattern`.
I expect the passive scan to respect my `or_pattern` at all times and not be affected by a connection to another device.
### What I Did
Here is a minimal reproduction:
```python
import asyncio
import logging

from bleak import BleakClient, BleakScanner
from bleak.assigned_numbers import AdvertisementDataType
from bleak.backends.bluezdbus.advertisement_monitor import OrPattern
from bleak.backends.bluezdbus.scanner import BlueZScannerArgs


async def scan():
    args = BlueZScannerArgs(
        or_patterns=[OrPattern(0, AdvertisementDataType.MANUFACTURER_SPECIFIC_DATA, b"\xe1\x02")]
    )
    async with BleakScanner(bluez=args, scanning_mode="passive") as scanner:
        async for _, advertisement_data in scanner.advertisement_data():
            mfr_data = advertisement_data.manufacturer_data
            if mfr_data.get(0x02e1):
                logging.info("scan(): found correct device: %s", mfr_data)
            else:
                logging.info("scan(): this should never happen: %s", mfr_data)


async def connect():
    device1 = await BleakScanner.find_device_by_address("01:B6:EC:10:CB:8F")
    async with BleakClient(device1):
        logging.info("connect(): connected to device")
        await asyncio.sleep(60)


async def main():
    logging.info("main(): starting scan")
    asyncio.create_task(scan())
    logging.info("main(): sleeping for 10 seconds")
    await asyncio.sleep(10)
    logging.info("main(): connecting to another device")
    await asyncio.create_task(connect())


logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)-15s %(name)-8s %(levelname)s: %(message)s",
)

asyncio.run(main())
```
Here's the output:
```
2023-11-05 10:41:07,696 root INFO: main(): starting scan
2023-11-05 10:41:07,697 root INFO: main(): sleeping for 10 seconds
2023-11-05 10:41:08,149 root INFO: scan(): found correct device: {737: b'\x10\x02\x83\xa3\x02\x17G\xd0\xb9\x9a\xedh\xc2\xad`\x04\xb2E\xda"\xea=A'}
2023-11-05 10:41:17,708 root INFO: main(): connecting to another device
2023-11-05 10:41:17,795 root INFO: scan(): found correct device: {737: b'\x10\x02\x83\xa3\x02 G\xd0\xba\xa5\x13\x15m\x89m\xe8\\\xb1\r\x82\x93\xe8n'}
2023-11-05 10:41:17,854 root INFO: scan(): this should never happen: {54828: b'\x88\xa0ww\x19\r\xf3<'}
2023-11-05 10:41:17,884 root INFO: scan(): this should never happen: {49568: b'\x88\xa0ww\x19\r\xe2\xb0'}
2023-11-05 10:41:17,907 root INFO: scan(): this should never happen: {224: b'2]\xca\x82K\xdc'}
2023-11-05 10:41:17,912 root INFO: scan(): this should never happen: {61057: b'\x88\xa0\x01\xb6\xec\x10\xcb\x8f'}
2023-11-05 10:41:17,916 root INFO: scan(): this should never happen: {224: b'2]\xca\x82K\xdc'}
2023-11-05 10:41:17,917 root INFO: scan(): this should never happen: {61057: b'\x88\xa0\x01\xb6\xec\x10\xcb\x8f'}
2023-11-05 10:41:17,918 root INFO: scan(): this should never happen: {49568: b'\x88\xa0ww\x19\r\xe2\xb0'}
2023-11-05 10:41:17,919 root INFO: scan(): this should never happen: {54828: b'\x88\xa0ww\x19\r\xf3<'}
2023-11-05 10:41:17,919 root INFO: scan(): found correct device: {737: b'\x10\x02\x83\xa3\x02 G\xd0\xba\xa5\x13\x15m\x89m\xe8\\\xb1\r\x82\x93\xe8n'}
2023-11-05 10:41:17,955 root INFO: scan(): found correct device: {737: b'\x10\x02\x83\xa3\x02!G\xd0,\xb6\x00\x14\xb3>z\xd8p\xe8\xd4\xab\x13\xe1Q'}
2023-11-05 10:41:18,517 root INFO: scan(): this should never happen: {61057: b'\x88\xa0\x01\xb6\xec\x10\xcb\x8f'}
2023-11-05 10:41:18,763 root INFO: scan(): this should never happen: {61057: b'\x88\xa0\x01\xb6\xec\x10\xcb\x8f'}
2023-11-05 10:41:18,764 root INFO: connect(): connected to device
``` | closed | 2023-11-05T15:46:07Z | 2023-11-08T23:46:58Z | https://github.com/hbldh/bleak/issues/1445 | [] | vboginskey | 6 |