repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
dynaconf/dynaconf | flask | 790 | Add custom converters to the docs | The PR https://github.com/dynaconf/dynaconf/pull/784
allowed the following:
```py
# app.py
from pathlib import Path
from dynaconf.utils import parse_conf
parse_conf.converters["@path"] = (
    lambda value: value.set_casting(Path)
    if isinstance(value, parse_conf.Lazy)
    else Path(value)
)
```
```toml
# settings.toml
parent = "@path @format {env[HOME]}/parent"
child = "@path @format {this.parent}/child"
```
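As a framework-free illustration of what such a converter table does (a toy model only, not dynaconf's internals), a prefix token can select a casting function that is applied to the raw string value:

```python
from pathlib import Path

# Toy converter registry: a "@token" prefix selects a casting function.
# This mirrors the idea behind parse_conf.converters, not its implementation.
converters = {"@path": Path}

def parse(raw: str):
    token, _, rest = raw.partition(" ")
    caster = converters.get(token)
    return caster(rest) if caster else raw

value = parse("@path /home/user/parent")  # -> Path("/home/user/parent")
```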
We need to add that to the docs and `example/` test run | closed | 2022-08-16T17:52:27Z | 2022-08-18T10:19:20Z | https://github.com/dynaconf/dynaconf/issues/790 | [
"Docs"
] | rochacbruno | 3 |
jackmpcollins/magentic | pydantic | 388 | AWS Bedrock support | Hi Jack 👋 ,
Probably a long-shot (given the large API differences), but are there any plans to support [AWS Bedrock](https://docs.aws.amazon.com/code-library/latest/ug/python_3_bedrock-runtime_code_examples.html#anthropic_claude)?
Given Amazon's investment into Anthropic, we've seen an uptick in interest in using Bedrock to run LLMs on their existing cloud infrastructure, and we'd like to continue using `magentic`.
Probably something better for `Litellm`? | open | 2024-12-18T08:16:59Z | 2024-12-18T17:00:40Z | https://github.com/jackmpcollins/magentic/issues/388 | [] | mnicstruwig | 1 |
stanfordnlp/stanza | nlp | 1,324 | [QUESTION] GUD model? | In your webpage you report results on Greek **gud** model : https://stanfordnlp.github.io/stanza/performance.html
But when I'm trying to download available models only this model for Greek is available, namely Greek **gdt** and in the huggingface platform as well--
is the Greek GUD model available somewhere online or not? | closed | 2023-12-20T15:31:45Z | 2024-02-25T00:14:59Z | https://github.com/stanfordnlp/stanza/issues/1324 | [
"question"
] | vistamou | 2 |
JaidedAI/EasyOCR | deep-learning | 1,386 | CRAFT training on non ocr images | Hi,
This is regarding training the CRAFT model (the detection component of EasyOCR). Apart from images containing text, my dataset also includes images with no text at all, and I want the model to be trained on both types. Label files are provided for the images containing text, but I am unsure how to create labels for the images without text.
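For concreteness, the "blank label file" option would look something like this (a stdlib sketch; the `gt_<image>.txt` naming convention is an assumption on my part, not something confirmed by the EasyOCR docs):

```python
from pathlib import Path

def write_empty_labels(image_dir: str, label_dir: str) -> list[str]:
    """Emit an empty ground-truth file per textless image (hypothetical layout)."""
    out = Path(label_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for img in sorted(Path(image_dir).glob("*.jpg")):
        gt = out / f"gt_{img.stem}.txt"
        gt.write_text("")  # no boxes: the file exists but is empty
        written.append(gt.name)
    return written
```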
Do I need to create label files for such images, or should the label file be created but left blank? | open | 2025-03-12T07:06:33Z | 2025-03-12T07:06:33Z | https://github.com/JaidedAI/EasyOCR/issues/1386 | [] | gupta9ankit5 | 0 |
Miksus/rocketry | pydantic | 170 | BUG crashed in pickling | **Describe the bug**
On the second run of my task, the following log line triggers continuously in a loop forever:
```
CRITICAL:rocketry.task:Task 'daily_notification' crashed in pickling. Cannot pickle: {'__dict__': {'session': <rocketry.session.Session object at 0x2f6d4f9a0>, 'permanent': False, 'fmt_log_message': "Task '{task}' status: '{action}'", 'daemon': None, 'batches': [], 'name': 'daily_notification', 'description': None, 'logger_name': 'rocketry.task', 'execution': None, 'priority': 0, 'disabled': False, 'force_run': False, 'force_termination': False, 'status': 'run', 'timeout': None, 'parameters': Parameters(), 'start_cond': None, 'end_cond': None, 'multilaunch': None, 'on_startup': False, 'on_shutdown': False, 'func_run_id': None, 'func': <function daily_notification at 0x1038cd4c0>, 'path': None, 'func_name': 'main', 'cache': False, 'sys_paths': []}}
```
The log line is the same each time, except for the Session object reference.
On CTRL+C the trace back contains:
```
INFO:rocketry.scheduler:Interupted. Shutting down...
Traceback (most recent call last):
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/schedule.py", line 128, in serve
await self.run_cycle()
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/schedule.py", line 182, in run_cycle
await self.run_task(task)
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/schedule.py", line 211, in run_task
await task.start_async(log_queue=self._log_queue)
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/task.py", line 514, in start_async
self.run_as_process(params=params, direct_params=direct_params, task_run=task_run, **kwargs)
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/task.py", line 735, in run_as_process
process.start()
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/task.py", line 1318, in __getstate__
if not is_pickleable(state):
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/utils/pickle.py", line 6, in is_pickleable
pickle.dumps(obj)
[... the two frames above (task.py:1318 in __getstate__ and pickle.py:6 in is_pickleable) repeat recursively many times; identical frames elided ...]
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/task.py", line 1324, in __getstate__
unpicklable = {key: val for key, val in state.items() if not is_pickleable(val)}
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/task.py", line 1324, in <dictcomp>
unpicklable = {key: val for key, val in state.items() if not is_pickleable(val)}
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/utils/pickle.py", line 6, in is_pickleable
pickle.dumps(obj)
[... the same task.py:1318/1324 and pickle.py:6 frames repeat many more times ...]
File "/Users/tekumara/code/dt-web-glimpse/.venv/lib/python3.9/site-packages/rocketry/core/task.py", line 1326, in __getstate__
self.logger.critical(f"Task '{self.name}' crashed in pickling. Cannot pickle: {unpicklable}", extra={"action": "fail", "task_name": self.name})
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1835, in critical
self.log(CRITICAL, msg, *args, **kwargs)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1844, in log
self.logger.log(level, msg, *args, **kwargs)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1512, in log
self._log(level, msg, args, **kwargs)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1589, in _log
self.handle(record)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1599, in handle
self.callHandlers(record)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 952, in handle
self.emit(record)
File "/Users/tekumara/.pyenv/versions/3.9.13/lib/python3.9/logging/__init__.py", line 1086, in emit
stream.write(msg + self.terminator)
KeyboardInterrupt
INFO:rocketry.scheduler:Shutdown completed. Good bye.
```
My task:
```py
@app.task(hourly)
async def daily_notification() -> None:
    file = "summary.png"
    # we are using the sync clients so run them on a thread
    await to_thread.run_sync(
        glance.browser.screenshot,
        file
    )
    await to_thread.run_sync(glance.slack.upload_screenshot, file)
```
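The objects the task closes over (`glance.browser`, `glance.slack`) almost certainly hold unpicklable state (a browser or socket handle), which cannot cross the spawn boundary that `run_as_process` uses. A minimal stdlib check in the spirit of rocketry's `is_pickleable` (illustrative, not the library's exact code):

```python
import pickle

def is_pickleable(obj) -> bool:
    """Return True if `obj` survives pickle.dumps, as rocketry's check does."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

# Module-level functions pickle by reference; closures, lambdas, and open
# handles (sessions, browsers, sockets) generally do not pickle at all.
```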
**Desktop (please complete the following information):**
- OS: macos
- Python version 3.9
**Additional context**
rocketry 2.5.1 | open | 2022-12-11T11:21:49Z | 2023-12-15T09:18:42Z | https://github.com/Miksus/rocketry/issues/170 | [
"bug"
] | tekumara | 3 |
slackapi/bolt-python | fastapi | 1,007 | Lazy Listener for Bolt Python not doing what it is supposed to | I am trying to create a slack app that involves querying data from another service through their API and trying to host it on AWS lambda. Since querying data from another API is going to take quite a long time and slack has the 3 second time-out I've been trying to use the lazy listener, but even just copy pasting from the documentation does not really work out. What happens is that the acknowledgement goes through, but then the actual message that gets delayed by 5 seconds is not being sent. Obviously the app itself is going to be more complicated than this, but currently I am just trying to get the most basic thing to run.
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
slack-bolt==1.18.1
slack-sdk==3.26.1
#### Python runtime version
Python 3.10.0
#### OS info
ProductName: macOS
ProductVersion: 14.0
BuildVersion: 23A344
Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:34 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T8103
#### Steps to reproduce:
This is the current script I am using for my lambda function:
```py
import os
import logging
from slack_bolt import App
from slack_bolt.adapter.aws_lambda import SlackRequestHandler
import time
# process_before_response must be True when running on FaaS
app = App(
    process_before_response=True,
    signing_secret=os.environ.get("SLACK_SIGNING_SECRET"),
    token=os.getenv("SLACK_BOT_TOKEN"),
)


def respond_to_slack_within_3_seconds(body, ack):
    text = body.get("text")
    if text is None or len(text) == 0:
        ack(":x: Usage: /testing this is the acknowledgement to slack")
    else:
        ack(f"Accepted! (task: {body['text']})")
    logging.info(body)
    logging.info("ack() done")


def run_long_process(respond, body):
    time.sleep(5)  # longer than 3 seconds
    respond(f"Completed! (task: {body['text']})")
    logging.info(body)
    logging.info("long run with 5 sec delay done.")


app.command("/testing")(
    ack=respond_to_slack_within_3_seconds,  # responsible for calling `ack()`
    lazy=[run_long_process],  # unable to call `ack()` / can have multiple functions
)

SlackRequestHandler.clear_all_log_handlers()
logging.basicConfig(format="%(asctime)s %(message)s", level=logging.DEBUG)


def lambda_handler(event, context):
    slack_handler = SlackRequestHandler(app=app)
    logging.info(event)
    logging.info(context)
    return slack_handler.handle(event, context)
```
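For reference, the pattern being attempted reduces to "acknowledge fast, finish slow". Here is a framework-free sketch of that contract (conceptual only; on Lambda, Bolt implements the lazy part by asynchronously re-invoking the function rather than by spawning threads):

```python
import threading
import time

def handle_command(ack, respond):
    """Ack within 3 seconds; let the slow work complete out of band."""
    ack("Accepted!")
    worker = threading.Thread(
        target=lambda: (time.sleep(0.2), respond("Completed!"))
    )
    worker.start()
    return worker

messages = []
worker = handle_command(messages.append, messages.append)
worker.join()
# messages is now ["Accepted!", "Completed!"]
```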
### Expected result:
I expect to receive one message informing me that my slash command was acknowledged and another one with the actual response.
### Actual result:
Over slack I only receive the message that my command was acknowledged and no matter how long I wait the actual response does not come through.
<img width="1016" alt="Screenshot 2024-01-03 at 15 45 49" src="https://github.com/slackapi/bolt-python/assets/118990464/2f8f982d-7a7b-4373-bf78-9c2228d15808">
Furthermore, when I check the logs, I can also see there that my command was acknowledged but despite that the function still times out.


This is also the first time I am doing anything in this direction, so any help would be appreciated.
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-01-03T14:56:14Z | 2024-01-12T22:51:43Z | https://github.com/slackapi/bolt-python/issues/1007 | [
"question"
] | sinjachen | 6 |
onnx/onnx | scikit-learn | 6,710 | type-coverage-of-popular-python-packages-and-github-badge | Hello,
maybe that's of interest for us:
https://discuss.python.org/t/type-coverage-of-popular-python-packages-and-github-badge/63401
https://html-preview.github.io/?url=https://github.com/lolpack/type_coverage_py/blob/main/index.html

 | open | 2025-02-16T09:17:00Z | 2025-03-09T12:19:19Z | https://github.com/onnx/onnx/issues/6710 | [
"contributions welcome"
] | andife | 2 |
plotly/dash | data-visualization | 2,269 | [BUG] Callbacks not triggered by attribute changes by Bootstrap | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
We're using Dash in combination with Bootstrap. Callbacks on attribute changes caused by Bootstrap (in our case `class`/`className` attributes set by Bootstrap's `Navbar` logic) do not fire correctly. When changing the same attribute via a second Dash Callback, the Callback is triggered as expected.
- replace the result of `pip list | grep dash` below
```
dash 2.5.0
dash-bootstrap-components 1.1.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Ubuntu 20.04.3 LTS
- Browser Firefox
- Version 105.0 (64-bit)
**Describe the bug**
Consider the following minimal example:
```py
import dash_bootstrap_components as dbc
from dash import Dash, html
from dash.dependencies import Input, Output

app = Dash(
    __name__,
    external_stylesheets=[dbc.themes.BOOTSTRAP],
    external_scripts=[
        "https://code.jquery.com/jquery-3.6.1.min.js",
        "https://cdn.jsdelivr.net/npm/bootstrap@4.0.0/dist/js/bootstrap.min.js",
    ],
)

app.layout = html.Div(
    [
        html.Nav(
            [
                html.Div(
                    [
                        html.A(
                            "Tab 1",
                            className="nav-link active",
                            id="tab-1-tab",
                            href="#tab-1-content",
                            role="tab",
                            **{
                                "data-toggle": "tab",
                                "aria-controls": "tab-1-content",
                                "aria-selected": True,
                            },
                        ),
                        html.A(
                            "Tab 2",
                            className="nav-link",
                            id="tab-2-tab",
                            href="#tab-2-content",
                            role="tab",
                            **{
                                "data-toggle": "tab",
                                "aria-controls": "tab-2-content",
                                "aria-selected": False,
                            },
                        ),
                    ],
                    className="nav nav-tabs card-header-tabs",
                    id="tabs",
                    role="tablist",
                )
            ],
            className="",
        ),
        html.Div(
            [
                html.Div(
                    [
                        html.Div(
                            "Tab 1 Content",
                            className="tab-pane fade show active",
                            id="tab-1-content",
                            role="tabpanel",
                            **{
                                "aria-labelledby": "tab-1-tab",
                            },
                        ),
                        html.Div(
                            "Tab 2 Content",
                            className="tab-pane fade",
                            id="tab-2-content",
                            role="tabpanel",
                            **{
                                "aria-labelledby": "tab-2-tab",
                            },
                        ),
                    ],
                    className="tab-content",
                ),
            ]
        ),
        html.Div(id="dummy-element"),
    ],
)


@app.callback(Output("dummy-element", "children"), Input("tab-2-content", "className"))
def class_changed_callback(class_name):
    print(class_name)
    return None


if __name__ == "__main__":
    app.run(debug=True)
```
It renders a Bootstrap Tab Navbar:

Callback `class_changed_callback` is listening to `class` changes of `tab-2-content` and prints the respective class.
**Expected behavior**
After clicking `Tab 2`, Bootstrap changes the `class` property of `tab-2-content` from `tab-pane fade` to `tab-pane fade show active`. This change should be registered by `class_changed_callback`, which should print `tab-pane fade show active` to console. Instead, the callback is called only once when the page is rendered, and never afterwards.
Interestingly, when setting the `class` through another Dash callback, `class_changed_callback` triggers correctly.
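A toy model of why this happens (my understanding, not Dash's actual source): Dash observes prop changes made through its own setter path, so mutations applied to the DOM directly by Bootstrap never reach the callback machinery:

```python
class PropStore:
    """Minimal stand-in for a component's props plus Dash-style observation."""
    def __init__(self):
        self.props = {}
        self.listeners = []

    def set_prop(self, key, value):  # analogous to Dash's setProps
        self.props[key] = value
        for fn in self.listeners:
            fn(key, value)

store = PropStore()
seen = []
store.listeners.append(lambda k, v: seen.append((k, v)))

store.set_prop("className", "tab-pane fade show active")  # fires the listener
store.props["className"] = "tab-pane fade"  # direct mutation: listener never fires
```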
| closed | 2022-10-12T16:47:41Z | 2022-10-14T09:11:57Z | https://github.com/plotly/dash/issues/2269 | [] | nweil-vorausrobotik | 3 |
ivy-llc/ivy | tensorflow | 28,198 | Fix Ivy Failing Test: torch - shape.shape__bool__ | closed | 2024-02-06T13:34:13Z | 2024-02-09T09:27:05Z | https://github.com/ivy-llc/ivy/issues/28198 | [
"Sub Task"
] | fnhirwa | 0 | |
vitalik/django-ninja | pydantic | 1,063 | query parameter variabel | some applications request data with url query, and one of the variable call "pass"
ex:
http://my_url/api?pass=testvalue
when i try to gaet the parameter
@api.get("/user")
def list_weapons(request, pass:str):
return pass
it's be a problem because pass can't use as variable in python. how to handle it?
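One framework-agnostic workaround (stdlib only; in Django Ninja itself a parameter alias would likely be the idiomatic route, not shown here) is to read the raw query string and map the reserved word to a safe Python name:

```python
from urllib.parse import parse_qs

def get_reserved_param(query_string: str, name: str = "pass"):
    """Fetch a query value whose key is a Python reserved word."""
    values = parse_qs(query_string).get(name)
    return values[0] if values else None

get_reserved_param("pass=testvalue")  # -> "testvalue"
```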
| open | 2024-01-25T17:00:16Z | 2024-01-26T12:31:20Z | https://github.com/vitalik/django-ninja/issues/1063 | [] | lutfyneutron | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,765 | [Bug]: Prompt S/R uses newlines as a delimeter as well as commas, but only commas are mentioned in the tooltip/hint | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
Prompt S/R creates extra images if you use newlines. The tooltip for Prompt S/R only states that commas are delimiters.
For this GUI, a newline is a poor choice of delimiter: the input box is small, so people are likely to use newlines to break up and format their text for easier editing.
Either the tooltip should be updated to reflect how it actually works, or newlines should not be used as delimiters.
It would be nice to choose our own delimiter symbol, since a comma is not a great delimiter either: we use commas to break up text visually (even if they're wasted tokens).
### Steps to reproduce the problem
Start an X/Y/Z plot script. Set X to 'Prompt S/R'.
Put 'a dog' in the input prompt.
Put this in the field for Prompt S/R:

```
a dog,
a fluffy dog,
a (fluffy) dog,
an unreasonably ((fluffy)) dog
```

This will produce extra images.

Contrasted with this prompt:

```
a dog,a fluffy dog,a (fluffy) dog,an unreasonably ((fluffy)) dog
```

It's hard to look at, but it's functioning as intended now.
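The reported behavior can be modeled outside the UI. This is a hedged sketch (not the webui's actual implementation) that treats both commas and newlines as delimiters; it shows where the extra entries come from, since each trailing comma before a line break produces an empty entry:

```python
import re

def split_sr_values(raw: str) -> list[str]:
    # Split on commas AND newlines, as the report describes. A trailing
    # comma followed by a line break yields an empty entry, which would
    # become an extra (unwanted) image in the grid.
    return [part.strip() for part in re.split(r"[,\n]", raw)]

multiline = "a dog,\na fluffy dog,\na (fluffy) dog,\nan unreasonably ((fluffy)) dog"
one_line = "a dog,a fluffy dog,a (fluffy) dog,an unreasonably ((fluffy)) dog"
print(len(split_sr_values(multiline)))  # 7 entries, three of them empty
print(len(split_sr_values(one_line)))   # 4 entries, as intended
```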
### What should have happened?
The tooltip should either explain that newlines are delimiters as well, or newlines should not be delimiters
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
<details><summary>Sysinfo</summary>
<p>
```json
{
"Platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35",
"Python": "3.10.14",
"Version": "v1.9.0",
"Commit": "adadb4e3c7382bf3e4f7519126cd6c70f4f8557b",
"Script path": "/workspace/stable-diffusion-webui",
"Data path": "/workspace/stable-diffusion-webui",
"Extensions dir": "/workspace/stable-diffusion-webui/extensions",
"Checksum": "09eef9ef59dc90d8567367bd339c19c307404580dc63a8180f195f5421a23fd0",
"Commandline": [
"launch.py",
"--xformers",
"--no-half",
"--no-half-vae",
"--enable-insecure-extension-access",
"--port",
"17860"
],
"Torch env info": {
"torch_version": "2.2.2",
"is_debug_build": "False",
"cuda_compiled_version": "11.8",
"gcc_version": "(Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0",
"clang_version": null,
"cmake_version": null,
"os": "Ubuntu 22.04.4 LTS (x86_64)",
"libc_version": "glibc-2.35",
"python_version": "3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)",
"python_platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "535.104.05",
"nvidia_gpu_models": "GPU 0: NVIDIA H100 PCIe",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"onnx==1.16.0",
"onnxruntime-gpu==1.17.1",
"open-clip-torch==2.20.0",
"optree==0.11.0",
"pytorch-lightning==1.9.4",
"pytorch_optimizer==2.12.0",
"torch==2.2.2",
"torchaudio==2.2.2",
"torchdiffeq==0.2.3",
"torchmetrics==1.3.2",
"torchsde==0.2.6",
"torchvision==0.17.2",
"triton==2.2.0"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture: x86_64",
"CPU op-mode(s): 32-bit, 64-bit",
"Address sizes: 52 bits physical, 57 bits virtual",
"Byte Order: Little Endian",
"CPU(s): 128",
"On-line CPU(s) list: 0-127",
"Vendor ID: AuthenticAMD",
"Model name: AMD EPYC 9354 32-Core Processor",
"CPU family: 25",
"Model: 17",
"Thread(s) per core: 2",
"Core(s) per socket: 32",
"Socket(s): 2",
"Stepping: 1",
"Frequency boost: enabled",
"CPU max MHz: 3799.0720",
"CPU min MHz: 1500.0000",
"BogoMIPS: 6499.65",
"Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d",
"Virtualization: AMD-V",
"L1d cache: 2 MiB (64 instances)",
"L1i cache: 2 MiB (64 instances)",
"L2 cache: 64 MiB (64 instances)",
"L3 cache: 512 MiB (16 instances)",
"NUMA node(s): 2",
"NUMA node0 CPU(s): 0-31,64-95",
"NUMA node1 CPU(s): 32-63,96-127",
"Vulnerability Gather data sampling: Not affected",
"Vulnerability Itlb multihit: Not affected",
"Vulnerability L1tf: Not affected",
"Vulnerability Mds: Not affected",
"Vulnerability Meltdown: Not affected",
"Vulnerability Mmio stale data: Not affected",
"Vulnerability Retbleed: Not affected",
"Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp",
"Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization",
"Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected",
"Vulnerability Srbds: Not affected",
"Vulnerability Tsx async abort: Not affected"
]
},
"Exceptions": [
{
"exception": "",
"traceback": [
[
"/workspace/stable-diffusion-webui/modules/scripts.py, line 825, process",
"script.process(p, *script_args)"
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/dynamic_prompting.py, line 480, process",
"all_prompts, all_negative_prompts = generate_prompts("
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/helpers.py, line 93, generate_prompts",
"all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [\"\"]"
],
[
"/opt/micromamba/envs/webui/lib/python3.10/site-packages/dynamicprompts/generators/randomprompt.py, line 71, generate",
"prompts.append(str(next(iter(gen))))"
]
]
},
{
"exception": "",
"traceback": [
[
"/workspace/stable-diffusion-webui/modules/scripts.py, line 825, process",
"script.process(p, *script_args)"
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/dynamic_prompting.py, line 480, process",
"all_prompts, all_negative_prompts = generate_prompts("
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/helpers.py, line 93, generate_prompts",
"all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [\"\"]"
],
[
"/opt/micromamba/envs/webui/lib/python3.10/site-packages/dynamicprompts/generators/randomprompt.py, line 71, generate",
"prompts.append(str(next(iter(gen))))"
]
]
},
{
"exception": "",
"traceback": [
[
"/workspace/stable-diffusion-webui/modules/scripts.py, line 825, process",
"script.process(p, *script_args)"
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/dynamic_prompting.py, line 480, process",
"all_prompts, all_negative_prompts = generate_prompts("
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/helpers.py, line 93, generate_prompts",
"all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [\"\"]"
],
[
"/opt/micromamba/envs/webui/lib/python3.10/site-packages/dynamicprompts/generators/randomprompt.py, line 71, generate",
"prompts.append(str(next(iter(gen))))"
]
]
},
{
"exception": "",
"traceback": [
[
"/workspace/stable-diffusion-webui/modules/scripts.py, line 825, process",
"script.process(p, *script_args)"
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/dynamic_prompting.py, line 480, process",
"all_prompts, all_negative_prompts = generate_prompts("
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/helpers.py, line 93, generate_prompts",
"all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [\"\"]"
],
[
"/opt/micromamba/envs/webui/lib/python3.10/site-packages/dynamicprompts/generators/randomprompt.py, line 71, generate",
"prompts.append(str(next(iter(gen))))"
]
]
},
{
"exception": "",
"traceback": [
[
"/workspace/stable-diffusion-webui/modules/scripts.py, line 825, process",
"script.process(p, *script_args)"
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/dynamic_prompting.py, line 480, process",
"all_prompts, all_negative_prompts = generate_prompts("
],
[
"/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts/sd_dynamic_prompts/helpers.py, line 93, generate_prompts",
"all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [\"\"]"
],
[
"/opt/micromamba/envs/webui/lib/python3.10/site-packages/dynamicprompts/generators/randomprompt.py, line 71, generate",
"prompts.append(str(next(iter(gen))))"
]
]
}
],
"CPU": {
"model": "x86_64",
"count logical": 128,
"count physical": 64
},
"RAM": {
"total": "755GB",
"used": "23GB",
"free": "163GB",
"active": "144GB",
"inactive": "427GB",
"buffers": "565MB",
"cached": "569GB",
"shared": "20MB"
},
"Extensions": [
{
"name": "openpose-editor",
"path": "/workspace/stable-diffusion-webui/extensions/openpose-editor",
"version": "c9357715",
"branch": "master",
"remote": "https://github.com/fkunn1326/openpose-editor"
},
{
"name": "sd-dynamic-prompts",
"path": "/workspace/stable-diffusion-webui/extensions/sd-dynamic-prompts",
"version": "1567e787",
"branch": "main",
"remote": "https://github.com/adieyal/sd-dynamic-prompts"
},
{
"name": "sd-face-editor",
"path": "/workspace/stable-diffusion-webui/extensions/sd-face-editor",
"version": "85fc3d8b",
"branch": "main",
"remote": "https://github.com/ototadana/sd-face-editor"
},
{
"name": "sd-webui-controlnet",
"path": "/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet",
"version": "3b4eedd9",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet"
},
{
"name": "sd-webui-deforum",
"path": "/workspace/stable-diffusion-webui/extensions/sd-webui-deforum",
"version": "32242685",
"branch": "automatic1111-webui",
"remote": "https://github.com/deforum-art/sd-webui-deforum"
},
{
"name": "sd-webui-reactor",
"path": "/workspace/stable-diffusion-webui/extensions/sd-webui-reactor",
"version": "d2e78be2",
"branch": "main",
"remote": "https://github.com/Gourieff/sd-webui-reactor"
},
{
"name": "sd-webui-regional-prompter",
"path": "/workspace/stable-diffusion-webui/extensions/sd-webui-regional-prompter",
"version": "50493ec0",
"branch": "main",
"remote": "https://github.com/hako-mikan/sd-webui-regional-prompter"
},
{
"name": "sd_civitai_extension.git",
"path": "/workspace/stable-diffusion-webui/extensions/sd_civitai_extension.git",
"version": "115cd9c3",
"branch": "main",
"remote": "https://github.com/civitai/sd_civitai_extension.git"
},
{
"name": "sd_dreambooth_extension",
"path": "/workspace/stable-diffusion-webui/extensions/sd_dreambooth_extension",
"version": "45a12fe5",
"branch": "main",
"remote": "https://github.com/d8ahazard/sd_dreambooth_extension"
},
{
"name": "stable-diffusion-webui-images-browser",
"path": "/workspace/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser",
"version": "3d2d0f4c",
"branch": "main",
"remote": "https://github.com/AlUlkesh/stable-diffusion-webui-images-browser"
},
{
"name": "ultimate-upscale-for-automatic1111",
"path": "/workspace/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111",
"version": "2322caa4",
"branch": "master",
"remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111"
},
{
"name": "unprompted",
"path": "/workspace/stable-diffusion-webui/extensions/unprompted",
"version": "697a6f61",
"branch": "main",
"remote": "https://github.com/ThereforeGames/unprompted.git"
}
],
"Inactive extensions": [],
"Environment": {
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"dp_ignore_whitespace": false,
"dp_write_raw_template": false,
"dp_write_prompts_to_file": false,
"dp_parser_variant_start": "{",
"dp_parser_variant_end": "}",
"dp_parser_wildcard_wrap": "__",
"dp_limit_jinja_prompts": false,
"dp_auto_purge_cache": false,
"dp_wildcard_manager_no_dedupe": false,
"dp_wildcard_manager_no_sort": false,
"dp_wildcard_manager_shuffle": false,
"dp_magicprompt_default_model": "Gustavosta/MagicPrompt-Stable-Diffusion",
"dp_magicprompt_batch_size": 1,
"face_editor_search_subdirectories": false,
"face_editor_additional_components": [],
"face_editor_save_original_on_detection_fail": true,
"face_editor_correct_tilt": false,
"face_editor_auto_face_size_by_model": false,
"face_editor_script_index": 99,
"control_net_detectedmap_dir": "detected_maps",
"control_net_models_path": "",
"control_net_modules_path": "",
"control_net_unit_count": 3,
"control_net_model_cache_size": 2,
"control_net_inpaint_blur_sigma": 7,
"control_net_no_detectmap": false,
"control_net_detectmap_autosaving": false,
"control_net_allow_script_control": false,
"control_net_sync_field_args": true,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"controlnet_disable_openpose_edit": false,
"controlnet_disable_photopea_edit": false,
"controlnet_photopea_warning": true,
"controlnet_ignore_noninpaint_mask": false,
"controlnet_clip_detector_on_cpu": false,
"controlnet_control_type_dropdown": false,
"deforum_keep_3d_models_in_vram": false,
"deforum_enable_persistent_settings": false,
"deforum_persistent_settings_path": "models/Deforum/deforum_persistent_settings.txt",
"deforum_ffmpeg_location": "/opt/micromamba/envs/webui/lib/python3.10/site-packages/imageio_ffmpeg/binaries/ffmpeg-linux64-v4.2.2",
"deforum_ffmpeg_crf": 17,
"deforum_ffmpeg_preset": "slow",
"deforum_debug_mode_enabled": false,
"deforum_save_gen_info_as_srt": false,
"deforum_embed_srt": false,
"deforum_save_gen_info_as_srt_params": [
"Noise Schedule"
],
"deforum_preview": "Off",
"deforum_preview_interval_frames": 100,
"regprp_debug": false,
"regprp_hidepmask": false,
"civitai_link_key": "",
"civitai_link_logging": true,
"civitai_api_key": "",
"civitai_download_previews": true,
"civitai_download_triggers": true,
"civitai_nsfw_previews": false,
"civitai_download_missing_models": true,
"civitai_hashify_resources": true,
"civitai_folder_model": "",
"civitai_folder_lora": "",
"civitai_folder_lyco": "",
"image_browser_active_tabs": "txt2img, img2img, txt2img-grids, img2img-grids, Extras, Favorites, Others, All, Maintenance",
"image_browser_hidden_components": [],
"image_browser_with_subdirs": true,
"image_browser_preload": false,
"image_browser_copy_image": false,
"image_browser_delete_message": true,
"image_browser_txt_files": true,
"image_browser_debug_level": "0 - none",
"image_browser_delete_recycle": true,
"image_browser_scan_exif": true,
"image_browser_mod_shift": false,
"image_browser_mod_ctrl_shift": false,
"image_browser_ranking_pnginfo": false,
"image_browser_page_columns": 6,
"image_browser_page_rows": 6,
"image_browser_pages_perload": 20,
"image_browser_height_auto": false,
"image_browser_use_thumbnail": false,
"image_browser_thumbnail_size": 200,
"image_browser_thumbnail_crop": false,
"image_browser_swipe": false,
"image_browser_img_tooltips": true,
"image_browser_show_progress": true,
"image_browser_info_add": false,
"image_browser_video_pos": "Above",
"image_browser_video_x": 640,
"image_browser_video_y": 640,
"sd_model_checkpoint": "",
"sd_checkpoint_hash": "408997149cd6ce0f1f8187a857303f312e5bf090948b4c432d2329d7efcd26c5",
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/txt2img-images",
"outdir_img2img_samples": "outputs/img2img-images",
"outdir_extras_samples": "outputs/extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/txt2img-grids",
"outdir_img2img_grids": "outputs/img2img-grids",
"outdir_save": "log/images",
"outdir_init_images": "outputs/init-images",
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000.0,
"img_max_size_mp": 200.0,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": true,
"notification_volume": 100,
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"auto_backcompat": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"use_downcasted_alpha_bar": false,
"refiner_switch_by_sample_steps": false,
"lora_functional": false,
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1,
"extra_networks_card_width": 0.0,
"extra_networks_card_height": 0.0,
"extra_networks_card_text_scale": 1,
"extra_networks_card_show_desc": true,
"extra_networks_card_description_is_html": false,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_tree_view_style": "Dirs",
"extra_networks_tree_view_default_enabled": true,
"extra_networks_tree_view_default_width": 180.0,
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"lora_not_found_warning_console": false,
"lora_not_found_gradio_warning": false,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0,
"token_merging_ratio": 0,
"token_merging_ratio_img2img": 0,
"token_merging_ratio_hr": 0,
"pad_cond_uncond": false,
"pad_cond_uncond_v0": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"fp8_storage": "Disable",
"cache_fp16_weight": false,
"hide_samplers": [],
"eta_ddim": 0,
"eta_ancestral": 1,
"ddim_discretize": "uniform",
"s_churn": 0,
"s_tmin": 0,
"s_tmax": 0,
"s_noise": 1,
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"sd_noise_schedule": "Default",
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"emphasis": "Original",
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": false,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"enable_prompt_comments": true,
"sdxl_crop_top": 0.0,
"sdxl_crop_left": 0.0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "Automatic",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision_bfloat16": false,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1,
"initial_noise_multiplier": 1,
"img2img_extra_noise": 0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"overlay_inpaint": true,
"return_grid": true,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250.0,
"sd_webui_modal_lightbox_icon_opacity": 1,
"sd_webui_modal_lightbox_toolbar_opacity": 0.9,
"gallery_height": "",
"open_dir_button_choice": "Subdirectory",
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000.0,
"live_preview_fast_interrupt": false,
"js_live_preview_in_modal_lightbox": false,
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"keyedit_move": true,
"disable_token_counters": false,
"include_styles_into_token_counters": true,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"compact_prompt_box": false,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"interrupt_after_current": true,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint"
],
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_reloading_ui_scripts": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"prioritized_callbacks_app_started": [],
"prioritized_callbacks_model_loaded": [],
"prioritized_callbacks_ui_tabs": [],
"prioritized_callbacks_ui_settings": [],
"prioritized_callbacks_before_image_saved": [],
"prioritized_callbacks_after_component": [],
"prioritized_callbacks_infotext_pasted": [],
"prioritized_callbacks_script_unloaded": [],
"prioritized_callbacks_before_ui": [],
"prioritized_callbacks_on_reload": [],
"prioritized_callbacks_list_optimizers": [],
"prioritized_callbacks_before_token_counter": [],
"prioritized_callbacks_script_before_process": [],
"prioritized_callbacks_script_process": [],
"prioritized_callbacks_script_before_process_batch": [],
"prioritized_callbacks_script_process_batch": [],
"prioritized_callbacks_script_postprocess": [],
"prioritized_callbacks_script_postprocess_batch": [],
"prioritized_callbacks_script_post_sample": [],
"prioritized_callbacks_script_on_mask_blend": [],
"prioritized_callbacks_script_postprocess_image": [],
"prioritized_callbacks_script_postprocess_maskoverlay": [],
"auto_launch_browser": "Local",
"enable_console_prompts": false,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"enable_upscale_progressbar": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"postprocessing_enable_in_main_ui": [],
"postprocessing_disable_in_extras": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"dat_enabled_models": [
"DAT x2",
"DAT x3",
"DAT x4"
],
"DAT_tile": 192,
"DAT_tile_overlap": 8,
"set_scale_by_when_changing_upscaler": false,
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500.0,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120.0,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_shrink_brush": "Q",
"canvas_hotkey_grow_brush": "W",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
],
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": ""
},
"Startup": {
"total": 2.400942325592041,
"records": {
"app reload callback": 0.0003101825714111328,
"scripts unloaded callback": 0.00026988983154296875,
"set samplers": 0.00010919570922851562,
"list extensions": 0.0037322044372558594,
"restore config state file": 2.7894973754882812e-05,
"list SD models": 0.0044705867767333984,
"list localizations": 0.0003821849822998047,
"load scripts/custom_code.py": 0.00739288330078125,
"load scripts/img2imgalt.py": 0.0008432865142822266,
"load scripts/loopback.py": 0.0003917217254638672,
"load scripts/outpainting_mk_2.py": 0.00048542022705078125,
"load scripts/poor_mans_outpainting.py": 0.00026917457580566406,
"load scripts/postprocessing_codeformer.py": 0.00024700164794921875,
"load scripts/postprocessing_gfpgan.py": 0.00019931793212890625,
"load scripts/postprocessing_upscale.py": 0.0004520416259765625,
"load scripts/prompt_matrix.py": 0.00029087066650390625,
"load scripts/prompts_from_file.py": 0.00030422210693359375,
"load scripts/sd_upscale.py": 0.00023627281188964844,
"load scripts/xyz_grid.py": 0.001245260238647461,
"load scripts/ldsr_model.py": 0.06079363822937012,
"load scripts/lora_script.py": 0.12944555282592773,
"load scripts/scunet_model.py": 0.021799325942993164,
"load scripts/swinir_model.py": 0.021686315536499023,
"load scripts/hotkey_config.py": 0.00016880035400390625,
"load scripts/extra_options_section.py": 0.00019168853759765625,
"load scripts/hypertile_script.py": 0.04309535026550293,
"load scripts/hypertile_xyz.py": 0.00018143653869628906,
"load scripts/postprocessing_autosized_crop.py": 0.0001506805419921875,
"load scripts/postprocessing_caption.py": 8.940696716308594e-05,
"load scripts/postprocessing_create_flipped_copies.py": 8.845329284667969e-05,
"load scripts/postprocessing_focal_crop.py": 0.00010585784912109375,
"load scripts/postprocessing_split_oversized.py": 0.0001010894775390625,
"load scripts/soft_inpainting.py": 0.0002815723419189453,
"load scripts/main.py": 0.023134469985961914,
"load scripts/dynamic_prompting.py": 8.96453857421875e-05,
"load scripts/face_editor.py": 0.00011515617370605469,
"load scripts/face_editor_extension.py": 0.00010180473327636719,
"load scripts/adapter.py": 0.0004062652587890625,
"load scripts/api.py": 0.04474902153015137,
"load scripts/batch_hijack.py": 0.0003345012664794922,
"load scripts/cldm.py": 0.00019073486328125,
"load scripts/controlnet.py": 0.10988211631774902,
"load scripts/controlnet_diffusers.py": 0.0002887248992919922,
"load scripts/controlnet_lllite.py": 0.00022530555725097656,
"load scripts/controlnet_lora.py": 0.00024366378784179688,
"load scripts/controlnet_model_guess.py": 0.00034427642822265625,
"load scripts/controlnet_sparsectrl.py": 0.000179290771484375,
"load scripts/controlnet_version.py": 6.389617919921875e-05,
"load scripts/enums.py": 0.0011875629425048828,
"load scripts/external_code.py": 0.00011277198791503906,
"load scripts/global_state.py": 0.00018024444580078125,
"load scripts/hook.py": 0.0004794597625732422,
"load scripts/infotext.py": 0.00011849403381347656,
"load scripts/logging.py": 0.0006034374237060547,
"load scripts/lvminthin.py": 0.0003173351287841797,
"load scripts/movie2movie.py": 0.00018358230590820312,
"load scripts/supported_preprocessor.py": 0.00139617919921875,
"load scripts/utils.py": 0.00023031234741210938,
"load scripts/xyz_grid_support.py": 0.00025844573974609375,
"load scripts/deforum.py": 0.044471025466918945,
"load scripts/deforum_api.py": 0.0008006095886230469,
"load scripts/deforum_api_models.py": 0.003371715545654297,
"load scripts/deforum_extend_paths.py": 9.822845458984375e-05,
"load scripts/console_log_patch.py": 0.00017690658569335938,
"load scripts/reactor_api.py": 0.043341636657714844,
"load scripts/reactor_faceswap.py": 0.0003552436828613281,
"load scripts/reactor_globals.py": 0.0002033710479736328,
"load scripts/reactor_helpers.py": 0.0002956390380859375,
"load scripts/reactor_logger.py": 0.0003070831298828125,
"load scripts/reactor_swapper.py": 0.001123189926147461,
"load scripts/reactor_version.py": 0.00021696090698242188,
"load scripts/reactor_xyz.py": 0.00011277198791503906,
"load scripts/attention.py": 0.00025534629821777344,
"load scripts/latent.py": 0.0006561279296875,
"load scripts/regions.py": 0.00151824951171875,
"load scripts/rp.py": 0.02244114875793457,
"load scripts/rps.py": 0.00024199485778808594,
"load scripts/gen_hashing.py": 0.0216982364654541,
"load scripts/info.py": 0.021608829498291016,
"load scripts/link.py": 0.021574020385742188,
"load scripts/pasted.py": 0.02158498764038086,
"load scripts/previews.py": 0.0215604305267334,
"load scripts/settings.py": 0.021611928939819336,
"load scripts/__init__.py": 0.00012445449829101562,
"load scripts/image_browser.py": 0.04548215866088867,
"load scripts/ultimate-upscale.py": 0.0006527900695800781,
"load scripts/comments.py": 0.021661758422851562,
"load scripts/refiner.py": 0.00017905235290527344,
"load scripts/sampler.py": 0.00010347366333007812,
"load scripts/seed.py": 0.00013446807861328125,
"load scripts": 0.7939610481262207,
"load upscalers": 0.0006277561187744141,
"refresh VAE": 0.0003781318664550781,
"refresh textual inversion templates": 3.838539123535156e-05,
"scripts list_optimizers": 0.0003681182861328125,
"scripts list_unets": 2.5987625122070312e-05,
"reload hypernetworks": 0.00044846534729003906,
"initialize extra networks": 0.00043654441833496094,
"scripts before_ui_callback": 0.00026988983154296875,
"create ui": 1.3994841575622559,
"gradio launch": 0.18543195724487305,
"add APIs": 0.0044019222259521484,
"app_started_callback/lora_script.py": 0.00019884109497070312,
"app_started_callback/api.py": 0.002332925796508789,
"app_started_callback/reactor_api.py": 0.0019371509552001953,
"app_started_callback/info.py": 0.0003161430358886719,
"app_started_callback/link.py": 0.00020575523376464844,
"app_started_callback/previews.py": 0.0007855892181396484,
"app_started_callback": 0.005781650543212891
}
},
"Packages": [
"absl-py==2.1.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.5",
"aiosignal==1.3.1",
"albumentations==1.4.3",
"altair==5.3.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"asttokens==2.4.1",
"async-timeout==4.0.3",
"attrs==23.2.0",
"av==12.0.0",
"beautifulsoup4==4.12.3",
"bidict==0.23.1",
"bitsandbytes==0.43.0",
"blendmodes==2022",
"brotli==1.1.0",
"cachetools==5.3.3",
"certifi==2024.2.2",
"cffi==1.16.0",
"chardet==5.2.0",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"coloredlogs==15.0.1",
"colorlog==6.8.2",
"comm==0.2.2",
"contourpy==1.2.1",
"cssselect2==0.7.0",
"cycler==0.12.1",
"cython==3.0.10",
"dadaptation==3.2",
"debugpy==1.8.1",
"decorator==5.1.1",
"deprecation==2.1.0",
"depth-anything==2024.1.22.0",
"diffusers==0.27.2",
"discord-webhook==1.3.0",
"diskcache==5.6.3",
"dsine==2024.3.23",
"dynamicprompts==0.31.0",
"easydict==1.13",
"einops==0.4.1",
"embreex==2.17.7.post4",
"exceptiongroup==1.2.0",
"executing==2.0.1",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.2",
"filelock==3.13.4",
"filterpy==1.4.5",
"flatbuffers==24.3.25",
"fonttools==4.51.0",
"frozenlist==1.4.1",
"fsspec==2024.3.1",
"ftfy==6.2.0",
"fvcore==0.1.5.post20221221",
"gdown==5.2.0",
"geffnet==1.0.2",
"gitdb==4.0.11",
"gitpython==3.1.43",
"glob2==0.5",
"gmpy2==2.1.2",
"google-auth-oauthlib==1.0.0",
"google-auth==2.29.0",
"gradio-client==0.5.0",
"gradio==3.41.2",
"grpcio==1.63.0",
"h11==0.12.0",
"handrefinerportable==2024.2.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.22.2",
"humanfriendly==10.0",
"idna==3.7",
"imageio-ffmpeg==0.4.9",
"imageio==2.34.0",
"importlib-metadata==7.1.0",
"importlib-resources==6.4.0",
"inflection==0.5.1",
"insightface==0.7.3",
"iopath==0.1.10",
"ipykernel==6.29.3",
"ipython==8.22.2",
"ipywidgets==8.1.2",
"jax==0.4.28",
"jaxlib==0.4.28",
"jedi==0.19.1",
"jinja2==3.1.3",
"joblib==1.4.2",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.12.1",
"jsonschema==4.21.1",
"jupyter-client==8.6.1",
"jupyter-core==5.7.2",
"jupyterlab-widgets==3.0.10",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.4",
"lightning-utilities==0.11.2",
"llvmlite==0.42.0",
"lxml==5.2.1",
"mapbox-earcut==1.0.1",
"markdown-it-py==3.0.0",
"markdown==3.6",
"markupsafe==2.1.5",
"matplotlib-inline==0.1.7",
"matplotlib==3.8.4",
"mdurl==0.1.2",
"mediapipe==0.10.14",
"ml-dtypes==0.4.0",
"mpmath==1.3.0",
"multidict==6.0.5",
"nest-asyncio==1.6.0",
"networkx==3.3",
"numba==0.59.1",
"numexpr==2.10.0",
"numpy==1.26.2",
"nvidia-ml-py3==7.352.0",
"oauthlib==3.2.2",
"olefile==0.47",
"omegaconf==2.2.3",
"onnx==1.16.0",
"onnxruntime-gpu==1.17.1",
"open-clip-torch==2.20.0",
"opencv-contrib-python==4.9.0.80",
"opencv-python-headless==4.9.0.80",
"opencv-python==4.9.0.80",
"opt-einsum==3.3.0",
"orjson==3.10.1",
"packaging==24.0",
"pandas==2.2.2",
"parso==0.8.4",
"pexpect==4.9.0",
"pickleshare==0.7.5",
"piexif==1.1.3",
"pillow==9.5.0",
"pims==0.6.1",
"pip==24.0",
"platformdirs==4.2.0",
"portalocker==2.8.2",
"prettytable==3.10.0",
"prompt-toolkit==3.0.42",
"protobuf==3.20.3",
"psutil==5.9.5",
"ptyprocess==0.7.0",
"pure-eval==0.2.2",
"pyasn1-modules==0.4.0",
"pyasn1==0.6.0",
"pycollada==0.8",
"pycparser==2.22",
"pydantic==1.10.15",
"pydub==0.25.1",
"pygments==2.17.2",
"pyparsing==3.1.2",
"pysocks==1.7.1",
"python-dateutil==2.9.0",
"python-engineio==4.9.0",
"python-multipart==0.0.9",
"python-socketio==5.11.2",
"pytorch-lightning==1.9.4",
"pytorch-optimizer==2.12.0",
"pytz==2024.1",
"pywavelets==1.6.0",
"pyyaml==6.0.1",
"pyzmq==26.0.0",
"referencing==0.34.0",
"regex==2024.4.16",
"reportlab==4.2.0",
"requests-oauthlib==2.0.0",
"requests==2.31.0",
"resize-right==0.0.2",
"rich==13.7.1",
"rpds-py==0.18.0",
"rsa==4.9",
"rtree==1.2.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scikit-learn==1.4.2",
"scipy==1.13.0",
"semantic-version==2.10.0",
"send2trash==1.8.3",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"shapely==2.0.4",
"simple-websocket==1.0.0",
"six==1.16.0",
"slicerator==1.1.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"sounddevice==0.4.6",
"soupsieve==2.5",
"spandrel==0.1.6",
"stack-data==0.6.2",
"starlette==0.26.1",
"svg.path==6.3",
"svglib==1.5.1",
"sympy==1.12",
"tabulate==0.9.0",
"tensorboard-data-server==0.7.2",
"tensorboard==2.13.0",
"termcolor==2.4.0",
"threadpoolctl==3.5.0",
"tifffile==2024.4.18",
"timm==0.9.16",
"tinycss2==1.3.0",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.1",
"toolz==0.12.1",
"torch==2.2.2",
"torchaudio==2.2.2",
"torchdiffeq==0.2.3",
"torchmetrics==1.3.2",
"torchsde==0.2.6",
"torchvision==0.17.2",
"tornado==6.4",
"tqdm==4.66.2",
"traitlets==5.14.2",
"trampoline==0.1.2",
"transformers==4.30.2",
"trimesh==4.3.2",
"triton==2.2.0",
"typing-extensions==4.11.0",
"tzdata==2024.1",
"urllib3==2.2.1",
"uvicorn==0.29.0",
"vhacdx==0.0.6",
"wcwidth==0.2.13",
"webencodings==0.5.1",
"websocket-client==1.8.0",
"websockets==11.0.3",
"werkzeug==3.0.3",
"wheel==0.43.0",
"widgetsnbextension==4.0.10",
"wsproto==1.2.0",
"xatlas==0.0.9",
"xformers==0.0.25.post1",
"xxhash==3.4.1",
"yacs==0.1.8",
"yapf==0.40.2",
"yarl==1.9.4",
"zipp==3.17.0"
]
}
```
</p>
</details>
### Console logs
```Shell
I already had to scrub API keys and other crap from the sysinfo file to post this, I'm not going over this one with a fine toothed comb too.
```
### Additional information
_No response_ | open | 2024-05-12T19:13:03Z | 2024-05-13T02:10:24Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15765 | [
"bug-report"
] | redactedaccount | 0 |
keras-rl/keras-rl | tensorflow | 88 | Parameterization of P for CDQN | I was getting NaNs in my cdqn like https://github.com/matthiasplappert/keras-rl/issues/28 . I suspect the below exponential and I'm trying out an alternative parameterization that seems to be working. Squaring might have slightly less chance of exploding than exponential but means less control over small values. Any thoughts about the best way to parameterize a positive definite matrix?
I'm doing some experimentation at https://github.com/bstriner/keras-rl/tree/cdqn . I can put together a PR if anyone can confirm the best way to go.
Any other good suggestions on troubleshooting NaN values? Rescaling the observations and rewards seems to be a popular suggestion.
Current code:
```python
def fn(x, L_acc, LT_acc):
    x_ = K.zeros((self.nb_actions, self.nb_actions))
    x_ = T.set_subtensor(x_[np.tril_indices(self.nb_actions)], x)
    diag = K.exp(T.diag(x_) + K.epsilon())
    x_ = T.set_subtensor(x_[np.diag_indices(self.nb_actions)], diag)
    return x_, x_.T

outputs_info = [
    K.zeros((self.nb_actions, self.nb_actions)),
    K.zeros((self.nb_actions, self.nb_actions)),
]
results, _ = theano.scan(fn=fn, sequences=L_flat, outputs_info=outputs_info)
L, LT = results
P = K.batch_dot(L, LT)
```
Alternative parameterization:
```python
def fn(x_tri, _):
    x_ = K.zeros((self.nb_actions, self.nb_actions))
    x_ = T.set_subtensor(x_[np.tril_indices(self.nb_actions)], x_tri)
    x_ = T.dot(x_, x_.T) + (K.epsilon() * T.eye(self.nb_actions))
    return x_

outputs_info = [
    K.zeros((self.nb_actions, self.nb_actions))
]
P, _ = theano.scan(fn=fn, sequences=L_flat, outputs_info=outputs_info)
```
Cheers!
(I only sketched out a TF implementation but the theano one is working.)
@matthiasplappert As a side note, I think in the current version of the code, the `+K.epsilon` should be outside of the `K.exp`.
```python
diag = K.exp(T.diag(x_)) + K.epsilon()
```
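For comparison outside of Theano/TF, here is a framework-free NumPy sketch of both parameterizations (the function and names are mine, not keras-rl code); softplus is a third option that stays positive but grows only linearly, which may be a middle ground between exponential and squaring:

```python
import numpy as np

def build_P(l_flat, n, diag="exp"):
    """Build P = L @ L.T from a flat vector of n*(n+1)/2 lower-triangular
    entries. The diagonal transform keeps diag(L) positive, so P is
    positive definite for any input vector."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = l_flat
    d = np.diag_indices(n)
    if diag == "exp":            # current keras-rl scheme: can explode
        L[d] = np.exp(L[d]) + 1e-7
    elif diag == "softplus":     # positive, but only linear growth
        L[d] = np.log1p(np.exp(L[d])) + 1e-7
    return L @ L.T

P = build_P(np.array([3.0, 0.5, -1.0]), n=2, diag="softplus")
assert np.all(np.linalg.eigvalsh(P) > 0)  # positive definite
```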
- [x] Check that you are up-to-date with the master branch of Keras-RL. You can update with:
`pip install git+git://github.com/matthiasplappert/keras-rl.git --upgrade --no-deps`
- [x] Check that you are up-to-date with the master branch of Keras. You can update with:
`pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps`
- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short). If you report an error, please include the error message and the backtrace.
| closed | 2017-03-15T00:12:33Z | 2019-01-12T15:40:45Z | https://github.com/keras-rl/keras-rl/issues/88 | [
"wontfix"
] | bstriner | 1 |
miguelgrinberg/flasky | flask | 401 | clarification | closed | 2018-12-08T18:09:35Z | 2018-12-08T18:11:14Z | https://github.com/miguelgrinberg/flasky/issues/401 | [] | roygb1v | 0 | |
encode/httpx | asyncio | 3,232 | How to implement failed retry | The starting point for issues should usually be a discussion...
https://github.com/encode/httpx/discussions
Possible bugs may be raised as a "Potential Issue" discussion, feature requests may be raised as an "Ideas" discussion. We can then determine if the discussion needs to be escalated into an "Issue" or not.
This will help us ensure that the "Issues" list properly reflects ongoing or needed work on the project.
---
- [ ] Initially raised as discussion #...
| closed | 2024-06-28T00:40:18Z | 2024-06-28T16:51:36Z | https://github.com/encode/httpx/issues/3232 | [] | yuanjie-ai | 0 |
Gozargah/Marzban | api | 1,568 | Error when use change user plan to next | **Describe the bug**
This bug is not critical, since the operation still performs its function. But from my side, as an API developer, it is rather strange to see a 404 error when everything actually succeeds. The endpoint assigns the next plan to the user, but apparently the existence check runs after the assignment, so the system already considers that the user has no next plan.
**To Reproduce**
Set a next plan for a user via the API.
Then try to apply it: the request will return a 404 error, but it will actually apply the plan.
**Expected behavior**
Fix this error so that the endpoint returns a 200 status code and the user model, as specified in the API documentation.
**Machine details (please complete the following information):**
- OS: Ubuntu 22
- Python version: Python 3.12
- Browser: Don't use. API
| closed | 2025-01-05T14:13:29Z | 2025-01-14T22:08:57Z | https://github.com/Gozargah/Marzban/issues/1568 | [
"Bug",
"Backend",
"P1",
"API",
"DB"
] | sm1ky | 3 |
falconry/falcon | api | 1,534 | Middleware process_response function is missing parameter | The `process_response` function used in the middleware in the [readme file](https://github.com/falconry/falcon/blob/master/README.rst) and on the [quickstart page](https://falcon.readthedocs.io/en/stable/user/quickstart.html) is missing the `req_succeeded` parameter, which was introduced in Falcon 2.0.0.
It's currently written as `def process_response(self, req, resp, resource):`, but should be `def process_response(self, req, resp, resource, req_succeeded):` (see [current middleware documentation](https://falcon.readthedocs.io/en/stable/api/middleware.html)). This is currently causing the test example to not function correctly, returning this exception: `TypeError: process_response() takes 4 positional arguments but 5 were given` | closed | 2019-05-07T16:20:18Z | 2019-05-20T05:04:27Z | https://github.com/falconry/falcon/issues/1534 | [
"bug",
"documentation"
] | pbjr23 | 0 |
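For anyone updating middleware for the issue above, a minimal class with the corrected Falcon 2.0 signature might look like this (the class itself is illustrative, not from the Falcon docs):

```python
class ResponseAuditMiddleware:
    """Middleware compatible with Falcon 2.0's four-argument response hook."""

    def __init__(self):
        self.failed_paths = []

    def process_response(self, req, resp, resource, req_succeeded):
        # req_succeeded is False if an unhandled exception was raised
        # while processing the request.
        if not req_succeeded:
            self.failed_paths.append(req.path)
```

At the time of this issue it would be registered with `falcon.API(middleware=[ResponseAuditMiddleware()])`.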
modoboa/modoboa | django | 3,329 | Modoboa upgrade resulted in broken pop3 support | # Impacted versions
* OS Type: Debian
* OS Version: 12.4
* Database Type: PostgreSQL
* Modoboa: 2.3.2
* installer used: Yes
* Webserver: Nginx
# Current behavior
After the upgrade, the dovecot-pop3d service did not install correctly and is now not working. I have tried to manually install the dovecot-pop3d service but get an error. Terminal output below...
```
root@mail01:/# dpkg -l | grep dovecot
ii dovecot-core 1:2.3.21.1+dfsg1-1~bpo12+1 amd64 secure POP3/IMAP server - core files
ii dovecot-imapd 1:2.3.21.1+dfsg1-1~bpo12+1 amd64 secure POP3/IMAP server - IMAP daemon
ii dovecot-lmtpd 1:2.3.21.1+dfsg1-1~bpo12+1 amd64 secure POP3/IMAP server - LMTP server
ii dovecot-managesieved 1:2.3.21.1+dfsg1-1~bpo12+1 amd64 secure POP3/IMAP server - ManageSieve server
ii dovecot-pgsql 1:2.3.21.1+dfsg1-1~bpo12+1 amd64 secure POP3/IMAP server - PostgreSQL support
rc dovecot-pop3d 1:2.3.19.1+dfsg1-2.1 amd64 secure POP3/IMAP server - POP3 daemon
ii dovecot-sieve 1:2.3.21.1+dfsg1-1~bpo12+1 amd64 secure POP3/IMAP server - Sieve filters support
```
```
root@mail01:/# apt install dovecot-pop3d
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
dovecot-pop3d : Depends: dovecot-core (= 1:2.3.19.1+dfsg1-2.1+deb12u1) but 1:2.3.21.1+dfsg1-1~bpo12+1 is to be installed
E: Unable to correct problems, you have held broken packages.
```
# Expected behavior
Installer should upgrade correctly.
Any help here would be great as I have users without connectivity right now other than through webmail.
Thanks.
| closed | 2024-10-28T21:38:44Z | 2024-10-29T02:06:42Z | https://github.com/modoboa/modoboa/issues/3329 | [] | pappastech | 2 |
graphql-python/graphene-sqlalchemy | graphql | 148 | MySQL Enum reflection conversion issue | When using reflection to build the models (instead of declarative), the Enum conversion fails because the ENUM column is treated as a string, so doing ```type.name``` returns None.
```sql
CREATE TABLE `foos` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`status` ENUM('open', 'closed') DEFAULT 'open',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
```python
class Foo(Base):
    __table__ = Table('foos', metadata, autoload=True)
```
```shell
enum_class = super(EnumMeta, metacls).__new__(metacls, cls, bases, classdict)
TypeError: Error when calling the metaclass bases type() argument 1 must be string, not None
```
Suggested [fix](https://github.com/graphql-python/graphene-sqlalchemy/pull/147) | open | 2018-07-24T13:14:49Z | 2018-07-24T13:14:49Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/148 | [] | sebastiandev | 0 |
wkentaro/labelme | deep-learning | 430 | Video annotation example | Hi,
Sorry for making you angry.
But I am new to this field, so please help me with these concepts.
I am unable to understand the concept of video annotation.
I am using video_to_image.py to convert the video's frames to image files. Then I run labelme to label the images.
But I have no idea how to convert those images back into a video.
Finally, if I play the resulting video, will it show the labels I created?
| closed | 2019-06-28T11:33:54Z | 2021-02-27T04:45:17Z | https://github.com/wkentaro/labelme/issues/430 | [] | suravijayjilla | 6 |
vanna-ai/vanna | data-visualization | 594 | ORA-24550: signal received: Unhandled exception: Code=c0000005 Flags=0 | **Describe the bug**
When I used Flask with OpenAI + Chroma + a Microsoft SQL Server database for initial training, I generated about 2000 plan items and carried out the vn.train(plan=plan) operation. However, training about 100 pieces of data kills my Flask process and outputs this error: "ORA-24550: signal received: Unhandled exception: Code=c0000005 Flags=0
00000251140251DE<-00000251140253D0<-0000025114024429<-00000251141BD333<-00007FFA8FAE6227<-00007FFA932CE431<-00007FFA932B6506<-00007FFA932CA49D<-00007FFA9325FD43<-00007FFA932C960A<-00007FFA4B192EB0<-00007FFA38073B2C<-00007FFA380768A9<-00007FFA38093D6B<-00007FFA38093E04<-00007FFA3806BBA0<-00007FFA405345D6<-00007FFA4053BCEF<-00007FFA4063A469<-00007FFA404CCD4F<-00007FFA404CA4B7<-00007FFA4050061C<-00007FFA405004F3<-00007FFA404D1305<-00007FFA404C9165"
**To Reproduce**
Steps to reproduce the behavior:
1. code
```python
cx_Oracle.init_oracle_client(lib_dir="E:\\Development\\PremiumSoft\\Navicat Premium 15\\instantclient_11_2")
db_str = "mssql+pymssql://sa:Sensnow2022@127.0.0.1:1433/ismartMemberService"
engine = create_engine(db_str)

def run_sql(sql: str) -> pd.DataFrame:
    if len(sql) > 0:
        sql = sql.replace(";", "")
    df = pd.read_sql_query(sql, engine)
    return df

vn.run_sql = run_sql
vn.run_sql_is_set = True
vn.static_documentation = "This is a Microsoft SQL Server database"
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
```
**Expected behavior**
**Error logs/Screenshots**

ORA-24550: signal received: Unhandled exception: Code=c0000005 Flags=0
00000251140251DE<-00000251140253D0<-0000025114024429<-00000251141BD333<-00007FFA8FAE6227<-00007FFA932CE431<-00007FFA932B6506<-00007FFA932CA49D<-00007FFA9325FD43<-00007FFA932C960A<-00007FFA4B192EB0<-00007FFA38073B2C<-00007FFA380768A9<-00007FFA38093D6B<-00007FFA38093E04<-00007FFA3806BBA0<-00007FFA405345D6<-00007FFA4053BCEF<-00007FFA4063A469<-00007FFA404CCD4F<-00007FFA404CA4B7<-00007FFA4050061C<-00007FFA405004F3<-00007FFA404D1305<-00007FFA404C9165
**Desktop (please complete the following information):**
- OS: [Windows 11]
- Version: [e.g. 20.04]
- Python: [3.9]
- Vanna: [2.8.0]
**Additional context**
Add any other context about the problem here.
| closed | 2024-08-09T07:19:22Z | 2024-08-09T07:30:28Z | https://github.com/vanna-ai/vanna/issues/594 | [
"bug"
] | DawsonPeres | 2 |
scikit-tda/kepler-mapper | data-visualization | 252 | Issue with generating visuals in mapper | **Describe the bug**
During execution of the mapper.visualize() function, it errors out in Visuals.py (line 573, np.asscalar(object)) with "Numpy does not support attribute asscalar".
**To Reproduce**
Steps to reproduce the behavior:
1. brew install numpy (1.26.3)
2. Open Visual Studio Code
3. Follow the instructions in https://kepler-mapper.scikit-tda.org/en/latest/generated/gallery/plot_breast_cancer.html#sphx-glr-generated-gallery-plot-breast-cancer-py
4. Download the notebook https://kepler-mapper.scikit-tda.org/en/latest/_downloads/a94b1f1598c3cfb97a9077a22fcc2de4/plot_breast_cancer.ipynb
5. Install kmapper using the following notebook directive
%pip install --upgrade kmapper
6. Execute the notebook
7. It errors out on the below mentioned function:
mapper.visualize(
    graph,
    path_html="breast-cancer.html",
    title="Wisconsin Breast Cancer Dataset",
    custom_tooltips=y,
)
**Expected behavior**
The expectation is that the visualize function creates a visualization and stores it as an HTML file with the given filename.
**Screenshots**
**Desktop (please complete the following information):**
- OS: MacOS 14.3 (23D56)
- Browser: Not applicable
- OS Version: 14.3 (23D56)
**Smartphone (please complete the following information):**
Not applicable
**Additional context**
I believe the issue happens because of a bug in the code in Visuals.py on line 573. The code (between lines 569-573) should be replaced as follows:
# Jinja default json serializer can't handle np arrays; provide custom encoding
def my_dumper(obj, **kwargs):
    def np_encoder(object, **kwargs):
        if isinstance(object, np.generic):
            # return np.asscalar(object)  # <-- ERROR and a BUG on line 573
            return object.item()  # <-- The correct code should be this
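Until a patched release ships, a workaround that avoids editing the installed package is to restore the alias before calling visualize (np.asscalar was deprecated in NumPy 1.16 and removed in 1.23):

```python
import numpy as np

# Restore the removed alias so kmapper's Visuals.py keeps working.
if not hasattr(np, "asscalar"):
    np.asscalar = lambda a: a.item()

assert np.asscalar(np.float64(3.5)) == 3.5
```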
| closed | 2024-01-29T13:08:41Z | 2024-07-06T11:38:50Z | https://github.com/scikit-tda/kepler-mapper/issues/252 | [
"bug"
] | wingenium-nagesh | 7 |
pallets-eco/flask-wtf | flask | 12 | FieldList(FileField) does not follow request.files | When form data is passed to a FieldList, each key in the form data
dict whose prefix corresponds to the encapsulated field's name results
in the creation of an entry. This allows for the addition of new field
entries on the basis of submitted form data and thus dynamic
field creation. See:
http://groups.google.com/group/wtforms/browse_thread/thread/824e95123e38c697/4bc8cd59a84ea5f9
However, when the encapsulated field is a FileField, the FieldList
will not create entries according to the submitted form data. The
following will fail on the last assertion:
``` python
from flask import Flask, request
from StringIO import StringIO
from flaskext.wtf import Form, FieldList, TextField, FileField
app = Flask(__name__)
app.config['SECRET_KEY'] = 'SECRET'
# given this
class BrokenForm(Form):
    text_fields = FieldList(TextField())
    file_fields = FieldList(FileField())

text_data = [('text_fields-0', 'First input'),
             ('text_fields-1', 'Second input')]
file_data = [('file_fields-0', (StringIO('contents 0'), 'file0.txt')),
             ('file_fields-1', (StringIO('contents 1'), 'file1.txt'))]

with app.test_request_context(method='POST',
                              data=dict(text_data + file_data)):
    assert len(request.files)  # the files have been added to the request

    f = BrokenForm(csrf_enabled=False)
    if f.validate_on_submit():
        assert len(text_data) == len(f.text_fields)  # this succeeds
        assert len(file_data) == len(f.file_fields)  # this doesn't!
```
This occurs because there is no overlap between the fields stored in
request.files and those in request.form, and Flask-WTF only looks at
request.form. The existing MultipleFileUpload test case didn't catch
this because it sets its FieldList's min_entries to 3; this means 3
FileFields exist that happen to correspond to what's in request.files
before that data is even processed. (These are created in the final
while loop in FieldList.process(*args), wtforms/fields.py; we're
failing in the 'if formdata:' block)
A simple solution is to combine request.form and request.files when
the latter exists and have the form process the result; the first
attached patch does just that.
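In Werkzeug terms, that first approach boils down to merging the two multidicts before the form sees them (the helper name is mine; `CombinedMultiDict` is real Werkzeug):

```python
from werkzeug.datastructures import CombinedMultiDict

def formdata_with_files(request):
    """Give wtforms a view that contains both form fields and file
    fields, so FieldList counts file entries when creating fields."""
    if request.files:
        return CombinedMultiDict([request.files, request.form])
    return request.form
```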
This seems less than ideal, though, because it removes the distinction
between forms and files imposed by Werkzeug and thus might run counter
to sane assumptions. Furthermore, it combines the two whenever
request.files is present, not just when a FieldList encapsulates a
FileField. The second patch addresses these problems by introducing a
subclass of FieldList that processes request.files when its
encapsulated field is a FileField.
The third patch provides a test that ensures FileFields are added to
FieldLists on the basis of submitted data.
Finally, this is not strictly a problem with flask-wtf, but rather a
problem that will exist with anything that's based on Werkzeug and
uses wtforms. Perhaps there ought to be a werkzeug-wtf?
---
- Bitbucket: https://bitbucket.org/danjac/flask-wtf/issue/12
- Originally Reported By: [Mark Williams](http://bitbucket.org/markrwilliams)
- Originally Created At: 2011-02-10 13:18:47
| closed | 2012-02-29T16:46:08Z | 2021-05-30T01:24:51Z | https://github.com/pallets-eco/flask-wtf/issues/12 | [
"bug",
"import"
] | rduplain | 5 |
koaning/scikit-lego | scikit-learn | 535 | [FEATURE] Time Series Target Encoding Transformer | Hi all,
I am a data scientist working mainly on time series problems. Usually, the best features are lags, rolling means and target encodings. I already have a transformer for Spark DataFrames that I use in my daily work to create these features.
I want to contribute a time series target encoding transformer (which would be used for creating lags, rolling means and target encodings) for pandas as well.
For instance, I use the following class to create rolling means at the item, store and region levels, with rolling windows of 13, 52 and 104 weeks and some skip periods to prevent data leakage.

The transformer that I created was designed using Spark's Window functionality and is meant to be used in the preprocessing step. However, I am willing to create one for scikit-learn pipelines.
If you also think this is a good idea, I would be happy to discuss the implementation.
Best,
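For discussion, a rough pandas counterpart of the Spark class above (the column names and the skip/shift convention are my guesses at the intended semantics):

```python
import pandas as pd

def add_lag_features(df, group_cols, target, lags=(1,), windows=(3,), skip=1):
    """Per-group lags and leakage-safe rolling means of `target`.

    `skip` shifts the series before the rolling window, so the current
    row's own target value never leaks into its feature.
    """
    out = df.copy()
    grouped = out.groupby(group_cols)[target]
    for lag in lags:
        out[f"{target}_lag{lag}"] = grouped.shift(lag)
    for w in windows:
        out[f"{target}_rmean{w}"] = grouped.transform(
            lambda s: s.shift(skip).rolling(w, min_periods=1).mean()
        )
    return out
```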
| open | 2022-09-21T14:35:37Z | 2022-09-23T08:49:41Z | https://github.com/koaning/scikit-lego/issues/535 | [
"enhancement"
] | canerturkseven | 4 |
yt-dlp/yt-dlp | python | 11,831 | 'Unable to connect to proxy' 'No connection could be made because the target machine actively refused it' | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
I haven't used yt-dlp in a while. It worked fine for me a month ago, but now it just keeps throwing 'Unable to connect to proxy'. I'm not using any VPN or proxy, which I triple-checked with `netsh winhttp show proxy`. I've tried changing my Wi-Fi connection, using a VPN and rebooting my laptop. I thought my yt-dlp might be outdated, so I wrote `yt-dlp -U` but it threw the same error again:
`ERROR: Unable to obtain version info (('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x00000194943D4F10>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))); Please try again later or visit https://github.com/yt-dlp/yt-dlp/releases/latest`
I manually downloaded the latest release, both stable and nightly, but the error persists.
Sorry if it's something simple I'm missing, I'm still learning.
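The `[debug] Proxy map: {'https': 'http://localhost:8000', ...}` line in the verbose output suggests a proxy is being picked up from environment variables, which `netsh winhttp show proxy` does not report. A quick check (a sketch, not part of yt-dlp):

```python
import os

def active_proxy_env(env=os.environ):
    """Return the proxy environment variables that are set. yt-dlp (via
    urllib) honours these when no --proxy option is given."""
    names = ("HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY",
             "http_proxy", "https_proxy", "all_proxy")
    return {name: env[name] for name in names if env.get(name)}

print(active_proxy_env() or "no proxy variables set")
```

Running `yt-dlp --proxy "" <url>` forces a direct connection regardless of the environment.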
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=T1cTLS9GiCk', '--list-formats']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.12.15.232913 from yt-dlp/yt-dlp-nightly-builds [d298693b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2024-08-18-git-7e5410eadb-full_build-www.gyan.dev (setts), ffprobe 6.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {'https': 'http://localhost:8000', 'http': 'http://localhost:8000'}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
ERROR: Unable to obtain version info (('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96AA8AC0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))); Please try again later or visit https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest
[youtube] Extracting URL: https://www.youtube.com/watch?v=T1cTLS9GiCk
[youtube] T1cTLS9GiCk: Downloading webpage
WARNING: [youtube] Unable to download webpage: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B0F8E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
[youtube] T1cTLS9GiCk: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B264A0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')). Retrying (1/3)...
[youtube] T1cTLS9GiCk: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B26B30>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')). Retrying (2/3)...
[youtube] T1cTLS9GiCk: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B25030>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')). Retrying (3/3)...
[youtube] T1cTLS9GiCk: Downloading ios player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B24550>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')) (caused by ProxyError("('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B24550>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] T1cTLS9GiCk: Downloading iframe API JS
WARNING: [youtube] Unable to download webpage: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B25FC0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
[youtube] T1cTLS9GiCk: Downloading mweb player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B0ECB0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')). Retrying (1/3)...
[youtube] T1cTLS9GiCk: Downloading mweb player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B0F6D0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')). Retrying (2/3)...
[youtube] T1cTLS9GiCk: Downloading mweb player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B25EA0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')). Retrying (3/3)...
[youtube] T1cTLS9GiCk: Downloading mweb player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B269E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')) (caused by ProxyError("('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FF96B269E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [youtube] T1cTLS9GiCk: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\youtube.py", line 4427, in _real_extract
File "yt_dlp\extractor\youtube.py", line 4391, in _download_player_responses
File "yt_dlp\extractor\youtube.py", line 4048, in _extract_player_responses
```
| closed | 2024-12-16T02:18:35Z | 2025-02-10T18:24:06Z | https://github.com/yt-dlp/yt-dlp/issues/11831 | [
"question"
] | pmpnmf | 4 |
PaddlePaddle/models | computer-vision | 4,894 | Data format question when predicting with run_ernie.sh infer | After training the model, I ran `bash run_ernie.sh infer` to predict. Shouldn't the prediction input be unlabeled data? Why can prediction here only be run on labeled data?
```bash
function run_infer() {
echo "infering"
python run_ernie_sequence_labeling.py \
--mode infer \
--ernie_config_path "${ERNIE_PRETRAINED_MODEL_PATH}/ernie_config.json" \
--init_checkpoint "./ernie_models/step_620" \
--init_bound 0.1 \
--vocab_path "${ERNIE_PRETRAINED_MODEL_PATH}/vocab.txt" \
--batch_size 64 \
--random_seed 0 \
--num_labels 57 \
--max_seq_len 128 \
**--test_data "${DATA_PATH}/test.tsv" \**
--label_map_config "./conf/label_map.json" \
--do_lower_case true \
--use_cuda false
}
```
It errors out when run on an unlabeled corpus. | open | 2020-09-30T03:30:08Z | 2020-12-16T06:34:02Z | https://github.com/PaddlePaddle/models/issues/4894 | [
"user"
] | Adrian-Yan16 | 4 |
howie6879/owllook | asyncio | 100 | Error when crawling book titles from Qidian | Hi, when I run the Qidian spider I get the following error:
AttributeError: 'Response' object has no attribute 'html'
What could be causing this? | closed | 2021-01-17T04:18:21Z | 2021-01-17T05:12:21Z | https://github.com/howie6879/owllook/issues/100 | [] | HuanYuanHe | 3 |
PaddlePaddle/PaddleHub | nlp | 2,337 | Using ernie_vilg with AK and SK provided, still cannot connect | Welcome to report PaddleHub usage issues; thank you very much for contributing to PaddleHub!
When posting your issue, please also provide the following information:
- Version and environment information
1) PaddleHub and PaddlePaddle versions: please provide your PaddleHub and PaddlePaddle version numbers, e.g. PaddleHub 1.4.1, PaddlePaddle 1.6.2
2) System environment: please describe the system type, e.g. Linux/Windows/MacOS, and the Python version
- Reproduction information: if reporting an error, please give the environment and steps to reproduce
Running in AI Studio:
# os.environ["WENXIN_AK"] = "" # 替换为你的 API Key
# os.environ["WENXIN_SK"] = "" # 替换为你的 Secret Key
import paddlehub as hub
module = hub.Module(name="ernie_vilg")
results = module.generate_image(text_prompts=["波涛汹涌的大海"])
----
/.paddlehub/modules/ernie_vilg/module.py in _apply_token(self, ak, sk)
51 if res['code'] != 0:
52 print('Request access token error.')
---> 53 raise RuntimeError("Request access token error.")
54 else:
55 print('Request access token error.')
RuntimeError: Request access token error.
| open | 2024-11-27T08:43:33Z | 2024-11-27T08:43:37Z | https://github.com/PaddlePaddle/PaddleHub/issues/2337 | [] | BarryYin | 0 |
holoviz/colorcet | plotly | 69 | Some categorical colormaps are given as list of numerical RGB instead of list of hex strings | colorcet 2.0.6
The colorcet user guide specifically mentions that it provides 'Bokeh-style' palettes as lists of hex strings, which is handy when working with Bokeh.
However, I realised this was not the case for some of the categorical palettes, including `cc.glasbey_bw` and `cc.glasbey_hv`. These return lists of RGB triplets which don't work with Bokeh.
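In the meantime, a possible workaround is converting the numeric palettes to Bokeh-style hex strings manually (a minimal sketch, assuming the numeric palettes contain RGB floats in [0, 1]):

```python
def to_hex_palette(palette):
    """Convert a list of (r, g, b) floats in [0, 1] to '#rrggbb' strings."""
    out = []
    for item in palette:
        if isinstance(item, str):          # already a hex string, keep as-is
            out.append(item)
        else:
            r, g, b = (round(255 * c) for c in item)
            out.append(f"#{r:02x}{g:02x}{b:02x}")
    return out

print(to_hex_palette([(1.0, 0.0, 0.5), "#aabbcc"]))  # ['#ff0080', '#aabbcc']
```

Passing e.g. `to_hex_palette(cc.glasbey_bw)` to Bokeh would then work regardless of which representation colorcet returns.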
Accessing these palettes by string name (_e.g._ `cc.palette['glasbey_hv']`) does yield a list of hex strings... so this is only an issue with regard to consistency. | closed | 2021-09-08T14:01:54Z | 2021-11-27T02:29:42Z | https://github.com/holoviz/colorcet/issues/69 | [] | TheoMathurin | 2 |
tensorly/tensorly | numpy | 163 | An equation doesn't hold. | #### Describe the bug
The highlighted equation doesn't hold for the tucker tensor. For more information, please refer to page 475 of Tensor Decompositions and Applications (2009).

#### Steps or Code to Reproduce
```python
from tensorly.random import random_tucker
import tensorly as tl
from tensorly.tenalg import kronecker
[core, factors] = random_tucker((4, 4, 4), [2, 3, 2])
tensor = tl.tucker_to_tensor([core, factors])
mode = 1 # The selected mode.
tensor_unfold = tl.unfold(tensor, mode=mode)
core_unfold = tl.unfold(core, mode=mode)
factors_copy = factors.copy()
del factors_copy[mode]
kron = kronecker(factors_copy[::-1])
print(tensor_unfold - factors[mode].dot(core_unfold).dot(kron.T)) # All elements should be zero.
```
#### Expected behavior
The elements in the printing matrix should be zero.
#### Actual result
The elements are significantly not zero.
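For reference, a pure-NumPy sketch (no tensorly required) suggests the identity does hold for this row-major unfolding convention when the Kronecker factors are taken in forward order rather than reversed; the `unfold` below mirrors what I understand tensorly's `tl.unfold` to do:

```python
import numpy as np

rng = np.random.default_rng(0)
I, R = (4, 4, 4), (2, 3, 2)                       # tensor and core shapes
G = rng.standard_normal(R)                        # core tensor
U = [rng.standard_normal((I[k], R[k])) for k in range(3)]

# Full Tucker tensor: T[i,j,k] = sum_{a,b,c} G[a,b,c] U0[i,a] U1[j,b] U2[k,c]
T = np.einsum("abc,ia,jb,kc->ijk", G, U[0], U[1], U[2])

def unfold(X, mode):
    """C-order (row-major) mode-n unfolding."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

mode = 1
others = [U[k] for k in range(3) if k != mode]
kron_fwd = np.kron(others[0], others[1])          # forward order, NOT reversed
residual = unfold(T, mode) - U[mode] @ unfold(G, mode) @ kron_fwd.T
print(np.abs(residual).max())                     # ~0, up to floating point
```

So the discrepancy may come from the unfolding definition differing from the one in Kolda and Bader (2009), which uses the reversed Kronecker ordering.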
#### Versions
The backend is NumPy.
| closed | 2020-03-19T09:21:01Z | 2020-03-20T00:29:44Z | https://github.com/tensorly/tensorly/issues/163 | [] | Haobo1108 | 2 |
neuml/txtai | nlp | 87 | Add Embeddings Cluster component | Add Embeddings Cluster component
- This component will manage a number of embeddings shards
- The shards will be API references
- The cluster component will mirror embeddings operations
- The API will seamlessly determine if an Embeddings or Embeddings Cluster should be created based on settings | closed | 2021-05-11T20:20:44Z | 2021-05-19T00:14:06Z | https://github.com/neuml/txtai/issues/87 | [] | davidmezzetti | 0 |
jina-ai/clip-as-service | pytorch | 298 | Cannot start Bert service. | Hi, sorry for reopening this issue but:
I used pip install -U bert-serving-server bert-serving-client
, I ran it again to be sure .
(Bert) C:\Users\fad>pip install -U bert-serving-server bert-serving-client
Requirement already up-to-date: bert-serving-server in c:\programdata\anaconda3\envs\bert\lib\site-packages (1.8.7)
Requirement already up-to-date: bert-serving-client in c:\programdata\anaconda3\envs\bert\lib\site-packages (1.8.7)
But I am still getting the TypeError when I try to start the BERT server:
usage: C:\ProgramData\Anaconda3\envs\Bert\Scripts\bert-serving-start -model_dir /tmp/english_L-12_H-768_A-12/ -num_worker=4
ARG VALUE
__________________________________________________
ckpt_name = bert_model.ckpt
config_name = bert_config.json
cors = *
cpu = False
device_map = []
fixed_embed_length = False
fp16 = False
gpu_memory_fraction = 0.5
graph_tmp_dir = None
http_max_connect = 10
http_port = None
mask_cls_sep = False
max_batch_size = 256
max_seq_len = 25
model_dir = /tmp/english_L-12_H-768_A-12/
num_worker = 4
pooling_layer = [-2]
pooling_strategy = REDUCE_MEAN
port = 5555
port_out = 5556
prefetch_size = 10
priority_batch_size = 16
show_tokens_to_client = False
tuned_model_dir = None
verbose = False
xla = False
I:VENTILATOR:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:model config: /tmp/english_L-12_H-768_A-12/bert_config.json
I:GRAPHOPT:checkpoint: /tmp/english_L-12_H-768_A-12/bert_model.ckpt
E:GRAPHOPT:fail to optimize the graph!
Traceback (most recent call last):
File "c:\programdata\anaconda3\envs\bert\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\programdata\anaconda3\envs\bert\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\envs\Bert\Scripts\bert-serving-start.exe\__main__.py", line 9, in <module>
File "c:\programdata\anaconda3\envs\bert\lib\site-packages\bert_serving\server\cli\__init__.py", line 4, in main
with BertServer(get_run_args()) as server:
File "c:\programdata\anaconda3\envs\bert\lib\site-packages\bert_serving\server\__init__.py", line 70, in __init__
self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: 'NoneType' object is not iterable | closed | 2019-03-29T03:15:20Z | 2019-08-07T16:57:04Z | https://github.com/jina-ai/clip-as-service/issues/298 | [] | faddyai | 1 |
OFA-Sys/Chinese-CLIP | nlp | 100 | About dataset construction | Hello! I have a question: can I create some image samples myself, e.g. in .png format,
convert them directly into the dataset format you require, and retrain the model with them as a training set? | closed | 2023-05-08T09:28:56Z | 2023-05-15T02:50:37Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/100 | [] | Sally6651 | 5 |
schemathesis/schemathesis | pytest | 1,809 | Support Python 3.12 | Blocked on `aiohttp` [update](https://github.com/aio-libs/aiohttp/issues/7675)
Other todos:
- [ ] Update builds to include Python 3.12
- [ ] Update classifiers & docs | closed | 2023-10-09T08:13:27Z | 2023-11-03T15:33:16Z | https://github.com/schemathesis/schemathesis/issues/1809 | [
"Priority: Medium",
"Status: Blocked",
"Type: Compatibility"
] | Stranger6667 | 0 |
cvat-ai/cvat | computer-vision | 9,225 | Return to the first page after visiting invalid page | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
When visiting a non-existent page for some resource in the UI, you're shown an error notification

It would be nice to redirect the user automatically to the first page or to provide a button to do so. There are other buttons on the page (Tasks, Jobs etc.), but if a filter was enabled, visiting these pages will clear it.
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | open | 2025-03-18T17:13:28Z | 2025-03-18T17:13:45Z | https://github.com/cvat-ai/cvat/issues/9225 | [
"enhancement",
"ui/ux",
"good first issue"
] | zhiltsov-max | 0 |
huggingface/datasets | pytorch | 6,457 | `TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth' | ### Describe the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Steps to reproduce the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Expected behavior
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Environment info
Please see https://github.com/huggingface/huggingface_hub/issues/1872 | closed | 2023-11-29T01:57:36Z | 2023-11-29T15:39:03Z | https://github.com/huggingface/datasets/issues/6457 | [] | wasertech | 5 |
jupyter/nbgrader | jupyter | 1,041 | Use Nbgrader without a DB in simplest way possible |
I am trying to use nbgrader for a course I am supervising. However, due to logistics of the course I only need a very small subset of the available features.
Specifically, I just want to generate an assignment version of my notebooks, without any added features. Meaning, I just want to strip the solution blocks and potentially hide specific test cells.
What would be the easiest way to do this? The `assign` command has a flag to not store any information in the database, but even with this flag nbgrader assumes a certain directory structure and a config file.
Is it possible to just convert one notebook with a single command and no further setup?
Thanks in advance!
| closed | 2018-11-01T15:19:17Z | 2019-01-13T19:01:06Z | https://github.com/jupyter/nbgrader/issues/1041 | [
"question"
] | AKuederle | 3 |
jonaswinkler/paperless-ng | django | 446 | Consuming stops, when Mail-Task starts | Still migrate lots of Documents via Consumer to paperless-ng (Docker, Version 1.0.0, using inotify for consumer). When i put Documents (10-50) into the consumer-folder the consumer starts consuming. But when the task paperless_mail.tasks.process_mail_accounts starts while consuming, consuming stops and all waiting documents in the consumer-folder will not be consumed. I have to restart paperless to start the consuming process again or copy the doc to another folder and back to the consuming folder. | closed | 2021-01-26T09:46:57Z | 2021-01-26T21:11:01Z | https://github.com/jonaswinkler/paperless-ng/issues/446 | [
"bug"
] | andbez | 4 |
alirezamika/autoscraper | web-scraping | 42 | HTML Parameter | I read a previous post that mentioned capability for the HTML parameter, in which I could render a JS application using another tool (BS or Selenium) and pass in the HTML data for AutoScraper to parse. Does anyone have steps or documentation on how to use this parameter? | closed | 2020-12-08T00:28:31Z | 2020-12-15T13:43:39Z | https://github.com/alirezamika/autoscraper/issues/42 | [] | j3vr0n | 2 |
custom-components/pyscript | jupyter | 337 | Cannot delete persistent variables | I tried to set the variable to None still the entity doesn't get removed. I want to remove this variable from the entities list as I don't want the token to display. I had no idea that it converts persistent variables into entities. I think it was not mentioned in the reference documents.

Can you please help me remove it?
| closed | 2022-04-03T11:20:22Z | 2022-04-03T11:38:02Z | https://github.com/custom-components/pyscript/issues/337 | [] | dibyadip-das | 1 |
LAION-AI/Open-Assistant | python | 3,206 | No module named 'datasets' | New here....
I encountered "No module named 'datasets'" when running:
python trainer_sft.py --configs defaults galactica-125m --cache_dir $DATA_PATH --output_dir $MODEL_PATH/sft_model
Can anyone help? | closed | 2023-05-22T11:05:48Z | 2023-05-22T14:26:35Z | https://github.com/LAION-AI/Open-Assistant/issues/3206 | [] | jacobatxa | 1 |
eamigo86/graphene-django-extras | graphql | 25 | Is there a way to use DjangoListObjectType with related fields? | Using DjangoListObjectType to make a filterable list doesn't make any related field available on the graphql endpoint.
I have a related field called created_by (ForeignKey to User model) but I can't query it on DjangoListObjectField, only on DjangoObjectField. Is there any way I can do that?
| closed | 2018-02-21T09:41:52Z | 2018-03-26T15:47:59Z | https://github.com/eamigo86/graphene-django-extras/issues/25 | [] | giovannicimolin | 11 |
gradio-app/gradio | python | 10,085 | gradio-client: websockets is out of date | ### Describe the bug
The latest websockets is 14.1, but gradio-client requires websockets >=10.0,<13.0:
https://github.com/gradio-app/gradio/blob/main/client/python/requirements.txt#L6
I can work around it by creating a fork.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
requirements.txt
```requirements
gradio-client==1.5.0
websockets==14.1
```
```sh
pip install -r requirements.txt
```
### Screenshot

### Logs
```shell
INFO: pip is looking at multiple versions of gradio-client to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install gradio and websockets==14.1 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested websockets==14.1
gradio-client 1.5.0 depends on websockets<13.0 and >=10.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
```
```
### System Info
```shell
### System Info
➜ ✗ poetry run gradio environment
Launching in *reload mode* on: http://127.0.0.1:7860 (Press CTRL+C to quit)
Watching: '/home/brian/work/Presence-AI/presence/livekit/agent/.venv/lib/python3.12/site-packages/gradio', '/home/brian/work/Presence-AI/presence/livekit/agent'
ERROR: Error loading ASGI app. Could not import module "environment".
```
### Severity
I can work around it | closed | 2024-11-30T00:10:26Z | 2024-12-02T20:02:53Z | https://github.com/gradio-app/gradio/issues/10085 | [
"bug"
] | btakita | 0 |
koaning/scikit-lego | scikit-learn | 534 | [BUG] Future stability of meta-modelling | Scikit-lego is a great asset and the future of this library is important. The meta-modelling option employs the `from sklearn.utils.metaestimators import if_delegate_has_method` method. However, this method is depreciated in the most recent scikit-learn release and will be removed in version 1.3, which will be an issue. The new call method is `available_if`. Hopefully the integration of this new method will be simple and changes can be made. Thanks for all the amazing work so far.
| closed | 2022-09-18T17:44:05Z | 2023-06-10T18:24:11Z | https://github.com/koaning/scikit-lego/issues/534 | [
"bug"
] | KulikDM | 4 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 400 | BedRock Malformed input request: #/texts/0: expected maxLength: 2048, actual: 19882, please reformat your input and try agai | **Describe the bug**
I followed the example of bedrock https://github.com/VinciGit00/Scrapegraph-ai/blob/main/examples/bedrock/smart_scraper_bedrock.py
It was working at first. Then, after I replaced the url from source="https://perinim.github.io/projects/" to source="https://www.seek.com.au/jobs?page=1&sortmode=ListedDate", I got the following errors:
Traceback (most recent call last):
File "c:\GitHub\job-scraper-poc\Test_Code\ai-scraper_bedrock_example.py", line 46, in <module>
result = smart_scraper_graph.run()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\scrapegraphai\graphs\smart_scraper_graph.py", line 120, in run
self.final_state, self.execution_info = self.graph.execute(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\scrapegraphai\graphs\base_graph.py", line 224, in execute
return self._execute_standard(initial_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\scrapegraphai\graphs\base_graph.py", line 153, in _execute_standard
raise e
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\scrapegraphai\graphs\base_graph.py", line 140, in _execute_standard
result = current_node.execute(state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\scrapegraphai\nodes\rag_node.py", line 118, in execute
index = FAISS.from_documents(chunked_docs, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\langchain_core\vectorstores.py", line 550, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\langchain_community\vectorstores\faiss.py", line 930, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\langchain_aws\embeddings\bedrock.py", line 169, in embed_documents
response = self._embedding_func(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\job-scraper-poc\.venv\Lib\site-packages\langchain_aws\embeddings\bedrock.py", line 150, in _embedding_func
raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #/texts/0: expected maxLength: 2048, actual: 19882, please reformat your input and try again.

It looks like the current Bedrock integration can't handle a website with a long HTML context?
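Judging from the validation error, the Bedrock embedding endpoint in use caps each input text at 2048 characters, so long pages would need to be split before embedding. A hedged pure-Python sketch of the kind of chunking required (parameter values are illustrative):

```python
def chunk_text(text, max_len=2048, overlap=100):
    """Split text into pieces of at most max_len chars, with a small overlap."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_len, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks

pieces = chunk_text("x" * 19882)    # the offending size from the error message
print(max(len(p) for p in pieces))  # 2048, within the reported limit
```

Something like this (or a smaller chunk size in the graph config) would keep each embedding request under the limit.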
| closed | 2024-06-20T11:56:31Z | 2024-12-05T15:39:01Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/400 | [] | TomZhaoJobadder | 9 |
InstaPy/InstaPy | automation | 6,059 | How to add an LOOP/While statement to Instapy quickstart | Hi,
I'd like to know how I can add a loop statement.
My quickstart file seems to work only when the hourly limit, and the 'amount' to follow = 10
So as soon as it reaches 10, it's jumping to unfollow people, but i'd like it to follow 7500, 10 people per hour, and 200 people a day max. But still to follow until it reaches to full amount 7500 BEFORE moving onto the unfollowing.
It works fine but this bug of not being able to choose the limits is the only thing stopping me.
I have changed the numbers in different places to no success.
I think a loop statement will fix this if I can loop the follow feature until 'Following' = over 7500
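As a pure-Python sketch of that outer loop (the `follow_batch` stub stands in for whatever InstaPy follow call is used; the numbers and the commented-out sleeps are illustrative):

```python
TARGET, PER_RUN, DAILY_CAP = 7500, 10, 200   # total wanted, per hourly batch, per day

def follow_batch(n):
    """Stub standing in for the real InstaPy follow call; returns follows done."""
    return n

followed = day = 0
while followed < TARGET:
    day += 1
    today = 0
    while today < DAILY_CAP and followed < TARGET:
        got = follow_batch(min(PER_RUN, TARGET - followed, DAILY_CAP - today))
        followed += got
        today += got
        # in real use: time.sleep(3600) here to space batches an hour apart
    # in real use: sleep until the next day before continuing

print(day, followed)  # 38 7500
```

Only once this loop finishes would the script move on to the unfollow steps.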
Or even to remove the unfollow all features and leave only the follow 10 and set task scheduler to run it over and over. But this doesn't seem very practical at all.
I'm very new to programming, python let alone instapy lol so I'm not sure how to write this to make it work and have been having a hard time with this. I got the course for christmas to put this into context smh.
Any suggestions or scripts which can do this already?
| open | 2021-01-29T00:56:35Z | 2021-07-21T02:19:03Z | https://github.com/InstaPy/InstaPy/issues/6059 | [
"wontfix"
] | molasunfish | 1 |
vastsa/FileCodeBox | fastapi | 96 | Uploading a file returns a 500 error | Hi developer,
After configuring the upload file size limit, uploading a file fails with a 500 error. The error message is below:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 289, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 91, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 146, in simple_response
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 273, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
return await dependant.call(**values)
File "/app/apps/base/views.py", line 41, in share_file
if file.size > settings.uploadSize:
TypeError: '>' not supported between instances of 'int' and 'str'
```
Inspecting the `config/update` request, I found the value sent back to the API is of type `string` rather than `number`; after changing the type and calling the API manually, there was no problem.
It looks like `element-ui` is returning the wrong type:
https://github.com/vastsa/FileCodeBox/blob/67f09f9b817ac57d6f9f56c8f694354f94be7872/fcb-fronted/src/views/Admin/SettingView.vue#L19C23-L19C29
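A standalone snippet reproducing the failure mode, plus the server-side coercion that would avoid it regardless of what the frontend sends (illustrative only, not FileCodeBox's actual code):

```python
# The view effectively compares file.size (an int) against a setting that
# arrived from the frontend as a str.
file_size = 19882
upload_size = "2048"              # value as sent by the admin UI (string)

err = None
try:
    file_size > upload_size       # what `file.size > settings.uploadSize` does
except TypeError as exc:
    err = exc
print(err)                        # '>' not supported between instances of 'int' and 'str'

# Coercing on the server side sidesteps the bad type from any frontend:
assert file_size > int(upload_size)
```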
I'm using the beta version of the Docker image. | closed | 2023-09-23T06:55:19Z | 2024-06-17T09:52:08Z | https://github.com/vastsa/FileCodeBox/issues/96 | [] | zhchy1996 | 3 |
aminalaee/sqladmin | asyncio | 701 | Add file downloading | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I want to have the possibility to download files from the admin panel
### Describe the solution you would like.
If string is filepath than you can download it

### Additional context
I already have solutions to resolve my problem. And I made a pull request for it. Please check and give me feedback.
https://github.com/aminalaee/sqladmin/pull/702 | open | 2024-01-23T15:48:54Z | 2024-01-23T17:13:09Z | https://github.com/aminalaee/sqladmin/issues/701 | [] | EnotShow | 0 |
miguelgrinberg/microblog | flask | 95 | Pages requiring login: redirection fails after login | First and most important, thanks buddy! Your "mega project with flask" is amazing; I have learned a lot from this article. I am new to Python and Flask and you really helped me take the first step.
However, excellent as it is, one small problem remains. In chapter 4, "User Login", authentication is required before visiting certain pages. After we redirect to the login page, we pass next={page_path} along with url_for("login") so that we remember which page to redirect to after a successful login. My question is: since the GET login request and the POST login request are two different requests, won't the POST handler fail to fetch next_page with request.args.get("next"), causing the redirect to fail? | closed | 2018-03-30T04:01:14Z | 2018-04-03T04:22:33Z | https://github.com/miguelgrinberg/microblog/issues/95 | [] | rhodesiaxlo | 2 |
vitalik/django-ninja | rest-api | 898 | Expose OpenAPI `path_prefix` in NinjaAPI contstructor | **Is your feature request related to a problem? Please describe.**
`get_openapi_schema` takes the `path_prefix` parameter, but the only way I can see to change it is to subclass `NinjaAPI`. Our use case is that we want to set the prefix to an empty string, and customize the path in the docs with the `servers` parameter.
**Describe the solution you'd like**
Something like this would be nice:
```python
api = NinjaAPI(
servers=[
{"url": "/api", "description": "Test"},
],
openapi_prefix="",
)
```
```python
def get_openapi_schema(
self,
*,
path_prefix: Optional[str] = None,
path_params: Optional[DictStrAny] = None,
) -> OpenAPISchema:
if path_prefix is not None:
prefix = path_prefix
elif self.openapi_prefix is not None:
prefix = self.openapi_prefix
else:
prefix = self.get_root_path(path_params or {})
return get_schema(api=self, path_prefix=prefix)
```
| closed | 2023-11-01T18:38:14Z | 2023-11-02T16:43:12Z | https://github.com/vitalik/django-ninja/issues/898 | [] | scott-8 | 2 |
encode/databases | sqlalchemy | 178 | Implement return rows affected by operation if lastrowid is not set for asyncpg and adopt | It's the continuation of the PR https://github.com/encode/databases/pull/150 (as this PR only implements for mysql and sqlite) for the ticket https://github.com/encode/databases/issues/61 to cover other back-ends. | open | 2020-03-16T15:29:52Z | 2020-09-28T00:02:25Z | https://github.com/encode/databases/issues/178 | [] | gvbgduh | 2 |
Gerapy/Gerapy | django | 43 | Could an option to pass arguments be added when scheduling spiders? | Normally, when scheduling a spider, I pass arguments with curl ..... -a arg1=value1 -a arg2=value2 -a arg3=value3. When using Gerapy to schedule a spider, clicking Run just submits and runs it directly.
I can work around this by editing the arguments directly in the spider file, but having to modify the code always feels a bit cumbersome.
It would be great to be able to add arguments directly when scheduling. | open | 2018-02-23T10:17:49Z | 2022-08-30T06:14:30Z | https://github.com/Gerapy/Gerapy/issues/43 | [] | lvsoso | 3 |
mithi/hexapod-robot-simulator | plotly | 117 | pip install problem (on windows) | requirements.txt requires markupsafe 1.1.1, but werkzeug 2.2.3 requires MarkupSafe 2.1.1 or above | open | 2023-08-01T14:23:21Z | 2023-12-04T11:06:51Z | https://github.com/mithi/hexapod-robot-simulator/issues/117 | [] | bestbinaryboi | 2 |
huggingface/transformers | pytorch | 36,210 | Token healing throws error with "Qwen/Qwen2.5-Coder-7B-Instruct" | ### System Info
OS type: Sequoia 15.2 Apple M2 Pro.
I tried to reproduce using 🤗 [Inference Endpoints](https://endpoints.huggingface.co/AI-MO/endpoints/dedicated) when deploying https://huggingface.co/desaxce/Qwen2.5-Coder-7B-Instruct. It's a fork of `Qwen/Qwen2.5-Coder-7B-Instruct` with `token_healing=True` and a `handler.py` to deploy on 🤗 Inference Endpoints (use Default container, not TGI).
python 3.12.8
transformers 4.48.2
torch 2.6.0
Generate text using [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) while setting `token_healing` to `True`:
```
from transformers import AutoTokenizer, Qwen2ForCausalLM, Qwen2Tokenizer
pipe = Qwen2ForCausalLM.from_pretrained("./")
tokenizer = Qwen2Tokenizer.from_pretrained("./")
prompt = f'Complete the following Lean 4 code:\n\n```lean4\nimport '
inputs = tokenizer(prompt, return_tensors="pt")
# Here we activate token healing, which triggers error.
generate_ids = pipe.generate(inputs.input_ids, tokenizer=tokenizer, max_new_tokens=1, token_healing=True)
tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0]
```
The error:
```
{'error': "where() received an invalid combination of arguments - got (bool, int, Tensor), but expected one of:
* (Tensor condition)
* (Tensor condition, Tensor input, Tensor other, *, Tensor out)
* (Tensor condition, Number self, Tensor other)
didn't match because some of the arguments have invalid types: (!bool!, !int!, Tensor)
* (Tensor condition, Tensor input, Number other)
didn't match because some of the arguments have invalid types: (!bool!, !int!, !Tensor!)
* (Tensor condition, Number self, Number other)
didn't match because some of the arguments have invalid types: (!bool!, !int!, !Tensor!)
"}
```
I traced it to https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2436.
Because `tokenizer.bos_token_id` is `None`, the `torch.where()` call fails.
I commented this line and a subsequent error popped up a few lines below on https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2447. The error:
```
TypeError Traceback (most recent call last)
Cell In[1], line 9
6 prompt = f'Complete the following Lean 4 code:\n\n```lean4\nimport '
7 inputs = tokenizer(prompt, return_tensors="pt")
----> 9 generate_ids = pipe.generate(inputs.input_ids, tokenizer=tokenizer, max_new_tokens=1, token_healing=True)
10 tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0]
File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/torch/utils/_contextlib.py:116](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/torch/utils/_contextlib.py#line=115), in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py:2084](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py#line=2083), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
2081 input_ids = inputs_tensor if model_input_name == "input_ids" else model_kwargs.pop("input_ids")
2083 if generation_config.token_healing:
-> 2084 input_ids = self.heal_tokens(input_ids, tokenizer)
2086 if streamer is not None:
2087 streamer.put(input_ids.cpu())
File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py:2499](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py#line=2498), in GenerationMixin.heal_tokens(self, input_ids, tokenizer)
2495 return input_ids
2497 tail_ids = input_ids[:, -1].tolist()
-> 2499 space_tok = tokenizer.convert_ids_to_tokens(tokenizer.convert_tokens_to_ids(" "))[0]
2500 # tail tokens are used for a prefix search, thus, whitespaces are replaced with
2501 # their tokenization (e.g. 'Ġ') to enable search for tokens prefixed with a whitespace
2502 tail_toks = (tokenizer.decode(t).replace(" ", space_tok) for t in tail_ids)
File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/tokenization_utils.py:1065](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/tokenization_utils.py#line=1064), in PreTrainedTokenizer.convert_ids_to_tokens(self, ids, skip_special_tokens)
1063 return self._convert_id_to_token(ids)
1064 tokens = []
-> 1065 for index in ids:
1066 index = int(index)
1067 if skip_special_tokens and index in self.all_special_ids:
TypeError: 'NoneType' object is not iterable
```
This time, it's due to `tokenizer.convert_tokens_to_ids(" ")` returning `None` because the space character is not a token (the tokenizer already uses `Ġ` to represent space characters).
### Who can help?
@ArthurZucker @itazap I suspect an issue in `heal_tokens` for tokenizers which:
- have `tokenizer.bos_token_id` equal to `None`
- do not have space character as a token, i.e. `tokenizer.convert_tokens_to_ids(" ")` is `None`
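A minimal sketch of the guard I would expect there (a hypothetical helper, not the actual `transformers` patch — the fallback token `Ġ` comes from the observation above):

```python
def space_token_char(convert_tokens_to_ids, convert_ids_to_tokens, fallback="Ġ"):
    """Resolve the tokenizer's one-character stand-in for a space.

    Mirrors the lookup in `heal_tokens`, but tolerates vocabularies where
    " " is not a standalone token and convert_tokens_to_ids(" ") is None.
    """
    space_id = convert_tokens_to_ids(" ")
    if space_id is None:
        return fallback
    return convert_ids_to_tokens(space_id)[0]

# Vocabulary without a bare-space token (the failing case in this issue):
assert space_token_char(lambda t: None, lambda i: i) == "Ġ"
# Vocabulary that does map " " to an id:
assert space_token_char(lambda t: 7, lambda i: "▁rest") == "▁"
```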
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
To reproduce on 🤗 Inference Endpoints, deploy https://huggingface.co/desaxce/Qwen2.5-Coder-7B-Instruct on a "Default" container. I forked this repository from https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct to reproduce the issue: I added the `token_healing: true` parameter in `generation_config.json` and a `handler.py` to be able to deploy on 🤗 Inference Endpoints.
It's important to select "Default" container to reproduce - with TGI I didn't have any error (but I didn't check that token healing was indeed being used). In all cases, the error can be reproduced locally ⬇
To reproduce locally, clone https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct and run this snippet which generates using token healing:
```python
from transformers import AutoTokenizer, Qwen2ForCausalLM, Qwen2Tokenizer
pipe = Qwen2ForCausalLM.from_pretrained("./")
tokenizer = Qwen2Tokenizer.from_pretrained("./")
prompt = f'Complete the following Lean 4 code:\n\n```lean4\nimport '
inputs = tokenizer(prompt, return_tensors="pt")
# Here we activate token healing, which triggers error.
generate_ids = pipe.generate(inputs.input_ids, tokenizer=tokenizer, max_new_tokens=1, token_healing=True)
tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0]
```
### Expected behavior
I expect completion to take place with tokens healed:
 | open | 2025-02-15T09:01:46Z | 2025-03-18T08:04:00Z | https://github.com/huggingface/transformers/issues/36210 | [
"bug"
] | desaxce | 1 |
Urinx/WeixinBot | api | 61 | After sending a file, why does synccheck keep returning window.synccheck={retcode:"0",selector:"2"}? | After uploading a file via https://wx.qq.com/cgi-bin/mmwebwx-bin/webwxsendappmsg?fun=async&f=json and https://wx.qq.com/cgi-bin/mmwebwx-bin/webwxsendappmsg?fun=async&f=json, https://webpush.wx.qq.com/cgi-bin/mmwebwx-bin/synccheck keeps returning window.synccheck={retcode:"0",selector:"2"}, but https://wx.qq.com/cgi-bin/mmwebwx-bin/webwxsync returns no messages. What should I do?
| open | 2016-06-12T09:25:19Z | 2016-06-12T12:11:46Z | https://github.com/Urinx/WeixinBot/issues/61 | [] | dereky7 | 1 |
OpenInterpreter/open-interpreter | python | 669 | MacOS & conda env did not run interpreter. | ### Describe the bug
I tried the following environment:
- Mac OS v14.0 Sonoma
- conda environment : python=3.11 (created by 'conda create python=3.11')
```bash
$ pip install open-interpreter
```
completed successfully.
When I ran the `interpreter` command in the terminal, the following error occurred.
```bash
...
File "/Users/xxx/miniconda3/lib/python3.9/site-packages/ooba/llm.py", line 195
match incoming_data['event']:
^
SyntaxError: invalid syntax
```
How do I fix this?
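For context (my reading of the traceback, not an official answer): the failing line is a `match` statement, which requires Python ≥ 3.10, and the traceback path (`.../lib/python3.9/site-packages/...`) shows the `interpreter` entry point resolving to a Python 3.9 install rather than the py311 conda env. A minimal sketch of the version constraint:

```python
import sys

def supports_match_statement(version_info=sys.version_info):
    """Structural pattern matching (`match`/`case`) was added in 3.10."""
    return tuple(version_info[:2]) >= (3, 10)

assert supports_match_statement((3, 11, 5))      # the conda env's Python
assert not supports_match_statement((3, 9, 6))   # the Python actually running
```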
### Reproduce
1. conda create python=3.11 -n py311
2. conda activate py311
3. pip install open-interpreter
### Expected behavior
Starting interpreter
### Screenshots
_No response_
### Open Interpreter version
0.1.10
### Python version
3.11.5
### Operating System name and version
Mac OS v14.0 Sonoma
### Additional context
_No response_ | closed | 2023-10-21T04:36:34Z | 2023-10-27T13:49:03Z | https://github.com/OpenInterpreter/open-interpreter/issues/669 | [
"Bug"
] | yoyoyo-yo | 1 |
kymatio/kymatio | numpy | 1,062 | Sample to use Scattering 2D with ImageDataGenerator.flowfromdirectory | trainDataGen = ImageDataGenerator(rescale = 1.0 / 255)
testDataGen = ImageDataGenerator(rescale = 1.0 / 255)
trainGenerator = trainDataGen.flow_from_directory(os.path.join(dspth,'SplitDataset', 'Train'),
target_size = (32, 32),
batch_size = 32,
color_mode = 'grayscale',
classes = [str(Class) for Class in [15,27]],
class_mode = 'categorical')
validationGenerator = testDataGen.flow_from_directory(os.path.join(dspth,'SplitDataset', 'Validation'),
target_size = (32, 32),
batch_size = 32,
color_mode = 'grayscale',
classes = [str(Class) for Class in [15,27]],
class_mode = 'categorical')
model.fit(trainGenerator, epochs = 40, validation_data =validationGenerator, callbacks = callbacks)
# Scattering2D fits at which stage | open | 2025-02-20T07:27:22Z | 2025-02-20T07:27:22Z | https://github.com/kymatio/kymatio/issues/1062 | [] | malathip72 | 0 |
inducer/pudb | pytest | 544 | isinstance(torch.tensor(0), Sized) is True, but len() on it throws an error | **Describe the bug**
`isinstance(torch.tensor(0), Sized)` is `True`, but calling `len()` on it throws an error:
```
[08/23/2022 05:51:06 PM] ERROR stringifier failed var_view.py:607
╭────────────────────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────────────────────╮
│ /home/wzy/.local/lib/python3.10/site-packages/pudb/var_view.py:601 in walk_value │
│ │
│ 598 │ │ iinfo = self.frame_var_info.get_inspect_info(id_path, read_only=True) │
│ 599 │ │ │
│ 600 │ │ try: │
│ ❱ 601 │ │ │ displayed_value = get_stringifier(iinfo)(value) │
│ 602 │ │ except Exception: │
│ 603 │ │ │ # Unfortunately, anything can happen when calling str() or │
│ 604 │ │ │ # repr() on a random object. │
│ │
│ /home/wzy/.local/lib/python3.10/site-packages/pudb/var_view.py:432 in default_stringifier │
│ │
│ 429 │ │ │ return str(result) │
│ 430 │ │
│ 431 │ elif isinstance(value, Sized): │
│ ❱ 432 │ │ return f"{type(value).__name__} ({len(value)})" │
│ 433 │ │
│ 434 │ return str(type(value).__name__) │
│ 435 │
│ │
│ /home/wzy/.local/lib/python3.10/site-packages/torch/_tensor.py:705 in __len__ │
│ │
│ 702 │ │ if has_torch_function_unary(self): │
│ 703 │ │ │ return handle_torch_function(Tensor.__len__, (self,), self) │
│ 704 │ │ if self.dim() == 0: │
│ ❱ 705 │ │ │ raise TypeError("len() of a 0-d tensor") │
│ 706 │ │ if torch._C._get_tracing_state(): │
│ 707 │ │ │ warnings.warn('Using len to get tensor shape might cause the trace to be │
│ incorrect. ' │
│ 708 │ │ │ │ │ │ 'Recommended usage would be tensor.shape[0]. ' │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: len() of a 0-d tensor
```
**To Reproduce**
test.py
```python
import torch
a = torch.tensor(0)
```
1. `pudb3 test.py`
2. run
**Expected behavior**
No error.
**Additional context**
I advise adding a try/except to avoid this error.
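A minimal sketch of such a guard (hypothetical — it mirrors the `Sized` branch of `default_stringifier` shown in the traceback, not pudb's actual fix):

```python
from collections.abc import Sized

def stringify(value):
    """Render "TypeName (len)", but tolerate objects that claim to be
    Sized yet raise on len(), e.g. 0-d torch tensors."""
    if isinstance(value, Sized):
        try:
            return f"{type(value).__name__} ({len(value)})"
        except TypeError:
            pass
    return type(value).__name__

class ZeroDimTensor:  # stand-in for torch.tensor(0)
    def __len__(self):
        raise TypeError("len() of a 0-d tensor")

assert stringify([1, 2, 3]) == "list (3)"
assert stringify(ZeroDimTensor()) == "ZeroDimTensor"
```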
**Versions**
What version of pudb? What version of Python?
pudb: 2022.1.1
python: 3.10 | closed | 2022-08-23T10:10:49Z | 2022-08-24T02:31:35Z | https://github.com/inducer/pudb/issues/544 | [
"Bug"
] | Freed-Wu | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,842 | Detected on Twitch | Nodriver and undetected driver are detected on twitch.tv | closed | 2024-04-24T12:28:36Z | 2024-04-24T15:16:14Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1842 | [] | fontrendererobj | 0 |
plotly/dash-cytoscape | plotly | 143 | [BUG] expected an incorrect layout name ("close-bilkent") in 0.3.0 | #### Description
I set the layout to "cose-bilkent", but the component complains and expects an incorrect layout name ("close-bilkent").
#### Steps/Code to Reproduce
Example:
```python
layout= "cose-bilkent"
```
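Spelling the mismatch out as a quick check against the error message below (the list is copied verbatim from the warning):

```python
# The failing prop, spelled out: layout["name"] is what the 0.3.0
# propTypes check validates against.
layout = {"name": "cose-bilkent"}

expected_in_0_3_0 = ["random", "preset", "circle", "concentric", "grid",
                     "breadthfirst", "cose", "close-bilkent", "cola",
                     "euler", "spread", "dagre", "klay"]

# The documented name is absent from the buggy list, hence the warning:
assert layout["name"] not in expected_in_0_3_0
assert "close-bilkent" in expected_in_0_3_0
```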
#### Expected Results
According to the documentation, 'cose-bilkent' is the correct name, not 'close-bilkent'.
#### Actual Results
```python
Invalid argument `layout.name` passed into Cytoscape with ID "cytoscape_graph".
Expected one of ["random","preset","circle","concentric","grid","breadthfirst","cose","close-bilkent","cola","euler","spread","dagre","klay"].
```
#### Versions
0.3.0 | closed | 2021-06-11T13:59:23Z | 2021-09-06T19:28:06Z | https://github.com/plotly/dash-cytoscape/issues/143 | [] | manmustbecool | 0 |
ray-project/ray | tensorflow | 51,351 | Release test aggregate_groups.few_groups (sort_shuffle_pull_based) failed | Release test **aggregate_groups.few_groups (sort_shuffle_pull_based)** failed. See https://buildkite.com/ray-project/release/builds/35758#0195916e-bf24-40ef-8732-80656689b01b for more details.
Managed by OSS Test Policy | closed | 2025-03-13T22:07:28Z | 2025-03-17T17:37:07Z | https://github.com/ray-project/ray/issues/51351 | [
"bug",
"P0",
"triage",
"data",
"release-test",
"jailed-test",
"ray-test-bot",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
ultralytics/yolov5 | machine-learning | 12,643 | Training on non-square images | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have images of size 1280x720. I want to use these non-square images for training. My command to train is:
`python3 segment/train.py --img-size 1280,720 --rect --epochs 20 --data custom_dataset.yaml --weights yolov5s-seg.pt --cfg models/segment/yolov5s-seg.yaml --batch 16 --workers 16 --name yolov5_instance_segmentation_run1`
However, I get the error as
`train.py: error: argument --imgsz/--img/--img-size: invalid int value: '1280,720'`
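For reference (my reading of the error, assuming the flag is declared with argparse's `type=int` as a single value — the `invalid int value` wording is argparse's):

```python
import argparse

# A flag declared roughly like this accepts one integer (the longer side),
# which is why "1280,720" fails to parse.
parser = argparse.ArgumentParser()
parser.add_argument("--imgsz", "--img", "--img-size", type=int, default=640)

assert parser.parse_args(["--img-size", "1280"]).imgsz == 1280

failed = False
try:
    parser.parse_args(["--img-size", "1280,720"])
except SystemExit:
    failed = True  # argparse rejects it: invalid int value: '1280,720'
assert failed
```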
### Additional
_No response_ | closed | 2024-01-17T15:30:10Z | 2024-10-20T19:37:39Z | https://github.com/ultralytics/yolov5/issues/12643 | [
"question",
"Stale"
] | bharathsanthanam94 | 9 |
Esri/arcgis-python-api | jupyter | 1,846 | 'User' object has no attribute 'availableCredits' | **Describe the bug**
The User object is not providing "AvailableCredits" or "AssignedCredits" for **some** users in AGOL, though the user clearly has credits assigned and available to them in AGOL.
**To Reproduce**
Steps to reproduce the behavior:
```python
from arcgis.gis import GIS
gis = GIS("home")
userid = 'xxx@mt.gov_montana'
user = gis.users.get(userid)
print(user.availableCredits)
```
error from 2.3.0:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
In [27]:
Line 7: print(user.availableCredits)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\arcgis\gis\__init__.py, in __getattr__:
Line 10980: raise AttributeError(
AttributeError: 'User' object has no attribute 'availableCredits'
---------------------------------------------------------------------------
```
**Expected behavior**
I've successfully obtained the available credits and assigned credits attributes from the user class numerous times. This error is happening now from AGOL Notebooks (ArcGIS Notebook Python 3 Standard - 9.0) and from a local notebook using arcgis API 2.2.01
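As a stopgap on the caller's side, `getattr` with a default avoids the hard failure (a sketch — the stand-in class only mimics the `AttributeError` raised at `gis/__init__.py:10980`):

```python
class UserStandIn:
    """Mimics arcgis.gis.User for users whose REST response lacks the field."""
    def __getattr__(self, name):
        raise AttributeError(f"'User' object has no attribute '{name}'")

user = UserStandIn()
credits = getattr(user, "availableCredits", None)
assert credits is None  # absent for some users instead of crashing the script
```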
**Platform (please complete the following information):**
- OS:Windows 11
- Browser: Edge
- Python API Version 2.2.03 and online w/ 2.3.0 (errors in both)
**Additional context**
This might be an AGOL issue... but, thought I'd bring it up here.
| closed | 2024-06-17T15:55:12Z | 2024-06-18T06:31:48Z | https://github.com/Esri/arcgis-python-api/issues/1846 | [
"cannot reproduce"
] | MSLTrebesch | 2 |
nvbn/thefuck | python | 1,227 | Python exception when executing rule git_hook_bypass: NoneType is not iterable | The output of `thefuck --version` :
The Fuck 3.31 using Python 3.9.6 and ZSH 5.8
Your system (Debian 7, ArchLinux, Windows, etc.):
MacOS Big Sur 11.5.1 (20G80)
How to reproduce the bug:
```
❯ git push
To ssh://prefix.url.com:1234/REPO/repo.git
! [rejected] AB-123-task -> AB-123-task (non-fast-forward)
error: failed to push some refs to 'ssh://prefix.url.com:1234/REPO/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
❯ fuck
[WARN] Rule git_hook_bypass:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/thefuck/types.py", line 181, in is_match
if self.match(command):
File "/usr/local/lib/python3.9/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/usr/local/lib/python3.9/site-packages/thefuck/specific/git.py", line 17, in git_support
if 'trace: alias expansion:' in command.output:
TypeError: argument of type 'NoneType' is not iterable
----------------------------
No fucks given
```
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
```
❯ git push
To ssh://prefix.url.com:1234/REPO/repo.git
! [rejected] AB-123-task -> AB-123-task (non-fast-forward)
error: failed to push some refs to 'ssh://prefix.url.com:1234/REPO/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
❯ fuck
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'excluded_search_path_prefixes': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'num_close_matches': 3,
'priority': {'git_hook_bypass': 1001},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/Users/bbukaty/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Execution timed out!
DEBUG: Call: git push; with env: {redacted}; is slow: False took: 0:00:03.040546
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000584
DEBUG: Importing rule: ag_literal; took: 0:00:00.000537
DEBUG: Importing rule: apt_get; took: 0:00:00.001473
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000353
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000839
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000367
DEBUG: Importing rule: apt_upgrade; took: 0:00:00.001274
DEBUG: Importing rule: aws_cli; took: 0:00:00.000422
DEBUG: Importing rule: az_cli; took: 0:00:00.000383
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000850
DEBUG: Importing rule: brew_install; took: 0:00:00.000304
DEBUG: Importing rule: brew_link; took: 0:00:00.000350
DEBUG: Importing rule: brew_reinstall; took: 0:00:00.000704
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000377
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000310
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000353
DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000282
DEBUG: Importing rule: cargo; took: 0:00:00.000268
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000588
DEBUG: Importing rule: cat_dir; took: 0:00:00.000516
DEBUG: Importing rule: cd_correction; took: 0:00:00.001532
DEBUG: Importing rule: cd_cs; took: 0:00:00.000280
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000355
DEBUG: Importing rule: cd_parent; took: 0:00:00.000276
DEBUG: Importing rule: chmod_x; took: 0:00:00.000277
DEBUG: Importing rule: choco_install; took: 0:00:00.000538
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000338
DEBUG: Importing rule: conda_mistype; took: 0:00:00.000333
DEBUG: Importing rule: cp_create_destination; took: 0:00:00.000369
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000375
DEBUG: Importing rule: cpp11; took: 0:00:00.000338
DEBUG: Importing rule: dirty_untar; took: 0:00:00.001920
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.003105
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000336
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000263
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.000883
DEBUG: Importing rule: docker_image_being_used_by_container; took: 0:00:00.000342
DEBUG: Importing rule: docker_login; took: 0:00:00.000328
DEBUG: Importing rule: docker_not_command; took: 0:00:00.000496
DEBUG: Importing rule: dry; took: 0:00:00.000264
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000403
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000324
DEBUG: Importing rule: fix_file; took: 0:00:00.002060
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000445
DEBUG: Importing rule: git_add; took: 0:00:00.000714
DEBUG: Importing rule: git_add_force; took: 0:00:00.000309
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000313
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000306
DEBUG: Importing rule: git_branch_delete_checked_out; took: 0:00:00.000313
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000323
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000484
DEBUG: Importing rule: git_checkout; took: 0:00:00.000455
DEBUG: Importing rule: git_clone_git_clone; took: 0:00:00.000300
DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000346
DEBUG: Importing rule: git_commit_reset; took: 0:00:00.000296
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000306
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000299
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000366
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000305
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000310
DEBUG: Importing rule: git_hook_bypass; took: 0:00:00.000301
DEBUG: Importing rule: git_lfs_mistype; took: 0:00:00.000298
DEBUG: Importing rule: git_merge; took: 0:00:00.000370
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000302
DEBUG: Importing rule: git_not_command; took: 0:00:00.000319
DEBUG: Importing rule: git_pull; took: 0:00:00.000293
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000296
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000293
DEBUG: Importing rule: git_push; took: 0:00:00.000361
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000308
DEBUG: Importing rule: git_push_force; took: 0:00:00.000322
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000319
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000511
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000353
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000318
DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000315
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000288
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000311
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000320
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000297
DEBUG: Importing rule: git_stash; took: 0:00:00.000307
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000304
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000522
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000435
DEBUG: Importing rule: go_run; took: 0:00:00.000305
DEBUG: Importing rule: go_unknown_command; took: 0:00:00.000464
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.000525
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000402
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000344
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000313
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000466
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000333
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000366
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000347
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000329
DEBUG: Importing rule: history; took: 0:00:00.000272
DEBUG: Importing rule: hostscli; took: 0:00:00.000338
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000317
DEBUG: Importing rule: java; took: 0:00:00.000301
DEBUG: Importing rule: javac; took: 0:00:00.000283
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000316
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000292
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000270
DEBUG: Importing rule: long_form_help; took: 0:00:00.000236
DEBUG: Importing rule: ls_all; took: 0:00:00.000356
DEBUG: Importing rule: ls_lah; took: 0:00:00.000288
DEBUG: Importing rule: man; took: 0:00:00.000283
DEBUG: Importing rule: man_no_space; took: 0:00:00.000232
DEBUG: Importing rule: mercurial; took: 0:00:00.000291
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000338
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000290
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000292
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000284
DEBUG: Importing rule: nixos_cmd_not_found; took: 0:00:00.000728
DEBUG: Importing rule: no_command; took: 0:00:00.000467
DEBUG: Importing rule: no_such_file; took: 0:00:00.000372
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.000632
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000298
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000348
DEBUG: Importing rule: omnienv_no_such_command; took: 0:00:00.000534
DEBUG: Importing rule: open; took: 0:00:00.000352
DEBUG: Importing rule: pacman; took: 0:00:00.000675
DEBUG: Importing rule: pacman_invalid_option; took: 0:00:00.000346
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000341
DEBUG: Importing rule: path_from_history; took: 0:00:00.000281
DEBUG: Importing rule: php_s; took: 0:00:00.000284
DEBUG: Importing rule: pip_install; took: 0:00:00.000297
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000287
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000335
DEBUG: Importing rule: prove_recursively; took: 0:00:00.000287
DEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.000324
DEBUG: Importing rule: python_command; took: 0:00:00.000269
DEBUG: Importing rule: python_execute; took: 0:00:00.000272
DEBUG: Importing rule: python_module_error; took: 0:00:00.000231
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000246
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000304
DEBUG: Importing rule: remove_shell_prompt_literal; took: 0:00:00.000221
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000220
DEBUG: Importing rule: rm_dir; took: 0:00:00.000317
DEBUG: Importing rule: rm_root; took: 0:00:00.000271
DEBUG: Importing rule: scm_correction; took: 0:00:00.000279
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000279
DEBUG: Importing rule: sl_ls; took: 0:00:00.000223
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000284
DEBUG: Importing rule: sudo; took: 0:00:00.000225
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000275
DEBUG: Importing rule: switch_lang; took: 0:00:00.000299
DEBUG: Importing rule: systemctl; took: 0:00:00.000322
DEBUG: Importing rule: terraform_init; took: 0:00:00.000298
DEBUG: Importing rule: test.py; took: 0:00:00.000226
DEBUG: Importing rule: tmux; took: 0:00:00.000276
DEBUG: Importing rule: touch; took: 0:00:00.000281
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000282
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000280
DEBUG: Importing rule: unknown_command; took: 0:00:00.000228
DEBUG: Importing rule: unsudo; took: 0:00:00.000238
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000280
DEBUG: Importing rule: whois; took: 0:00:00.000467
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000304
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000278
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000495
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000370
DEBUG: Importing rule: yarn_help; took: 0:00:00.000293
DEBUG: Importing rule: yum_invalid_operation; took: 0:00:00.000663
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000090
DEBUG: Trying rule: git_hook_bypass; took: 0:00:00.000029
[WARN] Rule git_hook_bypass:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/thefuck/types.py", line 181, in is_match
if self.match(command):
File "/usr/local/lib/python3.9/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/usr/local/lib/python3.9/site-packages/thefuck/specific/git.py", line 17, in git_support
if 'trace: alias expansion:' in command.output:
TypeError: argument of type 'NoneType' is not iterable
----------------------------
No fucks given
DEBUG: Total took: 0:00:03.145064
```
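The traceback above boils down to a membership test on a `None` output; a defensive sketch of that check (hypothetical, not the actual thefuck patch):

```python
def looks_like_alias_expansion(output):
    """Mirror the failing line in thefuck/specific/git.py, but tolerate
    commands whose output could not be captured (output is None)."""
    return output is not None and 'trace: alias expansion:' in output

assert not looks_like_alias_expansion(None)          # the crashing case
assert looks_like_alias_expansion("trace: alias expansion: g => git\n")
assert not looks_like_alias_expansion("everything up-to-date")
```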
| closed | 2021-08-10T21:07:52Z | 2021-08-17T13:41:54Z | https://github.com/nvbn/thefuck/issues/1227 | [] | bbukaty | 3 |
onnx/onnx | tensorflow | 6,017 | Optimizing Node Ordering in ONNX Graphs: Ensuring Correct Sequence for Model Generation | I am working on generating an ONNX converter for some framework, and previously, I was iterating through the nodes of the ONNX model to generate a feed-forward network corresponding to each node of the ONNX model. However, after analyzing several ONNX models and the order of their nodes, I realized that I can't simply iterate in the order they were loaded because their order was pretty random.
So, I think I have to pick one node and to get the subsequent node in the model, I have to check which node is taking the output of the previous node. By doing this, I could get the next node in the model.
I think that to get the first node of the model, I have to check which node has an input with no initializer backing it, and that node will be the first one.
I am using this approach to generate a model from an ONNX model. Is the way I am following correct, or if there is any other optimized way, then please do tell me.
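The output-to-input chaining described above amounts to a topological sort. A minimal sketch without the `onnx` package (nodes as `(name, inputs, outputs)` triples; graph inputs plus initializer names are what is "available" up front):

```python
from collections import deque

def topo_order(nodes, available_names):
    """Kahn-style ordering: a node is emitted once all its inputs exist."""
    available = set(available_names)
    pending = deque(nodes)
    order = []
    while pending:
        progressed = False
        for _ in range(len(pending)):
            name, ins, outs = pending.popleft()
            if all(i in available for i in ins):
                order.append(name)
                available.update(outs)
                progressed = True
            else:
                pending.append((name, ins, outs))
        if not progressed:
            raise ValueError("cycle or dangling input in graph")
    return order

nodes = [("relu", ["c_out"], ["r_out"]),           # listed out of order
         ("conv", ["x", "w"], ["c_out"])]
assert topo_order(nodes, ["x", "w"]) == ["conv", "relu"]
```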
| open | 2024-03-12T08:35:51Z | 2024-03-12T16:00:30Z | https://github.com/onnx/onnx/issues/6017 | [
"question"
] | kumar-utkarsh0317 | 1 |
huggingface/datasets | nlp | 6,451 | Unable to read "marsyas/gtzan" data | Hi, this is my code and the error:
```
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
Jupyter Notebook 6.5.4
Windows 10
I'm able to download and work with other datasets, but not this one. For example, both these below work fine:
```python
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True)
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Thanks for your help
https://huggingface.co/datasets/marsyas/gtzan/tree/main | closed | 2023-11-25T15:13:17Z | 2023-12-01T12:53:46Z | https://github.com/huggingface/datasets/issues/6451 | [] | gerald-wrona | 3 |
huggingface/datasets | deep-learning | 6,484 | [Feature Request] Dataset versioning | **Is your feature request related to a problem? Please describe.**
I am working on a project where I would like to test different preprocessing methods for my ML data, so I would like to work a lot with revisions and compare them. Currently, I was not able to make this work with the revision keyword: it was not re-downloading the data but reading cached data instead, until I put `download_mode="force_redownload"`, even though the revision was different.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this:
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version and does not load in a different revision and all future map, filter, .. operations are done on this dataset and not loaded from cache produced from a different revision.
- if I rerun the script, the caching should be smart enough at every step not to reuse a mapping operation from a different revision.
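A sketch of the kind of revision-aware cache key this would need (hypothetical; the `datasets` library's fingerprinting works differently internally):

```python
import hashlib
import json

def cache_fingerprint(dataset_name, revision, op_name, op_params):
    """Fold the revision into the cache key so a map() result computed on
    v1.0.1 can never be served for v1.0.2."""
    payload = json.dumps(
        {"dataset": dataset_name, "revision": revision,
         "op": op_name, "params": op_params},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

a = cache_fingerprint("kenfus/xy", "v1.0.1", "prepare_dataset", {"num_proc": 8})
b = cache_fingerprint("kenfus/xy", "v1.0.2", "prepare_dataset", {"num_proc": 8})
assert a != b                 # different revisions never collide
assert a == cache_fingerprint("kenfus/xy", "v1.0.1", "prepare_dataset", {"num_proc": 8})
```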
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```python
import os
from datasets import load_dataset, load_from_disk

## CREATE PATHS
prepared_dataset_path = os.path.join(
DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
print("Loading prepared dataset from disk...")
dataset_prepared = load_from_disk(prepared_dataset_path)
else:
print("Loading dataset from HuggingFace Datasets...")
dataset = load_dataset(
PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
)
print("Preparing dataset...")
dataset_prepared = dataset.map(
prepare_dataset,
remove_columns=["audio", "transcription"],
num_proc=os.cpu_count(),
load_from_cache_file=False,
)
dataset_prepared.save_to_disk(prepared_dataset_path)
del dataset
if CHECK_DATASET:
## CHECK DATASET
dataset_prepared = dataset_prepared.map(
check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
)
dataset_filtered = dataset_prepared.filter(
lambda example: not example["incorrect_dimension"],
load_from_cache_file=False,
)
for example in dataset_prepared.filter(
lambda example: example["incorrect_dimension"], load_from_cache_file=False
):
print(example["path"])
print(
f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
)
print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
| open | 2023-12-08T16:01:35Z | 2023-12-11T19:13:46Z | https://github.com/huggingface/datasets/issues/6484 | [] | kenfus | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 1,000 | What parameter to choose for importing a model from vocal remover by tsurumeso? | I trained a model with the beta version 6 (vocal remover by tsurumeso). I imported it into UVR 5 but the parameter doesn't fit anymore.
It used to be:
1band_sr44100_hl_1024
Which parameter is it now? | open | 2023-12-01T14:53:19Z | 2023-12-01T14:53:19Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1000 | [] | KatieBelli | 0 |
microsoft/MMdnn | tensorflow | 671 | UnboundLocalError: local variable 'x' referenced before assignment | Platform (like ubuntu 16.04/win10): Ubuntu 18.04
Python version: 3.6.7
Source framework with version (like Tensorflow 1.4.1 with GPU): Tensorflow 1.13.1
Destination framework with version (like CNTK 2.3 with GPU):
Pre-trained model path (webpath or webdisk path):
Running scripts:
```
WARNING:tensorflow:From /home/wangyuejiao/.local/lib/python3.6/site-packages/tensorflow/python/tools/strip_unused_lib.py:86: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
Traceback (most recent call last):
  File "/media/wangyuejiao/DATA/MMdnn-master/mmdnn/conversion/_script/convert.py", line 123, in <module>
    _main()
  File "/media/wangyuejiao/DATA/MMdnn-master/mmdnn/conversion/_script/convert.py", line 102, in _main
    ret = convertToIR._convert(ir_args)
  File "/media/wangyuejiao/DATA/MMdnn-master/mmdnn/conversion/_script/convertToIR.py", line 59, in _convert
    parser = TensorflowParser2(args.weights, inputshape, args.inNodeName, args.dstNodeName)
  File "/media/wangyuejiao/DATA/MMdnn-master/mmdnn/conversion/tensorflow/tensorflow_frozenparser.py", line 157, in __init__
    input_map[in_nodes[i] + ':0'] = x
UnboundLocalError: local variable 'x' referenced before assignment
```
| open | 2019-06-05T06:17:08Z | 2019-06-25T06:22:48Z | https://github.com/microsoft/MMdnn/issues/671 | [] | moonjiaoer | 15
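For readers hitting the same crash: the pattern behind this error is sketched below. This is a minimal illustration, not the actual MMdnn parser code (the function name `build_input_map` and the dtype strings are made up). In `tensorflow_frozenparser.py`, `x` is only assigned inside branches that recognize the input node's dtype, so an unhandled dtype leaves `x` unbound when line 157 runs.

```python
def build_input_map(in_nodes, dtypes):
    """Minimal sketch of the parser loop that raises UnboundLocalError."""
    input_map = {}
    for i, name in enumerate(in_nodes):
        # `x` is assigned only in branches that recognize the dtype.
        if dtypes[i] == "float":
            x = 0.0
        elif dtypes[i] == "int":
            x = 0
        # For an unhandled dtype, `x` was never bound in this function,
        # so this line raises UnboundLocalError, matching the traceback above.
        input_map[name + ":0"] = x
    return input_map
```

A fix on the library side would be to handle (or explicitly reject) the unmatched dtype branch before `x` is used.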
robotframework/robotframework | automation | 4,827 | What about adding support of !import to yaml variables | By default, YAML does not have an include statement, and RF's YamlImporter class loader does not support one either; however, include support can easily be added:
https://gist.github.com/joshbode/569627ced3076931b02f | closed | 2023-07-19T12:19:52Z | 2023-07-23T21:34:04Z | https://github.com/robotframework/robotframework/issues/4827 | [] | sebastianciupinski | 1 |
lexiforest/curl_cffi | web-scraping | 444 | Observing BoringSSL Exception intermittently for specific user-agents | **The question**
_BoringSSL Exception:_
Observing a BoringSSL exception intermittently when using user-agents such as `Mozilla/5.0 (iPhone; CPU iPhone OS 13_4_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 [FBAN/FBIOS;FBDV/iPhone12,3;FBMD/iPhone;FBSN/iOS;FBSV/13.4.1;FBSS/3;FBID/phone;FBLC/hu_HU;FBOP/5;FBCR/]` together with custom `ja3` fingerprints (randomizing the TLS fingerprint via the `ja3` parameter of `requests.get()`).
```
ERROR in curl_cffi_script: Failed to perform, curl: (35) BoringSSL SSL_connect: SSL_ERROR_ZERO_RETURN in connection to www.example.com:443 . See https://curl.se/libcurl/c/libcurl-errors.html first for more details.
ERROR in curl_cffi_script: File "/Users/Documents/scripts/curl_cffi_script.py", line 258, in crawl_url
response = await s.get(
File ""/Users/python_virtual_env/python3/lib/python3.9/site-packages/curl_cffi/requests/session.py", line 1354, in request
raise error(str(e), e.code, rsp) from e
```
`ja3` base string used: `771,4865-4866-4867-49195-49199-49196-49200-52393-52392-49171-49172-156-157-47-53,51-45-65281-65037-11-10-5-27-35-43-13-17513-18-23-16-0,29-23-24,0`
A randomized `ja3` string might look like: `771,4865-4866-4867-52393-156-49171-49160-49172-49170-157-52392-49162-49196-47-49200,0-5-10-11-13-16-18-21-23-27-43-45-51-65281,29-23-24-25,0`
What I am doing is taking a base `ja3n` string and randomizing it: shuffling the order of the cipher suites and then removing some of them from the string.
I can't understand why roughly 7-8 out of every 10 requests hit this exception. It is intermittent rather than affecting all requests. Any valid ja3 fingerprint can be passed as the parameter, right?
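For concreteness, the randomization described above amounts to something like the helper below (a rough sketch with a made-up function name; curl_cffi itself only consumes the resulting string via the `ja3` parameter):

```python
import random


def randomize_ja3(base_ja3, drop_fraction=0.2):
    """Shuffle the cipher-suite field of a ja3 string and drop a few suites.

    ja3 layout: tls_version,ciphers,extensions,curves,point_formats
    """
    version, ciphers, extensions, curves, fmts = base_ja3.split(",")
    suites = ciphers.split("-")
    random.shuffle(suites)
    # Keep at least one suite after dropping `drop_fraction` of them.
    keep = max(1, int(len(suites) * (1 - drop_fraction)))
    return ",".join([version, "-".join(suites[:keep]), extensions, curves, fmts])
```

The resulting string is then passed as `ja3=` to `requests.get()` / `Session.get()`.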
**Versions**
If it's related to a specific environment, paste your env info here.
- OS: [Ubuntu, Mac OS]
- curl_cffi version [0.8.0b1, 0.7.3]
| closed | 2024-11-26T23:11:00Z | 2024-12-03T09:17:36Z | https://github.com/lexiforest/curl_cffi/issues/444 | [
"question"
] | charliedelta02 | 1 |
deezer/spleeter | tensorflow | 21 | Example script doesn't seem to be working. | I'm just following the instructions in the README. I was able to run the scripts successfully and get two output files, vocals.wav and accompaniment.wav, but both files sound exactly the same as the original; I hear no difference between the three.
This is exactly what I ran:
```
git clone https://github.com/Deezer/spleeter
conda env create -f spleeter/conda/spleeter-cpu.yaml
conda activate spleeter-cpu
spleeter separate -i spleeter/audio_example.mp3 -p spleeter:2stems -o output
```
On the last line I get a bunch of TensorFlow warnings.
```
INFO:tensorflow:Using config: {'_model_dir': 'pretrained_models/2stems', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.7
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x13a157eb8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Could not find trained model in model_dir: pretrained_models/2stems, running initialization to predict.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /anaconda3/envs/spleeter-cpu/lib/python3.7/site-packages/spleeter/model/functions/unet.py:29: The name tf.keras.initializers.he_uniform is deprecated. Please use tf.compat.v1.keras.initializers.he_uniform instead.
INFO:tensorflow:Apply unet for vocals_spectrogram
INFO:tensorflow:Apply unet for accompaniment_spectrogram
INFO:tensorflow:Done calling model_fn.
WARNING:tensorflow:From /anaconda3/envs/spleeter-cpu/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.string
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.string
INFO:tensorflow:Loading audio b'spleeter/audio_example.mp3' from 0.0 to 600.0
INFO:tensorflow:File output/audio_example/vocals.wav written
INFO:tensorflow:File output/audio_example/accompaniment.wav written
```
I'm running on a Mac with Conda.
| closed | 2019-11-04T16:59:59Z | 2019-11-05T12:41:07Z | https://github.com/deezer/spleeter/issues/21 | [
"bug",
"enhancement",
"MacOS",
"conda"
] | crobertsbmw | 3 |
cvat-ai/cvat | pytorch | 8,795 | cvat version 2.0 reported an error when installing nuctl autoannotation function | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
```
[root@bogon cvat-2.0.0]# nuctl deploy --project-name cvat --path serverless/pytorch/ultralytics/15Strip_surface_inspection/nuclio/ --volume `pwd`/serverless/common:/opt/nuclio/common --platform local
```
24.12.09 02:43:25.009 nuctl (I) Deploying function {"name": ""}
24.12.09 02:43:25.009 nuctl (I) Building {"versionInfo": "Label: 1.5.16, Git commit: ae43a6a560c2bec42d7ccfdf6e8e11a1e3cc3774, OS: linux, Arch: amd64, Go version: go1.14.3", "name": ""}
24.12.09 02:43:25.460 nuctl (I) Cleaning up before deployment {"functionName": "yolov5-strip_surface_inspection"}
24.12.09 02:43:25.522 nuctl (I) Staging files and preparing base images
24.12.09 02:43:25.523 nuctl (I) Building processor image {"imageName": "cvat/ultralytics-yolov5:latest"}
24.12.09 02:43:25.524 nuctl.platform.docker (I) Pulling image {"imageName": "quay.io/nuclio/handler-builder-python-onbuild:1.5.16-amd64"}
24.12.09 02:43:48.385 nuctl.platform.docker (W) Docker command outputted to stderr - this may result in errors {"workingDir": "/tmp/nuclio-build-100192796/staging", "cmd": "docker build --network host --force-rm -t nuclio-onbuild-ctb9vcolcaupumt0kik0 -f /tmp/nuclio-build-100192796/staging/Dockerfile.onbuild --build-arg NUCLIO_LABEL=1.5.16 --build-arg NUCLIO_ARCH=amd64 --build-arg NUCLIO_BUILD_LOCAL_HANDLER_DIR=handler .", "stderr": "#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from Dockerfile.onbuild\n#1 transferring dockerfile: 200B 0.0s done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for quay.io/nuclio/handler-builder-python-onbuild:1.5.16-amd64\n#2 DONE 0.0s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/1] FROM quay.io/nuclio/handler-builder-python-onbuild:1.5.16-amd64\n#4 CACHED\n\n#5 exporting to image\n#5 exporting layers done\n#5 writing image sha256:4c7ddc9d1b793fc2d42c4b5ed935ea2e86d909306f84e9efcaee2d23377b194d done\n#5 naming to docker.io/library/nuclio-onbuild-ctb9vcolcaupumt0kik0 done\n#5 DONE 0.0s\n"}
24.12.09 02:43:49.145 nuctl.platform.docker (I) Pulling image {"imageName": "quay.io/nuclio/uhttpc:0.0.1-amd64"}
24.12.09 02:44:12.139 nuctl.platform.docker (W) Docker command outputted to stderr - this may result in errors {"workingDir": "/tmp/nuclio-build-100192796/staging", "cmd": "docker build --network host --force-rm -t nuclio-onbuild-ctb9violcaupumt0kikg -f /tmp/nuclio-build-100192796/staging/Dockerfile.onbuild --build-arg NUCLIO_LABEL=1.5.16 --build-arg NUCLIO_ARCH=amd64 --build-arg NUCLIO_BUILD_LOCAL_HANDLER_DIR=handler .", "stderr": "#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from Dockerfile.onbuild\n#1 transferring dockerfile: 175B done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for quay.io/nuclio/uhttpc:0.0.1-amd64\n#2 DONE 0.0s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/1] FROM quay.io/nuclio/uhttpc:0.0.1-amd64\n#4 CACHED\n\n#5 exporting to image\n#5 exporting layers done\n#5 writing image sha256:6c437d7c55d8a360818c8150244ca2b60d974d8c6b2e15fe6ceb6dcc260b3914 done\n#5 naming to docker.io/library/nuclio-onbuild-ctb9violcaupumt0kikg done\n#5 DONE 0.0s\n"}
24.12.09 02:44:12.503 nuctl.platform (I) Building docker image {"image": "cvat/ultralytics-yolov5:latest"}
24.12.09 02:44:52.706 nuctl.platform.docker (W) Docker command outputted to stderr - this may result in errors {"workingDir": "/tmp/nuclio-build-100192796/staging", "cmd": "docker build --network host --force-rm -t cvat/ultralytics-yolov5:latest -f /tmp/nuclio-build-100192796/staging/Dockerfile.processor --build-arg NUCLIO_LABEL=1.5.16 --build-arg NUCLIO_ARCH=amd64 --build-arg NUCLIO_BUILD_LOCAL_HANDLER_DIR=handler .", "stderr": "#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from Dockerfile.processor\n#1 transferring dockerfile: 967B done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for docker.io/ultralytics/yolov5:latest-cpu\n#2 DONE 34.1s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/8] FROM docker.io/ultralytics/yolov5:latest-cpu@sha256:69a9ed0a0ebf3d8b7ccfaf9f40806ea8596e538201c15a26c129fdf8af592ea6\n#4 CACHED\n\n#5 [internal] load build context\n#5 transferring context: 5.20kB done\n#5 DONE 0.0s\n\n#6 [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n#6 0.189 \n#6 0.189 WARNING: apt does not have a stable CLI interface. 
Use with caution in scripts.\n#6 0.189 \n#6 2.829 Ign:1 http://security.ubuntu.com/ubuntu mantic-security InRelease\n#6 2.830 Ign:2 http://archive.ubuntu.com/ubuntu mantic InRelease\n#6 3.091 Ign:3 http://archive.ubuntu.com/ubuntu mantic-updates InRelease\n#6 3.091 Err:4 http://security.ubuntu.com/ubuntu mantic-security Release\n#6 3.091 404 Not Found [IP: 91.189.91.81 80]\n#6 3.347 Ign:5 http://archive.ubuntu.com/ubuntu mantic-backports InRelease\n#6 3.592 Err:6 http://archive.ubuntu.com/ubuntu mantic Release\n#6 3.592 404 Not Found [IP: 185.125.190.83 80]\n#6 3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n#6 3.839 404 Not Found [IP: 185.125.190.83 80]\n#6 4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n#6 4.085 404 Not Found [IP: 185.125.190.83 80]\n#6 4.100 Reading package lists...\n#6 5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n#6 ERROR: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n------\n > [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0:\n3.592 404 Not Found [IP: 185.125.190.83 80]\n3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n3.839 404 Not Found [IP: 185.125.190.83 80]\n4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n4.085 404 Not Found [IP: 185.125.190.83 80]\n\n5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic 
Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n------\nDockerfile.processor:18\n--------------------\n 16 | USER root\n 17 | \n 18 | >>> RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n 19 | \n 20 | WORKDIR /opt/nuclio\n--------------------\nERROR: failed to solve: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n"}
24.12.09 02:44:52.717 nuctl (W) Failed to create a function; setting the function status {"err": "Failed to build processor image", "errVerbose": "\nError - exit status 1\n /nuclio/pkg/cmdrunner/shellrunner.go:96\n\nCall stack:\nstdout:\n\nstderr:\n#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from Dockerfile.processor\n#1 transferring dockerfile: 967B done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for docker.io/ultralytics/yolov5:latest-cpu\n#2 DONE 34.1s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/8] FROM docker.io/ultralytics/yolov5:latest-cpu@sha256:69a9ed0a0ebf3d8b7ccfaf9f40806ea8596e538201c15a26c129fdf8af592ea6\n#4 CACHED\n\n#5 [internal] load build context\n#5 transferring context: 5.20kB done\n#5 DONE 0.0s\n\n#6 [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n#6 0.189 \n#6 0.189 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.\n#6 0.189 \n#6 2.829 Ign:1 http://security.ubuntu.com/ubuntu mantic-security InRelease\n#6 2.830 Ign:2 http://archive.ubuntu.com/ubuntu mantic InRelease\n#6 3.091 Ign:3 http://archive.ubuntu.com/ubuntu mantic-updates InRelease\n#6 3.091 Err:4 http://security.ubuntu.com/ubuntu mantic-security Release\n#6 3.091 404 Not Found [IP: 91.189.91.81 80]\n#6 3.347 Ign:5 http://archive.ubuntu.com/ubuntu mantic-backports InRelease\n#6 3.592 Err:6 http://archive.ubuntu.com/ubuntu mantic Release\n#6 3.592 404 Not Found [IP: 185.125.190.83 80]\n#6 3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n#6 3.839 404 Not Found [IP: 185.125.190.83 80]\n#6 4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n#6 4.085 404 Not Found [IP: 185.125.190.83 80]\n#6 4.100 Reading package lists...\n#6 5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n#6 5.419 E: The repository 
'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n#6 ERROR: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n------\n > [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0:\n3.592 404 Not Found [IP: 185.125.190.83 80]\n3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n3.839 404 Not Found [IP: 185.125.190.83 80]\n4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n4.085 404 Not Found [IP: 185.125.190.83 80]\n\n5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n------\nDockerfile.processor:18\n--------------------\n 16 | USER root\n 17 | \n 18 | >>> RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n 19 | \n 20 | WORKDIR /opt/nuclio\n--------------------\nERROR: failed to solve: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n\n /nuclio/pkg/cmdrunner/shellrunner.go:96\nFailed to build\n /nuclio/pkg/dockerclient/shell.go:118\nFailed to build docker image\n .../pkg/containerimagebuilderpusher/docker.go:53\nFailed to build processor image\n /nuclio/pkg/processor/build/builder.go:250\nFailed to build processor image", "errCauses": [{"error": "Failed to build docker 
image", "errorVerbose": "\nError - exit status 1\n /nuclio/pkg/cmdrunner/shellrunner.go:96\n\nCall stack:\nstdout:\n\nstderr:\n#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from Dockerfile.processor\n#1 transferring dockerfile: 967B done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for docker.io/ultralytics/yolov5:latest-cpu\n#2 DONE 34.1s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/8] FROM docker.io/ultralytics/yolov5:latest-cpu@sha256:69a9ed0a0ebf3d8b7ccfaf9f40806ea8596e538201c15a26c129fdf8af592ea6\n#4 CACHED\n\n#5 [internal] load build context\n#5 transferring context: 5.20kB done\n#5 DONE 0.0s\n\n#6 [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n#6 0.189 \n#6 0.189 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.\n#6 0.189 \n#6 2.829 Ign:1 http://security.ubuntu.com/ubuntu mantic-security InRelease\n#6 2.830 Ign:2 http://archive.ubuntu.com/ubuntu mantic InRelease\n#6 3.091 Ign:3 http://archive.ubuntu.com/ubuntu mantic-updates InRelease\n#6 3.091 Err:4 http://security.ubuntu.com/ubuntu mantic-security Release\n#6 3.091 404 Not Found [IP: 91.189.91.81 80]\n#6 3.347 Ign:5 http://archive.ubuntu.com/ubuntu mantic-backports InRelease\n#6 3.592 Err:6 http://archive.ubuntu.com/ubuntu mantic Release\n#6 3.592 404 Not Found [IP: 185.125.190.83 80]\n#6 3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n#6 3.839 404 Not Found [IP: 185.125.190.83 80]\n#6 4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n#6 4.085 404 Not Found [IP: 185.125.190.83 80]\n#6 4.100 Reading package lists...\n#6 5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu 
mantic-updates Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n#6 ERROR: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n------\n > [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0:\n3.592 404 Not Found [IP: 185.125.190.83 80]\n3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n3.839 404 Not Found [IP: 185.125.190.83 80]\n4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n4.085 404 Not Found [IP: 185.125.190.83 80]\n\n5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n------\nDockerfile.processor:18\n--------------------\n 16 | USER root\n 17 | \n 18 | >>> RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n 19 | \n 20 | WORKDIR /opt/nuclio\n--------------------\nERROR: failed to solve: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n\n /nuclio/pkg/cmdrunner/shellrunner.go:96\nFailed to build\n /nuclio/pkg/dockerclient/shell.go:118\nFailed to build docker image\n .../pkg/containerimagebuilderpusher/docker.go:53\nFailed to build docker image", "errorCauses": [{"error": "Failed to build", "errorVerbose": "\nError - exit status 1\n /nuclio/pkg/cmdrunner/shellrunner.go:96\n\nCall stack:\nstdout:\n\nstderr:\n#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from 
Dockerfile.processor\n#1 transferring dockerfile: 967B done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for docker.io/ultralytics/yolov5:latest-cpu\n#2 DONE 34.1s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/8] FROM docker.io/ultralytics/yolov5:latest-cpu@sha256:69a9ed0a0ebf3d8b7ccfaf9f40806ea8596e538201c15a26c129fdf8af592ea6\n#4 CACHED\n\n#5 [internal] load build context\n#5 transferring context: 5.20kB done\n#5 DONE 0.0s\n\n#6 [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n#6 0.189 \n#6 0.189 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.\n#6 0.189 \n#6 2.829 Ign:1 http://security.ubuntu.com/ubuntu mantic-security InRelease\n#6 2.830 Ign:2 http://archive.ubuntu.com/ubuntu mantic InRelease\n#6 3.091 Ign:3 http://archive.ubuntu.com/ubuntu mantic-updates InRelease\n#6 3.091 Err:4 http://security.ubuntu.com/ubuntu mantic-security Release\n#6 3.091 404 Not Found [IP: 91.189.91.81 80]\n#6 3.347 Ign:5 http://archive.ubuntu.com/ubuntu mantic-backports InRelease\n#6 3.592 Err:6 http://archive.ubuntu.com/ubuntu mantic Release\n#6 3.592 404 Not Found [IP: 185.125.190.83 80]\n#6 3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n#6 3.839 404 Not Found [IP: 185.125.190.83 80]\n#6 4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n#6 4.085 404 Not Found [IP: 185.125.190.83 80]\n#6 4.100 Reading package lists...\n#6 5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n#6 ERROR: process \"/bin/sh -c apt update && apt install 
--no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n------\n > [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0:\n3.592 404 Not Found [IP: 185.125.190.83 80]\n3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n3.839 404 Not Found [IP: 185.125.190.83 80]\n4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n4.085 404 Not Found [IP: 185.125.190.83 80]\n\n5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n------\nDockerfile.processor:18\n--------------------\n 16 | USER root\n 17 | \n 18 | >>> RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n 19 | \n 20 | WORKDIR /opt/nuclio\n--------------------\nERROR: failed to solve: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n\n /nuclio/pkg/cmdrunner/shellrunner.go:96\nFailed to build\n /nuclio/pkg/dockerclient/shell.go:118\nFailed to build", "errorCauses": [{"error": "stdout:\n\nstderr:\n#0 building with \"default\" instance using docker driver\n\n#1 [internal] load build definition from Dockerfile.processor\n#1 transferring dockerfile: 967B done\n#1 DONE 0.0s\n\n#2 [internal] load metadata for docker.io/ultralytics/yolov5:latest-cpu\n#2 DONE 34.1s\n\n#3 [internal] load .dockerignore\n#3 transferring context: 2B done\n#3 DONE 0.0s\n\n#4 [1/8] FROM docker.io/ultralytics/yolov5:latest-cpu@sha256:69a9ed0a0ebf3d8b7ccfaf9f40806ea8596e538201c15a26c129fdf8af592ea6\n#4 CACHED\n\n#5 [internal] load build context\n#5 transferring context: 
5.20kB done\n#5 DONE 0.0s\n\n#6 [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0\n#6 0.189 \n#6 0.189 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.\n#6 0.189 \n#6 2.829 Ign:1 http://security.ubuntu.com/ubuntu mantic-security InRelease\n#6 2.830 Ign:2 http://archive.ubuntu.com/ubuntu mantic InRelease\n#6 3.091 Ign:3 http://archive.ubuntu.com/ubuntu mantic-updates InRelease\n#6 3.091 Err:4 http://security.ubuntu.com/ubuntu mantic-security Release\n#6 3.091 404 Not Found [IP: 91.189.91.81 80]\n#6 3.347 Ign:5 http://archive.ubuntu.com/ubuntu mantic-backports InRelease\n#6 3.592 Err:6 http://archive.ubuntu.com/ubuntu mantic Release\n#6 3.592 404 Not Found [IP: 185.125.190.83 80]\n#6 3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n#6 3.839 404 Not Found [IP: 185.125.190.83 80]\n#6 4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n#6 4.085 404 Not Found [IP: 185.125.190.83 80]\n#6 4.100 Reading package lists...\n#6 5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.\n#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.\n#6 ERROR: process \"/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0\" did not complete successfully: exit code: 100\n------\n > [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0:\n3.592 404 Not Found [IP: 185.125.190.83 80]\n3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release\n3.839 404 Not Found [IP: 185.125.190.83 80]\n4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release\n4.085 404 Not Found [IP: 185.125.190.83 80]\n\n5.419 
E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.", "errorCauses": [{"error": "exit status 1"}]}]}]}]}
Error - exit status 1
/nuclio/pkg/cmdrunner/shellrunner.go:96
Call stack:
stdout:
stderr:
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile.processor
#1 transferring dockerfile: 967B done
#1 DONE 0.0s
#2 [internal] load metadata for docker.io/ultralytics/yolov5:latest-cpu
#2 DONE 34.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [1/8] FROM docker.io/ultralytics/yolov5:latest-cpu@sha256:69a9ed0a0ebf3d8b7ccfaf9f40806ea8596e538201c15a26c129fdf8af592ea6
#4 CACHED
#5 [internal] load build context
#5 transferring context: 5.20kB done
#5 DONE 0.0s
#6 [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0
#6 0.189
#6 0.189 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
#6 0.189
#6 2.829 Ign:1 http://security.ubuntu.com/ubuntu mantic-security InRelease
#6 2.830 Ign:2 http://archive.ubuntu.com/ubuntu mantic InRelease
#6 3.091 Ign:3 http://archive.ubuntu.com/ubuntu mantic-updates InRelease
#6 3.091 Err:4 http://security.ubuntu.com/ubuntu mantic-security Release
#6 3.091 404 Not Found [IP: 91.189.91.81 80]
#6 3.347 Ign:5 http://archive.ubuntu.com/ubuntu mantic-backports InRelease
#6 3.592 Err:6 http://archive.ubuntu.com/ubuntu mantic Release
#6 3.592 404 Not Found [IP: 185.125.190.83 80]
#6 3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release
#6 3.839 404 Not Found [IP: 185.125.190.83 80]
#6 4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release
#6 4.085 404 Not Found [IP: 185.125.190.83 80]
#6 4.100 Reading package lists...
#6 5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.
#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.
#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.
#6 5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.
#6 ERROR: process "/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0" did not complete successfully: exit code: 100
------
> [2/8] RUN apt update && apt install --no-install-recommends -y libglib2.0-0:
3.592 404 Not Found [IP: 185.125.190.83 80]
3.839 Err:7 http://archive.ubuntu.com/ubuntu mantic-updates Release
3.839 404 Not Found [IP: 185.125.190.83 80]
4.085 Err:8 http://archive.ubuntu.com/ubuntu mantic-backports Release
4.085 404 Not Found [IP: 185.125.190.83 80]
5.419 E: The repository 'http://security.ubuntu.com/ubuntu mantic-security Release' no longer has a Release file.
5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic Release' no longer has a Release file.
5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-updates Release' no longer has a Release file.
5.419 E: The repository 'http://archive.ubuntu.com/ubuntu mantic-backports Release' no longer has a Release file.
------
Dockerfile.processor:18
--------------------
16 | USER root
17 |
18 | >>> RUN apt update && apt install --no-install-recommends -y libglib2.0-0
19 |
20 | WORKDIR /opt/nuclio
--------------------
ERROR: failed to solve: process "/bin/sh -c apt update && apt install --no-install-recommends -y libglib2.0-0" did not complete successfully: exit code: 100
/nuclio/pkg/cmdrunner/shellrunner.go:96
Failed to build
/nuclio/pkg/dockerclient/shell.go:118
Failed to build docker image
.../pkg/containerimagebuilderpusher/docker.go:53
Failed to build processor image
/nuclio/pkg/processor/build/builder.go:250
Failed to deploy function
...//nuclio/pkg/platform/abstract/platform.go:182
### Expected Behavior
_No response_
### Possible Solution
_No response_
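The repeated 404s in the log are consistent with the base image being built on Ubuntu 23.10 ("mantic"), which reached end of life in July 2024; indexes for EOL releases are served from old-releases.ubuntu.com instead of the regular archive mirrors. A hedged Dockerfile sketch of one possible workaround follows (the base image and package come from the log above; the sources.list path and sed rewrite are assumptions and have not been verified against this image):

```dockerfile
FROM ultralytics/yolov5:latest-cpu
USER root

# Ubuntu 23.10 "mantic" is EOL, so its package indexes moved to
# old-releases.ubuntu.com. Repoint apt before updating (the apt
# sources file layout may differ per image; unverified).
RUN sed -i 's|http://archive.ubuntu.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g; s|http://security.ubuntu.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list \
 && apt update && apt install --no-install-recommends -y libglib2.0-0
```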
### Context
_No response_
### Environment
_No response_ | closed | 2024-12-09T07:48:27Z | 2024-12-09T07:54:41Z | https://github.com/cvat-ai/cvat/issues/8795 | [
"bug"
] | chang50961471 | 2 |
TracecatHQ/tracecat | pydantic | 71 | Change toast to sonner | # Motivation
The current toast is annoying: it doesn't expire, so you have to close it manually.
A sonner (https://ui.shadcn.com/docs/components/sonner), which lets me see past notifications, would make this UX better.
"frontend"
] | daryllimyt | 1 |
streamlit/streamlit | machine-learning | 10,229 | Page Refresh detection and data persistance between refresh page | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Currently,
- There is no way to detect page refresh
- There is no way to persist data between page refreshes
### Why?
I want to open a serial port manually, and the opened serial port object should be kept even if the user presses the page refresh button in the browser. Secondly, it should be closed on page exit.
### How?
Currently I am using @st.cache_resource to open a fixed serial port, and I close it using atexit.register(app_closed)
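The workaround described above can be sketched with the standard library alone; here `functools.lru_cache` stands in for `@st.cache_resource` and a dummy class stands in for the pyserial port, so every name below is illustrative rather than an actual Streamlit API:

```python
import atexit
from functools import lru_cache


class FakeSerialPort:
    """Stand-in for serial.Serial; tracks open/closed state."""

    def __init__(self, device):
        self.device = device
        self.is_open = True

    def close(self):
        self.is_open = False


@lru_cache(maxsize=1)  # with Streamlit this would be @st.cache_resource
def get_port(device="/dev/ttyUSB0"):
    port = FakeSerialPort(device)
    atexit.register(port.close)  # close on real process exit, not on rerun
    return port


# Two "reruns" (page refreshes) return the same cached object:
a = get_port()
b = get_port()
print(a is b, a.is_open)  # True True
```

On a Streamlit rerun (which is what a browser refresh triggers) the cached opener returns the same object, while the `atexit` hook only fires when the server process itself exits.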
### Additional Context
_No response_ | open | 2025-01-22T16:13:29Z | 2025-01-22T16:13:42Z | https://github.com/streamlit/streamlit/issues/10229 | [
"type:enhancement"
] | learncodingforweb | 1 |
plotly/dash-component-boilerplate | dash | 173 | Unable to create project | Hi all, I'm trying to initialize the cookiecutter project by following the main page's instructions, but I receive the error shown below.
- Node.JS version: v23.5.0
- npm version: 10.9.2
- Python version: 3.11
I'm not sure if it's the same issue raised in #148 in 2022, but the proposed solution of downgrading to node.js v14 is no longer possible as it's been deprecated.
Any help would be appreciated, thanks!
```
(dash playground) ➜ dash playground cookiecutter gh:plotly/dash-component-boilerplate
You've downloaded /Users/username/.cookiecutters/dash-component-boilerplate
before. Is it okay to delete and re-download it? [y/n] (y): y
[1/14] project_name (my dash component):
[2/14] project_shortname (my_dash_component):
[3/14] component_name (MyDashComponent):
[4/14] jl_prefix ():
[5/14] r_prefix ():
[6/14] author_name (Enter your first and last name (For package.json)): username
[7/14] author_email (Enter your email (For package.json)): username@gmail.com
[8/14] github_org ():
[9/14] description (Project Description):
[10/14] Select use_async
1 - False
2 - True
Choose from [1/2] (1):
[11/14] Select component_type
1 - Function Component
2 - Class Component
Choose from [1/2] (1):
[12/14] Select license
1 - MIT License
2 - BSD License
3 - ISC License
4 - Apache Software License 2.0
5 - GNU General Public License v3
6 - Not open source
Choose from [1/2/3/4/5/6] (1):
[13/14] publish_on_npm [y/n] (y): n
[14/14] install_dependencies [y/n] (y):
use_async
False
use_async is set to False, your component will not be lazy loaded and fragments will not be created.
Executing: virtualenv venv
created virtual environment CPython3.13.1.final.0-64 in 198ms
creator CPython3macOsBrew(dest=/Users/username/Repositories/playground/dash playground/my_dash_component/venv, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, via=copy, app_data_dir=/Users/username/Library/Application Support/virtualenv)
added seed packages: pip==24.3.1
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
Installing dependencies
Executing: venv/bin/python -m pip install -r requirements.txt
Collecting dash>=2.0.0 (from dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached dash-2.18.2-py3-none-any.whl.metadata (10 kB)
Collecting Flask<3.1,>=1.0.4 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached flask-3.0.3-py3-none-any.whl.metadata (3.2 kB)
Collecting Werkzeug<3.1 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached werkzeug-3.0.6-py3-none-any.whl.metadata (3.7 kB)
Collecting plotly>=5.0.0 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached plotly-5.24.1-py3-none-any.whl.metadata (7.3 kB)
Collecting dash-html-components==2.0.0 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached dash_html_components-2.0.0-py3-none-any.whl.metadata (3.8 kB)
Collecting dash-core-components==2.0.0 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached dash_core_components-2.0.0-py3-none-any.whl.metadata (2.9 kB)
Collecting dash-table==5.0.0 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached dash_table-5.0.0-py3-none-any.whl.metadata (2.4 kB)
Collecting importlib-metadata (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached importlib_metadata-8.5.0-py3-none-any.whl.metadata (4.8 kB)
Collecting typing-extensions>=4.1.1 (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting requests (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting retrying (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached retrying-1.3.4-py3-none-any.whl.metadata (6.9 kB)
Collecting nest-asyncio (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached nest_asyncio-1.6.0-py3-none-any.whl.metadata (2.8 kB)
Collecting setuptools (from dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached setuptools-75.6.0-py3-none-any.whl.metadata (6.7 kB)
Collecting coloredlogs>=15.0.1 (from dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached coloredlogs-15.0.1-py2.py3-none-any.whl.metadata (12 kB)
Collecting fire>=0.4.0 (from dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached fire-0.7.0-py3-none-any.whl
Collecting PyYAML>=5.4.1 (from dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl.metadata (2.1 kB)
Collecting humanfriendly>=9.1 (from coloredlogs>=15.0.1->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached humanfriendly-10.0-py2.py3-none-any.whl.metadata (9.2 kB)
Collecting termcolor (from fire>=0.4.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached termcolor-2.5.0-py3-none-any.whl.metadata (6.1 kB)
Collecting Jinja2>=3.1.2 (from Flask<3.1,>=1.0.4->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached jinja2-3.1.5-py3-none-any.whl.metadata (2.6 kB)
Collecting itsdangerous>=2.1.2 (from Flask<3.1,>=1.0.4->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
Collecting click>=8.1.3 (from Flask<3.1,>=1.0.4->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached click-8.1.8-py3-none-any.whl.metadata (2.3 kB)
Collecting blinker>=1.6.2 (from Flask<3.1,>=1.0.4->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached blinker-1.9.0-py3-none-any.whl.metadata (1.6 kB)
Collecting tenacity>=6.2.0 (from plotly>=5.0.0->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached tenacity-9.0.0-py3-none-any.whl.metadata (1.2 kB)
Collecting packaging (from plotly>=5.0.0->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached packaging-24.2-py3-none-any.whl.metadata (3.2 kB)
Collecting MarkupSafe>=2.1.1 (from Werkzeug<3.1->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl.metadata (4.0 kB)
Collecting zipp>=3.20 (from importlib-metadata->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached zipp-3.21.0-py3-none-any.whl.metadata (3.7 kB)
Collecting charset-normalizer<4,>=2 (from requests->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached urllib3-2.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached certifi-2024.12.14-py3-none-any.whl.metadata (2.3 kB)
Collecting six>=1.7.0 (from retrying->dash>=2.0.0->dash[dev]>=2.0.0->-r requirements.txt (line 2))
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Using cached dash-2.18.2-py3-none-any.whl (7.8 MB)
Using cached dash_core_components-2.0.0-py3-none-any.whl (3.8 kB)
Using cached dash_html_components-2.0.0-py3-none-any.whl (4.1 kB)
Using cached dash_table-5.0.0-py3-none-any.whl (3.9 kB)
Using cached coloredlogs-15.0.1-py2.py3-none-any.whl (46 kB)
Using cached flask-3.0.3-py3-none-any.whl (101 kB)
Using cached plotly-5.24.1-py3-none-any.whl (19.1 MB)
Using cached PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl (171 kB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached werkzeug-3.0.6-py3-none-any.whl (227 kB)
Using cached importlib_metadata-8.5.0-py3-none-any.whl (26 kB)
Using cached nest_asyncio-1.6.0-py3-none-any.whl (5.2 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached retrying-1.3.4-py3-none-any.whl (11 kB)
Using cached setuptools-75.6.0-py3-none-any.whl (1.2 MB)
Using cached blinker-1.9.0-py3-none-any.whl (8.5 kB)
Using cached certifi-2024.12.14-py3-none-any.whl (164 kB)
Using cached charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl (195 kB)
Using cached click-8.1.8-py3-none-any.whl (98 kB)
Using cached humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached itsdangerous-2.2.0-py3-none-any.whl (16 kB)
Using cached jinja2-3.1.5-py3-none-any.whl (134 kB)
Using cached MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl (12 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached tenacity-9.0.0-py3-none-any.whl (28 kB)
Using cached urllib3-2.3.0-py3-none-any.whl (128 kB)
Using cached zipp-3.21.0-py3-none-any.whl (9.6 kB)
Using cached packaging-24.2-py3-none-any.whl (65 kB)
Using cached termcolor-2.5.0-py3-none-any.whl (7.8 kB)
Installing collected packages: dash-table, dash-html-components, dash-core-components, zipp, urllib3, typing-extensions, termcolor, tenacity, six, setuptools, PyYAML, packaging, nest-asyncio, MarkupSafe, itsdangerous, idna, humanfriendly, click, charset-normalizer, certifi, blinker, Werkzeug, retrying, requests, plotly, Jinja2, importlib-metadata, fire, coloredlogs, Flask, dash
Successfully installed Flask-3.0.3 Jinja2-3.1.5 MarkupSafe-3.0.2 PyYAML-6.0.2 Werkzeug-3.0.6 blinker-1.9.0 certifi-2024.12.14 charset-normalizer-3.4.1 click-8.1.8 coloredlogs-15.0.1 dash-2.18.2 dash-core-components-2.0.0 dash-html-components-2.0.0 dash-table-5.0.0 fire-0.7.0 humanfriendly-10.0 idna-3.10 importlib-metadata-8.5.0 itsdangerous-2.2.0 nest-asyncio-1.6.0 packaging-24.2 plotly-5.24.1 requests-2.32.3 retrying-1.3.4 setuptools-75.6.0 six-1.17.0 tenacity-9.0.0 termcolor-2.5.0 typing-extensions-4.12.2 urllib3-2.3.0 zipp-3.21.0
Executing: npm install --ignore-scripts
npm warn deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm warn deprecated rimraf@2.6.3: Rimraf versions prior to v4 are no longer supported
npm warn deprecated rimraf@3.0.2: Rimraf versions prior to v4 are no longer supported
npm warn deprecated rimraf@3.0.2: Rimraf versions prior to v4 are no longer supported
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm warn deprecated @babel/plugin-proposal-object-rest-spread@7.20.7: This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. Please use @babel/plugin-transform-object-rest-spread instead.
npm warn deprecated babel-eslint@10.1.0: babel-eslint is now @babel/eslint-parser. This package will no longer receive updates.
npm warn deprecated eslint@6.8.0: This version is no longer supported. Please see https://eslint.org/version-support for other options.
added 758 packages, and audited 759 packages in 5s
141 packages are looking for funding
run `npm fund` for details
2 vulnerabilities (1 high, 1 critical)
To address all issues (including breaking changes), run:
npm audit fix --force
Run `npm audit` for details.
Building initial bundles...
Executing: npm run build:js
> my_dash_component@0.0.1 build:js
> webpack --mode production
asset my_dash_component.min.js 1.54 KiB [emitted] [minimized] (name: main) 1 related asset
runtime modules 2.42 KiB 5 modules
orphan modules 1.7 KiB [orphan] 3 modules
./src/lib/index.js + 3 modules 1.84 KiB [not cacheable] [built] [code generated]
webpack 5.97.1 compiled successfully in 619 ms
Executing: venv/bin/python -m dash.development.component_generator ./src/lib/components my_dash_component -p package-info.json --jl-prefix '' --r-prefix ''
/Users/username/Repositories/playground/dash playground/my_dash_component/venv/lib/python3.13/site-packages/dash/development/component_generator.py:11: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
node:internal/modules/cjs/loader:1413
throw err;
^
Error: Cannot find module '/Users/username/Repositories/playground/dash'
at Function._resolveFilename (node:internal/modules/cjs/loader:1410:15)
at defaultResolveImpl (node:internal/modules/cjs/loader:1061:19)
at resolveForCJSWithHooks (node:internal/modules/cjs/loader:1066:22)
at Function._load (node:internal/modules/cjs/loader:1215:37)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:234:24)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:151:5)
at node:internal/main/run_main_module:33:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v23.5.0
Error generating metadata in my_dash_component (status=1)
post_gen_project command failed: venv/bin/python -m dash.development.component_generator ./src/lib/components my_dash_component -p package-info.json --jl-prefix '' --r-prefix ''
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
``` | open | 2024-12-25T18:36:33Z | 2025-02-18T09:02:26Z | https://github.com/plotly/dash-component-boilerplate/issues/173 | [] | nulinspiratie | 1 |
lukasmasuch/streamlit-pydantic | pydantic | 54 | Requirements.txt | Thank you so much for the fantastic Streamlit plugin!
I'm currently facing some challenges in getting the examples to run smoothly. Could someone please share a requirements.txt file that is compatible with the provided examples?
Many thanks in advance! | open | 2024-01-26T22:40:44Z | 2024-01-26T22:40:44Z | https://github.com/lukasmasuch/streamlit-pydantic/issues/54 | [] | dmmsop | 0 |
open-mmlab/mmdetection | pytorch | 12,303 | RuntimeError: indices should be on the same device as the indexed tensor in MMRotate tutorial | **Description:**
I encountered a `RuntimeError` when following the [MMRotate tutorial](https://github.com/open-mmlab/mmrotate/blob/main/demo/MMRotate_Tutorial.ipynb). The error occurs when running `inference_detector(model, img)`. My model is on `cuda:0`, but it seems like some indices are on the CPU, causing a mismatch.
**Error Message:**
_RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)_
**Environment:**
- CUDA: **12.4**
- PyTorch: **2.6.0**
- MMDetection: **2.28.2**
- MMCV: **1.7.2**
- MMRotate: **0.3.4**
- GPU: **NVIDIA RTX A4500**
- OS: **Red Hat Enterprise Linux release 8.9 (Ootpa)**
**Steps to Reproduce:**
1. Install MMRotate following the official guide.
2. Load a model and set it to `cuda:0`.
3. Run `inference_detector(model, img)`.
4. The error occurs.
**Expected Behavior:**
The inference should run without device mismatch issues.
**What I Tried:**
- Reinstalling dependencies.
Let me know if you need additional logs or debugging outputs.
| open | 2025-02-07T12:59:56Z | 2025-02-07T13:00:14Z | https://github.com/open-mmlab/mmdetection/issues/12303 | [] | ramondalmau | 0 |
graphql-python/graphene-sqlalchemy | graphql | 407 | Allow a custom filter class with the purpose of using all the base filters, and adding sqlalchemy-filter esk filters | So a pretty important use case at my company is the ability to add custom filters that aren't field specific
Here is an example use case using the below hack discussed
```
class UserNode(SQLAlchemyObjectType):
    class Meta:
        model = User
        interfaces = (LevNode,)
        filter = UserFilter


class UserFilter(GrapheneSQLAlchemyFilter):
    use_has_contact = graphene.Boolean()
    is_valid = graphene.Boolean()

    @staticmethod
    def user_in_filter(info: LevResolveInfo, query: Query, value: bool) -> Query:
        return query.join(Contact).filter(Contact.id.is_not(None))

    @staticmethod
    def is_valid_filter(info: LevResolveInfo, query: Query, value: bool) -> ColumnElement:
        if value:
            return User.deleted_at.is_(None)
        else:
            return User.deleted_at.is_not(None)
```
Step 1. Update the BaseTypeFilter class to allow "filter" as a _meta field. We get all the custom filter functions from the classes that extend GrapheneSQLAlchemyFilter, ensure those functions contain the correct variables, and then add the fields to the filter fields list.
```
class GrapheneSQLAlchemyFilter(graphene.InputObjectType):
    pass


class BaseTypeFilter(graphene.InputObjectType):
    @classmethod
    def __init_subclass_with_meta__(
        cls, filter_fields=None, model=None, _meta=None, custom_filter_class=None, **options
    ):
        from graphene_sqlalchemy.converter import convert_sqlalchemy_type

        # Init meta options class if it doesn't exist already
        if not _meta:
            _meta = InputObjectTypeOptions(cls)
        _meta.filter_class = custom_filter_class

        logic_functions = _get_functions_by_regex(".+_logic$", "_logic$", cls)

        custom_filter_fields = {}
        if custom_filter_class and issubclass(custom_filter_class, GrapheneSQLAlchemyFilter):
            custom_filter_fields = yank_fields_from_attrs(custom_filter_class.__dict__, _as=graphene.InputField)
            functions = dict(_get_functions_by_regex(".+_filter$", "_filter$", custom_filter_class))
            for field_name in custom_filter_fields.keys():
                assert functions.get(field_name), f"Custom filter field {field_name} must have a corresponding filter method"
                annotations = functions.get(field_name)
                assert annotations.get('info'), "Each custom filter method must have an info field with valid type annotations"
                assert annotations.get('query'), "Each custom filter method must have a query field with valid type annotations"
                assert annotations.get('value'), "Each custom filter method must have a value field with valid type annotations"

        new_filter_fields = custom_filter_fields
        ..........
```
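The helper `_get_functions_by_regex` referenced above isn't shown; a rough stdlib sketch of what such a helper plus the annotation checks might look like follows (the name, signature, and return shape are assumptions for illustration, not the actual graphene-sqlalchemy code):

```python
import inspect
import re


def get_functions_by_regex(pattern, strip, cls):
    """Map stripped method name -> that method's parameter annotations."""
    out = {}
    for name, func in inspect.getmembers(cls, callable):
        if re.match(pattern, name):
            params = inspect.signature(func).parameters
            out[re.sub(strip, "", name)] = {p: params[p].annotation for p in params}
    return out


class UserFilter:
    @staticmethod
    def is_valid_filter(info: object, query: object, value: bool):
        return value


functions = get_functions_by_regex(r".+_filter$", r"_filter$", UserFilter)
print(sorted(functions))  # ['is_valid']
```

With a mapping like this, the `assert annotations.get('info')`-style checks in the snippet above amount to verifying that each custom filter method declares `info`, `query`, and `value` parameters with annotations.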
**Then override the execute_filters method. We have it accept a "resolve_info" or "info" so that we can pass those to the custom filter functions**
```
    @classmethod
    def execute_filters(
        cls, query, filter_dict: Dict[str, Any], model_alias=None, info=None
    ) -> Tuple[Query, List[Any]]:
        model = cls._meta.model
        .....
            # Here we first check if this is input field that isn't a model_attr and is part of the filter_class (we set that on the meta earlier)
            else:
                # Allow custom filter class to be used for custom filtering over
                if not hasattr(input_field, "model_attr") and cls._meta.filter_class:
                    clause = getattr(cls._meta.filter_class, field + "_filter")(info, query, field_filters)
                    if isinstance(clause, tuple):
                        query, clause = clause
                    elif isinstance(clause, Query):
                        query = clause
                        continue
                    clauses.append(clause)
                else:
                    model_field = getattr(model, input_field.model_attr or field)
```
**Update SQLAlchemy base to accept a "filter" field**
```
class SQLAlchemyObjectTypeOptions(ObjectTypeOptions):
    .....
    filter = None
``` | closed | 2024-03-18T15:32:36Z | 2024-09-15T00:55:04Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/407 | [] | adiberk | 1 |
jupyter/docker-stacks | jupyter | 1,909 | CI improvements and speedups | There are a few things that might be improved / changed and everyone is welcome to implement these ideas:
1. Get rid of upload/download artifact and use a registry server. This would mean far fewer uploads (most layers will already be uploaded). It might be a good idea to use [service containers](https://docs.github.com/en/actions/using-containerized-services/about-service-containers) here.
2. Execute several steps inside one runner. For example, `docker-stacks-foundation`, `base-notebook` and `minimal-notebook` can be built together, and while one image is uploading as an artefact, another can already start building. This doesn't look very clean though, as it will introduce complexity.
3. Use [Autoscaling with self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/autoscaling-with-self-hosted-runners). This will allow to create aarch64 runners automatically and only when needed. It will save resources and my money 😆
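For idea 1, a throwaway registry started as a service container might look roughly like this in a workflow (untested sketch; the action versions, tags, and `network=host` buildx trick are assumptions, not the project's actual CI):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - uses: docker/setup-buildx-action@v2
        with:
          driver-opts: network=host   # let buildx reach localhost:5000
      - uses: docker/build-push-action@v4
        with:
          tags: localhost:5000/docker-stacks-foundation:latest
          push: true                  # layers land in the job-local registry
```

Downstream build steps in the same job could then base their images on `localhost:5000/...` tags instead of downloading artifacts.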
To be honest, I'm pretty happy with the way CI works currently, so I don't think these are essential to implement, but I might give these ideas a try. | closed | 2023-05-28T15:06:10Z | 2024-01-13T14:57:18Z | https://github.com/jupyter/docker-stacks/issues/1909 | [
"type:Enhancement",
"good first issue"
] | mathbunnyru | 2 |
autogluon/autogluon | data-science | 4,679 | OSError: /usr/local/lib/python3.10/dist-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZNK3c105Error4whatEv | Hi,
unfortunately, the Google Colab: https://colab.research.google.com/github/autogluon/autogluon/blob/stable/docs/tutorials/multimodal/text_prediction/beginner_text.ipynb#scrollTo=d2535bb3
throws the following exception:

Thank you for your help.
Best regards,
Felix | closed | 2024-11-22T17:16:18Z | 2024-11-26T17:10:31Z | https://github.com/autogluon/autogluon/issues/4679 | [] | FelixNeutatz | 2 |
jina-ai/serve | fastapi | 6,026 | Integrated Gradio Demo App | **Describe the feature**
The Jina project is an incredible tool that simplifies the process of deploying machine learning models by just writing an executor. It enables users to get a complete working service with ease. However, during the development and testing stages, it would be extremely beneficial to have an integrated Gradio demo app. This will provide an interactive user interface that allows users to easily interact and test the model without the need for using CURL, Postman, or programming languages.
**Your proposal**
- The integrated Gradio demo app can be accessible at the `/demo` endpoint.
- Users can interact with the model using the Gradio interface, providing input and viewing the model's output in real-time.
- The Gradio demo app should automatically generate the appropriate input components based on the model's input types (e.g., string, integer, float, boolean, file, path) and utilize the docArray model's additional information for description, default values, minimum, maximum, and choices, when available.
- The Gradio integration should be optional, allowing users to enable or disable it based on their use-case. This will enable developers to use the interactive demo during development and prototyping and easily disable it for production deployment.
**Benefits:**
- Eases the testing and interaction with the model during development and prototyping stages.
- Provides a user-friendly interface for non-technical stakeholders to interact with the model and understand its capabilities.
- Improves user experience by eliminating the need for external tools (e.g., CURL, Postman) or custom code for testing the model.
**Implementation Approach:**
- Utilize Gradio's Python library to create the interactive demo app.
- Mount the Gradio demo app using FastAPI Gateway.
- Leverage the existing docArray documents information to generate appropriate input and output components with optional descriptions.
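The bullet about auto-generating input components could look roughly like this pure-Python sketch; the field shape, mapping, and function name are assumptions for illustration, not Jina or Gradio API:

```python
def component_for(field):
    """Map a declared input field (name/type/description/default) to a
    hypothetical demo-UI component description."""
    widgets = {str: "textbox", int: "number", float: "number",
               bool: "checkbox", bytes: "file"}
    return {
        "widget": widgets.get(field.get("type"), "textbox"),  # fall back to free text
        "label": field.get("description", field["name"]),
        "default": field.get("default"),
    }
```

The real implementation would read this metadata from the docArray document schema and hand it to Gradio when building the `/demo` page.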
This feature will further enhance the usability and user experience of the Jina project, making it an even more valuable tool for deploying machine learning models.
I am willing to take on this feature request as it presents an excellent opportunity to learn more about Jina internals. | closed | 2023-08-11T05:02:54Z | 2023-12-08T00:18:28Z | https://github.com/jina-ai/serve/issues/6026 | [
"Stale"
] | Ankur-singh | 13 |
apragacz/django-rest-registration | rest-api | 111 | consistency in field errors for reset-password | Concerning the reset-password API:
1. if the password field contains a password that isn't long enough, response.data is an **array** with one string element stating "This password is too short. It must contain at least 8 characters."
2. if the password_confirm field doesn't match, response.data is an **object** with one key ('password_confirm') whose value is the error message 'Passwords don't match'.
3. if the signature field is incorrect, response.data is an **object** with one key ('detail') whose value is the error message 'Invalid signature'.
My suggestion for the first case: instead of an array, return an object with a 'password' key and the error as its value. This is just as much a field error as the other two, so it would make sense to return both the field name and the error as a key-value pair here as well, instead of an array.
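Until the API is made consistent, a client can normalize case 1 into the same shape as case 2 with a tiny helper (a sketch; the default field name is an assumption):

```python
def normalize_field_errors(data, default_field="password"):
    """Coerce a bare list of error strings into a {field: errors} mapping,
    so all response shapes can be handled uniformly on the client side."""
    if isinstance(data, list):
        return {default_field: data}
    return data
```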
| closed | 2020-05-29T23:10:07Z | 2020-06-02T16:45:52Z | https://github.com/apragacz/django-rest-registration/issues/111 | [
"type:feature-request"
] | jonas87 | 3 |
jupyter-book/jupyter-book | jupyter | 1,962 | Host via gitlab pages | ### Describe the bug
Referring to the demo repo [here](https://gitlab-master.nvidia.com/zchandani/demo-book/-/tree/gh-pages)
Can you please point me to references which would allow me to host this on gitlab pages?
The instructions [here](https://jupyterbook.org/en/stable/start/publish.html) work for github but does not allow me to make repo private unless I upgrade to premium version.
### Reproduce the bug
1. run `jupyter-book create mynewbook/` to create demo book
2. `cd mynewbook`
3. Create a blank new public project on gitlab called `demo book` without a readme file
4. Run `jupyter-book build .` to build book which generates a `_build` folder
5. Clone empty `demo book` repo and copy all files of `mynewbook` into `demo book`
6. Push repo to gitlab which can be found [here](https://gitlab-master.nvidia.com/zchandani/demo-book)
7. Install `ghp-import`
8. Run `ghp-import -n -p -f _build/html`
Not sure what to do next? Please advise further
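For what it's worth, GitLab Pages serves whatever a job named `pages` publishes to a `public/` directory, so one common pattern (a sketch, untested against this repo) is a `.gitlab-ci.yml` like:

```yaml
# .gitlab-ci.yml (sketch): build the book and publish _build/html via GitLab Pages
image: python:3.10

pages:
  stage: deploy
  script:
    - pip install jupyter-book
    - jupyter-book build .
    - mv _build/html public   # Pages only serves the "public" folder
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

With a CI job like this, `ghp-import` is not needed at all; the job replaces steps 7 and 8 above.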
### List your environment
```
Jupyter Book : 0.14.0
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.1
Sphinx Book Theme : 0.4.0rc1
Jupyter-Cache : 0.5.0
NbClient : 0.5.13
```
Using VS Code on Linux via SSH
"bug"
] | zohimchandani | 2 |
neuml/txtai | nlp | 853 | PGVector not enabled | Hi,
Trying to persist index via psql, but I get the error
raise ImportError('PGVector is not available - install "ann" extra to enable')
ImportError: PGVector is not available - install "ann" extra to enable
I've installed the ANN backend via `pip install txtai[ann]`, yet I get the same issue.
Sample code:
```
from txtai import Embeddings
from dotenv import load_dotenv
load_dotenv()
with Embeddings(
path="sentence-transformers/all-MiniLM-L6-v2",
content="client",
backend="pgvector") as embeddings:
# Works with a list, dataset or generator
data = [
"US tops 5 million confirmed virus cases",
"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
"Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
"The National Park Service warns against sacrificing slower friends in a bear attack",
"Maine man wins $1M from $25 lottery ticket",
"Make huge profits without work, earn up to $100,000 a day"
]
# Create an index for the list of text
embeddings.index(data)
print(embeddings.search("feel good story", 1))
``` | closed | 2025-01-12T16:17:37Z | 2025-01-20T18:26:39Z | https://github.com/neuml/txtai/issues/853 | [] | thelaycon | 5 |
tqdm/tqdm | pandas | 1,434 | Multiple pbars with leave=None cause leftover stuff on console and wrong position of cursor | - [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable: 4.64.1 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)] win32
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
When manually controlling multiple pbars, there is a chance that leftover output remains on the console.
Furthermore, the cursor is NOT moved back to the beginning of the line, so the next print starts in the middle of the line.
STR:
```python
from tqdm import tqdm
import time
import concurrent.futures
import random
NUM = 5
bars = []
total_chunks = NUM * 3
for i in range(NUM):
bars.append(tqdm(total=1024, unit="B", unit_scale=True, leave=None, unit_divisor=1024, ncols=100, position=i))
def upload_chunk(chunk_no, total_chunks):
bar = bars[chunk_no % NUM]
size = int(1024*1024*10*random.random())
if bar:
bar.desc = f'chunk {chunk_no + 1}/{total_chunks}'
bar.reset(total=size)
def gen():
offset = 0
while True:
if offset < size:
update_chunk = 1024*1000
time.sleep(0.1)
yield 'something'
offset += update_chunk
if bar:
bar.update(update_chunk)
else:
break
result = list(gen())
return result
with concurrent.futures.ThreadPoolExecutor(max_workers=NUM) as ex:
for i in range(0, total_chunks):
ex.submit(upload_chunk, i, total_chunks)
for bar in bars:
bar.close()
print('Done!')
```
Final result:
Windows Terminal - CMD:

Windows Terminal - PS7:

Ubuntu (remote):

| open | 2023-02-22T10:06:10Z | 2024-10-16T03:10:12Z | https://github.com/tqdm/tqdm/issues/1434 | [] | fireattack | 4 |
facebookresearch/fairseq | pytorch | 5,224 | ValueError: cannot register model architecture (w2l_conv_glu_enc) | ## 🐛 Bug
I have installed fairseq following the instructions on your site
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
Then I follow the instructions at https://github.com/facebookresearch/fairseq/tree/main/examples/speech_recognition to train a Tranformers speech recognition model in Librispeech.
I have prepared the Librispeech data using command
pwd
/home/vivek/fairseq/
./examples/speech_recognition/datasets/prepare-librispeech.sh Libri_raw-data Libri_processed_data
and it creates dirs under Libri_processed_data
Then I try to train the model using the command
python train.py Libri_processed_data --save-dir Libri_model --max-epoch 80 --task speech_recognition --arch vggtransformer_2 --optimizer adadelta --lr 1.0 --adadelta-eps 1e-8 --adadelta-rho 0.95 --clip-norm 10.0 --max-tokens 5000 --log-format json --log-interval 1 --criterion cross_entropy_acc --user-dir examples/speech_recognition/
However it crashes with the error:
base) vivek@vivek-deeplearning:~/fairseq$ python train.py Libri_processed_data/ --save-dir Libri_model/ --max-epoch 80 --task speech_recognition --arch vggtransformer --optimizer adadelta --lr 1.0 --adadelta-eps 1e-8 --adadelta-rho 0.95 --clip-norm 10.0 --max-tokens 5000 --log-format json --log-interval 1 --criterion cross_entropy_acc --user-dir examples/speech_recognition/
2023-06-28 23:54:58 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
Traceback (most recent call last):
File "/home/vivek/fairseq/train.py", line 14, in <module>
cli_main()
File "/home/vivek/fairseq/fairseq_cli/train.py", line 558, in cli_main
parser = options.get_training_parser()
File "/home/vivek/fairseq/fairseq/options.py", line 38, in get_training_parser
parser = get_parser("Trainer", default_task)
File "/home/vivek/fairseq/fairseq/options.py", line 234, in get_parser
utils.import_user_module(usr_args)
File "/home/vivek/fairseq/fairseq/utils.py", line 503, in import_user_module
import_models(models_path, f"{module_name}.models")
File "/home/vivek/fairseq/fairseq/models/__init__.py", line 218, in import_models
importlib.import_module(namespace + "." + model_name)
File "/home/vivek/anaconda3/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/vivek/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py", line 174, in <module>
def w2l_conv_glu_enc(args):
File "/home/vivek/fairseq/fairseq/models/__init__.py", line 193, in register_model_arch_fn
raise ValueError(
ValueError: Cannot register duplicate model architecture (w2l_conv_glu_enc)
### To Reproduce
Steps to reproduce the behavior (**always include the command you ran**):
I have prepared the Librispeech data using command
pwd
/home/vivek/fairseq/
./examples/speech_recognition/datasets/prepare-librispeech.sh Libri_raw-data Libri_processed_data
and it creates dirs under Libri_processed_data
Then I try to train the model using the command
python train.py Libri_processed_data --save-dir Libri_model --max-epoch 80 --task speech_recognition --arch vggtransformer_2 --optimizer adadelta --lr 1.0 --adadelta-eps 1e-8 --adadelta-rho 0.95 --clip-norm 10.0 --max-tokens 5000 --log-format json --log-interval 1 --criterion cross_entropy_acc --user-dir examples/speech_recognition/
However it crashes with the same error and traceback already shown above under "Describe the bug".
#### Code sample
Please see above.
### Expected behavior
Please improve the documentation of https://github.com/facebookresearch/fairseq/tree/main/examples/speech_recognition:
it is unclear what the arguments to train.py are and what they do.
Also, is it necessary to have such complex decorators for registering models etc.?
Straightforward Transformer speech-recognition code (model code, training code and decoder code) would be ideal for analysis purposes, instead of a lot of boilerplate decorator code for registering models etc. Thank you for taking this request into consideration.
Expected behaviour: training should run normally and the model should be saved in the --save-dir Libri_model/ dir (that is what I'm assuming from the command above).
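For context, the registration pattern that produces this error boils down to a module-level registry that refuses duplicates; the code below is a simplified sketch, not fairseq's actual implementation. The error typically means the same module is imported twice, e.g. once as an installed package and once via `--user-dir`:

```python
# Simplified sketch of an architecture registry that raises on duplicates.
ARCH_REGISTRY = {}

def register_model_architecture(name):
    def wrapper(fn):
        if name in ARCH_REGISTRY:
            raise ValueError(
                f"Cannot register duplicate model architecture ({name})"
            )
        ARCH_REGISTRY[name] = fn
        return fn
    return wrapper
```

Importing a module containing `@register_model_architecture("w2l_conv_glu_enc")` a second time therefore raises exactly the ValueError seen in the traceback.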
### Environment
- fairseq Version (e.g., 1.0 or main):
- PyTorch Version (e.g., 1.0)
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
| open | 2023-06-28T23:09:42Z | 2023-07-20T16:21:08Z | https://github.com/facebookresearch/fairseq/issues/5224 | [
"bug",
"needs triage"
] | vivektyagiibm | 4 |
bmoscon/cryptofeed | asyncio | 342 | HUOBI_SWAP error | **Describe the bug**
I get following error message from HUOBI_SWAP.
```bash
2020-12-02 21:46:59,329 : ERROR : HUOBI_SWAP: encountered an exception, reconnecting
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/cryptofeed-1.6.2-py3.8.egg/cryptofeed/feedhandler.py", line 258, in _connect
await self._handler(websocket, feed.message_handler, feed.uuid)
File "/usr/local/lib/python3.8/dist-packages/cryptofeed-1.6.2-py3.8.egg/cryptofeed/feedhandler.py", line 287, in _handler
await handler(message, self.last_msg[feed_id])
File "/usr/local/lib/python3.8/dist-packages/cryptofeed-1.6.2-py3.8.egg/cryptofeed/exchange/huobi_dm.py", line 130, in message_handler
await self._book(msg, timestamp)
File "/usr/local/lib/python3.8/dist-packages/cryptofeed-1.6.2-py3.8.egg/cryptofeed/exchange/huobi_dm.py", line 79, in _book
for price, amount in data['bids']
KeyError: 'bids'
```
**To Reproduce**
Everything had been working OK for several days, so I suspect there has been a change in the HUOBI_SWAP API or on the HUOBI_SWAP server.
**Expected behavior**
The error message is pretty clear.
IMHO, two things could be improved:
- an error caused by a missing key should be caught earlier, before the key is used. Before running this part of the code, maybe check that a valid response has been received from HUOBI_SWAP?
- I get several tens of such messages per minute and `feedhandler.log` is filling up. Could there be a backoff strategy in such a case, like there is for the REST feed?
In any case, I feel such an error should not cause an 'ERROR : HUOBI_SWAP: encountered an exception, reconnecting'.
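The backoff idea could be as simple as an exponential-with-jitter delay sequence (a generic sketch, not cryptofeed code):

```python
import itertools
import random

def backoff_delays(base=1.0, cap=60.0):
    """Yield reconnect delays: base, 2*base, 4*base, ... capped at `cap`
    seconds, each with up to 10% random jitter to avoid thundering herds."""
    for attempt in itertools.count():
        delay = min(cap, base * 2 ** attempt)
        yield delay + random.uniform(0, delay / 10)
```

Logging the error once per backoff step instead of once per failed message would keep `feedhandler.log` manageable.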
**Cryptofeed Version**
1.6.2 | closed | 2020-12-02T20:55:55Z | 2020-12-03T06:00:01Z | https://github.com/bmoscon/cryptofeed/issues/342 | [
"bug"
] | yohplala | 4 |
pywinauto/pywinauto | automation | 627 | How to check if an edit control is editable or not (to check whether it is disabled)? | closed | 2018-12-10T08:46:00Z | 2019-05-18T14:16:04Z | https://github.com/pywinauto/pywinauto/issues/627 | [
"enhancement",
"refactoring_critical"
] | NeJaiswal | 4 | |
pyqtgraph/pyqtgraph | numpy | 2,772 | pyqtgraph.GraphItem horizontal lines appear when nodes share the exact same coordinates | ### Short description
Horizontal lines appearing in plotted Graph layouts using GraphItem
### Code to reproduce
Code to reproduce the horizontal lines using the test_data variable as the coordinates and the test_edges variable as the edgelist
```python
import pyqtgraph as pg
import numpy as np
from matplotlib import cm
from PyQt5 import QtWidgets
from PyQt5.QtGui import QFont
from PyQt5.QtWidgets import QLabel
test_data = [[0.97394865,0.76097697]
,[0.96331215,0.75899449]
,[0.73299923,0.71056367]
,[0.73299923,0.71056367]
,[1.02563796,0.75859416]
,[1.03750958,0.78332797]
,[1.04332385,0.74994675]
,[1.04332385,0.74994675]
,[1.03750958,0.78332797]
,[1.02465,0.79913865]
,[0.62148006,0.67390414]
,[0.77362938,0.8407148]
,[0.26024296,0.88460665]
,[0.77342673,0.84045795]
,[0.83003336,0.8140758]
,[0.83073405,0.81120133]
,[0.11780528,0.93765695]
,[0.12562618,0.86373582]
,[0.13258087,0.98060786]
,[0.09982391,0.96955255]
,[0.13258087,0.98060786]
,[0.10794434,1.]
,[0.09982391,0.96955255]
,[0.23024937,0.88570905]
,[0.35795589,0.5259309]
,[0.3510331,0.49033752]
,[0.24588205,0.59807632]
,[0.43293262,0.58418074]
,[0.58477147,0.79802616]
,[0.69619173,0.63869113]
,[0.34207692,0.88615848]
,[0.3576826,0.86441158]
,[0.73938475,0.8302106]
,[0.52426964,0.66345329]
,[0.76480449,0.62364043]
,[0.78150748,0.59797284]
,[0.75111531,0.59649831]
,[0.76479744,0.62363497]
,[0.78151441,0.59797748]
,[0.14719908,0.41482234]
,[0.34662769,0.57042823]
,[0.33603225,0.34886036]
,[0.33325824,0.43926137]
,[0.38229584,0.6499216]
,[0.60156156,0.81364349]
,[0.59430249,0.82235787]
,[0.401764,0.15304735]
,[0.41242327,0.14132702]
,[0.37417856,0.18227766]
,[0.14962665,0.49028655]
,[0.20592754,0.52797792]
,[0.13350614,0.48327371]
,[0.12038324,0.43799018]
,[0.07864037,0.49582502]
,[0.14351713,0.46621331]
,[0.16790967,0.38558931]
,[0.10804479,0.46119841]
,[0.19056462,0.16983147]
,[0.2727462,0.18523659]
,[0.1748359,0.10229834]
,[0.21819591,0.04863121]
,[0.18755566,0.13918122]
,[0.24545169,0.12790032]
,[0.21632604,0.05374931]
,[0.2502997,0.12243539]
,[0.17587824,0.09232711]
,[0.25675638,0.01856623]
,[0.08885948,0.14407327]
,[0.44240513,0.41857055]
,[0.44240513,0.41857055]
,[0.39389955,0.3856531]
,[0.4593268,0.36249775]
,[0.38229584,0.6499216]
,[0.51518239,0.]
,[0.51518239,0.]
,[0.41537436,0.35278072]
,[0.28109168,0.01921413]]
test_edges = [(0, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8), (1, 9), (1, 10), (2, 3), (2, 10), (3, 10), (10, 11), (10, 12), (10, 13), (10, 14), (10, 15), (10, 23), (10, 24), (10, 25), (10, 26), (10, 27), (10, 28), (10, 29), (10, 31), (10, 32), (10, 33), (10, 34), (10, 35), (10, 36), (10, 37), (10, 38), (10, 43), (10, 44), (10, 48), (10, 49), (10, 51), (10, 55), (10, 58), (10, 64), (10, 68), (10, 69), (10, 70), (10, 71), (10, 72), (12, 23), (16, 17), (16, 18), (16, 19), (16, 20), (16, 21), (16, 22), (16, 23), (17, 18), (17, 19), (17, 20), (17, 21), (17, 22), (17, 23), (17, 26), (17, 55), (18, 19), (18, 20), (18, 21), (18, 22), (18, 23), (19, 20), (19, 21), (19, 22), (19, 23), (20, 21), (20, 22), (20, 23), (21, 22), (21, 23), (22, 23), (23, 24), (23, 25), (23, 27), (23, 29), (23, 30), (23, 31), (24, 25), (24, 26), (24, 27), (24, 41), (24, 42), (24, 50), (24, 68), (24, 69), (24, 70), (25, 26), (25, 27), (25, 39), (25, 40), (25, 41), (25, 42), (25, 48), (25, 55), (25, 68), (25, 69), (25, 70), (25, 71), (25, 75), (26, 27), (26, 43), (26, 49), (26, 51), (26, 54), (26, 55), (26, 72), (27, 28), (27, 29), (27, 31), (27, 33), (27, 43), (27, 48), (27, 58), (27, 68), (27, 69), (27, 70), (27, 71), (27, 72), (28, 44), (28, 45), (29, 34), (29, 35), (29, 36), (29, 37), (29, 38), (30, 31), (34, 35), (34, 36), (34, 37), (34, 38), (35, 36), (35, 37), (35, 38), (36, 37), (36, 38), (37, 38), (39, 52), (39, 55), (41, 42), (41, 55), (41, 57), (41, 62), (41, 68), (41, 69), (41, 70), (41, 71), (41, 75), (46, 47), (46, 48), (48, 55), (48, 57), (48, 58), (48, 59), (48, 60), (48, 61), (48, 62), (48, 63), (48, 64), (48, 65), (48, 66), (48, 68), (48, 69), (48, 71), (48, 73), (48, 74), (48, 75), (48, 76), (49, 50), (49, 51), (49, 54), (49, 55), (49, 56), (51, 52), (51, 53), (51, 54), (51, 55), (54, 55), (55, 56), (55, 57), (55, 58), (55, 59), (55, 61), (55, 62), (55, 63), (55, 64), (55, 65), (57, 58), (57, 59), (57, 61), (57, 62), (57, 63), (57, 64), (57, 65), (57, 67), (58, 
59), (58, 60), (58, 61), (58, 62), (58, 63), (58, 64), (58, 65), (58, 66), (58, 70), (58, 76), (59, 60), (59, 61), (59, 62), (59, 63), (59, 64), (59, 65), (59, 66), (60, 61), (60, 62), (60, 63), (60, 64), (60, 65), (60, 66), (61, 62), (61, 63), (61, 64), (61, 65), (61, 66), (62, 63), (62, 64), (62, 65), (62, 66), (62, 76), (63, 64), (63, 65), (63, 66), (63, 76), (64, 65), (64, 66), (64, 76), (65, 66), (65, 76), (66, 76), (68, 69), (68, 70), (68, 71), (68, 75), (69, 70), (69, 71), (69, 75), (70, 71), (70, 75), (71, 75), (73, 74)]
class Graph2D(pg.PlotWidget):
def __init__(self, data, edges, title="2D Projection", *args, **kwargs):
super().__init__(*args, **kwargs)
self.hideAxis('left')
self.hideAxis('bottom')
self.setAspectLocked()
self.setMouseEnabled(x=False, y=False)
self.cmap = cm.get_cmap('rainbow')
self.colors = pg.mkBrush(self.cmap(0, bytes=True))
self.title = title
self.label = QLabel(self.title, self.viewport())
data = np.array(data)
# uncomment to jitter duplicate node coordinates
# unq, count = np.unique(data, axis=0, return_counts=True)
# repeated_groups = unq[count > 1]
#
# for rp in repeated_groups:
# idx = np.argwhere(np.all(data == rp, axis = 1))[0][0]
# data[idx, :] = data[idx, :] + 0.000001
self.line_item = pg.GraphItem(pen=pg.mkPen('black', width=3), hoverable=False, pxMode=True, size=8, brush = self.colors)
self.line_item.setData(pos=data, adj=np.array(edges))
self.addItem(self.line_item)
def paintEvent(self, ev):
super().paintEvent(ev)
font_size = int(self.sceneRect().height() / 25)
font = QFont()
font.setPixelSize(font_size)
self.label.setFont(font)
self.label.move(10, 0)
self.label.setText(self.title)
self.label.adjustSize()
class Tool(pg.GraphicsWindow):
def __init__(self, pos, edges):
super(Tool, self).__init__()
# Grid initialization
self.setBackground((0, 0, 0, 60))
self.layoutgb = QtWidgets.QGridLayout()
self.layoutgb.setHorizontalSpacing(1)
self.layoutgb.setVerticalSpacing(1)
self.layoutgb.setContentsMargins(1, 1, 1, 1)
self.setLayout(self.layoutgb)
self.layoutgb.setColumnStretch(0, 2)
self.layoutgb.setColumnStretch(1, 10)
self.layoutgb.setColumnStretch(2, 10)
self.layoutgb.setRowStretch(0, 10)
self.layoutgb.setRowStretch(1, 10)
self.graph_2d = Graph2D(pos, edges, title = "2D Layout")
self.layoutgb.addWidget(self.graph_2d, 0, 2)
self.graph_2d.setBackground('w')
if __name__ == '__main__':
# start tool
win = Tool(test_data, test_edges)
win.showMaximized()
pg.exec()
```
### Expected behavior
The graph should be plotted, with points displaying the nodes and lines connecting the nodes that have a connection according to the edgelist:
<img width="563" alt="2d_problem2" src="https://github.com/pyqtgraph/pyqtgraph/assets/55921054/0d80696e-964e-484d-bd7d-25cfc0dfbe88">
### Real behavior
The graph is plotted according to the expected behavior above. However, additional horizontal lines appear on the screen:
<img width="577" alt="2d_problem" src="https://github.com/pyqtgraph/pyqtgraph/assets/55921054/5019a510-396c-4b65-8552-a3907089fab4">
After some experimentation I noticed that these horizontal lines appear whenever two nodes have the EXACT same coordinates. Jittering the duplicate coordinates removed all horizontal lines. In the test case above the results differ slightly from my actual code: I noticed that, due to rounding inconsistencies, some node coordinates differed only after 8 decimals or more, meaning they were not EXACTLY equal but still produced horizontal lines. I therefore assume there is some tolerance: if two nodes have the same coordinates (within this tolerance), the horizontal line is produced.
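The jitter workaround hinted at in the commented-out code can be written as a small helper (a sketch of the workaround, not a pyqtgraph fix); `decimals` controls the tolerance at which two nodes count as colliding:

```python
import numpy as np

def jitter_duplicates(pos, eps=1e-6, decimals=7):
    """Nudge nodes that share (near-)identical coordinates so that
    GraphItem never sees two nodes at exactly the same position."""
    pos = np.asarray(pos, dtype=float).copy()
    seen = {}
    for i in range(len(pos)):
        key = tuple(np.round(pos[i], decimals))
        n = seen.get(key, 0)
        if n:                      # repeat of an earlier node: shift it slightly
            pos[i] = pos[i] + eps * n
        seen[key] = n + 1
    return pos
```

Passing the jittered array to `setData(pos=..., adj=...)` reproduces the workaround described above.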
### Tested environment(s)
* PyQtGraph version: '0.12.4'
* Qt Python binding: 'PyQt5 5.15.6 Qt 5.15.2'
* Python version: 3.9
* NumPy version: '1.22.4'
* Operating system: Windows 11
* Installation method: pip
| closed | 2023-07-11T12:58:22Z | 2024-02-16T02:58:09Z | https://github.com/pyqtgraph/pyqtgraph/issues/2772 | [] | simonvw95 | 6 |
huggingface/transformers | machine-learning | 36,125 | Transformers | ### Model description
Transformer Practice
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | closed | 2025-02-10T22:18:45Z | 2025-02-11T13:52:00Z | https://github.com/huggingface/transformers/issues/36125 | [
"New model"
] | HemanthVasireddy | 1 |
tfranzel/drf-spectacular | rest-api | 397 | Allow json/yaml schema response selection via query parameter | When using ReDoc or SwaggerUI it could be helpful if users could view the JSON and YAML schema in the browser. Currently the `/schema` URL selects the response format based on content negotiation. I am not sure how browsers handle this and what the defaults are; for me Chrome seems to default to YAML. I prefer JSON, and in general I think it would be good to be able to choose the format via a query parameter, similar to what was possible in `drf-yasg`.
I know DRF has some possibility to append a `.json` suffix, but I couldn't get it to work for the root/schema URL.
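The requested behaviour boils down to letting a query parameter override content negotiation; a minimal pure-Python sketch of that precedence (function name hypothetical, not drf-spectacular API):

```python
def choose_schema_format(query_params, accept_header="", default="json"):
    """Pick the schema format: an explicit ?format= query parameter wins,
    then the Accept header, then the configured default."""
    fmt = query_params.get("format")
    if fmt in ("json", "yaml"):
        return fmt
    if "yaml" in accept_header:
        return "yaml"
    if "json" in accept_header:
        return "json"
    return default
```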
Or maybe spectacular could use the default based on the DRF settings:
```python
'DEFAULT_RENDERER_CLASSES': (
'rest_framework.renderers.JSONRenderer',
),
``` | closed | 2021-05-21T08:01:03Z | 2021-05-27T18:15:00Z | https://github.com/tfranzel/drf-spectacular/issues/397 | [] | valentijnscholten | 2 |
FujiwaraChoki/MoneyPrinter | automation | 157 | [Feature request] Adding custom script | **Describe the solution you'd like**
Frontend option to add custom script for video
| closed | 2024-02-10T16:19:45Z | 2024-02-11T09:46:42Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/157 | [] | Tungstenfur | 1 |
rthalley/dnspython | asyncio | 1,064 | License specification is missing | I see that some time ago someone already [raised this question](https://github.com/rthalley/dnspython/pull/1038).
I just pulled the latest package from PyPI and it still doesn't have license information:
<img width="516" alt="Screenshot 2024-03-05 at 1 26 49 PM" src="https://github.com/rthalley/dnspython/assets/31595973/2acaa053-61fc-4ca6-b2ec-bce8059a1c67">
I see that on the master branch the license is present in `pyproject.toml`, but for some reason `pip show` still reports it as empty.
I think a license is one of the things that should be specified in the package metadata. | closed | 2024-03-05T06:33:28Z | 2024-03-05T13:59:23Z | https://github.com/rthalley/dnspython/issues/1064 | [
"Bug"
] | freakaton | 2 |
encode/databases | sqlalchemy | 73 | Accessing Record values by dot notation | If I am not missing anything, is there a way to get PG Record values using dot notation instead of dict-like syntax?
For now we have to use something like `user['hashed_password']`, which breaks the `Readability counts.` line of the Zen of Python :)
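In the meantime a thin read-only wrapper gives the dot syntax today (a sketch, not part of the library):

```python
class AttrRecord:
    """Expose a mapping-like record's keys as attributes: user.hashed_password."""
    def __init__(self, record):
        self._record = record
    def __getattr__(self, name):
        try:
            return self._record[name]
        except KeyError:
            raise AttributeError(name) from None
```

`AttrRecord(user).hashed_password` then reads like the wished-for syntax.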
Would be great to use just `user.hashed_password`. | closed | 2019-03-26T10:26:15Z | 2019-03-26T10:34:35Z | https://github.com/encode/databases/issues/73 | [] | Spacehug | 1 |
keras-team/keras | tensorflow | 20,189 | Different Keras versions have numerical deviations when using a pretrained model | The following code produces output deviations between Keras 3.3.3 and Keras 3.5.0.
```python
#download model
from modelscope import snapshot_download
base_path = 'q935499957/Qwen2-0.5B-Keras'
import os
dir = 'models'
try:
os.mkdir(dir)
except:
pass
model_dir = snapshot_download(base_path,local_dir=dir)
#config
import os
os.environ["KERAS_BACKEND"] = "torch"
import keras
keras.config.set_dtype_policy("bfloat16")
from transformers import AutoTokenizer
import numpy as np
from bert4keras3.models import build_transformer_model,Llama
from bert4keras3.snippets import sequence_padding
base_path = dir+'/'
config_path = base_path+'config.json'
weights_path = base_path+'QWen.weights.h5'  # save path: expand_lm.weights.h5
dict_path = base_path+'qwen_tokenizer'
tokenizer = AutoTokenizer.from_pretrained(dict_path)
#define a model to print middle tensor
class Llama_print(Llama):
def apply_main_cache_layers(self, inputs, index,self_cache_update_index,
cross_cache_update_index=None,
attention_mask=None,position_bias=None,
):
print(inputs[0][:,:,:8])
print(index)
print(inputs[0].shape)
print('-'*50)
return super().apply_main_cache_layers(inputs, index,self_cache_update_index,
cross_cache_update_index,
attention_mask,position_bias)
Novel = build_transformer_model(
config_path,
keras_weights_path=weights_path,
model=Llama_print,
with_lm=True,
return_keras_model=False,
)
x = np.array([tokenizer.encode('hello,')+[0]])
print(Novel.cache_call([x],input_lengths=[3],
end_token=-1,search_mode='topp',k=1))
```
This is a LLaMA-like pre-trained model. The code above prints the intermediate tensors during the prefill process and the decode process.
With the exact same code, the intermediate inputs during the prefill process are completely different between the two versions. In the decode phase, even when the input is the same, significant differences appear between the two versions' outputs as the iterations proceed.
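To quantify the drift instead of eyeballing the printouts, the captured activations (converted to CPU float arrays first) can be compared with a small generic helper; this is a sketch, not part of the report:

```python
import numpy as np

def max_abs_diff(a, b):
    """Largest element-wise deviation between two activation dumps."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    assert a.shape == b.shape, "compare activations of the same layer/step"
    return float(np.max(np.abs(a - b)))
```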
keras 3.3.3 print
```
#prefill
tensor([[[ 0.0164, 0.0070, -0.0019, -0.0013, 0.0156, 0.0074, -0.0055,
-0.0139],
[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160],
[-0.0204, -0.0093, 0.0121, 0.0091, -0.0065, -0.0225, 0.0149,
0.0108]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.0459, -0.0967, -0.0270, 0.0452, 0.2500, -0.1387, 0.1094,
-0.1436],
[ 0.0031, -0.0479, 0.0107, -0.0291, -0.0869, 0.0549, 0.0579,
0.0618],
[-0.1099, 0.0183, 0.1309, -0.1406, 0.0204, -0.0154, 0.2656,
0.0669]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.3398, -0.2988, 0.1143, -0.2109, 0.5625, 0.0869, -0.3281,
-0.1465],
[ 0.1895, -0.1562, -0.0292, -0.1348, 0.0283, 0.0452, 0.2734,
0.0396],
[ 0.0127, -0.0498, 0.0388, -0.1484, 0.0791, 0.1118, 0.2578,
0.0879]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-9.2188e-01, 1.7109e+00, 3.3281e+00, -2.5000e+00, -2.0312e-01,
-4.7070e-01, -7.1250e+00, 3.7891e-01],
[ 6.5918e-02, -3.2031e-01, -2.0312e-01, 1.2207e-01, -1.2598e-01,
1.7090e-03, 9.2773e-02, -1.6699e-01],
[-1.6846e-02, -1.9531e-01, -2.1875e-01, 1.4648e-02, 7.3242e-04,
6.0303e-02, 4.2773e-01, 2.3438e-02]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
3
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-1.1719, 1.4062, 3.2031, -1.6328, -0.8047, -1.0938, -7.9062,
1.2266],
[ 0.3594, -0.1025, 0.0869, 0.3496, -0.0132, 0.0515, 0.2168,
0.1016],
[ 0.0449, -0.2910, -0.2305, 0.0383, 0.1592, -0.1016, 0.6328,
0.0190]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-2.1562e+00, 2.0156e+00, 4.3125e+00, 9.5312e-01, 2.7344e-01,
-1.8750e+00, -1.3875e+01, 2.4062e+00],
[ 4.5703e-01, -2.6172e-01, -2.4414e-02, 3.6133e-01, 1.6016e-01,
1.1768e-01, 4.1992e-01, -4.5898e-02],
[ 9.6680e-02, -4.1016e-01, -2.8906e-01, 7.9346e-03, -1.5430e-01,
-1.5430e-01, 4.7266e-01, -2.6562e-01]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
5
torch.Size([1, 3, 896])
#decode
--------------------------------------------------
tensor(1, device='cuda:0', dtype=torch.int32)
tensor([[[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.0031, -0.0481, 0.0107, -0.0291, -0.0869, 0.0549, 0.0579,
0.0618]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.1895, -0.1572, -0.0299, -0.1367, 0.0283, 0.0452, 0.2754,
0.0405]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.0654, -0.3203, -0.2041, 0.1221, -0.1260, 0.0039, 0.0933,
-0.1660]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.3574, -0.0986, 0.0898, 0.3516, -0.0137, 0.0518, 0.2158,
0.1064]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 1, 896])
--------------------------------------------------
```
keras 3.5.0 print
```
#prefill
tensor([[[-0.0096, 0.0126, -0.0063, 0.0044, 0.0121, 0.0038, 0.0104,
-0.0009],
[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160],
[-0.0204, -0.0093, 0.0121, 0.0091, -0.0065, -0.0225, 0.0149,
0.0108]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.1807, 0.0674, -0.3926, -0.0278, 0.2520, -0.0840, -0.0669,
-0.3047],
[-0.0072, -0.0415, 0.0123, -0.0146, -0.1270, 0.0679, 0.0610,
-0.0205],
[-0.1279, 0.0349, 0.2539, -0.1611, -0.0225, 0.0275, 0.1338,
0.0386]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 5.6250e-01, -2.3633e-01, -1.0781e+00, -1.2988e-01, 3.4180e-01,
3.7109e-01, -3.1250e-01, -1.9531e-01],
[ 1.2598e-01, -1.2695e-02, -7.1289e-02, -1.3672e-01, 3.3203e-02,
1.4941e-01, 1.9922e-01, -2.1875e-01],
[-9.5215e-02, -5.9570e-02, 2.0117e-01, -3.2031e-01, 3.6621e-04,
5.8350e-02, 1.6504e-01, -8.9355e-02]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
2
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 0.4277, 0.6680, 2.3750, -2.8750, -0.5039, 0.0742, -6.5625,
0.4082],
[ 0.2256, -0.3047, -0.0349, -0.0859, 0.1191, 0.2334, 0.3262,
-0.0088],
[-0.1025, -0.0918, 0.3105, -0.2227, -0.0162, 0.2715, 0.4746,
0.0371]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 0.1719, 0.3652, 2.2812, -2.0156, -1.0938, -0.5547, -7.3438,
1.2500],
[ 0.5938, -0.3047, -0.0126, -0.0981, 0.2676, 0.0479, 0.0771,
0.1455],
[ 0.2051, -0.2188, 0.0391, -0.2949, 0.2539, 0.0566, 0.4355,
0.0227]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-8.0469e-01, 9.8438e-01, 3.4062e+00, 6.0938e-01, -7.8125e-03,
-1.3438e+00, -1.3375e+01, 2.4219e+00],
[ 6.3672e-01, -5.7422e-01, 2.8931e-02, -3.1250e-01, 3.2422e-01,
-6.7871e-02, 4.0430e-01, -4.0039e-02],
[ 3.5156e-01, -4.4531e-01, -1.8066e-02, -2.2070e-01, 1.1377e-01,
3.0884e-02, 4.5508e-01, 1.4160e-01]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
#decode
--------------------------------------------------
tensor(1, device='cuda:0', dtype=torch.int32)
tensor([[[-0.0325, -0.0471,  0.0239, -0.0009,  0.0129,  0.0027,  0.0299,
           0.0160]]], device='cuda:0', dtype=torch.bfloat16,
       grad_fn=<SliceBackward0>)
0
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[-0.0072, -0.0415, 0.0122, -0.0146, -0.1270, 0.0679, 0.0610,
-0.0205]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.1230, -0.0083, -0.0713, -0.1260, 0.0293, 0.1436, 0.2051,
-0.2090]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.2266, -0.3008, -0.0327, -0.0791, 0.1143, 0.2285, 0.3320,
-0.0068]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.5977, -0.2930, -0.0059, -0.0884, 0.2637, 0.0449, 0.0889,
0.1465]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.6367, -0.5586, 0.0376, -0.3047, 0.3242, -0.0654, 0.4277,
-0.0312]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
5
torch.Size([1, 1, 896])
--------------------------------------------------
```
| closed | 2024-08-30T11:21:08Z | 2024-08-31T07:38:23Z | https://github.com/keras-team/keras/issues/20189 | [] | pass-lin | 2 |
huggingface/transformers | pytorch | 36,472 | Dtensor support requires torch>=2.5.1 | ### System Info
torch==2.4.1
transformers@main
### Who can help?
#36335 introduced an import on Dtensor https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L44
but this doesn't exist on torch==2.4.1, and there is no guard around this import even though setup.py lists torch>=2.0.
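For reference, a guard along these lines would keep the import from hard-failing on older torch (a sketch — the exact import path and the 2.5 cutoff are my assumptions about where DTensor lives, not transformers code):

```python
def dtensor_available(torch_version: str) -> bool:
    """Return True if this torch build is new enough for the DTensor import.

    Parses version strings like "2.4.1" or "2.5.1+cu121" and compares the
    (major, minor) pair against the assumed 2.5 minimum.
    """
    base = torch_version.split("+")[0]          # drop local tags like "+cu121"
    major, minor = (int(part) for part in base.split(".")[:2])
    return (major, minor) >= (2, 5)


# Hypothetical use in modeling_utils.py, replacing the unconditional import:
#
#     import torch
#     if dtensor_available(torch.__version__):
#         from torch.distributed.tensor import DTensor
#     else:
#         DTensor = None  # callers that need DTensor check for None and raise
```

Alternatively, bump the torch floor in setup.py to >=2.5.1 so the unconditional import matches the declared requirement.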
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
install torch==2.4.1
install transformers@main
attempt to load any pretrained model
see axolotl ci https://github.com/axolotl-ai-cloud/axolotl/actions/runs/13578637245/job/37960393969
### Expected behavior
regular functionality so import from transformers doesn't fail | closed | 2025-02-28T05:02:22Z | 2025-03-05T10:27:02Z | https://github.com/huggingface/transformers/issues/36472 | [
"bug"
] | winglian | 6 |
scikit-tda/kepler-mapper | data-visualization | 159 | Target space of lens function something other than R? | The mapper algorithm starts with a real valued function f:X->R, where X is the data set and R is 1d Euclidean space.
According to the original paper, the target space of the lenses (called filters in the paper) may be Euclidean space, but it might also be a circle, a torus, a tree or another metric space.
My question is whether with the Kepler Mapper we can implement the mapper algorithm with the target space being, e.g. a circle (the simplest next case after R)?
[
According to the paper, the way this is done (if I understood correctly the introduction of the paper where they allude to this) is by mapping the topological graph generated to the desired target space. I am not sure if I understand exactly what this means...
]
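For concreteness, the closest workaround I can think of is to use the angle itself as a 1-D lens (plain numpy sketch; the commented kmapper call is my guess at how it would be fed in, and note an interval cover on [0, 1] would not be glued at the endpoints the way a true circular cover is):

```python
import numpy as np

def circular_lens(X: np.ndarray) -> np.ndarray:
    """Map each 2-D point to its angle on the circle, scaled to [0, 1]."""
    theta = np.arctan2(X[:, 1], X[:, 0])            # values in (-pi, pi]
    return ((theta + np.pi) / (2 * np.pi)).reshape(-1, 1)

# Hypothetical kmapper usage (not run here):
# import kmapper as km
# mapper = km.KeplerMapper()
# graph = mapper.map(circular_lens(X), X,
#                    cover=km.Cover(n_cubes=10, perc_overlap=0.3))
```

A proper circular target space would need a cover whose first and last intervals overlap, which is exactly what I'm asking whether Kepler Mapper supports.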
Thanks for any help! | closed | 2019-03-19T13:02:09Z | 2019-04-10T08:37:40Z | https://github.com/scikit-tda/kepler-mapper/issues/159 | [] | karinsasaki | 3 |
dmlc/gluon-nlp | numpy | 1,473 | [Benchmark] Improve NLP Backbone Benchmark | ## Description
In GluonNLP, we introduced the benchmarking script in https://github.com/dmlc/gluon-nlp/tree/master/scripts/benchmarks.
The goal is to track the training + inference latency of common NLP backbones so that we can choose the appropriate ones for our task. This will help users train + deploy models with AWS.
Currently, we covered:
- Huggingface/Transformers-based backbone with FP32 + FP16 training / inference. For FP16 training, we are not profiling against the AMP-based solution, so this gives PyTorch an edge, which we need to fix.
- MXNet 2.0-nightly version (only for community use) + GluonNLP 1.0 with FP32 + FP16 (amp) training / inference.
- TVM FP32 inference. Due to some recent upgrade of the code base, this is currently broken.
I will share the following action items that I feel are worth doing:
### Short-term Bug-fix + Improvement
- [ ] Fix the FP16 training benchmark in Huggingface/Transformer to use AMP in PyTorch
- [ ] Fix the TVM benchmark. This is also tracked in https://github.com/dmlc/gluon-nlp/issues/1425
- [ ] Add FP16 inference to TVM benchmark.
- [ ] Turn on einsum acceleration in MXNet-based benchmark. This is added in https://github.com/apache/incubator-mxnet/pull/18921
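For the first item, the fix amounts to timing the standard torch.cuda.amp recipe instead of a hand-rolled FP16 cast; a minimal sketch of the timing side (the AMP lines in the comment are the usual PyTorch recipe, not our benchmark script as written):

```python
import time
from contextlib import contextmanager

@contextmanager
def timer(results: dict, key: str):
    """Accumulate wall-clock seconds spent in a benchmarked region under `key`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[key] = results.get(key, 0.0) + (time.perf_counter() - start)

# Hypothetical use in the HF FP16 training benchmark:
#
#     scaler = torch.cuda.amp.GradScaler()
#     with timer(results, "fp16_amp_train_step"):
#         with torch.cuda.amp.autocast():
#             loss = model(**batch).loss
#         scaler.scale(loss).backward()
#         scaler.step(optimizer)
#         scaler.update()
```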
### Automation + Visualization
- [x] Support launching benchmark job with AWS Batch. Currently tracked in https://github.com/dmlc/gluon-nlp/pull/1471.
- [ ] Automate benchmarking process via Github actions.
- [ ] Support visualization of benchmark results
### Longer-term Backbone Benchmarking Effort
- [ ] Add JAX/flax-based solution, which is internally using XLA.
- [ ] Support AutoScheduler in TVM benchmark
- [ ] Enable ONNX + TensorRT. This is considered the fastest solution for conducting NLP inference.
### Other longer-term efforts
- [ ] Support benchmarks for data loaders.
- [ ] Support common end-to-end training benchmarks like the SQuAD 2.0 finetuning. We may focus on single-instance-based benchmarks.
@dmlc/gluon-nlp-committers | open | 2021-01-09T23:00:58Z | 2021-01-13T17:30:16Z | https://github.com/dmlc/gluon-nlp/issues/1473 | [
"enhancement",
"help wanted",
"performance"
] | sxjscience | 0 |
sebastianruder/NLP-progress | machine-learning | 82 | Where do the ELMo WSD results come from? | The only result in the paper is the averaged/overall result. | closed | 2018-08-24T10:08:10Z | 2018-08-29T13:50:44Z | https://github.com/sebastianruder/NLP-progress/issues/82 | [] | frankier | 3 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 362 | Syntax Error when deploying Flask App to sever | Ubuntu: 20.04
Python: 3.8.5
Flask 1.1.2
marshmallow: 3.10.0
SQLAlchemy: 1.3.22
marshmallow-sqlalchemy: 0.24.1
I'm following [this tutorial](https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps) on setting a Flask app up on a server. Everything seemingly works, however I receive a 500 error when attempting to view the app in the browser. Checking the logs, I see the following:
```
==> /var/log/apache2/error.log <==
[Tue Jan 19 13:03:07.447664 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] mod_wsgi (pid=82542): Failed to exec Python script file '/var/www/ProductService/ProductService.wsgi'.
[Tue Jan 19 13:03:07.447962 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] mod_wsgi (pid=82542): Exception occurred processing WSGI script '/var/www/ProductService/ProductService.wsgi'.
[Tue Jan 19 13:03:07.448121 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] Traceback (most recent call last):
[Tue Jan 19 13:03:07.448247 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] File "/var/www/ProductService/ProductService.wsgi", line 9, in <module>
[Tue Jan 19 13:03:07.448419 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] from ProductService import app as application
[Tue Jan 19 13:03:07.448520 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] File "/var/www/ProductService/ProductService/__init__.py", line 2, in <module>
[Tue Jan 19 13:03:07.448630 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] from flask_marshmallow import Marshmallow
[Tue Jan 19 13:03:07.448733 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] File "/var/www/ProductService/venv/lib/python3.8/site-packages/flask_marshmallow/__init__.py", line 12, in <module>
[Tue Jan 19 13:03:07.448816 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] from marshmallow import fields as base_fields, exceptions, pprint
[Tue Jan 19 13:03:07.448884 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] File "/var/www/ProductService/venv/lib/python3.8/site-packages/marshmallow/__init__.py", line 1, in <module>
[Tue Jan 19 13:03:07.448959 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] from marshmallow.schema import Schema, SchemaOpts
[Tue Jan 19 13:03:07.449046 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] File "/var/www/ProductService/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 132
[Tue Jan 19 13:03:07.449110 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] klass: type,
[Tue Jan 19 13:03:07.449157 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] ^
[Tue Jan 19 13:03:07.449200 2021] [wsgi:error] [pid 82542:tid 140296339048192] [client 86.25.43.81:50302] SyntaxError: invalid syntax
```
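For what it's worth, the line the traceback chokes on (`klass: type,` in `schema.py`) is an ordinary function parameter annotation, which any Python 3 interpreter parses fine — so I suspect mod_wsgi is invoking a different (Python 2) interpreter than my venv. A quick self-check (my own sketch, not marshmallow's code):

```python
# marshmallow's schema.py uses parameter annotations like the one below;
# this parses and runs on Python 3 but is a SyntaxError on Python 2:
def resolve(klass: type, default: str = "x") -> str:
    return f"{klass.__name__}:{default}"

print(resolve(int))  # -> int:x on Python 3; Python 2 fails before running
```

If that script raises a SyntaxError under the interpreter mod_wsgi is configured with, the fix is pointing mod_wsgi at the Python 3.8 venv rather than anything in marshmallow.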
Any help would be greatly appreciated! | closed | 2021-01-19T13:07:43Z | 2021-01-21T11:59:11Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/362 | [] | klongbeard | 1 |