repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
dask/dask | pandas | 11,566 | Should `dask.persist` raise on non-persistable objects? | # Problem
Until [recently](https://github.com/dask/distributed/issues/8948), `dask.persist()` supported both persistable Dask collections and ordinary Python objects as inputs. The Dask collections would be persisted (as expected) while the Python objects would be handled transparently and returned as-is in the output.
To the best of my knowledge, this behavior is not documented anywhere, and there is only a single test for this (`test_distributed.py::test_persist_nested`).
To me, this behavior seems odd: I would argue that it's reasonable for a user to expect that `dask.persist(some_large_pandas_dataframe)` actually persists that large object on a `distributed` cluster to make it available. The current behavior can also hide user errors where the user intends to persist a collection but instead persists `Future`s, e.g., by calling `persist(df.compute())` instead of `persist(df)`.
# Possible solution
Instead of fixing this undocumented behavior, I suggest that `persist` should raise on inputs that are not persistable Dask collections. This clarifies the intended and supported behavior, limits the amount of hidden magic, and allows us to raise meaningful errors on anti-patterns like persisting `Future`s.
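As a purely illustrative sketch of what I mean (only `is_dask_collection` is existing dask API; the raise itself is hypothetical):
```python
from dask.base import is_dask_collection

def persist(*args, **kwargs):
    for arg in args:
        if not is_dask_collection(arg):
            raise TypeError(
                f"persist() received a non-persistable object of type "
                f"{type(arg).__name__}; pass the collection itself, e.g. "
                f"persist(df) rather than persist(df.compute())"
            )
    ...  # existing persist logic unchanged
```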
# Caveat
This would break current undocumented Dask behavior, and it's unclear how much users or downstream libraries rely on this. | open | 2024-11-25T18:20:34Z | 2025-02-24T02:01:27Z | https://github.com/dask/dask/issues/11566 | [
"needs attention",
"needs triage"
] | hendrikmakait | 3 |
TheKevJames/coveralls-python | pytest | 238 | Unable to submit coverage during re-run Travis jobs | Whenever I try to re-run a Travis CI job, submitting to coveralls.io fails with:
```422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs```
Additional debugging indicates this is due to the job id not being unique, as the submit/post returns an error of the form:
```service_job_id (XXXXXXX) must be unique for Travis Jobs not supplying a Coveralls Repo Token```
| closed | 2020-11-03T11:35:33Z | 2021-01-12T01:59:56Z | https://github.com/TheKevJames/coveralls-python/issues/238 | [] | r0ps3c | 1 |
MaartenGr/BERTopic | nlp | 1,913 | Can I merge_topics() after reduce_outliers() and update_topics() | Thank you so much for the great job.
In my case, there are too many outliers here. I want to reduce the outliers, but at the same time I need to merge topics.
My question is, can I merge topics after reducing the outliers and updating the topics. The code is below:
```
new_topics = topic_model.reduce_outliers(docs, topics)
topic_model.update_topics(docs, topics=new_topics)
topic_model.merge_topics(docs, [...])
topic_info = topic_model.get_topic_info()
```
I see this warning in the [official documentation](https://maartengr.github.io/BERTopic/getting_started/outlier_reduction/outlier_reduction.html#chain-strategies)
"In both cases, it is important to realize that updating the topics this way may lead to errors if topic reduction or topic merging techniques The reason for this is that when you assign a -1 document to topic 1 and another -1 document to topic 2, it is unclear how you map the -1 documents. Is it matched to topic 1 or 2."
It looks like topics should not be merged after reducing outliers and updating topics. But executing the code above doesn't seem to report an error.
What should I do to achieve my goal? For example, should I merge the topics first, then reduce the outliers, then update the topics? Is it right to put updating topics at the end?
```
topic_model.merge_topics(docs, [...])
new_topics = topic_model.reduce_outliers(docs, topics)
topic_model.update_topics(docs, topics=new_topics)
```
Thank you very much. | closed | 2024-04-06T01:43:05Z | 2024-04-11T13:10:04Z | https://github.com/MaartenGr/BERTopic/issues/1913 | [] | lynn1885 | 4 |
httpie/http-prompt | rest-api | 215 | An error occurs during installation | Encountered a problem while installing http-prompt using yay on Manjaro.
I'm using python 3.10. Installation scripts use a folder which path contains python3.10. But when running install_scripts command, the path that is used contains 3.9 instead.
It looks like this:
```
==> Starting package()...
running install
...
creating /home/chobaz/.cache/yay/http-prompt/pkg/http-prompt/usr/lib/python3.10/site-packages/http_prompt
...
running install_scripts
Installing http-prompt script to /home/chobaz/.cache/yay/http-prompt/pkg/http-prompt/usr/bin
sed: can't read /home/chobaz/.cache/yay/http-prompt/pkg/http-prompt/usr/lib/python3.9/site-packages/http_prompt-2.1.0-py3.9.egg-info/requires.txt: No such file or directory
==> ERROR: A failure occurred in package().
Aborting...
-> error making: http-prompt
```
| open | 2023-01-15T16:42:24Z | 2023-01-15T16:42:24Z | https://github.com/httpie/http-prompt/issues/215 | [] | Chobazkun | 0 |
HumanSignal/labelImg | deep-learning | 822 | Deleting an image resets the file list without going to next image [suggested fix included] | In the latest version, when I delete an image, it resets my selection back to the first image.
This change seems to work:
```python
def delete_image(self):
    delete_path = self.file_path
    if delete_path is not None:
        idx = self.cur_img_idx
        if os.path.exists(delete_path):
            os.remove(delete_path)
        self.import_dir_images(self.last_open_dir)
        self.cur_img_idx = idx
        self.cur_img_idx -= 1
        self.open_next_image()
```
 | open | 2021-11-24T19:59:13Z | 2023-02-22T10:31:05Z | https://github.com/HumanSignal/labelImg/issues/822 | [] | CrucialDrew | 5 |
deezer/spleeter | tensorflow | 733 | [Discussion] Pretrained Spleeter models as layer in a Keras model | Is it possible to use the pretrained Spleeter models as a layer in a tensorflow keras model built with the functional api?
I currently use Spleeter in my preprocessing pipeline for a CRNN classifier, but I had the idea to move it into my model instead of doing the source separation beforehand.
```
┌───────┐ ┌──────────┐ ┌────────────┐
│ Input ├────►│ Spleeter ├────►│ Classifier │
└───────┘ └──────────┘ └────────────┘
```
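One rough shape I've been considering, purely as a sketch: since `Separator.separate` works on NumPy waveforms, it would have to go through `tf.py_function` rather than being a native graph op (this assumes batch size 1 and ignores output-shape inference):
```python
import tensorflow as tf
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")

def separate_vocals(waveform):
    # Runs eagerly outside the graph; expects a (samples, 2) float array.
    return separator.separate(waveform.numpy())["vocals"].astype("float32")

inputs = tf.keras.Input(shape=(None, 2))
vocals = tf.keras.layers.Lambda(
    lambda w: tf.py_function(separate_vocals, [w[0]], tf.float32)
)(inputs)
# ...the CRNN classifier layers would consume `vocals` from here
```
 | open | 2022-02-24T13:18:52Z | 2022-04-23T23:46:16Z | https://github.com/deezer/spleeter/issues/733 | [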
"question"
] | oststef | 5 |
modelscope/data-juicer | data-visualization | 340 | [Bug]: language_id_score_filter affect the speed of other operators | ### Before Reporting 报告之前
- [X] I have pulled the latest code of main branch to run again and the bug still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引,并且在安装过程中没有错误发生。(否则,我们建议您使用Question模板向我们进行提问)
### Search before reporting 先搜索,再报告
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的bug报告。
### OS 系统
ubuntu
### Installation Method 安装方式
from source
### Data-Juicer Version Data-Juicer版本
v0.2.0
### Python Version Python版本
3.9
### Describe the bug 描述这个bug

如果process里有基于fasttext的language_id_score_filter算子时,会影响其它算子的运行速度,速度明显变慢
### To Reproduce 如何复现
`python tools/process_data.py --config configs/demo/process.yaml`
### Configs 配置信息
```
# Process config example for dataset
# global parameters
project_name: 'demo-process'
dataset_path: './demos/data/demo-dataset.jsonl' # path to your dataset directory or file
np: 48 # number of subprocess to process your dataset
text_keys: 'text'
export_path: './outputs/demo-process/demo-processed.jsonl'
# process schedule
# a list of several process operators with their arguments
process:
- chinese_convert_mapper: # convert Chinese between Traditional Chinese, Simplified Chinese and Japanese Kanji.
mode: 's2t' # choose the mode to convert Chinese: ['s2t', 't2s', 's2tw', 'tw2s', 's2hk', 'hk2s', 's2twp', 'tw2sp', 't2tw', 'tw2t', 'hk2t', 't2hk', 't2jp', 'jp2t']
- clean_email_mapper: # remove emails from text.
- clean_html_mapper: # remove html formats form text.
- clean_ip_mapper: # remove ip addresses from text.
- clean_links_mapper: # remove web links from text.
- clean_copyright_mapper: # remove copyright comments.
- punctuation_normalization_mapper: # normalize unicode punctuations to English punctuations.
- remove_bibliography_mapper: # remove bibliography from Latex text.
- remove_comments_mapper:
- remove_repeat_sentences_mapper: # remove repeat sentences in text samples.
lowercase: false # whether to convert sample text to lower case
ignore_special_character: true # whether to ignore special characters when judging repeated sentences. Special characters are all characters except Chinese characters, letters and numbers
min_repeat_sentence_length: 2 # sentences shorter than this length will not be deduplicated. If ignore_special_character is set to True, then special characters are not included in this length
- remove_specific_chars_mapper: # remove characters specified by users
chars_to_remove: '◆●■►▼▲▴∆▻▷❖♡□'
- alphanumeric_filter: # filter text with alphabet/numeric ratio out of specific range.
tokenization: false # whether to count the ratio of alphanumeric to the total number of tokens.
min_ratio: 0.0 # the min ratio of filter range
max_ratio: 0.9
- average_line_length_filter: # filter text with the average length of lines out of specific range.
min_len: 10 # the min length of filter range
max_len: 10000 # the max length of filter range
- character_repetition_filter: # filter text with the character repetition ratio out of specific range
rep_len: 10 # repetition length for char-level n-gram
min_ratio: 0.0 # the min ratio of filter range
max_ratio: 0.5
- language_id_score_filter:
lang: 'zh'
```
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_ | closed | 2024-07-04T04:07:45Z | 2024-07-05T06:44:17Z | https://github.com/modelscope/data-juicer/issues/340 | [
"bug"
] | simplew2011 | 3 |
albumentations-team/albumentations | deep-learning | 2,056 | [Documentation] Add explanation why you may get additional keyplints or Bounding boxes with Reflection Padding | To question on why the number of keypoints or bounding boxes increased https://github.com/albumentations-team/albumentations/issues/2055
Need to add clear explanation with example to docs about the behavior of transforms with Reflection padding | open | 2024-11-04T15:52:59Z | 2024-11-05T22:55:22Z | https://github.com/albumentations-team/albumentations/issues/2056 | [
"good first issue",
"documentation"
] | ternaus | 0 |
nltk/nltk | nlp | 3,355 | Broken link to NLTK team on Adding a Corpus page | The Adding a Corpus page - https://github.com/nltk/nltk/wiki/Adding-a-Corpus - there's the following text
wait for approval from someone in the NLTK team
The link - https://github.com/orgs/nltk/teams/team-nltk - is broken, gets a Page not found. | closed | 2025-01-09T16:01:22Z | 2025-03-10T11:51:20Z | https://github.com/nltk/nltk/issues/3355 | [] | trevorjwood | 6 |
automl/auto-sklearn | scikit-learn | 1,529 | `predict_proba` in classifier estimators is doing needless assertions | https://github.com/automl/auto-sklearn/blob/5e21e9cbd405eaef47b5e5d68cf092254ccffb51/autosklearn/estimators.py#L1453-L1465
There's a lot of assertion checking happening here, which can really eat into inference time. While the checks are helpful, they seem like they should really be enforced in testing instead.
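As an illustration of standard Python behavior (not auto-sklearn API): checks written as plain `assert` statements are stripped when the interpreter runs with `python -O`, so they can stay for development and cost nothing at inference time:
```python
import numpy as np

probas = np.array([[0.2, 0.8], [0.9, 0.1]])  # stand-in for predict_proba output

# `python -O` sets __debug__ to False and removes assert statements entirely:
assert probas.ndim == 2, "expected (n_samples, n_classes) probabilities"
assert (probas >= 0.0).all(), "probabilities must be non-negative"
```
 | open | 2022-06-23T15:08:05Z | 2023-11-05T10:39:16Z | https://github.com/automl/auto-sklearn/issues/1529 | [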
"enhancement"
] | eddiebergman | 7 |
tflearn/tflearn | tensorflow | 553 | How to use sparse tensor as input data properly? | Hi, all
I'm trying to input sparse data as X in order to use `embedding_lookup_sparse`, but it seems not that easy to add this feature to tflearn.
The largest problem I've found so far is the feeding logic: when I try to input a sparse tensor for X and a dense tensor for y, the feeding logic cannot handle batching properly. It requires X and y to have the same length (aka shape[0]), but since X is a sparse tensor, I have to build the batches myself when generating the SparseTensorValue, as sketched below.
So I want to know: what is the right way to do this in the current version of tflearn? Or is it just a "not supported yet" requirement?
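For reference, this is roughly the manual batching I mean (TF 1.x-era API; the `(column, value)` row layout is just a placeholder for however the sparse data is stored):
```python
import numpy as np
import tensorflow as tf  # TF 1.x-era API, matching tflearn

def sparse_batch(rows, n_features):
    # rows: one list of (column, value) pairs per sample in the batch
    indices, values = [], []
    for i, row in enumerate(rows):
        for col, val in row:
            indices.append((i, col))
            values.append(val)
    return tf.SparseTensorValue(
        np.array(indices, dtype=np.int64),
        np.array(values, dtype=np.float32),
        (len(rows), n_features),
    )

# e.g. two samples over 10 features:
batch_x = sparse_batch([[(0, 1.0), (3, 2.0)], [(7, 0.5)]], n_features=10)
```
The point is that the X batches have to be cut by hand before building each SparseTensorValue, while y stays a dense array of the same batch length.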
Thanks. | open | 2017-01-09T07:13:50Z | 2017-01-09T07:13:50Z | https://github.com/tflearn/tflearn/issues/553 | [] | lipixun | 0 |
NVlabs/neuralangelo | computer-vision | 6 | Number of epochs | Hi everyone, I'm training with the following command from the readme file:
```
EXPERIMENT=toy_example
GROUP=example_group
NAME=example_name
CONFIG=projects/neuralangelo/configs/custom/${EXPERIMENT}.yaml
GPUS=1 # use >1 for multi-GPU training!
torchrun --nproc_per_node=${GPUS} train.py \
--logdir=logs/${GROUP}/${NAME} \
--config=${CONFIG} \
--show_pbar
```
It's currently running and is at epoch 1200. I read the config files and there's a max_epoch parameter, which is set to 9999999999. I wanted to ask what the normal, optimal number of epochs is, and whether I should change anything in the code before running the command.
Thanks | closed | 2023-08-13T06:40:04Z | 2023-08-18T11:49:50Z | https://github.com/NVlabs/neuralangelo/issues/6 | [] | smtabatabaie | 1 |
apache/airflow | automation | 47,499 | TriggerDagRunOperator is failing for reason 'Direct database access via the ORM is not allowed in Airflow 3.0' | ### Apache Airflow version
3.0.0b2
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
TriggerDagRunOperator is trying to connect to the DB when 'wait_for_completion' is true.
[2025-03-07, 12:54:58] - Task failed with exception logger="task" error_detail=[{"exc_type":"RuntimeError","exc_value":"Direct database access via the ORM is not allowed in Airflow 3.0","syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":609,"name":"run"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":730,"name":"_execute_task"},{"filename":"/opt/airflow/airflow/models/baseoperator.py","lineno":168,"name":"wrapper"},{"filename":"/opt/airflow/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py","lineno":207,"name":"execute"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":100,"name":"wrapper"},{"filename":"/usr/local/lib/python3.9/contextlib.py","lineno":119,"name":"__enter__"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":40,"name":"create_session"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/supervisor.py","lineno":207,"name":"__init__"}]}]
[2025-03-07, 12:54:58] - Top level error logger="task" error_detail=[{"exc_type":"RuntimeError","exc_value":"Direct database access via the ORM is not allowed in Airflow 3.0","syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":817,"name":"main"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":786,"name":"finalize"},{"filename":"/opt/airflow/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py","lineno":80,"name":"get_link"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":100,"name":"wrapper"},{"filename":"/usr/local/lib/python3.9/contextlib.py","lineno":119,"name":"__enter__"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":40,"name":"create_session"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/supervisor.py","lineno":207,"name":"__init__"}]}]
### What you think should happen instead?
A DAG utilising TriggerDagRunOperator should pass.
### How to reproduce
Run the below DAG in AF3 beta2:
Controller DAG:
```python
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from pendulum import today
dag = DAG(
dag_id="trigger_controller_dag",
default_args={"owner": "airflow", "start_date": today('UTC').add(days=-2)},
schedule=None,
tags=["core"],
)
trigger = TriggerDagRunOperator(
task_id="test_trigger_dagrun",
trigger_dag_id="trigger_target_dag",
reset_dag_run=True,
wait_for_completion=True,
conf={"message": "Hello World"},
dag=dag,
)
```
Target DAG:
```python
from airflow.models import DAG
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.standard.operators.python import PythonOperator
from pendulum import today
dag = DAG(
dag_id="trigger_target_dag",
default_args={"start_date": today('UTC').add(days=-2), "owner": "Airflow"},
tags=["core"],
schedule=None, # This must be none so it's triggered by the controller
    is_paused_upon_creation=False,  # This must be set so other workers can pick this DAG up. Maybe it's a bug, idk
)
def run_this_func(**context):
print(
f"Remotely received value of {context['dag_run'].conf['message']} for key=message "
)
run_this = PythonOperator(
task_id="run_this",
python_callable=run_this_func,
dag=dag,
)
# You can also access the DagRun object in templates
bash_task = BashOperator(
task_id="bash_task",
bash_command='echo "Here is the message: $message"',
env={"message": '{{ dag_run.conf["message"] if dag_run else "" }}'},
dag=dag,
)
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-07T14:37:56Z | 2025-03-19T07:42:20Z | https://github.com/apache/airflow/issues/47499 | [
"kind:bug",
"priority:high",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 4 |
pydata/xarray | pandas | 9,698 | Nightly Hypothesis tests failed | [Workflow Run URL](https://github.com/pydata/xarray/actions/runs/11622006687)
<details><summary>Python 3.12 Test Summary</summary>
```
properties/test_index_manipulation.py::DatasetTest::runTest: ValueError: Cannot unstack MultiIndex containing duplicates. Make sure entries are unique, e.g., by calling ``.drop_duplicates('0')``, before unstacking.
Falsifying example:
state = DatasetStateMachine()
state.init_ds(var=Variable(data=array(['', '\x000'], dtype='<U2'), dims=['1'], attrs={}))
state.assert_invariants()
Draw 1: ['1']
> stacking ['1'] as 0
state.stack(create_index=True, data=data(...), newname='0')
state.assert_invariants()
Draw 2: '0'
> unstacking 0
state.unstack(data=data(...))
state.teardown()
Explanation:
These lines were always and only run by failing examples:
/home/runner/micromamba/envs/xarray-tests/lib/python3.12/site-packages/pandas/core/indexes/multi.py:1199
/home/runner/micromamba/envs/xarray-tests/lib/python3.12/site-packages/pandas/core/indexes/multi.py:1200
/home/runner/micromamba/envs/xarray-tests/lib/python3.12/site-packages/pandas/core/indexes/multi.py:1213
/home/runner/micromamba/envs/xarray-tests/lib/python3.12/site-packages/pandas/core/indexes/multi.py:1319
/home/runner/micromamba/envs/xarray-tests/lib/python3.12/site-packages/pandas/core/indexes/multi.py:154
(and 22 more with settings.verbosity >= verbose)
You can reproduce this example by temporarily adding @reproduce_failure('6.115.6', b'AXicY2BgZWRkYGSIYmRgYAAxQICRg4ERwgLSjIzMIAYbAA7CAH4=') as a decorator on your test case
```
</details>
| open | 2024-10-30T00:30:44Z | 2024-11-01T00:30:39Z | https://github.com/pydata/xarray/issues/9698 | [
"topic-hypothesis"
] | github-actions[bot] | 1 |
adamerose/PandasGUI | pandas | 195 | Very slow to load dataframe of 369936 rows x 2 columns | Hi all,
I have been trying to use your cool and nice pandasGUI and it is really a useful tool.
However, while loading the specified dataframe is quite fast, interacting with it is difficult: making an x,y line graph is almost impossible. I suggest running the Grapher tab in a separate thread, allowing the user to do something else in the meantime after pressing the Finish button.
Any trick to overcome this situation?
Thanks.
Francis | open | 2022-02-19T02:57:03Z | 2022-02-20T19:52:47Z | https://github.com/adamerose/PandasGUI/issues/195 | [] | FrancisThibaultNRC | 0 |
pytorch/vision | machine-learning | 8,751 | Building libtorchvision.so with nvJPEG | ### 🐛 Describe the bug
When I build `libtorchvision.so` with cmake, it ends up without `nvJPEG`.
How should I modify `CMakeLists.txt` in order to get this to work?
Hope you can help me :) /Soren
### Versions
0.20.0 | open | 2024-11-26T00:21:10Z | 2024-11-27T15:37:33Z | https://github.com/pytorch/vision/issues/8751 | [] | srasmussenvl | 1 |
hankcs/HanLP | nlp | 579 | http://ictclas.nlpir.org/nlpir/ How does HanLP compare to this, and what are the differences? | closed | 2017-07-10T16:02:29Z | 2017-07-12T10:11:46Z | https://github.com/hankcs/HanLP/issues/579 | [
"duplicated"
] | ExtremeYu | 1 | |
pytest-dev/pytest-randomly | pytest | 378 | Incompatible with `pytest-mypy-plugins` (3.10 version) | ### Python Version
3.8.11
### Package Version
3.10
### Description
I am trying to update `pytest-randomly` from `3.8` to `3.10`. But my test suite now fails:
```
============================= test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
Using --randomly-seed=2912292964
rootdir: /home/runner/work/returns/returns, configfile: setup.cfg
plugins: randomly-3.10.0, subtests-0.5.0, xdist-2.3.0, hypothesis-6.14.6, returns-0.16.0, mypy-plugins-1.7.0, anyio-3.3.0, forked-1.3.0
collected 830 items
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/_pytest/main.py", line 322, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/_pytest/main.py", line 333, in pytest_collection
INTERNALERROR> session.perform_collect()
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/_pytest/main.py", line 637, in perform_collect
INTERNALERROR> hook.pytest_collection_modifyitems(
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pytest_randomly/__init__.py", line 212, in pytest_collection_modifyitems
INTERNALERROR> _shuffle_by_class(list(group), seed),
INTERNALERROR> File "/home/runner/work/returns/returns/.venv/lib/python3.8/site-packages/pytest_randomly/__init__.py", line 239, in _shuffle_by_class
INTERNALERROR> klass_items.sort()
INTERNALERROR> TypeError: '<' not supported between instances of 'YamlTestItem' and 'YamlTestItem'
```
Link: https://github.com/dry-python/returns/pull/1021/checks?check_run_id=3322840792
I guess this happens because we use [`pytest-mypy-plugins`](https://github.com/typeddjango/pytest-mypy-plugins) where we define our tests in `yml` files, example: https://github.com/dry-python/returns/blob/master/typesafety/test_functions/test_tap.yml
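If it helps, one possible fix (hypothetical, I haven't tested it against the plugin) would be to sort by a stable key instead of relying on the items being orderable:
```python
# in pytest_randomly._shuffle_by_class, instead of klass_items.sort():
klass_items.sort(key=lambda item: item.nodeid)
```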
Steps to reproduce:
1. Clone https://github.com/dry-python/returns
2. Run `poetry install`
3. Run `poetry run pip install 'pytest-randomly==3.10.0'`
4. Run `poetry run pytest typesafety -p no:cov -o addopts="" --mypy-ini-file=setup.cfg`: https://github.com/dry-python/returns/blob/master/.github/workflows/test.yml#L67 | closed | 2021-08-13T16:21:18Z | 2021-08-13T21:05:51Z | https://github.com/pytest-dev/pytest-randomly/issues/378 | [] | sobolevn | 2 |
OWASP/Nettacker | automation | 62 | The scan methods in the readme need to be updated | The readme does not have the latest scan methods included.
| closed | 2018-03-09T13:47:22Z | 2018-04-17T21:41:38Z | https://github.com/OWASP/Nettacker/issues/62 | [
"enhancement",
"done"
] | shaddygarg | 1 |
deezer/spleeter | deep-learning | 303 | [Bug] IndexError: index 1 is out of bounds for axis 1 with size 1 | <!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
When I use ```spleeter separate``` a audio file, I met this error:
```prompt
Traceback (most recent call last):
File "/usr/local/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/usr/local/lib/python3.6/dist-packages/spleeter/__main__.py", line 54, in entrypoint
main(sys.argv)
File "/usr/local/lib/python3.6/dist-packages/spleeter/__main__.py", line 46, in main
entrypoint(arguments, params)
File "/usr/local/lib/python3.6/dist-packages/spleeter/commands/separate.py", line 45, in entrypoint
synchronous=False
File "/usr/local/lib/python3.6/dist-packages/spleeter/separator.py", line 191, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/usr/local/lib/python3.6/dist-packages/spleeter/separator.py", line 157, in separate
return self.separate_librosa(waveform, audio_descriptor)
File "/usr/local/lib/python3.6/dist-packages/spleeter/separator.py", line 145, in separate_librosa
stft = self.stft(waveform)
File "/usr/local/lib/python3.6/dist-packages/spleeter/separator.py", line 126, in stft
dl, dr = (data[:, :, 0].T, data[:, :, 1].T) if inverse else (data[:, 0], data[:, 1])
IndexError: index 1 is out of bounds for axis 1 with size 1
```
<!-- Give us a clear and concise description of the bug you are reporting. -->
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using `pip install spleeter`(spleeter-1.5.0)
2. Run as `spleeter separate -i 68481579482858.mp3 -o output/` (On Colab environment)
Attach: [68481579482858.zip](https://github.com/deezer/spleeter/files/4385132/68481579482858.zip)
3. Got this error
## Output
```bash
Share what your terminal says when you run the script (as well as what you would expect).
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Ubuntu 18.04 |
| Installation type | pip |
| RAM available | 12GB |
| Hardware spec | CPU |
## Additional context
<!-- Add any other context about the problem here, references, cites, etc.. -->
| closed | 2020-03-26T05:21:42Z | 2020-04-13T09:32:53Z | https://github.com/deezer/spleeter/issues/303 | [
"bug",
"invalid"
] | DGideas | 5 |
graphql-python/gql | graphql | 356 | Am I able to mock a response with gql? | I am trying to do some unit tests and I used requests_mock.
I got the error "Invalid or incomplete introspection result. Ensure that you are passing the 'data' attribute of an introspection response and no 'errors' were returned alongside".
I searched for mock-testing instructions in the docs but can't find anything. With Graphene, one seemed to be able to mock a test by patching it. Not sure if there is a go-to solution with this client?
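For context, this is the kind of setup I was hoping for (hedged sketch; the schema file, URL and query are placeholders): give the `Client` a local schema so it never runs introspection, and then `requests_mock` only has to answer the actual query.
```python
import requests_mock
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

with open("schema.graphql") as f:
    schema_str = f.read()  # local copy of the server schema

transport = RequestsHTTPTransport(url="https://example.com/graphql")
# Passing `schema` directly means no introspection request is made.
client = Client(transport=transport, schema=schema_str)

with requests_mock.Mocker() as m:
    m.post("https://example.com/graphql", json={"data": {"hello": "world"}})
    result = client.execute(gql("{ hello }"))
```
 | closed | 2022-08-26T01:50:51Z | 2022-08-31T00:43:50Z | https://github.com/graphql-python/gql/issues/356 | [] | Mimezzz | 4 |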
ultralytics/ultralytics | computer-vision | 19,082 | nms=true for exporting to onnx | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
i get this error
```
(yolo) root@workstation-016:/mnt/4T/Tohidi/object_detector_service# yolo export model=yolo11x.pt nms=true format=engine device=3
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:3 (NVIDIA H100 PCIe, 80995MiB)
YOLO11x summary (fused): 464 layers, 56,919,424 parameters, 0 gradients, 194.9 GFLOPs
Traceback (most recent call last):
  File "/opt/anaconda3/envs/yolo/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/model.py", line 740, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 354, in __call__
    y = NMSModel(model, self.args)(im) if self.args.nms and not coreml else model(im)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 1559, in forward
    extra_shape = pred.shape[-1] - (4 + self.model.nc)  # extras from Segment, OBB, Pose
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'DetectionModel' object has no attribute 'nc'
```
### Environment
```
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:0 (NVIDIA H100 80GB HBM3, 80995MiB)
Setup complete ✅ (255 CPUs, 1007.7 GB RAM, 1807.6/1831.2 GB disk)
OS Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.0
Install pip
RAM 1007.65 GB
Disk 1807.6/1831.2 GB
CPU AMD EPYC 7773X 64-Core Processor
CPU count 255
GPU NVIDIA H100 80GB HBM3, 80995MiB
GPU count 6
CUDA 12.4
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```
yolo export model=yolo11x.pt format=engine device=3 nms=true
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-05T12:00:05Z | 2025-02-06T02:43:54Z | https://github.com/ultralytics/ultralytics/issues/19082 | [
"bug",
"fixed",
"exports"
] | mohamad-tohidi | 2 |
ionelmc/pytest-benchmark | pytest | 143 | test_commit_info_error fails with new git | With git version 2.20.1:
```
=================================== FAILURES ===================================
____________________________ test_commit_info_error ____________________________
/build/python-pytest-benchmark/src/pytest-benchmark-3.2.0/tests/test_utils.py:123: in test_commit_info_error
assert info['error'].lower() == 'CalledProcessError(128, ' \
E assert "calledproces...ot set).\\n')" == "calledprocess...s): .git\\n')"
E Skipping 51 identical leading characters in diff, use -v to show
E - y (or any parent up to mount point /)\nstopping at filesystem boundary (git_discovery_across_filesystem not set).\n')
E + y (or any of the parent directories): .git\n')
``` | closed | 2019-01-09T00:34:07Z | 2019-01-10T15:37:14Z | https://github.com/ionelmc/pytest-benchmark/issues/143 | [] | felixonmars | 0 |
deepset-ai/haystack | machine-learning | 8,291 | Rename `Pipeline.__init__()` argument `max_loops_allowed` as it is misleading | ## Summary and motivation
The name `max_loops_allowed` is misleading when calling `Pipeline.__init__()` as it doesn't limit the number of times a cycle in the Pipeline graph is executed.
Instead, it limits the number of times a single Component can run. This is misleading for the user, and we should rename it to something more sensible; `max_run_per_component` could be a good alternative.
We should deprecate `max_loops_allowed` to start with.
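A rough sketch of the deprecation shim I have in mind (names follow the proposal above and are not final; the default value is illustrative):
```python
import warnings

class Pipeline:
    def __init__(self, max_run_per_component: int = 100, max_loops_allowed: int | None = None):
        if max_loops_allowed is not None:
            warnings.warn(
                "`max_loops_allowed` is deprecated; use `max_run_per_component` instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            max_run_per_component = max_loops_allowed
        self.max_run_per_component = max_run_per_component
```
 | closed | 2024-08-27T08:37:10Z | 2024-09-12T09:00:14Z | https://github.com/deepset-ai/haystack/issues/8291 | [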
"breaking change",
"type:enhancement",
"P2"
] | silvanocerza | 1 |
mljar/mljar-supervised | scikit-learn | 696 | Get confidence scores for regression predictions | Is it possible to get confidence scores in predicting values for regression predictions in mljar-supervised AutoML? | open | 2024-01-25T13:15:08Z | 2024-01-25T13:41:05Z | https://github.com/mljar/mljar-supervised/issues/696 | [
"enhancement",
"help wanted"
] | namelessperson0 | 1 |
noirbizarre/flask-restplus | api | 194 | AttributeError: Api does not have __schema__ attribute | Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
Flask (0.11.1)
flask-restplus (0.9.2)
Flask-Script (2.0.5)
```
[2016-08-17 15:07:57,503] ERROR in app: Exception on /mailang/api/v1/swagger.json [GET]
Traceback (most recent call last):
  File "E:\Python27\lib\site-packages\flask\app.py", line 1639, in full_dispatch_request
    rv = self.dispatch_request()
  File "E:\Python27\lib\site-packages\flask\app.py", line 1625, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "E:\Python27\lib\site-packages\flask_restplus\api.py", line 310, in wrapper
    resp = resource(*args, **kwargs)
  File "E:\Python27\lib\site-packages\flask\views.py", line 84, in view
    return self.dispatch_request(*args, **kwargs)
  File "E:\Python27\lib\site-packages\flask_restplus\resource.py", line 44, in dispatch_request
    resp = meth(*args, **kwargs)
  File "E:\Python27\lib\site-packages\flask_restplus\api.py", line 751, in get
    return self.api.__schema__
  File "E:\Python27\lib\site-packages\flask_restplus\api.py", line 206, in __getattr__
    raise AttributeError('Api does not have {0} attribute'.format(name))
AttributeError: Api does not have __schema__ attribute
127.0.0.1 - - [17/Aug/2016 15:07:57] "GET /mailang/api/v1/swagger.json HTTP/1.1" 500 -
```
| closed | 2016-08-17T07:12:55Z | 2019-02-27T02:23:27Z | https://github.com/noirbizarre/flask-restplus/issues/194 | [
"Need more feedback"
] | 984958198 | 25 |
anselal/antminer-monitor | dash | 124 | Add default value on KeyError | Sometimes the software cannot retrieve some values from the miner, like POOLS or HASHRATE, and this results in the app crashing.
Adding an exception or a default value for that key may fix the bug.
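Something along these lines (illustrative only; the stats dict and key names are placeholders for whatever the miner API returns):
```python
stats = {"POOLS": []}  # e.g. a response missing the HASHRATE key

pools = stats.get("POOLS", [])  # default instead of a KeyError
try:
    hashrate = stats["HASHRATE"]
except KeyError:
    hashrate = 0  # fall back so the app keeps running
```
 | closed | 2018-09-14T18:45:07Z | 2019-12-06T12:58:24Z | https://github.com/anselal/antminer-monitor/issues/124 | [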
":bug: bug"
] | anselal | 0 |
prkumar/uplink | rest-api | 20 | Example change from uplink import * to only include used modules | Any specific reason why in the example you're importing all packages like this?
```python
from uplink import *
```
References:
- https://stackoverflow.com/questions/2386714/why-is-import-bad
I think in the example we should be doing explicit imports for `headers`, `Consumer`, etc. This is maybe just about following Python best practices, so feel free to close, or I could issue a PR fixing this after trying out `uplink`.
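e.g. something like this for the README example (the exact set of names depends on which decorators the example actually uses):
```python
from uplink import Consumer, get, headers, returns
```
 | closed | 2017-11-18T17:56:24Z | 2018-01-09T04:29:31Z | https://github.com/prkumar/uplink/issues/20 | [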
"Documentation"
] | ivansabik | 4 |
encode/databases | sqlalchemy | 426 | sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.aiopg | My database url is
```
DATABASE_URL = "postgresql+aiopg://nemo:password@localhost:5432/nemo"
database = databases.Database(DATABASE_URL)
```
Getting `sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.aiopg`.
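For what it's worth, one workaround I'm considering (assuming switching drivers is acceptable): a plain `postgresql://` URL makes `databases` fall back to its default asyncpg backend, which avoids the failing `+aiopg` dialect lookup.
```python
import databases

# default asyncpg backend, no "+aiopg" in the scheme
DATABASE_URL = "postgresql://nemo:password@localhost:5432/nemo"
database = databases.Database(DATABASE_URL)
```
 | closed | 2021-11-20T10:37:37Z | 2021-11-20T10:40:54Z | https://github.com/encode/databases/issues/426 | [] | harshitsinghai77 | 0 |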
blb-ventures/strawberry-django-plus | graphql | 213 | Documentation for migrating Graphene relay tests | I just wanted to leave a few pointers to others migrating their tests from Graphene with relay support. Below are code snippets that can be replaced one-to-one:
```py
# If not using the replacement base test class
- response = self.query(
+ from my_app.graphql import schema
+ response = schema.execute_sync(
# If using the replacement base test class
class MyTestCase(GraphQLTestCase):
def setUp(self) -> None:
self.schema = schema
self.user_a = get_user_model().objects.create(username="UserA")
self.setQueryUser(self.user_a)
def test_controller(self):
...
- self.assertResponseNoErrors(response)
+ self.assertFalse(response.errors)
- self.assertResponseHasErrors(response)
+ self.assertTrue(response.errors)
- response.json()["data"]
+ response.data
- response.json()["errors"]
+ response.errors
- from_global_id(
+ from strawberry_django_plus.relay import from_base64
+ from_base64(
- to_global_id(
+ from my_app.graphql import to_gid
+ to_gid(
- with self.assertLogs("graphql.execution",
+ with self.assertLogs("strawberry.execution",
- self.client.force_login(
+ self.setQueryUser(
```
The following class aims to be a replacement to [Graphene's GraphQLTestCase](https://github.com/graphql-python/graphene-django/blob/4e5acd47025043764ad113cc4613db3f80c050bd/graphene_django/utils/testing.py#L150) class:
```
def to_gid(gql_type, pk: uuid.UUID) -> relay.GlobalID | None:
if pk:
return relay.GlobalID(gql_type._type_definition.name, str(pk))
return None
class GraphQLTestCase(TestCase):
def __init__(self, *args, **kwargs):
self._query_user = None
self.schema = None
super().__init__(*args, **kwargs)
def query(
self, query, operation_name=None, input_data=None, variables=None, headers=None
):
request = GraphQLHTTPConsumer(self.schema)
request.scope = {"user": self._query_user} if self._query_user else {}
context = StrawberryChannelsContext(request=request)
if input_data:
variables = {} if not variables else variables
variables = {**variables, "input": input_data}
response = self.schema.execute_sync(
query,
variable_values=variables,
context_value=context,
operation_name=operation_name,
)
return response
def setQueryUser(self, user) -> None:
self._query_user = user
```
Additionally, handling errors in GraphQL mutations requires the following change:
```gql
# Graphene style
mutation createController($input: CreateControllerInput!) {
createController(input: $input) {
controller {
id, name
}
}
}
# Strawberry style
mutation createController($input: CreateControllerInput!) {
createController(input: $input) {
... on ControllerGQLNode {
id, name
}
... on OperationInfo {
messages {
kind, message, field
}
}
}
}
``` | closed | 2023-05-28T16:06:34Z | 2023-06-08T13:15:44Z | https://github.com/blb-ventures/strawberry-django-plus/issues/213 | [] | moritz89 | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 738 | Relationship update using flask marshmallow does not work | Hi
The following code produces a constraint error (unable to update a foreign key with a null value):
```python
class Model1(db.Model):
    __tablename__ = "model1"
    id = db.Column(db.Integer, primary_key=True)
    field1 = db.Column(db.String, nullable=True)
    model2 = db.relationship('Model2', uselist=False, back_populates='alert', lazy='select')

class Model2(db.Model):
    __tablename__ = "model2"
    id = db.Column(db.Integer, primary_key=True)
    model1_id = db.Column(db.Integer, db.ForeignKey('model1.id'), nullable=False)
    field11 = db.Column(db.String, nullable=True)

class Model2Schema(ModelSchema):
    class Meta:
        model = Model2
        sqla_session = db.session

class Model1Schema(ModelSchema):
    model2 = Nested(Model2, many=True, exclude=('id', 'model2',))
    class Meta:
        model = Model2
        sqla_session = db.session

model1 = Model1.query.get_or_404(1)
json = { "id": 1, "field1": "asdasd", "model2": { "field11": "asdasdasdas" } }
Model1Schema().load(json, instance=model1, partial=True)
db.session.commit()
```
The exception thrown shows that updating model2 fails due to an attempt to set model1_id to null.
Any idea if that's a bug, or am I doing anything wrong?
| closed | 2019-05-16T14:20:59Z | 2020-12-05T20:21:52Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/738 | [] | liorchen | 1 |
amisadmin/fastapi-amis-admin | fastapi | 119 | Could documentation be added for using the CRUD module on its own? | open | 2023-08-09T05:45:33Z | 2023-08-09T05:45:33Z | https://github.com/amisadmin/fastapi-amis-admin/issues/119 | [] | 0x587 | 0 |
lepture/authlib | flask | 323 | OAuth and JWT | I am working on a django + react app and want to integrate OAuth 2.0 using JWT.
I tried to follow the docs and found instructions for integrating Authlib with a django app, but there was no option for integrating JWT with OAuth.
```
<script>
function signInCallback(authResult) {
if (authResult['code']) {
// Hide the sign-in button now that the user is authorized, for example:
$('#signinButton').attr('style', 'display: none');
// Send the code to the server
$.ajax({
type: 'POST',
url: 'http://example.com/storeauthcode',
// Always include an `X-Requested-With` header in every AJAX request,
// to protect against CSRF attacks.
headers: {
'X-Requested-With': 'XMLHttpRequest'
},
contentType: 'application/octet-stream; charset=utf-8',
success: function(result) {
// Handle or verify the server response.
},
processData: false,
data: authResult['code']
});
} else {
// There was an error.
}
}
</script>
```
This code sends auth_code to backend server, there should be a way to authorize users based on this auth code
I tried to follow this entire [guide](https://developers.google.com/identity/sign-in/web/server-side-flow)
| closed | 2021-02-22T04:16:27Z | 2021-02-24T06:46:45Z | https://github.com/lepture/authlib/issues/323 | [] | Priyansh2001here | 1 |
deepspeedai/DeepSpeed | pytorch | 7,150 | [BUG] Receiving CUDA error: invalid argument using pytorch 2.7 with deepspeed 0.16.4 with Cuda 12.8 | **Describe the bug**
I am currently not able to run the latest deepspeed version (0.16.4) with CUDA 12.8 using pytorch 2.7. I am receiving the following error stack:
GPU: 3090 TI FE
```
[rank0]: RuntimeError: CUDA error: invalid argument
[rank0]: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
[rank0]: For debugging consider passing CUDA_LAUNCH_BLOCKING=1
[rank0]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
**To Reproduce**
Model Name: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
Use deepspeed config:
```
{
"train_batch_size": 1,
"gradient_accumulation_steps": 1,
"optimizer": {
"type": "AdamW",
"params": {
"lr": 2e-5
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"fp16": {
"enabled": true
}
}
```
**Expected behavior**
I expect it to be able to run and allow me to train the model without getting CUDA error: invalid argument
**ds_report output**
```
[2025-03-18 20:48:02,997] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.2.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
gds .................... [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.7
[WARNING] using untested triton version (3.2.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/xxxx/lib/python3.12/site-packages/torch']
torch version .................... 2.7.0.dev20250309+cu128
deepspeed install path ........... ['/xxxx/lib/python3.12/site-packages/deepspeed']
deepspeed info ................... 0.16.4, unknown, unknown
torch cuda version ............... 12.8
torch hip version ................ None
nvcc version ..................... 12.8
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 61.66 GB
```
**Screenshots**
N/A
**System info (please complete the following information):**
- OS: Ubuntu 24.10
- GPU count and types: 3090 TI FE 1
- Interconnects : N/A
- Python version: 3.12.7
**Launcher context**
Are you launching your experiment with the `deepspeed` launcher, MPI, or something else? No
**Docker context**
Are you using a specific docker image that you can share? No
**Additional context**
N/A
| open | 2025-03-19T03:51:10Z | 2025-03-24T01:24:21Z | https://github.com/deepspeedai/DeepSpeed/issues/7150 | [
"bug",
"training"
] | rpgmaker | 32 |
Esri/arcgis-python-api | jupyter | 1,737 | documentation refers to a list of values for the privileges parameter that doesn't seem to exist | **Screenshots**
If applicable, add screenshots to help explain your problem.
https://developers.arcgis.com/python/api-reference/arcgis.gis.toc.html#apikeymanager

| closed | 2024-01-09T20:09:10Z | 2024-01-15T07:37:48Z | https://github.com/Esri/arcgis-python-api/issues/1737 | [
"bug"
] | leeberryman | 2 |
mwaskom/seaborn | pandas | 2,924 | next gen usage question | How can I plot all (or a subset of) the columns of a pandas dataframe, using the index as x-axis, with the new object-based interface? | closed | 2022-07-26T11:18:56Z | 2022-07-28T14:17:38Z | https://github.com/mwaskom/seaborn/issues/2924 | [
"question",
"objects-plot"
] | bdch1234 | 6 |
seleniumbase/SeleniumBase | pytest | 2,442 | Dashboard and report not reflecting failure correctly with pytest-check | I use [pytest-check](https://pypi.org/project/pytest-check/) for times when my test function checks more than one thing and I want to proceed with the other checks if one of them fails.
When a test fails with pytest-check:
(a) for dashboard - chart shows Passed, table shows Passed
(b) for report - chart shows Passed, table shows Failed
```
import pytest
from seleniumbase import BaseCase
import pytest_check
BaseCase.main(__name__, __file__)
class FailingTests(BaseCase):
@pytest.mark.expected_failure
def test_find_army_of_robots_on_xkcd_desert_island(self):
self.open("https://xkcd.com/731/")
print("\n(This test should fail 1)")
count = 0
# self.assert_equal(count, 1)
pytest_check.equal(count, 1)
print("\n(This test should fail 2)")
# self.assert_equal(count, 2)
pytest_check.equal(count, 2)
```
Test info:
```
================================= FAILURES ==================================
________ FailingTests.test_find_army_of_robots_on_xkcd_desert_island ________
FAILURE: check 0 == 1
test_fail.py:13 in test_find_army_of_robots_on_xkcd_desert_island() -> pytest_check.equal(count, 1)
FAILURE: check 0 == 2
------------------------------------------------------------
Failed Checks: 2
----------------------------------------------------------------------- generated html file: file://C: ...
``` | closed | 2024-01-20T11:53:11Z | 2024-01-20T14:43:05Z | https://github.com/seleniumbase/SeleniumBase/issues/2442 | [
"invalid usage"
] | listerplus | 1 |
apache/airflow | python | 47,382 | Make FAB provider smaller | FAB provider currently contains all the node modules and others - they should be excluded.
Currently the size of FAB provider wheel is 137 MB | open | 2025-03-05T12:38:16Z | 2025-03-06T15:54:50Z | https://github.com/apache/airflow/issues/47382 | [
"provider:fab"
] | potiuk | 5 |
timkpaine/lantern | plotly | 190 | dist with twine | closed | 2019-08-03T20:43:19Z | 2019-08-06T20:45:35Z | https://github.com/timkpaine/lantern/issues/190 | [
"ready",
"feature"
] | timkpaine | 0 | |
SYSTRAN/faster-whisper | deep-learning | 309 | run a local fine tuned model? | in order to run a fine tuned local model with whisper, this has to be done:
def hf_to_whisper_states(text):
text = re.sub('.layers.', '.blocks.', text)
text = re.sub('.self_attn.', '.attn.', text)
text = re.sub('.q_proj.', '.query.', text)
text = re.sub('.k_proj.', '.key.', text)
text = re.sub('.v_proj.', '.value.', text)
text = re.sub('.out_proj.', '.out.', text)
text = re.sub('.fc1.', '.mlp.0.', text)
text = re.sub('.fc2.', '.mlp.2.', text)
text = re.sub('.fc3.', '.mlp.3.', text)
text = re.sub('.fc3.', '.mlp.3.', text)
text = re.sub('.encoder_attn.', '.cross_attn.', text)
text = re.sub('.cross_attn.ln.', '.cross_attn_ln.', text)
text = re.sub('.embed_positions.weight', '.positional_embedding', text)
text = re.sub('.embed_tokens.', '.token_embedding.', text)
text = re.sub('model.', '', text)
text = re.sub('attn.layer_norm.', 'attn_ln.', text)
text = re.sub('.final_layer_norm.', '.mlp_ln.', text)
text = re.sub('encoder.layer_norm.', 'encoder.ln_post.', text)
text = re.sub('decoder.layer_norm.', 'decoder.ln.', text)
return text
hf_state_dict = torch.load("/home/silvacarl/Desktop/whisper-stt-finetune/op_dir_epoch/checkpoint-540/pytorch_model.bin") # pytorch_model.bin file
# Rename layers
for key in list(hf_state_dict.keys())[:]:
new_key = hf_to_whisper_states(key)
hf_state_dict[new_key] = hf_state_dict.pop(key)
# Init Whisper Model and replace model weights
whisper_model = whisper.load_model('large-v2')
whisper_model.load_state_dict(hf_state_dict)
model = whisper.load_model(model_name).cuda()
So my question is: to run a local fine-tuned model with faster-whisper, is it the same or similar?
Or would I have to do the above, save the model locally, and then run it through something else to convert it to the faster-whisper model format?
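For context, here is the conversion path I'd guess at (hedged sketch; paths are placeholders and I'm assuming the checkpoint directory also carries the tokenizer/processor files). faster-whisper loads CTranslate2 models, so the Hugging Face-format checkpoint would be converted first:
```python
from ctranslate2.converters import TransformersConverter
from faster_whisper import WhisperModel

# Convert the Hugging Face checkpoint to CTranslate2 format...
TransformersConverter("./op_dir_epoch/checkpoint-540").convert(
    "./checkpoint-540-ct2", quantization="float16"
)

# ...then load it like any local faster-whisper model.
model = WhisperModel("./checkpoint-540-ct2", device="cuda")
```
 | closed | 2023-06-19T20:21:46Z | 2023-06-20T14:34:44Z | https://github.com/SYSTRAN/faster-whisper/issues/309 | [] | silvacarl2 | 2 |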
wkentaro/labelme | computer-vision | 556 | Application crashes on polygon deletion | How to reproduce:
1. Select a polygon
2. Move the polygon
3. Do not move the mouse, do mouse right-click
4. Select "Delete polygons"
Expected behavior: Polygon gets deleted
Observed behavior: Application crashes
The application crashes due to the `ValueError` happening on the following line in the function `mouseReleaseEvent` when the polygon was already deleted.
```
index = self.shapes.index(self.hShape)
```
Deeper investigation showed that at the moment of the crash both conditions are true:
`self.movingShape == True` and `self.hShape is not None`. It seems that `movingShape` only resets to `False` when the mouse moves. | closed | 2020-01-20T02:12:51Z | 2020-01-22T02:26:11Z | https://github.com/wkentaro/labelme/issues/556 | [] | sergeyshilin | 1 |
arogozhnikov/einops | numpy | 315 | Circular imports when importing einops and torch._dynamo | **Describe the bug**
Importing `einops` before `torch._dynamo` currently leads to warnings. I'm not sure if this needs a fix on the `pytorch` or `einops` side. This is annoying for CI pipelines, where warnings are typically treated as errors. Note that, with sorted imports, `einops` will typically import before any `torch` namespaces.
```
$ python -W "error" -c 'import einops; import torch._dynamo'
Traceback (most recent call last):
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/einops/_torch_specific.py", line 106, in allow_ops_in_compiled_graph
from torch._dynamo import allow_in_graph
ImportError: cannot import name 'allow_in_graph' from partially initialized module 'torch._dynamo' (most likely due to a circular import) (/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/__init__.py", line 2, in <module>
from . import allowed_functions, convert_frame, eval_frame, resume_execution
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 62, in <module>
from .output_graph import OutputGraph
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 89, in <module>
from .variables.builder import GraphArg, TrackedFake, VariableBuilder, wrap_fx_proxy
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 143, in <module>
from .optimizer import OptimizerVariable
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/variables/optimizer.py", line 5, in <module>
from ..decorators import mark_static_address
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/decorators.py", line 284, in <module>
allowed_functions.add_module_init_func("einops", _allow_in_graph_einops)
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/allowed_functions.py", line 489, in add_module_init_func
init_func()
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/torch/_dynamo/decorators.py", line 264, in _allow_in_graph_einops
from einops._torch_specific import ( # noqa: F401
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/einops/_torch_specific.py", line 127, in <module>
allow_ops_in_compiled_graph()
File "/home/bfe2rng/Code/rlcore/venv/lib/python3.11/site-packages/einops/_torch_specific.py", line 108, in allow_ops_in_compiled_graph
warnings.warn("allow_ops_in_compiled_graph failed to import torch: ensure pytorch >=2.0", ImportWarning)
ImportWarning: allow_ops_in_compiled_graph failed to import torch: ensure pytorch >=2.0
```
**Reproduction steps**
Steps to reproduce the behavior:
```shell
python -W "error" -c 'import einops; import torch._dynamo'
```
**Expected behavior**
No output, import order does not matter.
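In the meantime, a stopgap for CI (which treats all warnings as errors) is to suppress just this `ImportWarning`. A hedged sketch; the filter granularity is my assumption:

```python
import warnings

# Import einops first (sorted imports), then allow the circular-import
# warning from einops._torch_specific while torch._dynamo initializes.
import einops

with warnings.catch_warnings():
    warnings.simplefilter("ignore", ImportWarning)
    import torch._dynamo
```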
**Your platform**
Ubuntu 22.04, `torch==2.2.2`, `einops==0.7.0` | closed | 2024-04-11T06:06:36Z | 2025-01-06T12:58:06Z | https://github.com/arogozhnikov/einops/issues/315 | [
"backend bug"
] | befelix | 8 |
fastapi-users/fastapi-users | asyncio | 273 | Use email as field name instead of username | Hello, Frankie!
First, I want to thank you for this fantastic job; it has helped me a lot in a few projects (both personal projects and for school). I'm posting this issue because in `auth/jwt/login`, I want to keep using email authentication, but I want to use "email" as the name of the corresponding field, instead of the current name, which is "username". I would like to accomplish the following:
```
POST /auth/jwt/login HTTP/1.1
Content-Type: application/x-www-form-urlencoded
accept: application/json
Host: 127.0.0.1:8000
Connection: close
User-Agent: Paw/3.1.10 (Macintosh; OS X/10.15.4) GCDHTTPRequest
Content-Length: 39
email=example%40uc.cl&password=hellohello123
```
Instead of the current:
```
POST /auth/jwt/login HTTP/1.1
Content-Type: application/x-www-form-urlencoded
accept: application/json
Host: 127.0.0.1:8000
Connection: close
User-Agent: Paw/3.1.10 (Macintosh; OS X/10.15.4) GCDHTTPRequest
Content-Length: 39
username=example%40uc.cl&password=hellohello123
```
What should I overload in the source code to rename the "username" field to "email" on that login request?
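For context, the direction I've been experimenting with is swapping the form dependency. This is a hedged sketch, not tested against fastapi-users internals (the custom login route itself is elided):

```python
from fastapi import Form
from fastapi.security import OAuth2PasswordRequestForm

class OAuth2EmailRequestForm(OAuth2PasswordRequestForm):
    """Same form as the stock one, but the field is called `email` on the wire."""

    def __init__(self, email: str = Form(...), password: str = Form(...)):
        super().__init__(grant_type="password", username=email, password=password,
                         scope="", client_id=None, client_secret=None)
```

I'd then declare a custom `/login` route that depends on `OAuth2EmailRequestForm` instead of the stock form, but I'm not sure where fastapi-users expects me to plug that in.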
Any help would be much appreciated. | closed | 2020-07-25T18:54:13Z | 2024-07-14T12:34:11Z | https://github.com/fastapi-users/fastapi-users/issues/273 | [
"question"
] | diflores | 4 |
chaos-genius/chaos_genius | data-visualization | 1,199 | [BUG] Clickhouse datasource not finishing schema scan | Clickhouse datasource schema sync is never finished / hangs at `in progress`, even several hours after adding it. This is on a completely fresh docker-compose based installation. The database is not excessively large from my understanding.
I tried triggering the sync again and, judging from the logs, it seems that CG is indeed working on the sync:
```
chaosgenius-worker-alerts | {"asctime": "2023-03-30 21:00:08,476", "levelname": "WARNING", "name": "chaos_genius.controllers.data_source_metadata_controller", "message": "Datasource with id: 2 already in Progress, skipping..", "lineno": 133, "funcName": "run_metadata_prefetch", "filename": "data_source_metadata_controller.py"}
chaosgenius-worker-alerts | [2023-03-30 21:00:08,476: WARNING/ForkPoolWorker-1] Datasource with id: 2 already in Progress, skipping..
```
Apart from that, I'm not sure what I'm supposed to be looking for; please let me know what info/logs are relevant for debugging.
## Explain the environment
- **Chaos Genius version**: latest + 0.11.0 + 0.10.2 (tried all three)
- **OS Version / Instance**: amd64 / Debian 10.11 / Docker-compose 1.24.1
- **Deployment type**: docker-compose
- **Datasource type**: Clickhouse
| open | 2023-03-30T21:40:25Z | 2023-03-30T21:41:11Z | https://github.com/chaos-genius/chaos_genius/issues/1199 | [] | fstolba | 1 |
ipython/ipython | jupyter | 14,641 | IPython parsing 0_* in Python 3.12 | After typing something like `0_z` and Enter, IPython shows the continuation prompt and the cursor: it expects more! But what? Example:
```
Python 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: 0_z
...:
...:
...: |
```
The vanilla Python prompt, as expected, gives a `SyntaxError: invalid decimal literal`.
It appears that Python 3.12 triggers this IPython problem, as [3.11 works fine](https://stackoverflow.com/questions/79336132/what-does-ipython-expect-after-an-underscore-in-a-numeric-literal?noredirect=1#comment139903154_79336132).
Is this some parser bug perhaps?
This is especially cumbersome with filenames where an underscore follows a digit, e.g.:
```
In [1]: run 3_rag_agent.py
...:
...:
...: |
```
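In case it helps triage: I suspect the incomplete-input detection rather than execution. A quick probe I'd try (hedged, API from memory):

```python
from IPython.core.inputtransformer2 import TransformerManager

# check_complete() drives the "expects more input" behavior at the prompt;
# for `0_z` I'd expect ('invalid', None) rather than ('incomplete', ...).
print(TransformerManager().check_complete("0_z"))
```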
| open | 2025-01-07T15:54:20Z | 2025-01-13T18:31:28Z | https://github.com/ipython/ipython/issues/14641 | [] | mdruiter | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 717 | I can't use Dataset/Speaker/Utterance | I can't use the upper section in the software. When loading, it shows:
Warning: you did not pass a root directory for datasets as argument.
How can I fix this?
Thank you
| closed | 2021-03-31T15:37:41Z | 2021-04-01T00:02:54Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/717 | [] | HeiseBombe | 1 |
ultralytics/ultralytics | pytorch | 18,711 | Why does the mAP increase only 0.001 percent every epoch? Any suggestions on how to make it faster? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I’ve been training a YOLO model on a custom dataset and have noticed that the mean Average Precision (mAP) increases by approximately 0.001% with each epoch. The training process doesn't provide clear guidance on when to stop, and I'm concerned that the model might be overfitting. However, the confusion matrix at epoch 400 doesn't seem to indicate overfitting.
Do you have any suggestions on how to determine the optimal stopping point or strategies to prevent potential overfitting?
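For reference, the only mechanism I've found so far is the `patience` argument for early stopping. A hedged sketch of what I mean (the dataset path and the values are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# patience: stop training if the fitness metric hasn't improved for N epochs
model.train(data="data.yaml", epochs=400, patience=30)
```

Is that the recommended way, or is there a better signal to watch?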
Thank you!
<img width="855" alt="Image" src="https://github.com/user-attachments/assets/3cd039bc-5ed8-4ea2-b646-1b47bfd0c1f5" />
Thanks
### Additional
_No response_ | open | 2025-01-16T12:15:37Z | 2025-01-16T13:59:07Z | https://github.com/ultralytics/ultralytics/issues/18711 | [
"question",
"detect"
] | khandriod | 2 |
Lightning-AI/pytorch-lightning | data-science | 19,991 | Show how to over-fit batches for real | ### Description & Motivation
Ok, there is a flag for the `Trainer`. But how to programmatically check that the loss goes down?
I'd like to write a test, checking that the loss at the end is lower than at the start.
### Pitch
```python
import lightning.pytorch as pl
trainer = pl.Trainer(max_steps=64, overfit_batches=1)
(loss_start, loss_end) = trainer.fit(model, datamodule)
assert loss_end < loss_start
```
### Alternatives
Show how to get those values from the `Trainer`.
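For reference, this is what I'm resorting to today: a hedged sketch that reuses the `model`/`datamodule` from the pitch and assumes the `LightningModule` logs a `train_loss` metric:

```python
import lightning.pytorch as pl

class LossTracker(pl.Callback):
    def __init__(self):
        self.losses = []

    def on_train_epoch_end(self, trainer, pl_module):
        loss = trainer.callback_metrics.get("train_loss")  # assumes the module logs "train_loss"
        if loss is not None:
            self.losses.append(loss.item())

tracker = LossTracker()
trainer = pl.Trainer(max_steps=64, overfit_batches=1, callbacks=[tracker])
trainer.fit(model, datamodule)
assert tracker.losses[-1] < tracker.losses[0]
```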
### Additional context
_No response_
cc @borda | open | 2024-06-18T16:52:48Z | 2024-07-01T11:32:08Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19991 | [
"feature",
"needs triage"
] | svnv-svsv-jm | 2 |
litestar-org/litestar | api | 3,787 | Bug: "Lockfile hash doesn't match pyproject.toml, packages may be outdated" warning in pdm | ### Description
When running `pdm install` on the `litestar` repo you get:
```
Run pdm install -G:all
WARNING: Lockfile is generated on an older version of PDM
WARNING: Lockfile hash doesn't match pyproject.toml, packages may be outdated
Updating the lock file...
```
Link: https://github.com/litestar-org/litestar/actions/runs/11290808586/job/31403420888?pr=3784#step:5:13
I don't think that this is correct.
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
```bash
1. Run `pdm install` on clean repo with no `venv`
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
`main`
### Platform
- [X] Linux
- [X] Mac
- [X] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-10-14T06:45:08Z | 2025-03-20T15:54:58Z | https://github.com/litestar-org/litestar/issues/3787 | [
"Bug :bug:",
"Dependencies",
"Package"
] | sobolevn | 2 |
huggingface/transformers | deep-learning | 35,948 | Document Question Answering Pipeline fails due to array with an inhomogeneous shape | ### System Info
transformers==4.48.1
### Who can help?
Maybe @amyeroberts?
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
document_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
word_boxes = [["Britta",[100,157,129,163]],["Beispiel",[134,157,176,165]],["Sanit\u00e4r",[181,157,218,163]],["Herrn",[99,177,143,185]],["Emil",[99,191,133,199]],["Exempel",[138,191,200,201]],["Donaustrasse",[99,205,195,213]],["46",[200,205,216,213]],["11300",[100,219,140,227]],["Berlin",[146,219,190,227]],["-",[223,162,226,162]],["Beispielstrasse",[230,157,309,165]],["12",[316,157,326,163]],["-",[331,162,334,162]],["80888",[339,158,371,163]],["Berlin",[376,157,408,163]],["So",[643,172,659,180]],["erreichen",[664,172,731,180]],["Sie",[737,172,757,180]],["uns",[762,175,786,180]],["Internet",[642,187,694,201]],["www",[737,191,772,196]],["britta-sanitaer.de",[776,188,890,196]],["E-Mail",[643,200,689,214]],["britta.beispiel@",[737,202,842,212]],["gmx.net",[845,203,898,212]],["Telefon",[642,214,693,228]],["030",[738,216,762,224]],["\/",[767,216,771,224]],["999666",[776,216,824,224]],["Fax",[643,230,667,238]],["030",[738,230,762,238]],["\/",[767,230,771,238]],["999777",[776,230,824,238]],["Mobil",[642,242,682,256]],["0179",[738,244,770,252]],["\/",[775,244,779,252]],["999888",[784,244,833,252]],["Steuer-Nr",[643,272,708,280]],["122\/5678\/1234",[738,272,838,280]],["UStD",[643,286,685,294]],["DE12345678",[737,286,827,294]],["Datum",[643,315,688,323]],["30.11.2009",[737,315,811,323]],["Kunde",[643,328,687,336]],["14002",[738,328,778,336]],["Rechnung",[643,342,710,352]],["200910214",[737,342,811,350]],["Angebot:",[99,372,160,382]],["10154",[168,372,207,380]],["vom",[213,375,242,380]],["16.11.2009",[248,372,321,380]],["Objekt:",[100,386,148,396]],["10244",[156,386,195,394]],["Berlin,",[200,386,245,395]],["Charlottenstr.",[251,386,341,394]],["152",[348,386,370,394]],["Sehr",[100,415,130,423]],["geehrter",[135,415,189,425]],["Herr",[194,415,224,423]],["Exempel,",[229,415,293,425]],["nach",[99,442,131,450]],["Ausf\u00fchrung",[136,442,215,452]],["der",[220,442,241,450]],["Arbeiten",[246,442,303,450]],["entsprechend",[309,442,397,453]],["meinem",[401,442,455,450]],["og.",[460,445,479,453]],["Angebot",[485,442,542,452]],["erlaube",[547,442,595,450]],["ich",[601,442,620,450]],["mir",[625,442,647,450]],["wie",[652,442,676,450]],["folgt",[681,442,712,452]],["zu",[717,445,732,450]],["berechnen:",[737,442,808,450]],["Rechnung",[100,470,186,482]],["Nr.",[192,470,219,480]],["200910214",[226,470,316,480]],["Das",[538,473,563,481]],["Rechnungsdatum",[568,473,684,484]],["entspricht",[689,473,754,483]],["dem",[759,473,787,481]],["Leistungsdatum",[793,473,899,484]],["Pos",[99,508,123,516]],["Art-Nr.",[133,508,187,516]],["Bezeichnung",[256,506,347,520]],["Menge",[660,508,708,518]],["Einzelpreis",[724,508,803,518]],["Betrag",[851,508,899,518]],["1",[101,522,105,530]],["Austausch",[257,522,325,530]],["der",[331,522,351,530]],["defekten",[357,522,413,530]],["Zuleitung",[419,522,483,532]],["im",[489,522,505,530]],["2,0",[657,522,677,531]],["Std.",[683,522,708,530]],["30,00",[767,522,804,531]],["60,00",[862,522,898,531]],["WC",[256,536,282,544]],["des",[288,536,309,544]],["Ergeschosses",[314,536,402,546]],["2",[99,564,106,572]],["Materialkosten",[256,564,356,572]],["(Diverses",[362,564,424,574]],["Kleinmaterial)",[430,564,527,574]],["3,0",[658,564,677,573]],["Stk.",[683,564,708,572]],["24,56",[766,563,803,573]],["73,68",[862,564,898,573]],["Zahlbar",[100,606,151,614]],["innerhalb",[157,606,219,614]],["von",[224,609,248,614]],["7",[253,606,260,614]],["Tagen",[266,606,307,616]],["(bis",[312,606,336,616]],["zum",[342,609,370,614]],["07.12.2009)",[375,606,454,616]],["unter",[460,607,494,613]],["Abzug",[499,606,543,617]],["Rech
nungsbetrag",[600,604,725,618]],["133,68",[814,606,858,616]],["EUR",[864,606,899,614]],["von",[100,623,124,628]],["3%",[129,620,150,628]],["Skonto",[156,620,203,628]],["(Zahlungsbetrag",[208,620,317,630]],["=",[322,624,330,627]],["129,67",[337,620,380,629]],["EUR).",[385,620,427,630]],["Bis",[433,620,454,628]],["zum",[460,623,487,628]],["14.12.2009",[101,634,174,642]],["ohne",[180,634,212,642]],["Abzug.",[216,634,264,644]],["Umsatzsteuer",[99,672,190,680]],["wird",[195,672,225,680]],["nicht",[230,672,263,680]],["in",[269,672,281,680]],["Rechnung",[285,672,352,682]],["gestellt.",[358,672,409,682]],["Als",[415,672,437,680]],["sogenannter",[443,673,522,682]],["Kleinunternehmer",[527,672,648,680]],["i.",[654,672,661,680]],["S.",[667,672,679,680]],["von",[685,675,708,680]],["$",[715,672,721,682]],["19",[728,672,742,680]],["Abs.",[748,672,777,680]],["1",[784,672,788,680]],["UStG",[795,672,833,680]],["wird",[837,672,867,680]],["auf",[873,672,894,680]],["die",[100,685,119,693]],["Regelbesteuerung",[125,685,244,696]],["verzichtet.",[249,686,318,694]],["Vielen",[100,728,144,736]],["Dank",[149,728,185,736]],["f\u00fcr",[189,728,208,736]],["Ihren",[213,728,247,736]],["Auftrag!",[252,728,308,738]],["Ich",[99,742,120,750]],["bitte",[125,742,154,750]],["um",[159,745,180,750]],["\u00dcberweisung",[185,740,274,752]],["des",[279,742,300,750]],["Rechnungsbetrages",[306,742,435,752]],["innerhalb",[440,742,502,750]],["von",[100,758,124,763]],["14",[135,756,150,764]],["Tagen",[159,756,200,766]],["an",[205,758,219,763]],["die",[225,755,244,763]],["unten",[249,757,285,763]],["genannte",[292,757,351,766]],["Bankverbindung",[356,755,467,766]],["Mit",[99,784,123,792]],["freundlichen",[128,784,212,792]],["Gr\u00fc\u00dfen",[218,784,266,792]],["Britta",[99,826,137,834]],["Beispiel",[142,826,196,836]],["Seite",[840,881,872,889]],["1\/1",[879,881,897,889]],["Zahlungsempf\u00e4nger",[100,900,193,907]],["Bankverbindung",[99,910,177,917]],["aus",[100,922,115,925]],["dem",[119,920,138,925]],["Ausland",[142,920,180,925]],["Gesch\u00e4ftsf\u00fchrung",[100,936,190,943]],["Britta",[302,900,329,905]],["Beispiel",[331,900,368,907]],["Beispielbank,",[302,910,366,917]],["KTO",[369,910,393,915]],["0098765,",[397,910,440,916]],["BLZ",[443,910,465,915]],["88899900",[469,910,515,915]],["BIC",[302,920,321,925]],["asdfasdf,",[325,920,366,926]],["IBAN",[370,920,398,925]],["asdfasdf4848",[402,920,463,925]],["Britta",[302,932,329,946]],["Beispiel",[333,932,368,946]]]
document_qa(question="What is the invoice number?", word_boxes=word_boxes, image=None)
```
https://colab.research.google.com/drive/1Rk-a68zREdBBuYN8jVMKUcQUG73Me_6Z?usp=sharing
### Expected behavior
I would expect that the pipeline works with any list of word boxes. But certain word boxes cause the document question answering pipeline to throw a `ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.`
This error does not always occur. Only certain word box combinations seem to trigger it. For example in the reproduction above I was not able to pinpoint the error to any specific word box.
Also, it seems like this error was introduced with transformers version 4.43, as the example won't throw any errors when using version 4.42. Therefore I suspect that this could be the PR that introduced the bug: https://github.com/huggingface/transformers/pull/32076 | closed | 2025-01-28T22:43:40Z | 2025-01-30T15:38:34Z | https://github.com/huggingface/transformers/issues/35948 | [
"bug"
] | tillschander | 4 |
HIT-SCIR/ltp | nlp | 510 | Questions and a request for help regarding LTP's semantic dependency parsing results on English | A quick question: when running SDP analysis on English sentences, how do I get formatted results?
The demo on ltp.ai gives reasonably good results, but only as a diagram (or maybe I just haven't found the formatted output?).
However, installing via pip install ltp and calling ltp.sdp gives rather poor results (this is the latest LTP, 4.1.4). I don't know whether this is a quality issue with the small model or insufficient support for parsing English sentences? I'd appreciate an answer, thanks!
The test results from calling ltp.sdp are below; as you can see, the SDP output is empty:
```python
from ltp import LTP

ltp = LTP()
seg, hidden = ltp.seg(["The boy wants to go to school."])
sdp = ltp.sdp(hidden)
print(sdp)
# [[]]

seg, hidden = ltp.seg(["Iraqi Vice President Taha Yasin Ramadan, speaking on Iraqi Radio, was quoted by the British Broadcasting Corp.'s monitoring service as accusing Britain and the United States of working to ``eliminate'' the state of Iraq."])
sdp = ltp.sdp(hidden)
print(sdp)
# [[]]
```
I hope to get a reply soon, many thanks! (How can I get formatted output with the same quality as the diagram in the ltp.ai demo? Please point me in the right direction, thanks!) | closed | 2021-05-10T13:57:35Z | 2022-09-12T06:51:23Z | https://github.com/HIT-SCIR/ltp/issues/510 | [] | Cyich | 3 |
microsoft/nni | machine-learning | 5,632 | The accuracy and loss of the ENAS algorithm do not converge. Is there an appropriate parameter setting for the optimizer? | **Describe the issue**:
TensorBoard shows that the accuracy and loss of the ENAS algorithm do not converge. Is there an appropriate parameter setting for the optimizer?
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): remote
- Client OS: Ubuntu
**How to reproduce it?**: | open | 2023-07-10T03:02:09Z | 2023-08-11T06:54:27Z | https://github.com/microsoft/nni/issues/5632 | [] | Drxdx | 1 |
nonebot/nonebot2 | fastapi | 2,917 | Plugin: nonebot-plugin-lolinfo | ### PyPI project name
nonebot-plugin-lolinfo
### Plugin import package name
nonebot_plugin_lolinfo
### Tags
[{"label":"LOL","color":"#ea5252"},{"label":"英雄联盟","color":"#00aaff"}]
### Plugin configuration options
_No response_ | closed | 2024-08-21T11:07:25Z | 2024-10-02T11:43:11Z | https://github.com/nonebot/nonebot2/issues/2917 | [
"Plugin"
] | Shadow403 | 5 |
huggingface/datasets | deep-learning | 6,435 | Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method | ### Describe the bug
1. I ran dataset mapping with `num_proc=6` and got this error:
`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`
I can't actually find a way to run multi-GPU dataset mapping. Can you help?
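For reference, this is what I've tried so far, adapted from the multi-GPU `map` examples I could find (hedged, since I'm not sure it's the intended pattern; `compute_embeddings` is a placeholder for my own function):

```python
from multiprocess import set_start_method

# datasets uses `multiprocess` under the hood; forcing "spawn" avoids
# re-initializing CUDA in a forked worker.
set_start_method("spawn")

dataset = dataset.map(
    compute_embeddings,  # hypothetical function that moves work to f"cuda:{rank}"
    with_rank=True,
    num_proc=6,
)
```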
### Steps to reproduce the bug
1. Run SDXL training with `num_proc=6`: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py
### Expected behavior
Should work well
### Environment info
6x A100 SXM, Linux | closed | 2023-11-19T04:21:16Z | 2024-01-27T17:14:20Z | https://github.com/huggingface/datasets/issues/6435 | [] | kopyl | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 519 | AttributeError: module 'umap' has no attribute 'UMAP' on Windows 10 | Not sure if it is a platform-specific problem; all umap imports would need to be changed to this format:
```python
import umap.umap_ as umap
```
to resolve this error, which comes from multiple files:
```
AttributeError: module 'umap' has no attribute 'UMAP'
```
| closed | 2020-09-03T01:31:28Z | 2020-09-04T00:01:31Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/519 | [] | lawrence124 | 4 |
jupyter/nbgrader | jupyter | 1,622 | upgrade nbclassic to version 0.4.0 | `nbclassic` in version `0.4.0` breaks `nbgrader` with labextensions:
- it seems that the `mathjax-url` setting of the handler is not populated anymore.
- it breaks the installation of server extensions in dev mode (maybe only if `notebook<7` is also installed).
At least the first issue should be fixed, as `notebook<7` is not supposed to be supported anymore. | closed | 2022-07-04T22:26:55Z | 2023-03-24T10:13:55Z | https://github.com/jupyter/nbgrader/issues/1622 | [] | brichet | 3 |
tfranzel/drf-spectacular | rest-api | 1,077 | These three interfaces are accessible in a development environment, but not in an environment deployed through docker | **Describe the bug**
```python
swagger_url = [
    url('^schema/', SpectacularAPIView.as_view(api_version='v1'), name='schema'),
    url('^api-ui/', SpectacularSwaggerView.as_view(url_name='schema'), name='swagger-ui'),
    url('^api-doc/', SpectacularRedocView.as_view(url_name='schema'), name='swagger-doc'),
]
```
These three interfaces are accessible in a development environment, but not in an environment deployed through docker
**To Reproduce**
**Expected behavior**
Expected normal access to all three interfaces. | closed | 2023-09-18T10:01:49Z | 2023-09-18T10:52:05Z | https://github.com/tfranzel/drf-spectacular/issues/1077 | [] | Cloverxue | 5 |
deepset-ai/haystack | nlp | 8,290 | Hosted vector stores: Vertex Search AI, AWS Knowledge Bases, Azure AI Search | I'm curious if there's a reason you've stayed away from the big-tech vector/doc search tools like Google Vertex Search AI, AWS Knowledge Bases, Azure AI Search.
Don't get me wrong: I love pgvector, etc. But the ease of use of 100% hosted services is sometimes helpful.
**Describe the solution you'd like**
Any chance you're adding these, or is langchain/llamaindex more appropriate for these abstractions?
| closed | 2024-08-26T13:36:12Z | 2024-08-27T08:31:55Z | https://github.com/deepset-ai/haystack/issues/8290 | [] | scosman | 2 |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 252 | Website Code | Hi,
I'm a huge fan of your walkthroughs and website. I've been thinking about trying to make a similar project for robotics and robot-learning specifically and we'd like to model it after your site. Do you share the nn.labmlai.ai code anywhere?
If you'd prefer, I'd also be happy to contribute tutorials to your website. My idea was that some would be robot learning (which definitely fits the theme) and some would be more classical methods in planning and control but implemented in Python as explainers.
Thanks! | closed | 2024-05-08T01:02:17Z | 2024-06-19T10:48:54Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/252 | [] | fishbotics | 1 |
streamlit/streamlit | streamlit | 10,598 | Avoid `StreamlitDuplicateElementId` error when the same widget is in the main area and sidebar | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Using the same widget in the main area and the sidebar results in a `StreamlitDuplicateElementId` error:
```python
import streamlit as st
st.button("Button")
st.sidebar.button("Button")
```

However, we could easily differentiate the automatically generated keys for these two elements, given that one of them is in the sidebar and the other isn't.
### Why?
Convenience: you don't need to assign a key, but it "just works".
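The current workaround is explicit keys, which works but is exactly the boilerplate this request would remove:

```python
import streamlit as st

# assigning distinct keys avoids the StreamlitDuplicateElementId error
st.button("Button", key="main_button")
st.sidebar.button("Button", key="sidebar_button")
```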
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-02T19:48:13Z | 2025-03-17T11:55:52Z | https://github.com/streamlit/streamlit/issues/10598 | [
"type:enhancement",
"good first issue",
"feature:st.sidebar"
] | sfc-gh-jrieke | 7 |
ResidentMario/missingno | pandas | 165 | Module loading error | Despite installing the package under the base environment in Anaconda, I am still getting a module-not-found error when importing it.
<img width="1049" alt="Screen Shot 2023-05-24 at 9 08 13 AM" src="https://github.com/ResidentMario/missingno/assets/49127145/25439824-474b-4770-9c34-e7cbd07b2b43">
| closed | 2023-05-24T14:08:23Z | 2023-07-05T02:18:13Z | https://github.com/ResidentMario/missingno/issues/165 | [] | ishita199615 | 1 |
kizniche/Mycodo | automation | 1,222 | Add Turkish Language Support | **Is your feature request related to a problem? Please describe.**
I can do the Turkish translation for you.
**Describe the solution you'd like**
[I can do the Turkish translation for you.](http://translate.kylegabriel.com:8080/projects/mycodo/)
Language can be added for translation.
| closed | 2022-08-29T13:23:36Z | 2022-11-22T16:20:35Z | https://github.com/kizniche/Mycodo/issues/1222 | [
"enhancement",
"Implemented"
] | ramazansancar | 4 |
nltk/nltk | nlp | 2,472 | cannot be skolemized | I am new to this.
I encountered the error `Exception: '{-P(x,y), Q(x,y,f(x,y))}' cannot be skolemized`. It comes from this:
```python
x1 = Clause([-P(x,y),Q(x,y,f(x,y))])
```
Am I encoding this clause correctly? | closed | 2019-12-03T21:13:55Z | 2019-12-03T21:32:42Z | https://github.com/nltk/nltk/issues/2472 | [] | daleangus | 0 |
axnsan12/drf-yasg | rest-api | 607 | How to get filter/ordering params for custom action. | Hi,
Is there something I can put in the `@swagger_auto_schema` decorator so my custom action has the ViewSet's filter/ordering fields documented in Swagger, similar to how it is generated automatically for the list endpoint from the `ListModelMixin`? Right now I'm having to pass them all through `manual_parameters`.
For example:
```python
class MyView(viewsets.GenericViewSet, mixins.ListModelMixin):
    queryset = [some queryset]
    serializer_class = [some serializer]
    filter_backends = [DjangoFilterBackend]
    filterset_class = MyFilterSetClass
    ordering_fields = ['fields....']

    @swagger_auto_schema(operation_description='Some custom action',
                         responses={status.HTTP_200_OK: 'Ok',
                                    status.HTTP_404_NOT_FOUND: 'Not found.'})
    @action(methods=['get'], detail=True)
    def my_custom_action(self, request, pk=None):
        queryset = self.filter_queryset(self.get_queryset())
```
 | open | 2020-06-29T16:44:49Z | 2025-03-07T12:14:03Z | https://github.com/axnsan12/drf-yasg/issues/607 | [
"triage"
] | adl-asi | 1 |
python-arq/arq | asyncio | 471 | Too many Redis calls happen randomly | From time to time we notice that ARQ starts to produce a lot of calls to Redis. It consumes a lot of connections and puts a lot of load on Redis
Here is how it looks during the incident:
<img width="336" alt="image" src="https://github.com/user-attachments/assets/c8a1aab4-d55c-4d35-beac-a1cee26505d0">
And most of the time it looks like this:
<img width="338" alt="image" src="https://github.com/user-attachments/assets/99f3be56-bfdc-4db5-a6be-2764d758d539">
It feels like it starts to attempt to execute jobs over and over nonstop. I think this behaviour could happen when you have `queue_read_limit` lower than max jobs on all servers, but that's not the case for us.
We're using:
```
python==3.11.3
arq==0.26.0
redis==4.6.0
```
Unfortunately, I can't reproduce the issue locally, but it happens a couple times a month on prod | open | 2024-07-19T09:54:57Z | 2024-07-19T10:02:32Z | https://github.com/python-arq/arq/issues/471 | [] | SlavaSkvortsov | 0 |
plotly/dash-table | dash | 400 | Thousands separator not working with format.locale | #### Issue description
The `format.locale` on Dash table version 3.6.0 is not working correctly.
#### Steps to reproduce the issue
Using dash-table==3.6.0 and the example code from the documentation on https://dash.plot.ly/datatable as a basis I added `'format': {'locale': {'group': '.', 'decimal': ','}}`
```python
import dash
import dash_table
import pandas as pd
import locale

locale.setlocale(locale.LC_ALL, 'de_DE.UTF-8')

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/solar.csv')

app = dash.Dash(__name__)

app.layout = dash_table.DataTable(
    id='table',
    columns=[{"name": i, "id": i, "type": "numeric", 'format': {'locale': {'group': '.', 'decimal': ','}}} for i in df.columns],
    data=df.to_dict("rows"),
)

if __name__ == '__main__':
    app.run_server(debug=True)
```
#### What's the expected result?
The numbers should have a thousands separator.
#### What's the actual result?
The thousands separator is not showing.
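For what it's worth, my current understanding is that `locale` only defines the symbols, and grouping itself has to be enabled through a d3-format specifier. Hedged, this is just what I'm experimenting with:

```python
columns = [{
    "name": i, "id": i, "type": "numeric",
    # assumption: ',' in the d3-format specifier turns grouping on,
    # while locale supplies '.' as the group symbol
    "format": {"specifier": ",", "locale": {"group": ".", "decimal": ","}},
} for i in df.columns]
```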
#### Additional information
Also symbols seem not to work. | open | 2019-03-20T14:06:29Z | 2019-09-30T10:35:50Z | https://github.com/plotly/dash-table/issues/400 | [
"dash-type-question"
] | wild-thomas | 4 |
onnx/onnx | deep-learning | 6,720 | Why is the output of the ONNX MatMul node never the same as what PyTorch gives? | # Why is the output of the ONNX MatMul node never the same as what PyTorch gives?
### Question
The output of the ONNX MatMul node is never the same as what PyTorch gives, whether during CPU inference or GPU inference. I've tried a lot of different approaches to test the situation.
### Further information
I wrote a simple linear layer in torch and exported it as an ONNX model. After that, I could never get the same output from ONNX as what my torch model gives.
```python
import numpy as np
import onnx
import onnxruntime as ort
import torch
import torch.nn as nn
import torch.onnx

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc = nn.Linear(5, 7, bias=False)

    def forward(self, x):
        return self.fc(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

for type in [torch.float32, torch.float64]:
    model = SimpleModel().to(type).to(device)

    # torch output
    x = torch.randn(2, 3, 5).to(type).to(device)
    model.eval()
    with torch.no_grad():
        outtorch = model(x)

    # numpy matmul
    with torch.no_grad():
        for name, param in model.named_parameters():
            numpy_weight = param.data
    out_numpy = np.matmul(np.array(x.cpu()), np.array(numpy_weight.cpu()).T)

    # export onnx model
    onnx_model_path = "simple_model.onnx"
    torch.onnx.export(model,
                      x,
                      onnx_model_path,
                      verbose=False,
                      input_names=['input'],
                      output_names=['output'],
                      opset_version=13)

    # load onnx model
    onnx_model = onnx.load(onnx_model_path)

    # check
    for initializer in onnx_model.graph.initializer:
        onnx_dtype = onnx.TensorProto.DataType.Name(initializer.data_type)
        print(f"Dtype: {onnx_dtype}")

    onnx_session_cpu = ort.InferenceSession(onnx_model_path)

    # format
    input_data = np.array(x.cpu())
    print(input_data.dtype)
    input_name = onnx_session_cpu.get_inputs()[0].name
    output_name = onnx_session_cpu.get_outputs()[0].name

    # onnx cpu inference
    onnx_output_cpu = onnx_session_cpu.run([output_name], {input_name: input_data})

    # onnx gpu inference
    options = ort.SessionOptions()
    options.enable_cpu_mem_arena = False
    options.enable_mem_pattern = False
    options.enable_mem_reuse = False
    options.intra_op_num_threads = 1
    cuda_provider_options = {'arena_extend_strategy': 'kSameAsRequested'}
    ort_session_gpu = ort.InferenceSession(onnx_model_path, sess_options=options, providers=[('CUDAExecutionProvider', cuda_provider_options)])
    onnx_output_gpu = ort_session_gpu.run([output_name], {input_name: input_data})

    print("onnx cpu vs onnx gpu: ", np.array_equal(onnx_output_cpu, onnx_output_gpu))
    print("onnx cpu vs torch (gpu): ", np.array_equal(onnx_output_cpu, np.array(outtorch.cpu())))
    print("onnx gpu vs torch (gpu): ", np.array_equal(np.array(outtorch.cpu()), onnx_output_gpu))
    print("numpy matmul vs torch (gpu)", np.array_equal(np.array(outtorch.cpu()), out_numpy))
```
The results are:
```
Dtype: FLOAT
float32
onnx cpu vs onnx gpu: False
onnx cpu vs torch (gpu): False
onnx gpu vs torch (gpu): False
numpy matmul vs torch (gpu) True
Dtype: DOUBLE
float64
onnx cpu vs onnx gpu: True
onnx cpu vs torch (gpu): False
onnx gpu vs torch (gpu): False
numpy matmul vs torch (gpu) True
```
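For completeness, comparing with a tolerance instead of exact bitwise equality is the next thing I checked; this is just `np.allclose` with explicit tolerances, nothing model-specific:

```python
# bitwise equality is a very strict test for float32 matmul;
# this checks whether the results agree within normal rounding error
print(np.allclose(onnx_output_cpu[0], np.array(outtorch.cpu()), rtol=1e-5, atol=1e-6))
```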
What's more, the output of an nn.Conv layer is the same as the corresponding ONNX model's output. So far the problem only occurs with the Linear layer. It's quite strange because the two operations, Conv and Linear, aren't that different. | closed | 2025-02-21T08:35:38Z | 2025-02-22T22:55:37Z | https://github.com/onnx/onnx/issues/6720 | [
"question",
"topic: runtime"
] | JNR000 | 1 |
roboflow/supervision | computer-vision | 1,019 | Bounding box and label annotations do not appear on resulting image | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I'm currently unable to get annotations (bounding box and label) to show up in the resulting image, with no error logs.
This is what the code looks like for these annotations (note I have tried to debug by varying parameters as much as I could):
```python
def _draw_bounding_boxes(
img: NDArray[np.single], detections: Detections, labels: list[str]
) -> NDArray[np.single]:
print(detections)
bounding_box_annotator = sv.BoundingBoxAnnotator(
thickness=4, color_lookup=ColorLookup.INDEX
)
label_annotator = sv.LabelAnnotator(text_thickness=4, text_scale=2)
# Must use ".copy()" to avoid "provided NumPy array marked as readonly"
# error.
annotated_img = img.copy()
annotated_img: NDArray[np.single] = bounding_box_annotator.annotate( # type: ignore
scene=annotated_img, detections=detections
)
annotated_img = label_annotator.annotate( # type: ignore
scene=annotated_img, detections=detections, labels=labels
)
return annotated_img
```
I am able to draw zones with the `PolygonZone` annotator:
```python
def _zone_roi_to_zone_object(
zone_roi: RegionOfInterest, img_width: int, img_height: int
) -> Zone:
zone_ndarray = _zone_roi_to_supervision_ndarray(zone_roi, img_width, img_height)
polygon_zone = sv.PolygonZone(
polygon=zone_ndarray,
frame_resolution_wh=(img_width, img_height),
)
polygon_annotator = sv.PolygonZoneAnnotator(
zone=polygon_zone, color=sv.Color(255, 255, 255), thickness=4
)
return Zone(
zone=polygon_zone,
annotator=polygon_annotator,
)
def _draw_zones(
img: NDArray[np.single],
detections: Detections,
zone_rois: list[RegionOfInterest],
) -> NDArray[np.single]:
(width, height) = (img.shape[1], img.shape[0])
zones: list[Zone] = list(
map(lambda x: _zone_roi_to_zone_object(x, width, height), zone_rois)
)
# Must use ".copy()" to avoid "provided NumPy array marked as readonly"
# error.
annotated_img = img.copy()
for zone in zones:
zone_presence = cast(
NDArray[np.bool_], zone.zone.trigger(detections) # type: ignore
)
annotated_img: NDArray[np.single] = zone.annotator.annotate( # type: ignore
scene=annotated_img, label=f"Count: {len(zone_presence)}"
)
return annotated_img
```
I am defining my `Detections` object manually. This is what it looks like right before being used for the annotations:
```python
Detections(
xyxy=array([
[ 362, 83, 400, 117],
[1095, 314, 1205, 406],
[ 284, 82, 326, 123],
[ 283, 205, 358, 281],
[ 577, 235, 640, 323],
[ 407, 35, 435, 62]
]),
    mask=None,
confidence=array([
0.87252545, 0.7196112 , 0.6872527 , 0.62087387, 0.4653169, 0.42191458
], dtype=float32),
class_id=array([0, 0, 0, 0, 0, 0]),
tracker_id=None,
data={}
)
```
I have tried modifying `xyxy` to be normalized coordinates, ensuring the types of each `ndarray` passed to `Detections` are adequate, and so on:
```python
sv_detections = sv.Detections(
xyxy=np.array(detections_xyxy, dtype=np.int_),
class_id=np.array(detections_class_ids, dtype=np.int_),
confidence=np.array(detections_confidence, dtype=np.single),
)
```
Do you guys have any idea what the issue could be?
I'm not sure what to try next as there are no error logs.
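In case it helps triage, this is the minimal standalone snippet I plan to compare against (hedged: the uint8 canvas and float32 boxes are my assumptions, not something from the docs):

```python
import numpy as np
import supervision as sv

image = np.zeros((480, 640, 3), dtype=np.uint8)
detections = sv.Detections(
    xyxy=np.array([[100, 100, 200, 200]], dtype=np.float32),
    class_id=np.array([0]),
    confidence=np.array([0.9], dtype=np.float32),
)
annotated = sv.BoundingBoxAnnotator(thickness=4).annotate(
    scene=image.copy(), detections=detections
)
```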
### Additional
_No response_ | closed | 2024-03-18T17:42:28Z | 2024-03-19T10:46:34Z | https://github.com/roboflow/supervision/issues/1019 | [
"duplicate",
"question"
] | marcospgp | 3 |
proplot-dev/proplot | data-visualization | 3 | Legend label size not mutable by rcparams | Calling ax.format(rc_kw={'legend.fontsize': int}) does not affect the font size of labels within the legend, and in fact overrides 'axes.labelsize', affecting the size of tick labels. However, other rcparams for the legend (e.g., legend.handlelength) do work. Because of this, I anticipate this is a `proplot` issue and not a `matplotlib` issue.
Sample code:
```python
import numpy as np
import proplot as plot

def _set_aeshetics(ax):
    rc_kw = {'axes.labelsize': 8,
             'legend.fontsize': 20,
             'legend.handlelength': 6,
             }
    ax.format(rc_kw=rc_kw)

f, ax = plot.subplots(axwidth=6, aspect=5, bottomlegend=True)
x = np.random.rand(100,)
p = ax.plot(x, label='signal')
_set_aeshetics(ax)
f.bottompanel.legend([p])
```
Note that 'legend.fontsize' doesn't affect the font size of labels within the legend, but actually blows up the tick labels to huge sizes. However, 'legend.handlelength' will stretch out the handle graphics.
---
Also, something internal to `proplot` blocks outside calls from managing this, e.g.:
```python
plt.legend([p], prop={'size': 24})
``` | closed | 2019-01-10T22:01:21Z | 2019-09-14T21:22:52Z | https://github.com/proplot-dev/proplot/issues/3 | [
"bug"
] | bradyrx | 7 |
dask/dask | pandas | 11,112 | Add support for `pip install dask[jobqueue]` | It might be worthwhile to add support for doing `pip install dask[jobqueue]` by adding the dependency to `[project.optional-dependencies]` in `pyproject.toml`. Additionally, it could be nice to have this tested in some integration CI run. | closed | 2024-05-09T06:23:02Z | 2024-05-21T15:01:42Z | https://github.com/dask/dask/issues/11112 | [
"discussion",
"tests"
] | Andrew-S-Rosen | 4 |
encode/databases | sqlalchemy | 8 | Don't require explicit `await database.connect()` | We should lazily establish the connection pool as needed.
| closed | 2019-02-07T17:14:10Z | 2020-05-01T17:05:33Z | https://github.com/encode/databases/issues/8 | [
"clean up"
] | tomchristie | 6 |
ray-project/ray | tensorflow | 51,214 | [Core] Ray Core tasks tutorial does not work. Error msg: `Error Type: WORKER_DIED` | ### Description
# Issue with Ray Remote Functions Tutorial Example
**1. Severity of the issue: (select one)**
[x] Medium: Significantly affects my productivity but can find a workaround.
**2. Environment:**
- Ray version: 2.42.0
- Python version: 3.9.21
- OS: Ubuntu 22.04
- Cloud/Infrastructure: N/A
- Other libs/tools (if relevant): N/A
**3. What happened vs. what you expected:**
- Expected: All remote functions in the official Ray tasks tutorial should execute successfully when submitted as a Ray job
- Actual: Only functions with explicit `ray.get()` calls complete successfully; tasks without `ray.get()` fail in the dashboard
## Problem Description
I'm following the official Ray tutorial for remote functions (https://docs.ray.io/en/releases-2.42.0/ray-core/tasks.html), but the example doesn't work as expected when submitted as a Ray job. The dashboard shows that `my_function` runs successfully, but all four `slow_function` tasks fail. (Error Type: WORKER_DIED
Job finishes (1d000000) as driver exits. Marking all non-terminal tasks as failed.)

Here's the tutorial code I'm running:
```python
import ray
import time

# A regular Python function.
def normal_function():
    return 1

# By adding the `@ray.remote` decorator, a regular Python function
# becomes a Ray remote function.
@ray.remote
def my_function():
    return 1

# To invoke this remote function, use the `remote` method.
# This will immediately return an object ref (a future) and then create
# a task that will be executed on a worker process.
obj_ref = my_function.remote()

# The result can be retrieved with ``ray.get``.
assert ray.get(obj_ref) == 1

@ray.remote
def slow_function():
    time.sleep(10)
    return 1

# Ray tasks are executed in parallel.
# All computation is performed in the background, driven by Ray's internal event loop.
for _ in range(4):
    # This doesn't block.
    slow_function.remote()
```
I'm running it with:
```
RAY_ENABLE_RECORD_ACTOR_TASK_LOGGING=1 RAY_ADDRESS='http://xxx.xxx.xxx.xxx:8265' ray job submit --no-wait --working-dir . -- python ray_tutor/tasks.py
```
**Important observation**: Only when I modify the code to use `ray.get()` to collect the results from the slow functions does the dashboard show all tasks running successfully:
```python
# Modified version that works
refs = [slow_function.remote() for _ in range(4)]
ray.get(refs) # Wait for all tasks to complete
```
I believe this is confusing for new users following the tutorial. The example code suggests these remote tasks will run in the background, but they're failing silently when the program exits before they complete.
## Questions
1. Is this the expected behavior?
2. Is there a way to ensure background tasks complete without explicitly calling `ray.get()`?
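Regarding question 2, the closest I've found is `ray.wait`, which blocks without fetching results. A hedged sketch:

```python
refs = [slow_function.remote() for _ in range(4)]
# Block until every task finishes, without transferring the results
# to the driver the way ray.get() does.
ready, not_ready = ray.wait(refs, num_returns=len(refs))
```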
Thank you for your help!
### Link
_No response_ | open | 2025-03-10T12:12:00Z | 2025-03-10T12:12:00Z | https://github.com/ray-project/ray/issues/51214 | [
"triage",
"docs"
] | jankinf | 0 |
JaidedAI/EasyOCR | pytorch | 936 | AttributeError: 'numpy.float64' object has no attribute 'lower' | Hi.
I am trying to train a model. I have created a dataset as required and I tried to run the training script.
This is the result:
```
File "Downloads/EasyOCR-master/trainer/start_train.py", line 30, in <module>
train(opt, amp=False)
File "Downloads/EasyOCR-master/trainer/train.py", line 40, in train
train_dataset = Batch_Balanced_Dataset(opt)
File "Downloads/EasyOCR-master/trainer/dataset.py", line 56, in __init__
_dataset, _dataset_log = hierarchical_dataset(root=opt.train_data, opt=opt, select_data=[selected_d])
File "Downloads/EasyOCR-master/trainer/dataset.py", line 132, in hierarchical_dataset
dataset = OCRDataset(dirpath, opt)
File "Downloads/EasyOCR-master/trainer/dataset.py", line 164, in __init__
if re.search(out_of_char, label.lower()):
AttributeError: 'numpy.float64' object has no attribute 'lower'
```
I found that this is correlated with my dataset containing only numbers (so the labels.csv contains only numbers). In fact, if I modify my dataset and add some letters, the error is gone.
But I don't need to add letters; my dataset is composed of numbers only.
I think I need to modify dataset.py in some way.
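For example, something like coercing the label to a string before the regex check. This is a hedged sketch, since I haven't confirmed exactly where the labels are read:

```python
# trainer/dataset.py, around line 164 (hypothetical patch):
# pandas parses all-numeric labels as floats, so force str first
label = str(label)
if re.search(out_of_char, label.lower()):
    ...
```

Alternatively, reading labels.csv with `dtype=str` would probably avoid the float parsing in the first place.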
Furthermore, for some reason, when I trained another model some weeks ago with a similar dataset containing only numbers, I didn't have this error. I don't know what happened in the meantime. I only modified some model parameters in `opt` and nothing more.
Thanks. | closed | 2023-01-25T09:14:06Z | 2024-02-13T11:07:46Z | https://github.com/JaidedAI/EasyOCR/issues/936 | [] | proclaim5584 | 1 |
graphistry/pygraphistry | jupyter | 266 | [FEA] tree layouts should support a planar layout mode | **Is your feature request related to a problem? Please describe.**
When plotting tree data, edges sometimes intersect. Ex:
```python
g = (g
     .get_degrees()
     .tree_layout(
         level_sort_values_by='degree',
         level_sort_values_by_ascending=False,
         level_align='center',
         vertical=True,
         ascending=True)
     .settings(url_params={'bg': '%23' + '000000', 'edgeCurvature': 0.05, 'edgeInfluence': 1.5, 'pointSize': 0.5})
     .layout_settings(play=0, locked_x=True)
)
return g.plot(render=False, memoize=False)
```
On planar graphs (ex: trees), this should be avoidable.
**Describe the solution you'd like**
The default should be something like sort-by-parent-position (`level_sort_mode='parent'`?), which can be used if `level_sort_values_by` is not set
**Describe alternatives you've considered**
* radial versions of the same
* implement / embed reingold tilford (tidier) (ex: https://hci.stanford.edu/courses/cs448b/f09/lectures/CS448B-20091021-GraphsAndTrees.pdf)
| open | 2021-10-05T03:19:44Z | 2021-10-05T03:25:23Z | https://github.com/graphistry/pygraphistry/issues/266 | [
"enhancement",
"good-first-issue"
] | lmeyerov | 0 |
plotly/dash-component-boilerplate | dash | 114 | internal/main/run_main_module.js:17:47 code: 'MODULE_NOT_FOUND', | Hi!
I am getting this error following the tutorial:
```
Hash: 2f30a0133b3ea5344684
Version: webpack 4.36.1
Time: 2434ms
Built at: 12/21/2020 1:39:28 PM
Asset Size Chunks Chunk Names
web.min.js 3.15 KiB 0 [emitted] main
web.min.js.map 7.41 KiB 0 [emitted] main
Entrypoint main = web.min.js web.min.js.map
[0] external "PropTypes" 42 bytes {0} [built]
[1] external "React" 42 bytes {0} [built]
[2] ./src/lib/index.js + 1 modules 5.14 KiB {0} [built]
| ./src/lib/index.js 146 bytes [built]
| + 1 hidden module
Executing: venv/bin/python -m dash.development.component_generator ./src/lib/components web -p package-info.json --jl-prefix '' --r-prefix ''
internal/modules/cjs/loader.js:883
throw err;
^
Error: Cannot find module '/Users/jonathanmorenon/OneDrive/Freelance/Data'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
at Function.Module._load (internal/modules/cjs/loader.js:725:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Error generating metadata in web (status=1)
post_gen_project command failed: venv/bin/python -m dash.development.component_generator ./src/lib/components web -p package-info.json --jl-prefix '' --r-prefix ''
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
```
I am on a Mac computer with Node v14.15.3, npm 6.14.9, and Python 3.7.6. Also getting the same error with a PC with node V12.10.0, npm 6.10.3, and python 3.8.3. | closed | 2020-12-21T19:06:30Z | 2022-05-14T21:19:15Z | https://github.com/plotly/dash-component-boilerplate/issues/114 | [] | jmoreno127 | 11 |
SciTools/cartopy | matplotlib | 1,549 | Support for xkcd style | It would be nice if `add_geometries()` could play well with matplotlib's `with plt.xkcd()` context manager. I'm currently resorting to adding `.exterior` and all of `.interiors` individually to my plot so it will get styled correctly.
| open | 2020-05-01T21:58:44Z | 2020-05-01T21:58:44Z | https://github.com/SciTools/cartopy/issues/1549 | [] | yportier | 0 |
pyqtgraph/pyqtgraph | numpy | 2,269 | Why was clearPlots() changed to clear() in the PlotItem.plot source code? | When I upgraded pyqtgraph 0.11 to 0.12, I found that my application has a bug: the InfiniteLine is not visible. I then found that this is because I set 'clear=True' in PlotItem.plot, so I checked the source code and found that 'clearPlots' was changed to 'clear'.
I think 'clearPlots' is better than 'clear', because when I set 'clear=True' I want to clear just the plots and nothing else. The line is just like a tool; it's unnecessary to add a tool every time I plot data.
Of course, there must be a reason why you did this, so I want to ask what it is. | closed | 2022-04-22T03:36:08Z | 2022-07-14T05:30:52Z | https://github.com/pyqtgraph/pyqtgraph/issues/2269 | [
"cannot reproduce"
] | zhangxingjing | 2 |
unit8co/darts | data-science | 2,278 | learning more about covariates (finding relevant papers). Use with darts project |
I am working on a small project (forecasting electricity data). I am trying to write a report (planning to write a report is more accurate to say), and I am searching the common scientific databases (e.g., Google Scholar). In particular, I want to focus on the exploration of covariates (i.e., their effects on forecasting electricity data).
The issue is that I am not finding a good review or report that goes through covariates (I searched for other terms as well: independent variables, categorical features, exogenous variables).
I will of course reference your paper (since I plan on continuing to use darts):
Herzen, J., Lässig, F., Piazzetta, S. G., Neuer, T., Tafti, L., Raille, G., Pottelbergh, T. V., Pasieka, M., Skrodzki, A., Huguenin, N., Dumonal, M., Kościsz, J., Bader, D., Gusset, F., Benheddi, M., Williamson, C., Kosinski, M., Petrik, M., & Grosch, G. (2022). Darts: User-Friendly Modern Machine Learning for Time Series. Journal of Machine Learning Research, 23(124), 1–6. https://www.jmlr.org/papers/v23/21-1177.html
I saw that you always reference the paper for the model you implement in darts and that is really useful and good to see. So if I use that model I can read the paper and reference it as well!
I am just wondering if there are any pivotal papers relating to covariates that you have used or found? Either a flagpole paper or a good review. I looked through the references that you include in your own paper, but none mention covariates (at least explicitly).
I understand and would be OK if you close this issue, since it's not strictly darts-related. | closed | 2024-03-12T05:21:42Z | 2024-03-14T08:29:47Z | https://github.com/unit8co/darts/issues/2278 | [
"question"
] | Allena101 | 1 |
ultralytics/ultralytics | pytorch | 19,518 | Metrics all 0 after TensorRT INT8 export for mode val, only INT8 ONNX performs well | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I successfully exported my FP32 YOLOv8 OBB (s) model to FP16 and INT8. For FP16 I get nearly the same metric values as FP32, but the INT8 model performs very badly. My calibration set is 3699 images; I tried using the training set (18536 images) for calibration too, but the metrics all stay at 0. Different export batch sizes (`batch=1,8,16`) didn't help.
Update: The problem must be in the conversion from `ONNX` to `engine` format (see below). There must be a bug in the conversion process, which leads to 0 in all metrics when using the `engine` model.
Exporter Code:
```python
from ultralytics import YOLO
import argparse

def export_model(model, export_args):
    model.export(**export_args)

def main():
    parser = argparse.ArgumentParser(description='Export YOLOv8 OBB model to TensorRT with user-configurable parameters.')
    parser.add_argument('--model_path', type=str, required=True, help='Path to the trained YOLOv8 model (.pt file).')
    parser.add_argument('--export_fp16', type=bool, default=False, help='Export to FP16 TensorRT model.')
    parser.add_argument('--export_int8', type=bool, default=False, help='Export to INT8 TensorRT model.')
    parser.add_argument('--format', type=str, default='engine', help="Format to export to (e.g., 'engine', 'onnx').")
    parser.add_argument('--imgsz', type=int, default=640, help='Desired image size for the model input. Can be an integer for square images or a tuple (height, width) for specific dimensions.')
    parser.add_argument('--keras', type=bool, default=False, help='Enables export to Keras format for TensorFlow SavedModel, providing compatibility with TensorFlow serving and APIs.')
    parser.add_argument('--optimize', type=bool, default=False, help='Applies optimization for mobile devices when exporting to TorchScript, potentially reducing model size and improving performance.')
    parser.add_argument('--half', type=bool, default=False, help='Enables FP16 (half-precision) quantization, reducing model size and potentially speeding up inference on supported hardware.')
    parser.add_argument('--int8', type=bool, default=False, help='Activates INT8 quantization, further compressing the model and speeding up inference with minimal accuracy loss, primarily for edge devices.')
    parser.add_argument('--dynamic', type=bool, default=False, help='Allows dynamic input sizes for ONNX, TensorRT and OpenVINO exports, enhancing flexibility in handling varying image dimensions (enforced).')
    parser.add_argument('--simplify', type=bool, default=False, help='Simplifies the model graph for ONNX exports with onnxslim, potentially improving performance and compatibility.')
    parser.add_argument('--opset', type=int, default=None, help='Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. If not set, uses the latest supported version.')
    parser.add_argument('--workspace', type=int, default=None, help='Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance; use None for auto-allocation by TensorRT up to device maximum.')
    parser.add_argument('--nms', type=bool, default=False, help='Adds Non-Maximum Suppression (NMS) to the exported model when supported (see Export Formats), improving detection post-processing efficiency.')
    parser.add_argument('--batch', type=int, default=1, help="Batch size for export. For INT8 it's recommended using a larger batch like batch=8 (calibrated as batch=16))")
    parser.add_argument('--device', type=str, default='0', help="Device to use for export (e.g., '0' for GPU 0).")
    parser.add_argument('--data', type=str, default=None, help="Path to the dataset configuration file for INT8 calibration.")
    args = parser.parse_args()

    # Load the final trained YOLOv8 model
    model = YOLO(args.model_path, task='obb')

    export_args = {
        'format': args.format,
        'imgsz': args.imgsz,
        'keras': args.keras,
        'optimize': args.optimize,
        'half': args.half,
        'int8': args.int8,
        'dynamic': args.dynamic,
        'simplify': args.simplify,
        'opset': args.opset,
        'workspace': args.workspace,
        'nms': args.nms,
        'batch': args.batch,
        'device': args.device,
        'data': args.data,
    }

    if args.export_fp16:  # data argument isn't needed for FP16 exports since no calibration is required
        print('Exporting to FP16 TensorRT model...')
        fp16_args = export_args.copy()
        fp16_args['half'] = True
        fp16_args['int8'] = False
        export_model(model, fp16_args)
        print('FP16 export completed.')

    if args.export_int8:  # NOTE: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#enable_int8_c, for INT8 calibration, the kitti_bev.yaml val split with 3769 images is used.
        print('Exporting to INT8 TensorRT model...')
        int8_args = export_args.copy()
        int8_args['half'] = False
        int8_args['int8'] = True
        export_model(model, int8_args)
        print('INT8 export completed.\nThe calibration .cache which can be reused to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the batch value is changed drastically. In these circumstances, the existing .cache should be renamed and moved to a different directory or deleted entirely.')

    if not args.export_fp16 and not args.export_int8:
        print('No export option selected. Please specify --export_fp16 and/or --export_int8.')

if __name__ == '__main__':
    main()
```
Used export command:
```txt
python export_kitti_obb.py --model_path /home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run_94_Adam_88.8_87.2/weights/best.pt --export_int8 True --int8 True --dynamic=True --batch 1 --data /home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml
```
Validation script:
```python
from ultralytics import YOLO
model = YOLO('/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run_94_Adam_88.8_87.2/weights/best_1.engine', task='obb', verbose=False)
metrics = model.val(data='/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml', imgsz=640,
batch=16, save_json=False, save_hybrid=False, conf=0.001, iou=0.5, max_det=300, half=False,
device='0', dnn=False, plots=False, rect=False, split='val', project=None, name=None)
```
Validation output with INT8 TensorRT:

Validation output with INT8 ONNX:

Thank you very much!
### Additional
_No response_ | open | 2025-03-04T17:11:26Z | 2025-03-14T01:33:53Z | https://github.com/ultralytics/ultralytics/issues/19518 | [
"question",
"OBB",
"exports"
] | Petros626 | 19 |
microsoft/JARVIS | pytorch | 41 | Does this support multi-GPU? | I don't have a single GPU with >48 GB of memory.... | closed | 2023-04-05T10:42:56Z | 2023-04-05T10:44:38Z | https://github.com/microsoft/JARVIS/issues/41 | [] | laoda512 | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,658 | Help! | I have a question regarding training my CycleGAN model. My images have a height of 640 pixels and a width of 480 pixels. What adjustments should I make to my training parameters to accommodate these image dimensions? | open | 2024-05-21T01:27:43Z | 2024-05-21T01:44:57Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1658 | [] | izhaolinger | 1 |
pyeve/eve | flask | 1060 | $regex doesn't work | I found a solution for case-insensitive search at https://stackoverflow.com/a/31027960, but it doesn't work for me. I receive an error something like this:
```xml
<resource>
<_error>
<code>400</code>
<message>
The browser (or proxy) sent a request that this server could not understand.
</message>
</_error>
<_status>ERR</_status>
</resource>
```
 | closed | 2017-09-05T20:06:34Z | 2017-09-25T07:30:48Z | https://github.com/pyeve/eve/issues/1060 | [] | athlonUA | 4 |
vitalik/django-ninja | pydantic | 618 | Implementing object-esque query params | (I suspect this is related to https://github.com/vitalik/django-ninja/issues/86.)
I want to implement arbitrary key/value filtering. A toy example of this:
```python
class ObjectFilter(Schema):
    metadata: dict[str, str]  # `/objects?metadata[foo]=bar`

@router.get('/')
def list_objects(request, filters: ObjectFilter = Query(...)):
    ...
```
This doesn't appear to work out of the box, and I've tried various approaches. I suspect the issue is that the 'key' presented to django-ninja is `metadata[foo]`, but I can't actually figure out where to plug into the framework to override the translation of the query parameters into the parsed input that's passed to the Schema. Any guidance? Thanks!
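For now I've fallen back to parsing the bracketed keys out of the raw query string myself. A hedged sketch with plain Django `request.GET`, bypassing the Schema entirely:

```python
@router.get('/')
def list_objects(request):
    # collect metadata[<key>]=<value> pairs by hand
    metadata = {
        k[len("metadata["):-1]: v
        for k, v in request.GET.items()
        if k.startswith("metadata[") and k.endswith("]")
    }
    ...
```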
| open | 2022-12-02T19:02:31Z | 2023-01-14T13:29:02Z | https://github.com/vitalik/django-ninja/issues/618 | [] | jmduke | 5 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 724 | Unable to unzip VoxCeleb1 and VoxCeleb2 | I followed the instructions to download the datasets - VoxCeleb1 and VoxCeleb2 - and concatenated the files using "cat vox1_dev* > vox1_dev_wav.zip". However, I get the following error when I try to unzip it:
```
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Archive contains ‘\020\313¨/b\374!8\320\373h’ where numeric off_t value expected
tar: Archive contains ‘V\001W\216A\306R\201\373\231\020\311’ where numeric off_t value expected
tar: Archive contains ‘e\036\363\257N*\225\330[?\242\034’ where numeric off_t value expected
tar: Archive contains ‘\272\r\227:jτ\335CT\277G’ where numeric off_t value expected
tar: Exiting with failure status due to previous errors
```
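For what it's worth, a guess based on the `This does not look like a tar archive` line: the concatenated file is a ZIP archive, so `tar` will always reject it; `unzip` (or Python's stdlib `zipfile`, sketched below) should extract it, provided every `vox1_dev*` part downloaded completely.
```python
# Hedged sanity check: verify and extract the concatenated archive as a ZIP.
import zipfile

with zipfile.ZipFile("vox1_dev_wav.zip") as zf:
    bad = zf.testzip()  # name of the first corrupt member, or None if intact
    if bad:
        print(f"corrupt member: {bad}; re-download the affected part")
    else:
        zf.extractall("vox1/")
```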
Kindly let me know how the datasets are supposed to be downloaded and used. Thank you! | closed | 2021-04-05T12:04:18Z | 2021-04-09T19:32:52Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/724 | [] | namanshah971 | 3 |
SYSTRAN/faster-whisper | deep-learning | 539 | Is there any way we can get an example of how to translate? | I only see examples pertaining to transcription, but can't faster-whisper also be used for translation, with parameters such as beam size, the source language, etc. (all the parameters in the WhisperModel class)?
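A minimal sketch of such a call, assuming the standard `WhisperModel.transcribe` signature (where `task="translate"` translates into English):
```python
# Hedged sketch: transcribe() accepts task="translate" plus the usual decoding
# options such as beam_size and an explicit source language.
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe(
    "audio.mp3",
    task="translate",  # translate into English instead of transcribing
    language="fr",     # source language; omit to auto-detect
    beam_size=5,
)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```
| open | 2023-11-03T15:35:14Z | 2024-07-21T05:21:04Z | https://github.com/SYSTRAN/faster-whisper/issues/539 | [] | BBC-Esq | 6 |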
inducer/pudb | pytest | 160 | AttributeError: 'FileSourceCodeProvider' object has no attribute 'source_hscroll_start' | Similar to #116. I'm using **pudb 2015.4.1** (via `pip`).
``` py
(py)[timeless@gcc2-power8 crew]$ hg push -r . --debugger --config ui.debugger=pudb
entering debugger - type c to continue starting hg or h for help
Traceback (most recent call last):
File "/home/timeless/hg/crew/mercurial/dispatch.py", line 187, in _runcatch
debugtrace[debugger]()
File "/usr/lib64/python2.7/contextlib.py", line 21, in __exit__
def __exit__(self, type, value, traceback):
File "/usr/lib64/python2.7/bdb.py", line 51, in trace_dispatch
return self.dispatch_call(frame, arg)
File "/usr/lib64/python2.7/bdb.py", line 80, in dispatch_call
self.user_call(frame, arg)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 350, in user_call
self.interaction(frame)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 339, in interaction
show_exc_dialog=show_exc_dialog)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 2057, in call_with_ui
return f(*args, **kwargs)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 2267, in interaction
self.event_loop()
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 2225, in event_loop
canvas = toplevel.render(self.size, focus=True)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/container.py", line 1083, in render
focus and self.focus_part == 'body')
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/container.py", line 2085, in render
focus = focus and self.focus_position == i)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/listbox.py", line 475, in render
focus_canvas = focus_widget.render((maxcol,), focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/source_view.py", line 40, in render
hscroll = self.dbg_ui.source_hscroll_start
AttributeError: 'FileSourceCodeProvider' object has no attribute 'source_hscroll_start'
** unknown exception encountered, please report by visiting
** https://mercurial-scm.org/wiki/BugTracker
** Python 2.7.8 (default, Jul 8 2015, 18:13:08) [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)]
** Mercurial Distributed SCM (version 3.6.2+737-95484fbd7ad1)
** Extensions loaded:
Traceback (most recent call last):
File "/home/timeless/hg/crew/hg", line 43, in <module>
mercurial.dispatch.run()
File "/home/timeless/hg/crew/mercurial/dispatch.py", line 54, in run
sys.exit((dispatch(request(sys.argv[1:])) or 0) & 255)
File "/home/timeless/hg/crew/mercurial/dispatch.py", line 118, in dispatch
ret = _runcatch(req)
File "/home/timeless/hg/crew/mercurial/dispatch.py", line 187, in _runcatch
debugtrace[debugger]()
File "/usr/lib64/python2.7/contextlib.py", line 21, in __exit__
def __exit__(self, type, value, traceback):
File "/usr/lib64/python2.7/bdb.py", line 51, in trace_dispatch
return self.dispatch_call(frame, arg)
File "/usr/lib64/python2.7/bdb.py", line 80, in dispatch_call
self.user_call(frame, arg)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 350, in user_call
self.interaction(frame)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 339, in interaction
show_exc_dialog=show_exc_dialog)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 2057, in call_with_ui
return f(*args, **kwargs)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 2267, in interaction
self.event_loop()
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/debugger.py", line 2225, in event_loop
canvas = toplevel.render(self.size, focus=True)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/container.py", line 1083, in render
focus and self.focus_part == 'body')
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/container.py", line 2085, in render
focus = focus and self.focus_position == i)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/listbox.py", line 475, in render
focus_canvas = focus_widget.render((maxcol,), focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File "/home/timeless/hg/py/lib/python2.7/site-packages/pudb/source_view.py", line 40, in render
hscroll = self.dbg_ui.source_hscroll_start
AttributeError: 'FileSourceCodeProvider' object has no attribute 'source_hscroll_start'
```
This code would work:
``` py
class NullSourceCodeProvider(SourceCodeProvider):
...
def get_lines(self, debugger_ui):
from pudb.source_view import SourceLine
return [
SourceLine(debugger_ui, "<no source code available>"),
...
]
```
This code doesn't work:
``` py
class FileSourceCodeProvider(SourceCodeProvider):
...
def get_lines(self, debugger_ui):
from pudb.source_view import SourceLine, format_source
if self.file_name == "<string>":
return [SourceLine(self, self.file_name)]
...
```
because `self` is the `FileSourceCodeProvider`, but the code was expecting `debugger_ui`.
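A minimal fix sketch, assuming `get_lines` keeps its current signature: pass the UI object to `SourceLine`, as the working `NullSourceCodeProvider` does:
``` py
class FileSourceCodeProvider(SourceCodeProvider):
    def get_lines(self, debugger_ui):
        from pudb.source_view import SourceLine
        if self.file_name == "<string>":
            # pass the UI object, not the provider, so SourceLine.render()
            # can read debugger_ui.source_hscroll_start
            return [SourceLine(debugger_ui, self.file_name)]
        ...  # rest unchanged
```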
| closed | 2015-12-30T07:34:52Z | 2015-12-31T03:37:38Z | https://github.com/inducer/pudb/issues/160 | [] | jsoref | 0 |
lepture/authlib | django | 30 | Allow bypassing of https check for dev purposes | Servers like Hydra support http mode for development purposes. It'd be great if there were some way to bypass the authlib.specs.rfc6749.InsecureTransportError check for local development. | closed | 2018-02-21T18:40:16Z | 2018-02-21T18:42:30Z | https://github.com/lepture/authlib/issues/30 | [] | ashic | 1 |
seleniumbase/SeleniumBase | pytest | 2,659 | Work with `pyautogui` and `seleniumbase` in a docker container | Hello,
Thank you for your work on this project. I'm trying to run `seleniumbase` and `pyautogui` together in a linux docker container and am running into issues. On using base `selenium`, it works fine but stops working when I switch it out for `seleniumbase`. Here are my two scripts.
With selenium: This opens a new tab and goes to duckduckgo.
```python
import pyautogui
import os
import platform
import Xlib.display
from time import sleep
from selenium import webdriver
from seleniumbase import SB, Driver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options as ChromeOptions
from sbvirtualdisplay import Display
ctrl_cmd = "ctrl" if platform.system() == "Linux" else "command"
display = None
if platform.system() == "Linux":
display = Display(visible=False, size=(1920, 1080))
display.start()
pyautogui._pyautogui_x11._display = Xlib.display.Display(os.environ["DISPLAY"])
options = ChromeOptions()
options.add_argument("--no-sandbox")
browser = webdriver.Chrome(options=options)
# browser = Driver()
browser.get("https://google.com/")
pyautogui.screenshot("at_google.png")
pyautogui.hotkey(ctrl_cmd, "t")
pyautogui.hotkey(ctrl_cmd, "l")
pyautogui.typewrite("https://duckduckgo.com/")
pyautogui.hotkey("return")
sleep(2)
pyautogui.screenshot("at_ddg.png")
print(browser.title)
browser.quit()
if display:
display.stop()
```
And with seleniumbase: This does not do anything.
```python
import pyautogui
import os
import platform
import Xlib.display
from time import sleep
from selenium import webdriver
from seleniumbase import SB, Driver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options as ChromeOptions
from sbvirtualdisplay import Display
ctrl_cmd = "ctrl" if platform.system() == "Linux" else "command"
display = None
if platform.system() == "Linux":
display = Display(visible=False, size=(1920, 1080))
display.start()
pyautogui._pyautogui_x11._display = Xlib.display.Display(os.environ["DISPLAY"])
with SB(xvfb=True) as sb:
sb.driver.get("https://google.com/")
pyautogui.screenshot("at_google.png")
pyautogui.hotkey(ctrl_cmd, "t")
pyautogui.hotkey(ctrl_cmd, "l")
pyautogui.typewrite("https://duckduckgo.com/")
pyautogui.hotkey("return")
sleep(2)
pyautogui.screenshot("at_ddg.png")
print(sb.driver.title)
sb.driver.quit()
if display:
display.stop()
```
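One hypothesis (unverified): `SB(xvfb=True)` starts its own virtual display, so the `pyautogui` instance bound earlier to the manually created display may be sending keystrokes to a different X server than the one the browser renders on. A sketch that rebinds `pyautogui` after SB starts, assuming SB exports its display via the `DISPLAY` environment variable:
```python
# Hypothesis sketch: drop the manual Display and rebind pyautogui to whatever
# DISPLAY SB's own xvfb exports, so keystrokes hit the same X server.
import os
import Xlib.display
import pyautogui
from seleniumbase import SB

with SB(xvfb=True) as sb:
    pyautogui._pyautogui_x11._display = Xlib.display.Display(os.environ["DISPLAY"])
    sb.driver.get("https://google.com/")
    pyautogui.hotkey("ctrl", "t")
```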
Could you please help me figure out what could be going on here? I'm happy to provide more details if anything else is needed.
**NOTE**: Both of these versions work fine locally on a mac (which would not use the virtual display afaik) so maybe it could be related to that? | closed | 2024-04-03T19:14:46Z | 2024-04-04T14:16:40Z | https://github.com/seleniumbase/SeleniumBase/issues/2659 | [
"external",
"workaround exists",
"not enough info"
] | chahak13 | 10 |
neuml/txtai | nlp | 368 | Move mkdocs dependencies from docs.yml to setup.py | Currently, mkdocs dependencies are only specified in the docs build action script. These should instead be included as dev dependencies in setup.py.
txtai developers should have mkdocs installed locally so they can review documentation before checking it in.
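For illustration, a hypothetical `setup.py` sketch (the plugin names beyond `mkdocs` itself are assumptions; the list should mirror whatever docs.yml currently installs):
```python
# Hypothetical sketch: ship the docs toolchain as an optional extra so
# developers can install it with `pip install txtai[dev]`.
from setuptools import find_packages, setup

setup(
    name="txtai",
    packages=find_packages(),
    extras_require={
        "dev": ["mkdocs", "mkdocs-material", "mkdocstrings[python]"],
    },
)
```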
| closed | 2022-10-17T17:20:30Z | 2022-10-17T17:23:21Z | https://github.com/neuml/txtai/issues/368 | [] | davidmezzetti | 0 |
thtrieu/darkflow | tensorflow | 885 | Output of last conv layer with 4096 size | I'm trying to get the outputs of the last convolutional layer, i.e. the 4096 high-level features of the images. Is there a way to get them?
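A hedged sketch of one way to fish intermediate activations out of the built graph (`sess` and `inp` are `TFNet` attributes; the tensor name and `preprocessed_batch` are placeholders you'd have to fill in):
```python
# Hedged sketch: list the graph ops to find the last conv layer's output name,
# then fetch that tensor directly with the session TFNet already holds.
from darkflow.net.build import TFNet

tfnet = TFNet({"model": "cfg/yolo.cfg", "load": "bin/yolo.weights", "threshold": 0.5})
for op in tfnet.sess.graph.get_operations():
    print(op.name)  # inspect this list to locate the layer you want
feat = tfnet.sess.graph.get_tensor_by_name("<last_conv>/output:0")  # placeholder name
features = tfnet.sess.run(feat, feed_dict={tfnet.inp: preprocessed_batch})  # placeholder batch
```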
| open | 2018-08-30T11:32:06Z | 2018-08-30T11:32:06Z | https://github.com/thtrieu/darkflow/issues/885 | [] | sgarbanti | 0 |
fastapi/fastapi | asyncio | 13,399 | Dependency Models created from Form input data are losing metadata (fields set) and are enforcing validation on default values. |
### Discussed in https://github.com/fastapi/fastapi/discussions/13380
<div type='discussions-op-text'>
<sup>Originally posted by **sneakers-the-rat** February 16, 2025</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
File: fastapi_defaults_bug.py
```python
import uvicorn
from typing import Annotated
from pydantic import BaseModel, Field
from fastapi import FastAPI, Form
class ExampleJsonModel(BaseModel):
sample_field_1: Annotated[bool, Field(default=True)]
sample_field_2: Annotated[bool, Field(default=False)]
sample_field_3: Annotated[bool, Field(default=None)]
sample_field_4: Annotated[str, Field(default=0)] # This is dangerous but can be used with a validator
class ExampleFormModel(BaseModel):
sample_field_1: Annotated[bool, Form(default=True)]
sample_field_2: Annotated[bool, Form(default=False)]
sample_field_3: Annotated[bool, Form(default=None)]
sample_field_4: Annotated[str, Form(default=0)] # This is dangerous but can be used with a validator
class ResponseSampleModel(BaseModel):
fields_set: Annotated[list, Field(default_factory=list)]
dumped_fields_no_exclude: Annotated[dict, Field(default_factory=dict)]
dumped_fields_exclude_default: Annotated[dict, Field(default_factory=dict)]
dumped_fields_exclude_unset: Annotated[dict, Field(default_factory=dict)]
app = FastAPI()
@app.post("/form")
async def form_endpoint(model: Annotated[ExampleFormModel, Form()]) -> ResponseSampleModel:
return ResponseSampleModel(
fields_set=list(model.model_fields_set),
dumped_fields_no_exclude=model.model_dump(),
dumped_fields_exclude_default=model.model_dump(exclude_defaults=True),
dumped_fields_exclude_unset=model.model_dump(exclude_unset=True)
)
@app.post("/json")
async def json_endpoint(model: ExampleJsonModel) -> ResponseSampleModel:
return ResponseSampleModel(
fields_set=list(model.model_fields_set),
dumped_fields_no_exclude=model.model_dump(),
dumped_fields_exclude_default=model.model_dump(exclude_defaults=True),
dumped_fields_exclude_unset=model.model_dump(exclude_unset=True)
)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
```
Test File: test_fastapi_defaults_bug.py
```python
import pytest
from fastapi.testclient import TestClient
from fastapi_defaults_bug import (
app,
ExampleFormModel,
ExampleJsonModel,
ResponseSampleModel
)
@pytest.fixture(scope="module")
def fastapi_client():
with TestClient(app) as test_client:
yield test_client
################
# Section 1: Tests on Form model -> no fastapi, pydantic model
################
def test_form_model_pydantic_only_defaults():
f_model = ExampleFormModel()
for field_name, field in f_model.model_fields.items():
assert getattr(f_model, field_name) == field.default
def test_form_model_pydantic_all_unset():
f_model = ExampleFormModel()
assert not f_model.model_fields_set
def test_form_model_pydantic_set_1():
    f_model = ExampleFormModel(sample_field_1=True)  # the field set here has the same value as its default
assert "sample_field_1" in f_model.model_fields_set
assert len(f_model.model_fields_set) == 1
def test_form_model_pydantic_set_2():
    f_model = ExampleFormModel(sample_field_1=True, sample_field_2=False)  # the fields set here have the same values as their defaults
assert "sample_field_1" in f_model.model_fields_set
assert "sample_field_2" in f_model.model_fields_set
assert len(f_model.model_fields_set) == 2
def test_form_model_pydantic_set_all():
f_model = ExampleFormModel(
sample_field_1=True,
sample_field_2=False,
sample_field_3=True,
sample_field_4=""
    )  # the fields set here could have values different from their defaults
assert not set(f_model.model_fields).difference(f_model.model_fields_set)
################
# Section 2: Same Tests of Form on Json model -> they are the same on different model
################
def test_json_model_pydantic_only_defaults():
j_model = ExampleJsonModel()
for field_name, field in j_model.model_fields.items():
assert getattr(j_model, field_name) == field.default
def test_json_model_pydantic_all_unset():
j_model = ExampleJsonModel()
assert not j_model.model_fields_set
def test_json_model_pydantic_set_1():
    j_model = ExampleJsonModel(sample_field_1=True)  # the field set here has the same value as its default
assert "sample_field_1" in j_model.model_fields_set
assert len(j_model.model_fields_set) == 1
def test_json_model_pydantic_set_2():
    j_model = ExampleJsonModel(sample_field_1=True, sample_field_2=False)  # the fields set here have the same values as their defaults
assert "sample_field_1" in j_model.model_fields_set
assert "sample_field_2" in j_model.model_fields_set
assert len(j_model.model_fields_set) == 2
def test_json_model_pydantic_set_all():
j_model = ExampleJsonModel(
sample_field_1=True,
sample_field_2=False,
sample_field_3=True,
sample_field_4=""
    )  # the fields set here could have values different from their defaults
assert not set(j_model.model_fields).difference(j_model.model_fields_set)
def test_form_json_model_share_same_default_behaviour():
f_model = ExampleFormModel()
j_model = ExampleJsonModel()
for field_name, field in f_model.model_fields.items():
assert getattr(f_model, field_name) == getattr(j_model, field_name)
################
# Section 3: Tests on Form model with fastapi
################
def test_submit_form_with_all_values(fastapi_client: TestClient):
form_content = {
"sample_field_1": "False",
"sample_field_2": "True",
"sample_field_3": "False",
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 4
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
def test_submit_form_with_not_all_values(fastapi_client: TestClient):
"""
    This test should pass but fails because fastapi preloads defaults and passes those values
    at model creation, losing the ability to know whether a field has been set.
:param fastapi_client:
:return:
"""
form_content = {
"sample_field_1": "False",
"sample_field_3": "False",
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 3 # test will fail here and below
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
def test_submit_form_with_no_values(fastapi_client: TestClient):
"""
    This test should pass but fails because fastapi preloads defaults and passes those values
    at model creation, so validation is enforced even on default values.
:param fastapi_client:
:return:
"""
form_content = {}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200 # test will fail here and below -> will raise 422
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 0
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
################
# Section 4: Tests on Json model with fastapi
################
def test_submit_json_with_all_values(fastapi_client: TestClient):
json_content = {
"sample_field_1": False,
"sample_field_2": True,
"sample_field_3": False,
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 4
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
def test_submit_json_with_not_all_values(fastapi_client: TestClient):
"""
    This test will pass, but the same does not happen with Form.
:param fastapi_client:
:return:
"""
json_content = {
"sample_field_1": False,
"sample_field_3": False,
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 3 # This time will not fail
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
def test_submit_json_with_no_values(fastapi_client: TestClient):
"""
    This test will pass, but the same does not happen with Form.
:param fastapi_client:
:return:
"""
json_content = {}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200 # This time will not fail
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 0
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
```
### Description
This is a generalized version of the issue reported in https://github.com/fastapi/fastapi/discussions/13380.
This issue does not affect JSON body data.
For models created from a form, default values are preloaded during the parsing phase and passed to the validator when the model is created.
1) This leads to a loss of information regarding which fields have been explicitly set, since default values are now considered as having been provided.
2) Consequently, validation is enforced on default values, which might not be the intended behavior and which differs from the behavior for JSON bodies.
### Operating System
macOS - Linux
### Operating System Details
_No response_
### FastAPI Version
0.115.8
### Pydantic Version
2.10.6
### Python Version
Python 3.11 - Python 3.13.1 | open | 2025-02-20T14:36:29Z | 2025-03-07T03:38:28Z | https://github.com/fastapi/fastapi/issues/13399 | [
"good first issue",
"question"
] | luzzodev | 9 |
deepinsight/insightface | pytorch | 1,785 | Importing numpy in setup.py file | Since version `0.3` of the library numpy is imported in the setup file of the python-package (https://github.com/deepinsight/insightface/blob/master/python-package/setup.py) this causes issues with creating requirements files and creating environments, as installation fails if numpy was not pre-installes. Is it possible to change it in future versions? | open | 2021-10-14T09:45:04Z | 2022-04-01T18:57:18Z | https://github.com/deepinsight/insightface/issues/1785 | [] | tomaszgrygiel | 2 |
Farama-Foundation/Gymnasium | api | 1,059 | [Proposal] Default metadata in BaseMujocoEnv | ### Proposal
The current code for BaseMujocoEnv requires the env metadata dictionary to have fixed, pre-specified values. While this may be useful for future API changes, it doesn't seem very useful at the moment.
```python
assert self.metadata["render_modes"] == [
    "human",
    "rgb_array",
    "depth_array",
], self.metadata["render_modes"]
if "render_fps" in self.metadata:
    assert (
        int(np.round(1.0 / self.dt)) == self.metadata["render_fps"]
    ), f'Expected value: {int(np.round(1.0 / self.dt))}, Actual value: {self.metadata["render_fps"]}'
```
I propose to set the metadata attribute in BaseMujocoEnv to the current fixed values:
```python
self.metadata["render_modes"] = [
    "human",
    "rgb_array",
    "depth_array",
]
self.metadata["render_fps"] = int(np.round(1.0 / self.dt))
```
This will enable classes derived from MujocoEnv to overwrite the dictionary (if required, in the future), but it will not force them to write an explicit metadata dictionary if the default values suffice, reducing redundancy and making derived classes more compact and readable.
### Motivation
_No response_
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| open | 2024-05-20T13:21:07Z | 2024-06-29T11:12:16Z | https://github.com/Farama-Foundation/Gymnasium/issues/1059 | [
"enhancement"
] | spiglerg | 2 |
ets-labs/python-dependency-injector | flask | 734 | Unable to initialize instances that are not picklable in the container | Getting the following error while initializing a container that contains instances that cannot be pickled. Is there a requirement that all instances created within the container be picklable?
Version used: 4.41.0
```
from dependency_injector import containers, providers

# MetricFlowClient, SnowflakeSqlClient and mf_snowflake_config come from the project
class DependencyContainer(containers.DeclarativeContainer):
metric_flow_client = providers.Singleton(
MetricFlowClient,
sql_client=SnowflakeSqlClient.from_config(mf_snowflake_config)
)
if __name__ == "__main__":
    container = DependencyContainer()  # <---- Fails here
container.init_resources()
container.wire(modules=[__name__])
```
**Error log:**
```
Traceback (most recent call last):
File "/Users/arunkumar/Projects/unified-metrics-platform/consumption_grpc/grpc_server.py", line 79, in <module>
container = DependencyContainer()
File "src/dependency_injector/containers.pyx", line 730, in dependency_injector.containers.DeclarativeContainer.__new__
File "src/dependency_injector/providers.pyx", line 4913, in dependency_injector.providers.deepcopy
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "src/dependency_injector/providers.pyx", line 2835, in dependency_injector.providers.BaseSingleton.__deepcopy__
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Users/arunkumar/.pyenv/versions/3.9.10/lib/python3.9/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle '_thread.lock' object
```
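In case it helps anyone else hitting this: the deepcopy is triggered because container instantiation copies every provider, including any live objects passed as provider arguments. A hedged workaround sketch (standard dependency-injector style, reusing the names from the snippet above) is to construct the client lazily inside a provider instead of passing an instance:
```python
# Workaround sketch: wrap the client construction in a provider so nothing
# unpicklable exists at container-definition time; deepcopy then copies the
# provider definition rather than a live Snowflake connection.
class DependencyContainer(containers.DeclarativeContainer):
    sql_client = providers.Singleton(SnowflakeSqlClient.from_config, mf_snowflake_config)
    metric_flow_client = providers.Singleton(MetricFlowClient, sql_client=sql_client)
```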
Thanks for the help! | open | 2023-08-03T18:49:17Z | 2023-09-25T10:18:53Z | https://github.com/ets-labs/python-dependency-injector/issues/734 | [] | arunbalasubramani | 1 |
fastapi-admin/fastapi-admin | fastapi | 135 | Static files failed to load from unpkg.com | Some time ago my admin UI started to emit errors on loading static files from unpkg.com.
Even on [demo](https://fastapi-admin.long2ice.io/admin/) one would get the following error:
```
The resource from “https://unpkg.com/@tabler/icons@2.19.0/iconfont/tabler-icons.min.css” was blocked due to MIME type (“text/plain”) mismatch (X-Content-Type-Options: nosniff)
``` | open | 2023-05-17T15:44:52Z | 2023-05-17T17:10:51Z | https://github.com/fastapi-admin/fastapi-admin/issues/135 | [] | radiophysicist | 1 |
strawberry-graphql/strawberry | asyncio | 3,414 | Uncaught exceptions lead to `on_execute` / `on_operation` lifecycle hooks completing before some resolvers | ## TL;DR
Strawberry short-circuits the HTTP response whenever there is an uncaught exception. This reduces the latency, but leads to:
~(i) (a) incomplete and (b) nondeterministic responses~ (**edit:** _established in the comments that it's expected_)
(ii) hooks being completed before some resolvers, leading to apparent violation of a contract
I wonder if it would be possible to make Strawberry run all resolves to the end, even if some of them raise uncaught exceptions?
## Describe the Bug
1. Strawberry executes all resolvers, even if there was an uncaught exception which triggered an early HTTP response with errors.
2. However, it eagerly returns a response with `errors`, as soon as an (**edit:** _incoercible_) exception is raised.
3. Finally, it completes all lifecycle hooks before returning the response – including `on_execute` and `on_operation`.
This last point can lead to issues – it violates the invariant that `on_execute` / `on_operation` lifecycle hooks wrap around all resolver executions.
This can be problematic when these hooks do state management, like in the example given in [Strawberry's docs](https://strawberry.rocks/docs/guides/custom-extensions#execution-context). As a result, in addition to seeing the original uncaught exception in our observability suite, we have additional noise from knock-on failures – caused by premature completion of hooks.
Is this natural behaviour given how various async tasks are orchestrated, or is it possible to tweak this a little? I'm thinking:
1. cancelling the tasks that won't be used for the response anyway (as it's been already returned)
2. waiting until all resolvers finish to return the response
~In fact, 2 may have other benefits – making the responses more (a) complete and (b) predictable. Currently, the GraphQL responses (i.e. which fields will return data and which won't) are non-deterministic (albeit a little faster thanks to the uncaught exception short-circuit).~ (**edit:** _established in the comments that the short-circuiting is expected_)
## Repro code
Schema:
```python
from asyncio import sleep
from datetime import datetime

import strawberry


@strawberry.type
class Query:
@strawberry.field
@staticmethod
async def fail() -> str:
await sleep(0.5)
raise Exception(f"'fail' resolver has failed ({datetime.now()})")
@strawberry.field
@staticmethod
async def wait() -> str:
await sleep(2)
print(f"'wait' resolver is about to return ({datetime.now()})")
return "foo"
```
Logging extension:
```python
from datetime import datetime
from typing import Callable, Generator, override  # override requires Python 3.12+

from graphql import GraphQLResolveInfo

from strawberry.extensions import SchemaExtension
from strawberry.utils.await_maybe import AwaitableOrValue, await_maybe


class MyCustomExtension(SchemaExtension):
@override
def on_execute(self) -> Generator[None, None, None]:
print(f"'on_execute' start ({datetime.now()})")
yield
print(f"'on_execute' end ({datetime.now()})")
@override
async def resolve(
self,
_next: Callable[..., object],
root: object,
info: GraphQLResolveInfo,
*args,
**kwargs,
) -> AwaitableOrValue[object]:
print(f"'{info.field_name}' resolver start ({datetime.now()})")
result = await await_maybe(_next(root, info, *args, **kwargs))
print(f"'{info.field_name}' resolver end ({datetime.now()})")
return result
```
Example query:
```graphql
query {
fail
wait
}
```
Example response:
```json
{
"data": null,
"errors": [
{
"message": "'fail' resolver has failed (2024-03-19 21:08:12.088337)",
"locations": [
{
"line": 2,
"column": 3
}
],
"path": [
"fail"
]
}
]
}
```
Logs demonstrating that the resolvers continue being executed after hooks complete:
```
'on_execute' start (2024-03-19 21:08:11.587192)
'fail' resolver start (2024-03-19 21:08:11.587345)
'wait' resolver start (2024-03-19 21:08:11.587378)
'fail' resolver has failed (2024-03-19 21:08:12.088337)
GraphQL request:2:3
1 | query {
2 | fail
| ^
3 | wait
Traceback (most recent call last):
File "/Users/kkom/Repos/isometric/python/services/backend/.venv/lib/python3.12/site-packages/graphql/execution/execute.py", line 528, in await_result
return_type, field_nodes, info, path, await result
^^^^^^^^^^^^
File "/Users/kkom/Repos/isometric/python/services/backend/backend/api/graphql/extensions/extensions.py", line 30, in resolve
result = await await_maybe(_next(root, info, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kkom/Repos/isometric/python/services/backend/.venv/lib/python3.12/site-packages/strawberry/utils/await_maybe.py", line 12, in await_maybe
return await value
^^^^^^^^^^^
File "/Users/kkom/Repos/isometric/python/services/backend/.venv/lib/python3.12/site-packages/strawberry/schema/schema_converter.py", line 682, in _async_resolver
return await await_maybe(
^^^^^^^^^^^^^^^^^^
File "/Users/kkom/Repos/isometric/python/services/backend/.venv/lib/python3.12/site-packages/strawberry/utils/await_maybe.py", line 12, in await_maybe
return await value
^^^^^^^^^^^
File "/Users/kkom/Repos/isometric/python/services/backend/backend/api/graphql/schemas/public.py", line 55, in fail
raise Exception(f"'fail' resolver has failed ({datetime.now()})")
Exception: 'fail' resolver has failed (2024-03-19 21:08:12.088337)
'on_execute' end (2024-03-19 21:08:12.096968)
INFO: 127.0.0.1:58138 - "POST /graphql HTTP/1.1" 200 OK
'wait' resolver is about to return (2024-03-19 21:08:13.588281)
'wait' resolver end (2024-03-19 21:08:13.588422)
```
## System Information
- Strawberry version (if applicable): `0.220.0`
## Additional Context
| open | 2024-03-19T21:06:36Z | 2025-03-20T15:56:38Z | https://github.com/strawberry-graphql/strawberry/issues/3414 | [
"bug"
] | kkom | 12 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 756 | How can I query a key's value in a text column from my table? | Hi all.
There is a problem that has bothered me in recent days. I designed an org_score table in a MySQL 5.7 database. In that table I created a "range" column of text type, using a JSON string to store a time range.
The contents of this field are as follows:
```
{"timeStart":"2017-01-01","timeEnd":"2017-02-01"}
```
I can use the SQL `SELECT * FROM org_score WHERE range->'$.timeStart' >= '2017-01-01';`
This way I can directly query the timeStart key under range. But I don't know how to express this query with the flask-sqlalchemy ORM.
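For completeness, a hedged sketch of the equivalent flask-sqlalchemy query (the `OrgScore` model name is assumed from the table description; `func.json_extract` simply renders MySQL's `JSON_EXTRACT`):
```python
# Hedged sketch: call MySQL's JSON_EXTRACT/JSON_UNQUOTE through SQLAlchemy's
# generic func; OrgScore and its `range` column are assumed from the question.
from sqlalchemy import func

rows = OrgScore.query.filter(
    func.json_unquote(func.json_extract(OrgScore.range, "$.timeStart")) >= "2017-01-01"
).all()
```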
| closed | 2019-06-24T07:00:20Z | 2020-12-05T20:21:47Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/756 | [] | SuFIND | 1 |
apache/airflow | data-science | 47,786 | [Regression]Asset schedule info details are only showing up after DAG is executed once | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Noticed that the Asset schedule info below only shows up after the first DAG run:
<img width="1393" alt="Image" src="https://github.com/user-attachments/assets/366a6d82-18b6-44dd-99e2-a86ad661c713" />
**AF2**
In AF2 we were able to see this even before any DAG run:
<img width="812" alt="Image" src="https://github.com/user-attachments/assets/e2a6410f-42fe-4817-aebb-c19c675b50ba" />
### What you think should happen instead?
_No response_
### How to reproduce
1. Try clicking on `Asset` in the schedule column for the downstream DAG before any DAG run. Notice that no schedule info is displayed.
2. Trigger the downstream DAG.
3. Retry step 1.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-14T14:19:52Z | 2025-03-24T12:04:15Z | https://github.com/apache/airflow/issues/47786 | [
"kind:bug",
"priority:low",
"area:core",
"area:UI",
"area:datasets",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 4 |
tox-dev/tox | automation | 3,034 | package = wheel not respected for individual env if prior env uses package = editable | ## Issue
Disclaimer: I'm using the tox-pdm plugin, but I don't think it matters, as it doesn't use pdm to install the main package; it only handles dependencies.
Some parts of the config:
```
[tox]
envlist = prepare,py310,quality,cli,ci
[testenv]
package = editable
```
I have a few envs, and one of them overrides this with "package = wheel":
```
[testenv:cli]
depends = prepare
package = wheel
```
When I run tox, if an env runs before it (I use "depends", as you can see) that has no override and therefore uses the editable package, then the "cli" env also gets the editable package installed, despite the override.
If I run the env separately with "tox -e cli" I get the expected behavior (a normal wheel, not an editable one), but if I run all envs I get the incorrect behavior: the cli env gets the editable wheel installed.
## Environment
Provide at least:
- OS: Linux
- `pip list` of the host Python where `tox` is installed:
```console
Package Version
--------------------------------- ---------
... list is pretty long so showing what's important:
tox 4.4.8
tox-pdm 0.6.1
...
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
$ grep editable tox.out
Backend: Wrote response {'return': {'get_requires_for_build_sdist': True, 'prepare_metadata_for_build_wheel': True, 'get_requires_for_build_wheel': True, 'build_editable': True, 'get_requires_for_build_editable': True, 'prepare_metadata_for_build_editable': True}} to /tmp/pep517__optional_hooks-c58s8t1i.json
.pkg: 12313 W get_requires_for_build_editable> python /home/user/.shiv/tools.pyz_7b47253112c59d7d9564833f351bcff8e94f39b278af5838f13df13f179ac62c/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:428]
Backend: run command get_requires_for_build_editable with args {'config_settings': None}
Backend: Wrote response {'return': ['wheel']} to /tmp/pep517_get_requires_for_build_editable-s9qogo0m.json
.pkg: 12683 W install_requires_for_build_editable> python -I -m pip install wheel [tox/tox_env/api.py:428]
.pkg: 14194 W build_editable> python /home/user/.shiv/tools.pyz_7b47253112c59d7d9564833f351bcff8e94f39b278af5838f13df13f179ac62c/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:428]
Backend: run command build_editable with args {'wheel_directory': '/home/user/Repos/python-native-build-test-cli/.tox/.pkg/dist', 'config_settings': {'--build-option': []}, 'metadata_directory': None}
running editable_wheel
adding '__editable__.python_native_build_test_cli-0.0.1.pth'
creating '/home/user/Repos/python-native-build-test-cli/.tox/.pkg/dist/.tmp-wj05idmr/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl' and adding '/tmp/tmp9o0vhv1opython_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl' to it
Backend: Wrote response {'return': 'python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl'} to /tmp/pep517_build_editable-9ophnhgz.json
.pkg: 14274 D package .tmp/package/5/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl links to .pkg/dist/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl (/home/user/Repos/python-native-build-test-cli/.tox) [tox/util/file_view.py:36]
py310: 68116 W install_package> python -I -m pip install --force-reinstall --no-deps /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/5/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl [tox/tox_env/api.py:428]
Processing ./.tox/.tmp/package/5/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl
py310: 68875 I exit 0 (0.76 seconds) /home/user/Repos/python-native-build-test-cli> python -I -m pip install --force-reinstall --no-deps /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/5/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl pid=15449 [tox/execute/api.py:275]
.pkg: 78990 D package .tmp/package/6/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl links to .pkg/dist/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl (/home/user/Repos/python-native-build-test-cli/.tox) [tox/util/file_view.py:36]
quality: 131484 W install_package> python -I -m pip install --force-reinstall --no-deps /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/6/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl [tox/tox_env/api.py:428]
Processing ./.tox/.tmp/package/6/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl
quality: 132270 I exit 0 (0.78 seconds) /home/user/Repos/python-native-build-test-cli> python -I -m pip install --force-reinstall --no-deps /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/6/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl pid=16296 [tox/execute/api.py:275]
.pkg: 143165 D package .tmp/package/7/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl links to .pkg/dist/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl (/home/user/Repos/python-native-build-test-cli/.tox) [tox/util/file_view.py:36]
cli: 192020 W install_package> python -I -m pip install --force-reinstall --no-deps /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/7/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl [tox/tox_env/api.py:428]
Processing ./.tox/.tmp/package/7/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl
cli: 192801 I exit 0 (0.78 seconds) /home/user/Repos/python-native-build-test-cli> python -I -m pip install --force-reinstall --no-deps /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/7/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl pid=17257 [tox/execute/api.py:275]
.pkg: 216109 D delete package /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/5/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl [tox/tox_env/python/virtual_env/package/pyproject.py:179]
.pkg: 216109 D delete package /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/7/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl [tox/tox_env/python/virtual_env/package/pyproject.py:179]
.pkg: 216109 D delete package /home/user/Repos/python-native-build-test-cli/.tox/.tmp/package/6/python_native_build_test_cli-0.0.1-0.editable-py3-none-any.whl [tox/tox_env/python/virtual_env/package/pyproject.py:179]
```
| closed | 2023-06-15T22:47:04Z | 2023-06-17T01:09:10Z | https://github.com/tox-dev/tox/issues/3034 | [
"help:wanted"
] | f3flight | 10 |
comfyanonymous/ComfyUI | pytorch | 6,759 | Hey ComfyUI developers | ComfyUI developers, please make ComfyUI run smoothly on AMD graphics cards. To be honest, NVIDIA is too expensive for the performance it offers. Meanwhile, AMD offers high performance at affordable prices. So please don't focus only on NVIDIA graphics cards; make ComfyUI work well on AMD too. | closed | 2025-02-09T20:16:13Z | 2025-02-09T22:09:40Z | https://github.com/comfyanonymous/ComfyUI/issues/6759 | [] | dznstd | 1 |