| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
zappa/Zappa | flask | 903 | [Migrated] [ERROR] RuntimeError: populate() isn't reentrant | Originally from: https://github.com/Miserlou/Zappa/issues/2165 by [rafrasenberg](https://github.com/rafrasenberg)
I am trying to deploy a Django project with Zappa and a PostgreSQL database on Amazon AWS RDS but I am running into this error:
```
$ zappa manage dev create_db
[START] RequestId: ac91cbb6-9026-44d5-9136-3e4db8c0878c Version: $LATEST
[DEBUG] 2020-09-22T11:16:57.834Z ac91cbb6-9026-44d5-9136-3e4db8c0878c Zappa Event: {'manage': 'create_db'}
[ERROR] RuntimeError: populate() isn't reentrant
Traceback (most recent call last):
  File "/var/task/handler.py", line 609, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 243, in lambda_handler
    return handler.handler(event, context)
  File "/var/task/handler.py", line 404, in handler
    app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
  File "/var/task/zappa/ext/django_zappa.py", line 20, in get_django_wsgi
    return get_wsgi_application()
  File "/var/task/django/core/wsgi.py", line 12, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/var/task/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/var/task/django/apps/registry.py", line 83, in populate
    raise RuntimeError("populate() isn't reentrant")
[END] RequestId: ac91cbb6-9026-44d5-9136-3e4db8c0878c
[REPORT] RequestId: ac91cbb6-9026-44d5-9136-3e4db8c0878c
Duration: 2.28 ms
Billed Duration: 100 ms
Memory Size: 512 MB
Max Memory Used: 101 MB
Error: Unhandled error occurred while invoking command.
```
Pretty meaningless error. I googled this and tried some of the solutions from other tickets, but could not resolve it. I tried both `psycopg2` and `psycopg2-binary`, with no luck either way. Did anyone ever solve this? All the articles/issues covering it are old and outdated.
Zappa settings:
```
{
    "dev": {
        "aws_region": "eu-central-1",
        "django_settings": "dserverless.settings",
        "profile_name": "default",
        "project_name": "dserverless",
        "runtime": "python3.8",
        "s3_bucket": "django-serverless",
        "vpc_config": {
            "SubnetIds": ["subnet-2cef773346", "subnet-023527a", "subnet-6agbcb21"],
            "SecurityGroupIds": ["sg-87325d"]
        }
    }
}
```
The DB command:
```
# Imports implied by the snippet (Django management command + psycopg2):
from django.conf import settings
from django.core.management.base import BaseCommand
from psycopg2 import connect
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT

class Command(BaseCommand):
    help = 'Creates the initial database'

    def handle(self, *args, **options):
        self.stdout.write(self.style.SUCCESS('Starting db creation'))
        dbname = settings.DATABASES['default']['NAME']
        user = settings.DATABASES['default']['USER']
        password = settings.DATABASES['default']['PASSWORD']
        host = settings.DATABASES['default']['HOST']
        con = connect(dbname=dbname, user=user, host=host, password=password)
        con.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
        cur = con.cursor()
        cur.execute('CREATE DATABASE ' + dbname)
        cur.close()
        con.close()
        self.stdout.write(self.style.SUCCESS('All Done'))
```
Pip freeze:
```
appdirs==1.4.3
argcomplete==1.12.0
asgiref==3.2.10
autopep8==1.5.4
boto3==1.15.2
botocore==1.18.2
CacheControl==0.12.6
certifi==2019.11.28
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
colorama==0.4.3
contextlib2==0.6.0
distlib==0.3.0
distro==1.4.0
Django==3.1.1
django-s3-storage==0.13.4
durationpy==0.5
future==0.18.2
hjson==3.0.2
html5lib==1.0.1
idna==2.8
ipaddr==2.2.0
jmespath==0.10.0
kappa==0.6.0
lockfile==0.12.2
msgpack==0.6.2
packaging==20.3
pep517==0.8.2
pip-tools==5.3.1
placebo==0.9.0
progress==1.5
psycopg2-binary==2.8.6
pycodestyle==2.6.0
pyparsing==2.4.6
python-dateutil==2.6.1
python-slugify==4.0.1
pytoml==0.1.21
pytz==2020.1
PyYAML==5.3.1
requests==2.22.0
retrying==1.3.3
s3transfer==0.3.3
six==1.14.0
sqlparse==0.3.1
text-unidecode==1.3
toml==0.10.1
tqdm==4.49.0
troposphere==2.6.2
urllib3==1.25.8
webencodings==0.5.1
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
```
| closed | 2021-02-20T13:03:34Z | 2022-07-16T05:10:36Z | https://github.com/zappa/Zappa/issues/903 | [] | jneves | 2 |
littlecodersh/ItChat | api | 108 | How to get a unique identifier for each friend | While running it I found that every time I restart the program or log in again, the ActualUserName fetched for my friends is different. Is there any attribute that permanently identifies the same friend?
| closed | 2016-10-21T07:31:12Z | 2016-10-22T13:53:17Z | https://github.com/littlecodersh/ItChat/issues/108 | [
"question"
] | kh13 | 1 |
desec-io/desec-stack | rest-api | 813 | readthedocs build (sometimes?) fails | The readthedocs build (sometimes?) fails due to a missing config file:
> The configuration file required to build documentation is missing from your project. Add a configuration file to your project to make it build successfully. Read more at https://docs.readthedocs.io/en/stable/config-file/v2.html
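For reference, the kind of file the build is asking for is a top-level `.readthedocs.yml`; here is a minimal version-2 sketch (the tool versions and the Sphinx config path are assumptions, not taken from the repo):

```yaml
# .readthedocs.yml: minimal v2 config (sketch; adjust paths and versions)
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
sphinx:
  configuration: docs/conf.py
```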
Source: https://readthedocs.org/projects/desec/builds/22049067/ | closed | 2023-09-27T14:23:21Z | 2023-11-03T16:09:12Z | https://github.com/desec-io/desec-stack/issues/813 | [] | Rotzbua | 0 |
Farama-Foundation/PettingZoo | api | 1,251 | [Feat] Create template environment similar to Gymnasium's | ### Proposal
As of now, the PettingZoo documentation gives instructions on how to create a custom environment. However, Gymnasium uses copier to clone a template environment, which is a better workflow.
The goal of this proposal is to implement a template PettingZoo environment and create documentation to use it, like with Gymnasium.
### Motivation
_No response_
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [x] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
| open | 2024-12-09T13:46:44Z | 2024-12-09T21:11:02Z | https://github.com/Farama-Foundation/PettingZoo/issues/1251 | [
"enhancement"
] | David-GERARD | 0 |
sgl-project/sglang | pytorch | 4,090 | Questions about the calculation of `max_req_num` |
Hi, from the source code I see that **sglang** has the ability to automatically calculate [**max_req_num**](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/model_executor/model_runner.py#L647):
```python
if max_num_reqs is None:
max_num_reqs = min(
max(
int(
self.max_total_num_tokens / self.model_config.context_len * 512
),
2048,
),
4096,
)
```
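The clamp above can be exercised standalone; a small sketch reproducing the quoted formula (the sample numbers are made up):

```python
def max_num_reqs(max_total_num_tokens: int, context_len: int) -> int:
    # Same expression as the snippet above: scale the token budget by
    # 512 requests per context window, then clamp into [2048, 4096].
    return min(max(int(max_total_num_tokens / context_len * 512), 2048), 4096)

print(max_num_reqs(500_000, 4096))   # large budget -> capped at 4096
print(max_num_reqs(20_000, 32_768))  # small budget -> floored at 2048
```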
Regarding this calculation process, I have the following questions:
1. Will setting **max_req_num** directly affect GPU memory usage?
2. The value of **max_req_num** seems to range between 2048 and 4096. Why is that? What’s the reasoning behind this design? Is it based on empirical values?
3. When calculating **max_req_num** based on **max_total_num_tokens** and **context_len**, why multiply by the coefficient 512? Where does this coefficient come from?
| open | 2025-03-05T08:51:45Z | 2025-03-05T09:08:54Z | https://github.com/sgl-project/sglang/issues/4090 | [] | tingjun-cs | 0 |
babysor/MockingBird | deep-learning | 266 | "AssertionError" when starting web.py | When I run "web.py" with Python, I hit the following problem; the last line is "AssertionError":
```
(mockingbird) C:\Users\yisheng_zhou\Downloads\MockingBird> python web.py
Loaded synthesizer models: 2
Loaded encoder "pretrained.pt" trained to step 1564501
Building Wave-RNN
Trainable Parameters: 4.481M
Loading model weights at vocoder\saved_models\pretrained\pretrained.pt
Building hifigan
Traceback (most recent call last):
  File "C:\Users\yisheng_zhou\Downloads\MockingBird\web.py", line 6, in <module>
    app = webApp()
  File "C:\Users\yisheng_zhou\Downloads\MockingBird\web\__init__.py", line 35, in webApp
    gan_vocoder.load_model(Path("vocoder/saved_models/pretrained/g_hifigan.pt"))
  File "C:\Users\yisheng_zhou\Downloads\MockingBird\vocoder\hifigan\inference.py", line 44, in load_model
    state_dict_g = load_checkpoint(
  File "C:\Users\yisheng_zhou\Downloads\MockingBird\vocoder\hifigan\inference.py", line 18, in load_checkpoint
    assert os.path.isfile(filepath)
AssertionError
```
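For context, the final assertion checks that the checkpoint file exists; a quick stdlib-only sketch to verify the path taken from the traceback before launching:

```python
from pathlib import Path

# Path taken from the load_model() call in the traceback above.
checkpoint = Path("vocoder/saved_models/pretrained/g_hifigan.pt")
print(checkpoint, "exists:", checkpoint.exists())  # the assert fails when this is False
```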
Has anyone run into the same problem? How did you solve it? | open | 2021-12-12T07:17:34Z | 2021-12-26T03:21:54Z | https://github.com/babysor/MockingBird/issues/266 | [] | GreenApple-King | 1 |
jazzband/django-oauth-toolkit | django | 1,478 | Minor/patch release cycle with bugfixes | <!-- What is your question? -->
Hi! Do you have any plans to release another minor or patch version before the major upgrade to 3? There are a couple of smaller non-breaking fixes that would be great to have in, such as https://github.com/jazzband/django-oauth-toolkit/pull/1476 and https://github.com/jazzband/django-oauth-toolkit/pull/1465 which fixes [this CVE](https://github.com/advisories/GHSA-3pgj-pg6c-r5p7). 🙏 | closed | 2024-09-04T10:25:01Z | 2024-09-05T14:25:12Z | https://github.com/jazzband/django-oauth-toolkit/issues/1478 | [
"question",
"help-wanted",
"dependencies"
] | cristiprg | 4 |
pyg-team/pytorch_geometric | deep-learning | 9,225 | The link to the Karate Club paper is broken | ### 📚 Describe the documentation issue
Hello.
In [this](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.datasets.KarateClub.html#torch_geometric.datasets.KarateClub) documentation, the link to [“An Information Flow Model for Conflict and Fission in Small Groups”](http://www1.ind.ku.dk/complexLearning/zachary1977.pdf) is broken.
### Suggest a potential alternative/fix
We should link to https://www.journals.uchicago.edu/doi/abs/10.1086/jar.33.4.3629752 instead.
Here is the line that needs to change.
https://github.com/pyg-team/pytorch_geometric/blob/ed170342eb2b174fd16b910c735758edbd4e78fd/torch_geometric/datasets/karate.py#L11 | closed | 2024-04-22T13:23:02Z | 2024-04-26T10:27:54Z | https://github.com/pyg-team/pytorch_geometric/issues/9225 | [
"documentation"
] | 1taroh | 0 |
3b1b/manim | python | 1,288 | % signs result in crop or errors in manim text. | Here is an example from Manim that tries to write `%`:
```
class WriteStuff(Scene):
    def construct(self):
        example_text = TextMobject(
            "This is a some % text",
            tex_to_color_map={"text": YELLOW}
        )
        example_tex = TexMobject(
            "\\sum_{k=1}^\\infty % {1 \\over k^2} = {\\pi^2 \\over 6}",
        )
        group = VGroup(example_text, example_tex)
        group.arrange(DOWN)
        group.set_width(FRAME_WIDTH - 2 * LARGE_BUFF)
        self.play(Write(example_text))
        self.play(Write(example_tex))
        self.wait()
```
In the final animation, everything after `%` is cropped or not written in both `TextMobject` and `TexMobject`. This is the missing text bug.
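For context, `%` begins a comment in TeX, so everything after it on the source line is discarded, which matches the cropping described above. A small sketch of what TeX effectively sees:

```python
# "%" starts a TeX comment: the rest of the line is discarded by LaTeX.
raw = "\\sum_{k=1}^\\infty % {1 \\over k^2} = {\\pi^2 \\over 6}"
seen_by_tex = raw.split("%", 1)[0]
print(repr(seen_by_tex))            # everything after '%' is gone

escaped = raw.replace("%", "\\%")   # '\%' renders a literal percent sign
print(repr(escaped))
```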
If the percent sign is moved inside any opening or closing `{}` pair, it results in a compilation error:
```
class WriteStuff(Scene):
    def construct(self):
        example_text = TextMobject(
            "This is a some % text",
            tex_to_color_map={"text": YELLOW}
        )
        example_tex = TexMobject(
            "\\sum_{k=1 % }^\\infty {1 \\over k^2} = {\\pi^2 \\over 6}",
        )
        group = VGroup(example_text, example_tex)
        group.arrange(DOWN)
        group.set_width(FRAME_WIDTH - 2 * LARGE_BUFF)
        self.play(Write(example_text))
        self.play(Write(example_tex))
        self.wait()
```
What causes this error? | closed | 2020-12-10T06:11:01Z | 2020-12-10T11:04:07Z | https://github.com/3b1b/manim/issues/1288 | [] | baljeetrathi | 0 |
huggingface/transformers | tensorflow | 36,411 | [i18n-zh] Translating `kv_cache` into zh-hans | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Simplified Chinese-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
## Generation
- [x] [kv_cache.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/kv_cache.md)
<!--
Keep on adding more as you go 🔥
-->
| closed | 2025-02-26T08:53:32Z | 2025-02-26T16:05:22Z | https://github.com/huggingface/transformers/issues/36411 | [
"WIP"
] | neofung | 1 |
autogluon/autogluon | data-science | 4,406 | Improve CPU training times for catboost | Related to https://github.com/catboost/catboost/issues/2722
Problem: Catboost takes 16x more time to train than a similar Xgboost model.
```
catboost: 1.2.5
xgboost: 2.0.3
autogluon: 1.1.1
Python: 3.10.14
OS: Windows 11 Pro (10.0.22635)
CPU: Intel(R) Core(TM) i7-1165G7
GPU: Integrated Graphics
RAM: 16 GB
```
Example with data:
```python
from autogluon.tabular import TabularDataset, TabularPredictor
import numpy as np
from sklearnex import patch_sklearn
patch_sklearn()
# data
label = 'signature'
data_url = 'https://raw.githubusercontent.com/mli/ag-docs/main/knot_theory/'
train_data = TabularDataset(f'{data_url}train.csv')
test_data = TabularDataset(f'{data_url}test.csv')
# train
np.random.seed(2024)
predictor = TabularPredictor(label=label, problem_type='multiclass', eval_metric='log_loss')
predictor.fit(train_data, included_model_types=['XGB', 'CAT'])
# report
metrics = ['model', 'score_test', 'score_val', 'eval_metric', 'pred_time_test', 'fit_time']
predictor.leaderboard(test_data)[metrics]
```
model | score_test | score_val | eval_metric | pred_time_test | fit_time
-- | -- | -- | -- | -- | --
WeightedEnsemble_L2 | -0.155262 | -0.138425 | log_loss | 0.649330 | 263.176814
CatBoost | -0.158654 | -0.150310 | log_loss | 0.237857 | 247.344303
XGBoost | -0.171801 | -0.144754 | log_loss | 0.398456 | 15.676711
| closed | 2024-08-18T03:19:42Z | 2024-08-20T04:36:45Z | https://github.com/autogluon/autogluon/issues/4406 | [
"enhancement",
"wontfix",
"module: tabular"
] | crossxwill | 1 |
jupyter-book/jupyter-book | jupyter | 1,794 | [BUG] In Firefox links to references stored in a dropdown do not work unless the dropdown is opened | ### Describe the bug
related to https://github.com/agahkarakuzu/oreoni/issues/4
**context**
When I click on a link to a reference that is "stored" in a dropdown section on the same page.
**expectation**
I expected the reference dropdown to open and the page to scroll to the reference line.
**bug**
This behavior works fine in Chrome but in Firefox nothing happens.
### Reproduce the bug
Open this link in Firefox VS Chrome:
https://remi-gau.github.io/oreoni/01/introduction.html#id21
In Chrome the link works (opens the reference dropdown and moves the screen to it) but not in Firefox
### List your environment
https://github.com/agahkarakuzu/oreoni/blob/main/requirements.txt
jupyter-book==0.12.2
| open | 2022-08-01T11:15:11Z | 2022-08-01T11:16:39Z | https://github.com/jupyter-book/jupyter-book/issues/1794 | [
"bug"
] | Remi-Gau | 2 |
chaoss/augur | data-visualization | 3,054 | Facade Error: insert_facade_contributors: TypeError('sequence item 1: expected a bytes-like object, NoneType found') | Since core got unblocked we have 1,000+ of this error:
Exception:
> TypeError('sequence item 1: expected a bytes-like object, NoneType found')
```
Traceback (most recent call last):
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/redis.py", line 520, in on_chord_part_return
    resl = [unpack(tup, decode) for tup in resl]
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/redis.py", line 520, in <listcomp>
    resl = [unpack(tup, decode) for tup in resl]
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/redis.py", line 426, in _unpack_chord_result
    raise ChordError(f'Dependency {tid} raised {retval!r}')
celery.exceptions.ChordError: Dependency 5bc5e62c-3879-46c5-86b9-3ffe53a5367d raised FileNotFoundError(2, 'No such file or directory')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.9/site-packages/celery/app/trace.py", line 518, in trace_task
    task.backend.mark_as_done(
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 164, in mark_as_done
    self.on_chord_part_return(request, state, result)
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/redis.py", line 539, in on_chord_part_return
    return self.chord_error_from_stack(callback, exc)
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 309, in chord_error_from_stack
    return backend.fail_from_current_stack(callback.id, exc=exc)
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 316, in fail_from_current_stack
    self.mark_as_failure(task_id, exc, exception_info.traceback)
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 172, in mark_as_failure
    self.store_result(task_id, exc, state,
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 528, in store_result
    self._store_result(task_id, result, state, traceback,
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 956, in _store_result
    current_meta = self._get_task_meta_for(task_id)
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 978, in _get_task_meta_for
    meta = self.get(self.get_key_for_task(task_id))
  File "/opt/venv/lib/python3.9/site-packages/celery/backends/base.py", line 856, in get_key_for_task
    return key_t('').join([
TypeError: sequence item 1: expected a bytes-like object, NoneType found
```
| open | 2025-03-12T23:53:40Z | 2025-03-20T19:57:58Z | https://github.com/chaoss/augur/issues/3054 | [
"bug"
] | cdolfi | 1 |
d2l-ai/d2l-en | pytorch | 2,343 | Why do you use your own API | I wonder why are you using an API, instead of regular pytorch code.
It makes everything look unfamiliar and impractical. It's like a new language. Like having to learn everything again. | open | 2022-11-16T12:03:08Z | 2023-05-15T13:51:59Z | https://github.com/d2l-ai/d2l-en/issues/2343 | [
"question"
] | g-i-o-r-g-i-o | 11 |
arogozhnikov/einops | tensorflow | 28 | Why "Only lower-case latin letters allowed in names, not ..." | Is there a reason that einops does not support upper latin letters?
I would like to use upper and lower letters. | closed | 2019-02-14T10:12:06Z | 2020-09-11T06:03:40Z | https://github.com/arogozhnikov/einops/issues/28 | [] | boeddeker | 12 |
httpie/cli | python | 714 | Program name results to sys.argv[0] when executing httpie module as a script | Getting:
```
$ python -m httpie -h
usage: __main__.py [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body]
[--verbose] [--all] [--history-print WHAT] [--stream]
[--output FILE] [--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
__main__.py: error: the following arguments are required: URL
```
Expected:
```
$ python -m httpie -h
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: the following arguments are required: URL
``` | closed | 2018-09-22T12:59:32Z | 2018-10-30T17:41:57Z | https://github.com/httpie/cli/issues/714 | [] | matusf | 1 |
dynaconf/dynaconf | django | 1,129 | [RFC]typed: Cast dict to its Dictvalue from schema. | related to #1127
Currently, dicts are loaded purely from the loaders, regardless if it has a schema defined.
```python
class Person(DictValue):
name: str
team: str
class Settings(Dynaconf):
person: Person
settings = Settings(person={"name": "foo", "team": "A"})
```
Then
```
assert settings.person == {"name": "foo", "team": "A"}  # True
assert isinstance(settings.person, Person)  # False
```
What is the desired behavior?
```python
assert settings.person == {"name": "foo", "team": "A"}  # True
assert isinstance(settings.person, Person)  # True
```
So `settings.person` must be a `dict` and at the same time a `Person`, which means that `DictValue` will have to inherit from `UserDict`, provide the proper `__eq__` methods, and also implement lookup both via subscription `settings.person["name"]` and via `settings.person.name`.
This will allow:
- keep the autocompletion working for subtypes
- static type to validate on code level
- isinstance checks to be performed
- To replace `Box` completely
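A minimal sketch of such a `DictValue` built on `collections.UserDict` (names are illustrative; this is not the proposed implementation):

```python
from collections import UserDict

class DictValue(UserDict):
    """Dict-compatible value that also exposes its keys as attributes."""

    def __getattr__(self, name):
        if name == "data":  # guard against recursion before __init__ runs
            raise AttributeError(name)
        try:
            return self.data[name]
        except KeyError:
            raise AttributeError(name) from None

class Person(DictValue):
    name: str
    team: str

p = Person({"name": "foo", "team": "A"})
assert p == {"name": "foo", "team": "A"}  # Mapping equality against plain dicts
assert p["name"] == "foo" and p.name == "foo"
assert isinstance(p, Person) and isinstance(p, UserDict)
```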
## Implementation
In the `.set` method, it will look up the schema-defined type, instantiate it, and assign it.
## Challenges
How will it work with Lazy evaluated values?
```python
class Person(DictValue):
number: int
```
```bash
export DYNACONF_PERSON__number="@int @jinja {{ 2 + 2 }}"
```
The `number: int` would need to accept `Lazy("@int @jinja {{ 2 + 2 }}")` instance,
we probably can make it happen on the validation process, by replacing the type from `int` to `Union[int, Lazy]`
Or alternatively:
`Person` would strictly require `number: int`, but the instantiation of `person` will be delayed until just before validation is performed; there will be an intermediate state for a `DictValue` that will be a `NotEvaluated(Person, kwargs)`
Or maybe
`DictValue` will only require the presence of keys, but will not perform any validation including type validation, postponing the validation to the `.validate` call, that will anyway trigger lazy evaluation.
Requires investigation
| open | 2024-07-06T14:19:09Z | 2024-07-08T18:37:57Z | https://github.com/dynaconf/dynaconf/issues/1129 | [
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 1 |
microsoft/JARVIS | pytorch | 227 | Error when running on Windows | I ran awesome_chat.py with the config set to inference_mode: huggingface and
local_deployment: minimal, i.e. loading the models remotely.
The error message is as follows:

| open | 2023-12-08T02:25:19Z | 2023-12-08T02:25:19Z | https://github.com/microsoft/JARVIS/issues/227 | [] | 827648313 | 0 |
FactoryBoy/factory_boy | django | 787 | TypeError: generate() missing 1 required positional argument: 'params' | After upgrading from 3.0.1 to 3.1 I suddenly get a `TypeError: generate() missing 1 required positional argument: 'params'`.
I have a `factory.LazyAttribute` that calls `factory.Faker('safe_email').generate()` conditionally. This worked before and now raises a `TypeError`.
This seems to be caused by commit f0a4ef008f07f8d42221565d8c33b88083f0be6d. Would it be possible to make `params` optional?
| closed | 2020-10-05T08:52:27Z | 2020-10-06T07:26:06Z | https://github.com/FactoryBoy/factory_boy/issues/787 | [
"Q&A",
"Doc",
"BadMagic"
] | jaap3 | 4 |
slackapi/python-slack-sdk | asyncio | 991 | Can a Slack app also have a preview for uploaded file using files_upload? | Hi Everyone,
Using `files_upload` from the SDK WebClient,
is there a way for a Slack app's uploaded file to also get a preview, just like a normal user gets when uploading it?
<img width="430" alt="Screenshot 2021-04-06 at 7 13 08 PM" src="https://user-images.githubusercontent.com/42064744/113720301-215bd400-970c-11eb-8152-17d1be15b5c7.png">
Thanks in advance!
| closed | 2021-04-06T13:48:05Z | 2021-04-07T05:36:15Z | https://github.com/slackapi/python-slack-sdk/issues/991 | [
"question"
] | Harshg999 | 2 |
InstaPy/InstaPy | automation | 6,093 | File could not be opened error when attempting to start session. | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
Session parameters should be accepted and the program should proceed onto launching the web browser. (I must add that I am not very experienced nor very intelligent so please correct me if I am doing or saying something blatantly wrongly)
## Current Behavior
The program halts on the line "session = InstaPy(username=insta_username, password=insta_password, headless_browser=True)" with the error "file could not be opened"
## Possible Solution (optional)
## InstaPy configuration
latest version of InstaPy(0.6.13) on python 3.9.1. All requirements were installed. System is Mac Os Big Sur on a non m1 mac
| closed | 2021-02-26T21:43:17Z | 2021-02-27T15:38:41Z | https://github.com/InstaPy/InstaPy/issues/6093 | [] | ghost | 6 |
sammchardy/python-binance | api | 1,247 | Trailing stop loss on spot market | Hi,
Since Binance now allows us to use trailing stop loss, do you have any plan to implement this? | open | 2022-09-11T12:49:34Z | 2022-09-11T12:49:34Z | https://github.com/sammchardy/python-binance/issues/1247 | [] | wiseryfendy | 0 |
chezou/tabula-py | pandas | 351 | Try to install tabula-py | I tried to install tabula-py on Windows 10 and install java 8 and set up path correctly. But I still get ```
Java version:
`java -version` faild. `java` command is not found from this Pythonprocess. Please ensure Java is installed and PATH is set for `java`
tabula-py version: 2.7.0
```
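Since tabula-py shells out to the `java` binary, a quick stdlib check of whether that binary is visible to the same Python process:

```python
import shutil

# Prints the resolved path to the java executable, or None when the PATH
# visible to this Python process does not include a Java installation.
print(shutil.which("java"))
```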
Any suggestions how to solve it? | closed | 2023-07-17T06:51:45Z | 2023-07-17T06:51:58Z | https://github.com/chezou/tabula-py/issues/351 | [] | ribery77 | 1 |
JaidedAI/EasyOCR | deep-learning | 479 | Model deployment on mobile phones | Hello everyone,
I need to deploy easyOCR and use it on an Android device and I couldn't find any resources for that.
I have seen the custom_model.md but not sure if this would help since I don't want to train my custom model.
Thanks
| closed | 2021-07-04T14:02:39Z | 2022-07-11T09:02:00Z | https://github.com/JaidedAI/EasyOCR/issues/479 | [] | rasha-salim | 2 |
deezer/spleeter | deep-learning | 548 | spleeter.separator not found when installing with pip | ## Description
Pip installing seems to be missing the separator module.
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Pip installed ffmpeg and spleeter
2. Ran this code
```
from spleeter.separator import Separator
sep = Separator('spleeter:2stems')
path = "C:\\Users\\Sebastian\\Documents\\VScode\\Personal Projects\\Music Recognition Project"
song_path = path + "Automatic Stop.mp3"
sep.separate_to_file(song_path,path)`
```
3. Got this error `ImportError: DLL load failed: The specified module could not be found.`
## Output
```
2021-01-04 20:05:42.196029: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2021-01-04 20:05:42.201066: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "c:/Users/Sebastian/Documents/VScode/Personal Projects/Music Recognition Project/pokesong2.py", line 1, in <module>
from spleeter.separator import Separator
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\spleeter\separator.py", line 27, in <module>
from librosa.core import stft, istft
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\librosa\__init__.py", line 211, in <module>
from . import core
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\librosa\core\__init__.py", line 5, in <module>
from .convert import * # pylint: disable=wildcard-import
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\librosa\core\convert.py", line 7, in <module>
from . import notation
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\librosa\core\notation.py", line 8, in <module>
from ..util.exceptions import ParameterError
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\librosa\util\__init__.py", line 87, in <module>
from ._nnls import * # pylint: disable=wildcard-import
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\librosa\util\_nnls.py", line 13, in <module>
import scipy.optimize
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\optimize\__init__.py", line 389, in <module>
from .optimize import *
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\optimize\optimize.py", line 37, in <module>
from .linesearch import (line_search_wolfe1, line_search_wolfe2,
File "C:\Users\Sebastian\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\optimize\linesearch.py", line 18, in <module>
from scipy.optimize import minpack2
ImportError: DLL load failed: The specified module could not be found.
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows |
| Installation type | pip |
| RAM available | 20GB |
| Hardware spec | GTX 1060 / i5 6600k |
| Python version | 3.8.0 | | closed | 2021-01-05T04:21:29Z | 2021-01-08T13:13:20Z | https://github.com/deezer/spleeter/issues/548 | [
"bug",
"invalid"
] | SebastianCardenasEscoto | 1 |
huggingface/pytorch-image-models | pytorch | 2,296 | AttributeError: 'ImageDataset' object has no attribute 'parser' | timm: '1.0.9'
AttributeError: 'ImageDataset' object has no attribute 'parser' | closed | 2024-10-07T08:04:59Z | 2024-11-22T04:21:58Z | https://github.com/huggingface/pytorch-image-models/issues/2296 | [
"bug"
] | riyajatar37003 | 4 |
google-research/bert | tensorflow | 461 | Reduce prediction time for question answering | Hi,
I am running a BERT question-answering solution on a machine with a GPU (Tesla K80, 12 GB). Prediction for a single question takes more than 5 seconds. Can we reduce it to below 1 second?
Do we need to configure anything to make that possible?
Thank you | open | 2019-02-28T09:28:09Z | 2019-09-19T04:36:36Z | https://github.com/google-research/bert/issues/461 | [] | shivamani-ans | 9 |
waditu/tushare | pandas | 1,561 | share_float returns incomplete data when querying a single stock | When the only input parameter is the stock code, e.g. pro.share_float(ts_code='600278.SZ'),
some stocks only return data from earlier years; there is no data for the most recent two years.
ID: 368465 | open | 2021-06-22T07:12:16Z | 2021-06-22T07:12:16Z | https://github.com/waditu/tushare/issues/1561 | [] | zzdqilei | 0 |
scikit-learn-contrib/metric-learn | scikit-learn | 39 | Recreate "Twin Peaks" result from MLKR paper | Replicate the experiment on the synthetically created "Twin Peaks" dataset in this [paper](http://www.cs.cornell.edu/~kilian/papers/weinberger07a.pdf) using MLKR algorithm. #28 can be used for reference.
@perimosocordiae I've raised this just to keep better track of what is left to do in MLKR.
| open | 2016-10-30T11:06:54Z | 2016-10-30T11:06:54Z | https://github.com/scikit-learn-contrib/metric-learn/issues/39 | [] | devashishd12 | 0 |
open-mmlab/mmdetection | pytorch | 11,705 | Unable to debug into the model source code | Hello, I am a beginner. I have seen many online tutorials for earlier versions that use the various mmlab modules to study deep-learning models, but I find that breakpoints I set in the model I want to use are never hit; execution goes into the compiled mmdet package instead. This makes it hard for a beginner like me to study each module of a model. Is there any way to solve this?
For example, I am currently studying Mask2Former. I want to see the details of how the model processes my input data, but I cannot. | open | 2024-05-11T11:37:31Z | 2024-05-28T16:19:58Z | https://github.com/open-mmlab/mmdetection/issues/11705 | [] | whj-tech | 2 |
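One generic way to diagnose this (a sketch, not mmdet-specific advice from the maintainers): print the package's `__file__` to see whether Python is importing it from `site-packages` rather than your cloned source tree. If it is, breakpoints set in the repo's files will never trigger, and installing the repo in editable mode (`pip install -e .`) is the usual remedy. A stdlib module stands in for `mmdet` here so the snippet runs anywhere:

```python
import importlib

# Substitute "mmdet" for "json" in a real environment. If the printed path
# points into site-packages rather than your cloned repo, breakpoints set in
# the repo's source files will never be hit.
mod = importlib.import_module("json")
print(mod.__file__)
```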
apache/airflow | machine-learning | 47,971 | Retry exponential backoff max float overflow | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.3
### What happened?
Hello,
I encountered a bug. My DAG config was: retries=1000, retry_delay=5 min (300 seconds), max_retry_delay=1 h (3600 seconds). My DAG failed ~1000 times, after which the scheduler broke down; the retry count then exceeded 1000 and stopped at attempt 1017.
I researched the problem and found that it happens because of the formula **min_backoff = math.ceil(delay.total_seconds() * (2 ** (self.try_number - 1)))** in **taskinstance.py**. The exponential backoff places no limit on try_number, so the computation can overflow the maximum float value. Even when max_retry_delay is set, the formula is still evaluated first, and for a very large retry number it crashes.
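The overflow can be reproduced in isolation with a small sketch of the formula (a simplification; the real code lives in Airflow's taskinstance.py, and the exact exception message depends on where the float conversion happens):

```python
import math

def min_backoff(delay_seconds: float, try_number: int) -> int:
    # Simplified version of the backoff formula from taskinstance.py
    return math.ceil(delay_seconds * (2 ** (try_number - 1)))

print(min_backoff(300.0, 5))      # early retries are fine
try:
    min_backoff(300.0, 1100)      # ~1000 retries in: 2**1099 no longer fits in a float
except OverflowError as exc:
    print("OverflowError:", exc)
```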
Please fix this bug.
I also opened pull requests with a possible solution:
https://github.com/apache/airflow/pull/48057
https://github.com/apache/airflow/pull/48051
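One possible shape of a fix, sketched here as an assumption (not necessarily the approach taken in the linked PRs): cap the exponent before the multiplication, so the backoff can never overflow a float and never exceeds max_retry_delay.

```python
import math

def min_backoff_capped(delay_seconds: float, try_number: int, max_delay_seconds: float) -> int:
    # Hypothetical fix sketch: cap the exponent before multiplying, so the
    # intermediate value cannot overflow and the result respects max_retry_delay.
    max_exponent = max(0, math.ceil(math.log2(max_delay_seconds / delay_seconds)))
    exponent = min(try_number - 1, max_exponent)
    return min(math.ceil(delay_seconds * (2 ** exponent)), math.ceil(max_delay_seconds))

print(min_backoff_capped(300.0, 1, 3600.0))     # first retry: base delay
print(min_backoff_capped(300.0, 1100, 3600.0))  # huge try_number: capped, no overflow
```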
From Airflow logs:
```
2024-12-09 02:16:39.825 OverflowError: cannot convert float infinity to integer
2024-12-09 02:16:39.825 min_backoff = int(math.ceil(delay.total_seconds() * (2 ** (self.try_number - 2))))
2024-12-08 09:29:14.583 [2024-12-08T06:29:14.583+0000] {scheduler_job_runner.py:705} INFO - Executor reports execution of mydag.spark_submit run_id=manual__2024-11-02T10:19:30.618008+00:00 exited with status up_for_retry for try_number 470
```
Configs:
```python
with DAG(
    dag_id=DAG_ID,
    start_date=MYDAG_START_DATE,
    schedule_interval="@daily",
    catchup=AIRFLOW_CATCHUP,
    default_args={
        'depends_on_past': True,
        "retries": 1000,
        "retry_delay": duration(minutes=5),
        "retry_exponential_backoff": True,
        "max_retry_delay": duration(hours=1),
    },
) as dag:
```
<img width="947" alt="Image" src="https://github.com/user-attachments/assets/f3307b23-0307-4b4d-b968-3e1984fbe93c" />
<img width="1050" alt="Image" src="https://github.com/user-attachments/assets/f161329c-155c-4d92-b3b4-cf442d6ed036" />
### What you think should happen instead?
My pull request:
https://github.com/apache/airflow/pull/48057
https://github.com/apache/airflow/pull/48051
### How to reproduce
Use the configs from above. Example:
```python
with DAG(
    dag_id=DAG_ID,
    start_date=MY_AIRFLOW_START_DATE,
    schedule_interval="@daily",
    catchup=AIRFLOW_CATCHUP,
    default_args={
        'depends_on_past': True,
        "retries": 1000,
        "retry_delay": duration(minutes=5),
        "retry_exponential_backoff": True,
        "max_retry_delay": duration(hours=1),
    },
) as dag:
```
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-19T18:27:31Z | 2025-03-24T11:57:37Z | https://github.com/apache/airflow/issues/47971 | [
"kind:bug",
"area:Scheduler",
"good first issue",
"area:core",
"needs-triage"
] | alealandreev | 5 |
ydataai/ydata-profiling | jupyter | 831 | Correlation options in "Advanced Usage" do not work as expected | Trying to run profiling with:
```python
profile = ProfileReport(
    postgres_db_table, title=db_parameter_dict["tableName"], html={"style": {"full_width": True}},
    sort=None, minimal=None, interactions={'continuous': False}, orange_mode=True,
    correlations={
        "pearson": {"calculate": True, "warn_high_correlations": True, "threshold": 0.9},
        "spearman": {"calculate": False},
        "kendall": {"calculate": False},
        "phi_k": {"calculate": False},
        "cramers": {"calculate": False},
    }
)
```
these parameters, but no correlation visualizations show up in the report HTML, so I cannot run just the "Pearson" correlation.
When I try the parameters below:
```python
ProfileReport(
    postgres_db_table, title=db_parameter_dict["tableName"], html={"style": {"full_width": True}},
    sort=None, minimal=None, interactions={'continuous': False}, orange_mode=True,
    correlations={"pearson": {"calculate": True}},
)
```
Only the "Phik" and "Cramers V" tabs show up in the profiling report HTML.
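For what it's worth, the intended configuration can also be built compactly (a sketch; the five correlation names are assumed from the report tabs and documentation):

```python
# Hypothetical compact way to build a correlations dict that enables
# only Pearson and disables the other four correlation types.
names = ("pearson", "spearman", "kendall", "phi_k", "cramers")
correlations = {name: {"calculate": name == "pearson"} for name in names}
print(correlations["pearson"])
print(correlations["spearman"])
```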
To Reproduce
Data:
Famous Titanic dataset with 889 records and ['id', 'survived', 'pclass', 'name', 'sex', 'age', 'sibsp', 'parch',
'ticket', 'fare', 'embarked'] columns
Version information:
python: 3.7.0
Environment: Jupyter Notebook
<details><summary>Click to expand <strong><em>Version information</em></strong></summary>
<p>
absl-py==0.13.0
adal==1.2.6
alembic==1.4.1
altair==4.1.0
amqp==2.6.1
apispec==3.3.2
appdirs==1.4.4
astroid==2.3.1
astunparse==1.6.3
atomicwrites==1.4.0
attrs==20.3.0
autopep8==1.5
azure-common==1.1.26
azure-graphrbac==0.61.1
azure-mgmt-authorization==0.61.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-keyvault==2.2.0
azure-mgmt-resource==12.0.0
azure-mgmt-storage==11.2.0
azureml-core==1.23.0
Babel==2.8.0
backcall==0.1.0
backoff==1.10.0
backports.tempfile==1.0
backports.weakref==1.0.post1
bcrypt==3.2.0
beautifulsoup4==4.9.0
billiard==3.6.3.0
bleach==3.1.0
bokeh==2.3.1
Boruta==0.3
boto==2.49.0
boto3==1.12.9
botocore==1.15.9
Bottleneck==1.3.2
Brotli==1.0.9
bs4==0.0.1
bson==0.5.9
cached-property==1.5.2
cachelib==0.1.1
cachetools==4.2.1
celery==4.4.7
certifi==2019.9.11
cffi==1.14.3
chardet==3.0.4
chart-studio==1.1.0
clang==5.0
click==8.0.0
cloudpickle==1.6.0
colorama==0.4.1
colorcet==2.0.6
colorlover==0.3.0
colour==0.1.5
confuse==1.4.0
contextlib2==0.6.0.post1
croniter==0.3.34
cryptography==3.2
cssselect==1.1.0
cufflinks==0.17.3
cx-Oracle==7.2.3
cycler==0.10.0
d6tcollect==1.0.5
d6tstack==0.2.0
dash==1.16.1
dash-core-components==1.12.1
dash-html-components==1.1.1
dash-renderer==1.8.1
dash-table==4.10.1
databricks-cli==0.14.2
dataclasses==0.6
decorator==4.4.0
defusedxml==0.6.0
dnspython==2.0.0
docker==4.4.4
docopt==0.6.2
docutils==0.15.2
dtreeviz==1.3
email-validator==1.1.1
entrypoints==0.3
et-xmlfile==1.0.1
exitstatus==1.4.0
extratools==0.8.2.1
fake-useragent==0.1.11
feature-selector===N-A
findspark==1.4.2
Flask==1.1.1
Flask-AppBuilder==3.0.1
Flask-Babel==1.0.0
Flask-Caching==1.9.0
Flask-Compress==1.5.0
Flask-Cors==3.0.10
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-Migrate==2.5.3
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
flask-talisman==0.7.0
Flask-WTF==0.14.3
flatbuffers==1.12
future==0.18.2
gast==0.4.0
gensim==3.8.1
geographiclib==1.50
geopy==2.0.0
gitdb==4.0.5
GitPython==3.1.14
google-api-core==1.26.0
google-auth==1.27.0
google-auth-oauthlib==0.4.6
google-cloud-core==1.6.0
google-cloud-storage==1.36.1
google-crc32c==1.1.2
google-pasta==0.2.0
google-resumable-media==1.2.0
googleapis-common-protos==1.53.0
graphviz==0.17
great-expectations==0.13.19
grpcio==1.40.0
gunicorn==20.0.4
h5py==3.1.0
htmlmin==0.1.12
humanize==2.6.0
idna==2.8
ImageHash==4.2.0
imageio==2.9.0
imbalanced-learn==0.5.0
imblearn==0.0
imgkit==1.2.2
importlib-metadata==1.7.0
iniconfig==1.1.1
instaloader==4.7.1
ipykernel==5.1.2
ipython==7.8.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
isodate==0.6.0
isort==4.3.21
itsdangerous==1.1.0
jdcal==1.4.1
jedi==0.15.1
jeepney==0.6.0
Jinja2==2.11.2
jmespath==0.9.5
joblib==1.0.0
json5==0.8.5
jsonpatch==1.32
jsonpickle==2.0.0
jsonpointer==2.1
jsonschema==3.0.2
jupyter==1.0.0
jupyter-client==6.1.11
jupyter-console==6.2.0
jupyter-contrib-core==0.3.3
jupyter-contrib-nbextensions==0.5.1
jupyter-core==4.7.0
jupyter-highlight-selected-word==0.2.0
jupyter-latex-envs==1.4.6
jupyter-nbextensions-configurator==0.4.1
jupyterlab==1.1.3
jupyterlab-server==1.0.6
jupyterthemes==0.20.0
karateclub==1.0.11
keras==2.6.0
Keras-Preprocessing==1.1.2
kiwisolver==1.1.0
kombu==4.6.11
kubernetes==12.0.1
lazy-object-proxy==1.4.2
lesscpy==0.14.0
lightgbm==2.2.3
llvmlite==0.35.0
lxml==4.5.0
Mako==1.1.3
Markdown==3.2.2
MarkupSafe==1.1.1
marshmallow==3.8.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
matplotlib==3.4.1
mccabe==0.6.1
MechanicalSoup==0.12.0
metakernel==0.27.5
missingno==0.4.2
mistune==0.8.4
mleap==0.16.1
mlxtend==0.17.3
msgpack==1.0.0
msrest==0.6.21
msrestazure==0.6.4
multimethod==1.4
natsort==7.0.1
nbconvert==5.6.0
nbformat==4.4.0
ndg-httpsclient==0.5.1
networkx==2.4
notebook==6.0.1
numba==0.52.0
numpy==1.19.5
oauthlib==3.1.0
openpyxl==3.0.6
opt-einsum==3.3.0
packaging==20.9
pandas==1.1.5
pandas-profiling==3.0.0
pandocfilters==1.4.2
param==1.10.1
paramiko==2.7.2
parse==1.15.0
parsedatetime==2.6
parso==0.5.1
pathlib2==2.3.5
pathspec==0.8.1
patsy==0.5.1
pexpect==4.8.0
phik==0.11.2
pickleshare==0.7.5
Pillow==8.2.0
plotly==4.14.3
pluggy==0.13.1
ply==3.11
polyline==1.4.0
prefixspan==0.5.2
prison==0.1.3
prometheus-client==0.7.1
prometheus-flask-exporter==0.18.1
prompt-toolkit==2.0.9
protobuf==3.15.4
psutil==5.7.0
psycopg2==2.8.6
ptyprocess==0.6.0
py==1.10.0
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycodestyle==2.5.0
pycparser==2.20
pyct==0.4.8
pydantic==1.8.2
pydot==1.4.2
pyee==7.0.2
Pygments==2.4.2
PyGSP==0.5.1
PyJWT==1.7.1
pylint==2.4.2
pymssql==2.1.5
PyNaCl==1.4.0
pyodbc==4.0.27
pyOpenSSL==20.0.1
pyparsing==2.4.2
pyppeteer==0.2.2
pyquery==1.4.1
pyrsistent==0.15.4
pysftp==0.2.9
PySocks==1.7.1
pytest==6.2.4
python-dateutil==2.8.1
python-dotenv==0.14.0
python-editor==1.0.4
python-louvain==0.13
python3-openid==3.2.0
pytz==2019.2
PyWavelets==1.1.1
pywin32==227
pywinpty==0.5.5
PyYAML==5.3
pyzmq==18.1.0
qtconsole==5.0.1
QtPy==1.9.0
querystring-parser==1.2.4
requests==2.25.1
requests-html==0.10.0
requests-oauthlib==1.3.0
retrying==1.3.3
rsa==4.7.2
ruamel.yaml==0.16.12
ruamel.yaml.clib==0.2.2
s3transfer==0.3.3
scikit-image==0.18.1
scikit-learn==0.23.2
scikit-plot==0.3.7
scipy==1.6.0
seaborn==0.11.1
SecretStorage==3.3.1
Send2Trash==1.5.0
shap==0.36.0
Shapely==1.7.1
six==1.15.0
sklearn==0.0
slicer==0.0.7
smart-open==1.9.0
smmap==3.0.5
sortedcontainers==2.3.0
soupsieve==2.0
spylon==0.3.0
spylon-kernel==0.4.1
SQLAlchemy==1.3.19
SQLAlchemy-Utils==0.36.8
sqlparse==0.4.1
statsmodels==0.9.0
tabulate==0.8.9
tangled-up-in-unicode==0.1.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.6.0
tensorflow-estimator==2.6.0
termcolor==1.1.0
terminado==0.8.2
testpath==0.4.2
threadpoolctl==2.1.0
tifffile==2021.4.8
toml==0.10.2
toolz==0.11.1
tornado==6.0.3
tqdm==4.60.0
traitlets==4.3.2
tweepy==3.8.0
twitter-scraper==0.4.2
typed-ast==1.4.0
typing-extensions==3.7.4.3
tzlocal==2.1
urllib3==1.25.9
vine==1.3.0
virtualenv==16.7.9
visions==0.7.1
w3lib==1.22.0
waitress==1.4.4
wcwidth==0.1.7
webencodings==0.5.1
websocket-client==0.58.0
websockets==8.1
Werkzeug==1.0.0
widgetsnbextension==3.5.1
wrapt==1.12.1
WTForms==2.3.3
xgboost==1.1.1
xlrd==1.2.0
XlsxWriter==1.2.2
yellowbrick==0.7
zipp==3.1.0
</p>
</details>
| open | 2021-09-21T13:45:44Z | 2021-09-21T13:45:44Z | https://github.com/ydataai/ydata-profiling/issues/831 | [] | enesMesut | 0 |
aleju/imgaug | machine-learning | 414 | AssertionError install tests for 0.2.9 build on NixOS | Hi Team,
I was trying to enable the test cases for pythonPackages.imgaug https://github.com/NixOS/nixpkgs/pull/67494
During this process i am able to execute the test cases but facing **AssertionError** and this is causing 5 failures.
Summary of test run:
`============ **5 failed, 383 passed, 3 warnings in 199.71s (0:03:19)** =============`
detailed log :
[imgaug_test_failures.txt](https://github.com/aleju/imgaug/files/3604110/imgaug_test_failures.txt)
Please suggest. Thanks. | closed | 2019-09-12T07:00:08Z | 2020-01-07T08:43:34Z | https://github.com/aleju/imgaug/issues/414 | [] | Rakesh4G | 37 |
babysor/MockingBird | deep-learning | 365 | Cannot run after importing the model files | After importing the model files, I always get the error: Error: Model files not found. Please download the models
Aren't the model files supposed to go in C:\MockingBird-main\synthesizer\saved_models? (My files are under the root of the C: drive.) | open | 2022-02-01T06:30:40Z | 2022-02-06T02:36:32Z | https://github.com/babysor/MockingBird/issues/365 | [] | zhang065 | 5 |
LAION-AI/Open-Assistant | machine-learning | 2,800 | 500 - Open Assistant ERROR | Sorry, we encountered a server error. We're not sure what went wrong.
The error has begun to appear very often, even with smaller dialogs.
If you tried to open a web page and saw an Internal Server Error 500, you can do the following.
- Wait...
- Notify the administrator...
- Check the htaccess file...
- Check the error log...
- Check the contents of CGI scripts...
- Check plugins and components...
- Increase server RAM
| closed | 2023-04-21T05:17:33Z | 2023-04-21T08:05:03Z | https://github.com/LAION-AI/Open-Assistant/issues/2800 | [] | buddhadhammaliveexpedition | 1 |
opengeos/streamlit-geospatial | streamlit | 51 | streamlit-geospatial site down | The URL https://streamlit.geemap.org/ has been showing an "Oh no. Error running application" message for two days or so now. This wonderful application has been a fun tool in letting students explore remote sensing data, so we hope to make use of it again soon! | closed | 2022-06-30T04:26:50Z | 2022-06-30T12:00:18Z | https://github.com/opengeos/streamlit-geospatial/issues/51 | [] | frizatch | 2 |
huggingface/text-generation-inference | nlp | 2,900 | Support XGrammar backend as an alternative to Outlines | ### Feature request
Support the use of XGrammar instead of Outlines for the backend Structured-Output generation.
### Motivation
XGrammar has been shown to be much faster than Outlines for generation structured output
BlogPost:
https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar
"As shown in Figure 1, XGrammar outperforms existing structured generation solutions by up to 3.5x on the JSON schema workload and more than 10x on the CFG workload. Notably, the gap in CFG-guided generation is larger. This is because many JSON schema specifications can be expressed as regular expressions, bringing more optimizations that are not directly applicable to CFGs."

"Figure 2 shows end-to-end inference performance on LLM serving tasks. We can find the trend again that the gap on CFG-guided settings is larger, and the gap grows on larger batch sizes. This is because the GPU throughput is higher on larger batch sizes, putting greater pressure on the grammar engine running on CPUs. Note that the main slowdown of vLLM comes from its structured generation engine, which can be potentially eliminated by integrating with XGrammar. In all cases, XGrammar enables high-performance generation in both settings without compromising flexibility and efficiency."

Paper / Technical Report from XGrammar:
https://arxiv.org/abs/2411.15100
### Your contribution
XGrammar Repo:
https://github.com/mlc-ai/xgrammar
SGLang Repo:
https://github.com/sgl-project/sglang/
SGLang Docs on Structured Output generation including using XGrammar:
https://sgl-project.github.io/backend/openai_api_completions.html#Structured-Outputs-(JSON,-Regex,-EBNF)
Note: Those docs do note that XGrammar does not support regular expressions | open | 2025-01-10T18:03:07Z | 2025-01-13T18:35:59Z | https://github.com/huggingface/text-generation-inference/issues/2900 | [] | 2016bgeyer | 0 |
chaos-genius/chaos_genius | data-visualization | 343 | Update date in UI currently shows the date for which data was last available | Currently the UI has a field called Last Updated Date; instead of showing the last time Anomaly/RCA was run, it shows the last date for which data is available.
We should fix this to show the last time Anomaly/RCA was run. But knowing the last available date for the dataset should also be useful. Should we show both in the UI? | closed | 2021-10-27T08:08:44Z | 2022-03-15T11:01:48Z | https://github.com/chaos-genius/chaos_genius/issues/343 | [
"✨ enhancement",
"🖥️ frontend"
] | kartikay-bagla | 2 |
hzwer/ECCV2022-RIFE | computer-vision | 122 | Scale 0.5 at 4K looks noticably worse than rife-ncnn-vulkan's UHD mode | Setting the scale to 0.5 for 4K content doesn't seem to improve results over 1.0, and looks worse than the UHD mode in RIFE-NCNN.

| closed | 2021-02-28T15:10:03Z | 2021-03-02T03:05:13Z | https://github.com/hzwer/ECCV2022-RIFE/issues/122 | [] | n00mkrad | 1 |
Yorko/mlcourse.ai | matplotlib | 355 | Assignment 9 | https://www.kaggle.com/kashnitsky/assignment-9-time-series-analysis
1) The web form does not correspond to the questions in the task; at least the 1st question is missing.
2) I doubt that the 1st question has a correct answer. Simple operations lead to the result 3426.195682, which is not listed among the possible answers (see the attached kernel). The situation is the same in Q2-4: practically no coding, just copy-pasting from the lecture, yet the results are still not among the listed answers. Although maybe I didn't understand the tasks correctly.
[kernel (2).zip](https://github.com/Yorko/mlcourse.ai/files/2411140/kernel.2.zip)
3) For some reason numpy is not imported when it is definitely needed.
| closed | 2018-09-24T14:03:24Z | 2018-11-20T16:48:14Z | https://github.com/Yorko/mlcourse.ai/issues/355 | [] | Vozf | 2 |
wger-project/wger | django | 1,578 | Distance (km, mi) logging bugs | Hello! It seems that when logging distance, decimal values cannot be entered: entering, say, 1.5 mi returns the error "please enter a valid value. The two nearest valid values are 1 and 2".
There are a few other bugs around the mileage I've noticed but that one is the most important as it pretty much makes that feature unusable.
A few other bugs I've noticed that are annoying but don't make it unusable:
- I've set up the exercise in my workout routine with Miles as the rep unit, and mph as the weight unit, yet each time I log I still have to manually change the units (still defaults to reps/lbs)
- Even when inputting values as miles/mph, they often display in the logs as km instead. It looks like this only happens in the mobile app, because it has a different menu for displaying logs
## Steps to Reproduce
<!-- Please include as many steps to reproduce so that we can replicate the problem. -->
1. ... Either in the mobile app or on the website, log a workout exercise "run"
2. ... Change the reps to distance (miles or km) and enter a decimal value (eg 1.5)
3. ... Click "Save"
**Expected results:** <!-- what did you expect to see? -->
That a decimal distance value would be saved to the program.
**Actual results:** <!-- what did you see? -->
"Please enter a valid value. The two nearest valid values are 1 and 2."
<details>

Minor bugs:
1. Defaults to reps/weight even when different units are chosen for that exercise


2. Results display as km even when mi is chosen, at least on the mobile app

<!--
Any logs you think would be useful (if you have a local instance)
I'm not sure what's relevant but happy to upload anything if need be.
-->
</details>
Thanks for any help!
| closed | 2024-02-02T00:32:45Z | 2025-03-21T22:34:50Z | https://github.com/wger-project/wger/issues/1578 | [
"bug"
] | ddakotac | 2 |
jackmpcollins/magentic | pydantic | 414 | Test and add docs for usage with other logging/tracing providers | Test that magentic works with the most common LLM tracing / logging providers and add docs to configure these together.
- https://log10.io/
- https://github.com/Arize-ai/phoenix | open | 2025-02-02T02:52:12Z | 2025-02-02T02:52:12Z | https://github.com/jackmpcollins/magentic/issues/414 | [] | jackmpcollins | 0 |
deeppavlov/DeepPavlov | nlp | 1,107 | Document everything useful that we have in files.deeppavlov.ai | closed | 2019-12-19T09:32:45Z | 2020-05-13T09:31:46Z | https://github.com/deeppavlov/DeepPavlov/issues/1107 | [
"enhancement",
"Documentation"
] | yoptar | 0 | |
NullArray/AutoSploit | automation | 421 | Unhandled Exception (9e51c3117) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py`
Error message: `argument of type 'NoneType' is not iterable`
Error traceback:
```
Traceback (most recent call):
File "/home/meddy/Téléchargements/crack/Autosploit/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/home/meddy/Téléchargements/crack/Autosploit/lib/term/terminal.py", line 474, in terminal_main_display
if "help" in choice_data_list:
TypeError: argument of type 'NoneType' is not iterable
```
Metasploit launched: `True`
| closed | 2019-01-28T21:27:30Z | 2019-01-29T15:37:20Z | https://github.com/NullArray/AutoSploit/issues/421 | [] | AutosploitReporter | 0 |
3b1b/manim | python | 1,497 | Manim writes text out of the window | ### Describe the error
When I use the Scene TextExample found [here](https://3b1b.github.io/manim/getting_started/example_scenes.html), only part of the text shows up. I need to zoom out to see everything, and when I save the video, it looks as if I had not zoomed out.
### Code and Error
**Code**:
[This](https://3b1b.github.io/manim/getting_started/example_scenes.html) Scene TextExample.
**Error**:
Can't see some words.
### Environment
**OS System**: Windows 10
**manim version**: Release 1.0.0
**python version**: 3.9.4
| closed | 2021-04-23T20:43:05Z | 2021-06-18T19:14:32Z | https://github.com/3b1b/manim/issues/1497 | [] | Leoriem-code | 2 |
comfyanonymous/ComfyUI | pytorch | 7,235 | VHS_LoadImagesPath not working | "No directory found" in VHS_LoadImagesPath node
I am copy-pasting the path from the Windows folder structure where the image sequence is located,
but I am getting the error "No directory found". | closed | 2025-03-14T16:31:02Z | 2025-03-16T14:59:07Z | https://github.com/comfyanonymous/ComfyUI/issues/7235 | [] | Arup-art | 1 |
jowilf/starlette-admin | sqlalchemy | 383 | Bug: SQLAlchemy models with Mixin Classes raises error. | **Describe the bug**
I have over 60 models that all share the same columns, such as `id`, `date_created`, and `date_updated`, so I implement my own mixins and subclass them in my models, as in the example below. This error started being raised after I updated to `0.12.0`.
**To Reproduce**
I have written an example here:
```python
from sqlalchemy import (
ForeignKey,
create_engine,
)
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker, relationship
from starlette.applications import Starlette
from starlette.responses import HTMLResponse
from starlette.routing import Route
from starlette_admin.contrib.sqla import Admin, ModelView
from starlette_admin.fields import (
HasMany,HasOne,
IntegerField,
StringField,
)
class Base(DeclarativeBase):
pass
class IDMixin:
id: Mapped[int] = mapped_column(primary_key=True)
class Source(Base, IDMixin):
__tablename__ = "source"
name: Mapped[str] = mapped_column()
items: Mapped[list["Item"]] = relationship(
"Item",
back_populates="source"
)
class Item(Base, IDMixin):
__tablename__ = "item"
name: Mapped[str] = mapped_column()
source_id: Mapped[int] = mapped_column(ForeignKey("source.id"))
source: Mapped[Source] = relationship("Source", back_populates="items")
class SourceView(ModelView):
label = "Sources"
name = "Source"
fields = [
IntegerField(
name="id",
label="ID",
help_text="ID of the record.",
read_only=True,
),
StringField(
name="name",
label="Name",
),
HasMany(
name="items",
label="Item",
),
]
class ItemView(ModelView):
label = "Items"
name = "Item"
fields = [
IntegerField(
name="id",
label="ID",
help_text="ID of the record.",
read_only=True,
),
StringField(
name="name",
label="Name",
),
HasOne(
name="source",
label="Source",
),
]
engine = create_engine(
"sqlite:///db.sqlite3",
connect_args={"check_same_thread": False},
echo=True,
)
session = sessionmaker(bind=engine, autoflush=False)
def init_database() -> None:
Base.metadata.create_all(engine)
app = Starlette(
routes=[
Route(
"/",
lambda r: HTMLResponse('<a href="/admin/">Click me to get to Admin!</a>'),
)
],
on_startup=[init_database],
)
# Create admin
admin = Admin(engine, title="Mixins Error")
# Add views
admin.add_view(SourceView(model=Source))
admin.add_view(ItemView(model=Item))
# Mount admin
admin.mount_to(app)
```
Error:
```console
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with Source.items has an attribute 'primary_key'
```
**Environment (please complete the following information):**
- Starlette-Admin version: 0.12.0
- ORM/ODMs: SQLAlchemy, MongoEngine
| closed | 2023-11-06T09:12:48Z | 2023-11-06T21:21:20Z | https://github.com/jowilf/starlette-admin/issues/383 | [
"bug"
] | hasansezertasan | 1 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 311 | Error on Linux | The same code raises an error on Linux.
It worked previously,
and it still works under Win10.
Even after modifying the UA it does not work:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36

| closed | 2025-02-05T10:36:31Z | 2025-02-05T12:34:09Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/311 | [
"invalid",
"wontfix"
] | fhwhite | 1 |
pydantic/logfire | pydantic | 891 | ModuleNotFoundError: No module named 'opentelemetry.sdk._events' | ### Description
In the pyproject.toml it looks like the version requirement for opentelemetry-sdk needs to be bumped.
### Python, Logfire & OS Versions, related packages (not required)
```TOML
``` | closed | 2025-02-24T21:22:24Z | 2025-02-25T14:48:15Z | https://github.com/pydantic/logfire/issues/891 | [] | mwildehahn | 1 |
jazzband/django-oauth-toolkit | django | 1,193 | Why does the request return 'invalid_client'? | Hello everyone, everything good?
I'm trying to follow the tutorial for Django REST Framework that is in the [documentation](https://django-oauth-toolkit.readthedocs.io/en/latest/rest-framework/getting_started.html). I am not able to get a valid response. I've seen some issues in the repo, and some may even be the same case, but it's not clear to me where I'm going wrong.
Basically, I'm sending the request using curl with the following data:
`curl -X POST -d "grant_type=password&username=<admin>&password=<admin>" -u"<fEsQnQBsokTk5NB1xaZeLRaLgvbjZ3INGgFhgmFn>:<pbkdf2_sha256$320000$tYHgBpw9Sq6WZFV6u8ruDK$8Boq6kWNqGChvRbw33FbfduwnYz4vUhDcY8RzE1HuqE=>" http://localhost:8000/o/token/`
The Authorization grant type is: **Resource owner password-based**
The django-oauth-toolkit version is: 2.1.0
The response I get from the server is:
`{"error": "invalid_client"}`
Sorry if this seems repetitive. But it wasn't clear to me where I went wrong.
PS: The data presented here are not sensitive, as they are data to exemplify the error. | closed | 2022-08-09T22:20:15Z | 2023-09-06T12:51:42Z | https://github.com/jazzband/django-oauth-toolkit/issues/1193 | [
"question"
] | linneudm | 7 |
pytorch/vision | machine-learning | 8,629 | Detection lr_scheduler.step() called every step instead of every epoch | ### 🐛 Describe the bug
The `lr_scheduler.step()` function should be called once per epoch, not every step as seen in engine.py:
https://github.com/pytorch/vision/blob/main/references/detection/engine.py#L54
Reference:
https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
The `--lr-steps` argument provided to the `train.py` script don't make much sense: `[16, 22]`, because these quickly change in the 2nd epoch and remain static for the rest of the training job.
Instead, I might suggest:
```python
def train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq, scaler=None):
model.train()
metric_logger = utils.MetricLogger(delimiter=" ")
metric_logger.add_meter("lr", utils.SmoothedValue(window_size=1, fmt="{value:.6f}"))
header = f"Epoch: [{epoch}]"
lr_scheduler = None
if epoch == 0:
warmup_factor = 1.0 / 1000
warmup_iters = min(1000, len(data_loader) - 1)
lr_scheduler = torch.optim.lr_scheduler.LinearLR(
optimizer, start_factor=warmup_factor, total_iters=warmup_iters
)
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in t.items()} for t in targets]
with torch.cuda.amp.autocast(enabled=scaler is not None):
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
# reduce losses over all GPUs for logging purposes
loss_dict_reduced = utils.reduce_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
loss_value = losses_reduced.item()
if not math.isfinite(loss_value):
print(f"Loss is {loss_value}, stopping training")
print(loss_dict_reduced)
sys.exit(1)
optimizer.zero_grad()
if scaler is not None:
scaler.scale(losses).backward()
scaler.step(optimizer)
scaler.update()
else:
losses.backward()
optimizer.step()
# Only step() the special LinearLR scheduler for the first epoch
if lr_scheduler is not None and isinstance(lr_scheduler, torch.optim.lr_scheduler.LinearLR) and epoch == 0:
lr_scheduler.step()
metric_logger.update(loss=losses_reduced, **loss_dict_reduced)
metric_logger.update(lr=optimizer.param_groups[0]["lr"])
# one step for each epoch
if lr_scheduler is not None:
lr_scheduler.step()
return metric_logger
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, Jun 12 2024, 11:12:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-193-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 80
Model name: AMD Ryzen 9 5900HX with Radeon Graphics
Stepping: 0
Frequency boost: enabled
CPU MHz: 3916.790
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 6587.59
Virtualization: AMD-V
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 4 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] cjm-pytorch-utils==0.0.6
[pip3] cjm-torchvision-tfms==0.0.11
[pip3] numpy==1.26.3
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.1
[pip3] onnxscript==0.1.0.dev20240628
[pip3] torch==2.4.0
[pip3] torchaudio==2.3.1+cu121
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] Could not collect
| closed | 2024-09-03T20:45:04Z | 2024-09-04T09:04:32Z | https://github.com/pytorch/vision/issues/8629 | [] | david-csnmedia | 1 |
mljar/mljar-supervised | scikit-learn | 675 | Bug: Need retrain | ```
File "/usr/local/lib/python3.11/site-packages/supervised/base_automl.py", line 2511, in _need_retrain
    change = np.abs((old_score - new_score) / old_score)
numpy.core._exceptions._UFuncNoLoopError: ufunc 'subtract' did not contain a loop with signature matching types (dtype('<U19'), dtype('float64')) -> None
```
| open | 2023-11-12T16:26:38Z | 2023-11-12T16:26:38Z | https://github.com/mljar/mljar-supervised/issues/675 | [] | strukevych | 0 |
thtrieu/darkflow | tensorflow | 462 | the loss is 4~6 | Hi, I really want to know why my loss stays between 4 and 6 and won't go down any further. | open | 2017-12-06T02:46:38Z | 2017-12-29T16:30:16Z | https://github.com/thtrieu/darkflow/issues/462 | [] | QueenJuliaZxx | 7 |
public-apis/public-apis | api | 3,535 | https://github.com/octocat/Hello-World/pull/6 | closed | 2023-06-11T23:51:22Z | 2023-06-11T23:51:49Z | https://github.com/public-apis/public-apis/issues/3535 | [] | Shaukyhamdan94 | 0 | |
pydantic/pydantic-settings | pydantic | 480 | Setting a default value for ZoneInfo does not work | When setting a default value for a `ZoneInfo` field, a different timezone is used during validation.
```python
Python 3.12.1 (main, Jan 8 2024, 05:53:39) [Clang 17.0.6 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydantic_settings
>>> from zoneinfo import ZoneInfo
>>> from pydantic_settings import BaseSettings
>>> class Test(BaseSettings):
... tz: ZoneInfo = ZoneInfo("Europe/London")
...
>>> Test()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/acuthbert2/dev/ves/scripts/.venv/lib/python3.12/site-packages/pydantic_settings/main.py", line 167, in __init__
super().__init__(
File "/home/acuthbert2/dev/ves/scripts/.venv/lib/python3.12/site-packages/pydantic/main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for Test
tz
invalid timezone: :Asia/Shanghai [type=zoneinfo_str, input_value=':Asia/Shanghai', input_type=str]
>>> pydantic_settings.__version__
'2.6.1'
```
- pydantic-settings version: 2.6.1
- python version: 3.12.1
- platform: DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.2 LTS" | closed | 2024-11-21T05:58:54Z | 2024-11-22T06:00:36Z | https://github.com/pydantic/pydantic-settings/issues/480 | [
"unconfirmed"
] | ac3673 | 3 |
onnx/onnx | scikit-learn | 5,794 | What is the correct way to make a tensor with a dynamic dimension for a Reshape operator? | # Ask a Question
### Question
I am trying to add a Reshape node to a BERT onnx model that works with dynamic shapes. The reshape op should reshape a rank 3 tensor to a rank 2. The input to the reshape is of shape [unk__2,unk__3,768] and I need to collapse the first two dynamic dimensions into one and keep the last fixed dimension such as [[unk__2 * unk__3], 768]. How can I specify a dynamic dimension when making a tensor with the onnx helper?
### Further information
When running the code snippet I provided below, I get the following error:
```
raise TypeError(f"'{value}' is not an accepted attribute value.")
TypeError: 'name: "shape"
type {
  tensor_type {
    elem_type: 7
    shape {
      dim {
        dim_value: -1
      }
      dim {
        dim_value: 768
      }
    }
  }
}
' is not an accepted attribute value.
```
- Is this issue related to a specific model?
**Model name**: bert-base
**Model opset**: 18
### Notes
Code snippet:
```
# Create a Constant node that contains the target shape
shape_tensor = helper.make_tensor_value_info(name='shape', elem_type=onnx.TensorProto.INT64, shape=(-1, 768))
shape_node = helper.make_node(
    'Constant',
    inputs=[],
    outputs=[f'shape_{i}_output'],
    value=shape_tensor,
    name=f'shape_{i}'
)
# Create a Reshape node
reshape_node = helper.make_node(
    'Reshape',
    inputs=[mm_node.input[0], f'shape_{i}_output'],
    outputs=[f'reshaped_output_{i}'],
    name=f'Reshape_{i}'
)
```
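One likely cause (my reading of the error, not a confirmed answer from the maintainers): `helper.make_tensor_value_info` returns a `ValueInfoProto`, but a `Constant` node's `value` attribute must be a `TensorProto`, e.g. `helper.make_tensor(name='shape', data_type=onnx.TensorProto.INT64, dims=[2], vals=[-1, 768])`. The Reshape `shape` input is then an ordinary 1-D INT64 tensor whose entries may be `-1` ("infer exactly one dimension at runtime") or `0` ("copy the input dimension"), which is how a dynamic `[batch, seq, 768]` collapses to `[batch*seq, 768]`. A minimal stdlib sketch of those shape-resolution semantics:

```python
def resolve_reshape_shape(input_shape, target_shape):
    """Mimic ONNX Reshape: 0 copies the input dim, -1 is inferred (at most one)."""
    out = [input_shape[i] if d == 0 else d for i, d in enumerate(target_shape)]
    if out.count(-1) > 1:
        raise ValueError("at most one -1 is allowed")
    total = 1
    for d in input_shape:
        total *= d
    if -1 in out:
        known = 1
        for d in out:
            if d != -1:
                known *= d
        out[out.index(-1)] = total // known
    return out

# Collapsing the two dynamic leading dims of [batch, seq, 768] into one:
print(resolve_reshape_shape([4, 128, 768], [-1, 768]))  # [512, 768]
```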
| closed | 2023-12-07T06:51:41Z | 2025-01-02T06:44:37Z | https://github.com/onnx/onnx/issues/5794 | [
"question",
"stale"
] | ria143 | 1 |
wemake-services/django-test-migrations | pytest | 4 | Add a link to wemake-django-tempate that is using this tool | Related: https://github.com/wemake-services/wemake-django-template/issues/976 | closed | 2019-11-21T16:46:26Z | 2019-11-25T14:41:43Z | https://github.com/wemake-services/django-test-migrations/issues/4 | [
"documentation",
"enhancement"
] | sobolevn | 0 |
aiortc/aiortc | asyncio | 539 | "No start code is found" when decoding frames with large resolution | Hi,
I use aiortc to receive a stream from Unreal Engine. There is no error when the resolution is 1024\*768, but errors occur when the resolution is higher than 1920\*1080, and the decoded frames stutter.
The error is as follows:
No start code is found.
Error splitting the input into NAL units.
But when I use Chrome, the video is smooth.
I also checked the network usage. In Chrome, the receive rate is around 10\~20 Mb/s regardless of the resolution. With aiortc, it is 500 KB\~1.5 MB/s.
I guess the error or delay is due to low bandwidth; perhaps I missed some bandwidth-related configuration?
Thank you so much! | closed | 2021-06-20T13:23:32Z | 2021-07-01T07:04:12Z | https://github.com/aiortc/aiortc/issues/539 | [] | jwzxgy2007 | 3 |
PaddlePaddle/PaddleHub | nlp | 2,276 | Can fastspeech2_baker produce male, female, and child voices? | Can the fastspeech2_baker model produce male, female, and child voices?
| open | 2023-07-13T03:36:25Z | 2024-02-26T04:59:21Z | https://github.com/PaddlePaddle/PaddleHub/issues/2276 | [] | shihzenq | 0 |
matplotlib/matplotlib | matplotlib | 28,816 | [MNT]: Refactor data limit handling | ### Summary
Currently, the approach is calling `_AxesBase.update_datalim` in the plot factory functions with an array of points (often with an already reduced list `[(xmin, ymin), (xmax, ymax)]` but not always. The information is ad-hoc generated in the factory function.
### Proposed fix
We should switch to pushing the logic for data limit evaluation into the Artists. Then the plot factory functions only query the created Artist for their limits and pass that on. | open | 2024-09-13T12:49:04Z | 2024-10-19T17:20:52Z | https://github.com/matplotlib/matplotlib/issues/28816 | [
"Maintenance"
] | timhoffm | 4 |
ultralytics/ultralytics | deep-learning | 18,724 | JupyterNotebook Ram Memory + data validating question | I'm using a pre-trained segmentation model on a custom dataset. My goal is to verify the performance of the model.
**First question:**
Does model.val() use the best.pt weights from the training folder? Or should i load the best.pt into a model and do a model.val() off of it?
(Example 1)
**Second question:**
When the model is training, does it use the val data or does it JUST focus on the train data. Is the way I'm splitting the train+val data into kfolds and validating model 1 (trained on kfold1) on val-data 1 correct?
**Final Question/Problem:**
I run the clear memory and delete all the variables I've created at the end of every training. My goal is to clear all the memory from training.
The problem that I face (I run the script in JupyterNotebooks but also tried in just python cmd in linux) after the first training is finished, the ram memory is not refreshed. It only goes to zero, if I manually reset the kernel for the notebook. Is there a better way to dispose of the whole training data before starting new training? **After around 10 kfolds (each 100 epochs) the script stops running.**
**My folder structure looks like this:**
Root
->train (full dataset - including ground truth)
->val (ful dataset - including ground truth)
->kfold
--->kfold1
------>train
------>val
--->kfold2
------>train
------>val
--->kfold3
------>train
------>val
**Config:**
config1.yaml
names:
0: object
path: root/kfold/kfold1
train: train/images
val: val/images
config2.yaml
names:
0: object
path: root/kfold/kfold2
train: train/images
val: val/images
**Code:**
```
def clear_memory():
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.reset_peak_memory_stats()
        torch.cuda.synchronize()

# load pretrained model
model = YOLO('yolov8x-seg.pt')

# ------------------------------------------------------------------------
# train model on kfold1
model_results = model.train(data=os.path.join(ROOT_DIR, 'config1.yaml'), imgsz=512, batch=16,
                            deterministic=True, plots=True,
                            close_mosaic=0,
                            optimizer="SGD",
                            epochs=120,
                            # disable built-in augmentation, instead use Albumentations library
                            augment=False, hsv_h=0, hsv_s=0, hsv_v=0, degrees=0, translate=0,
                            scale=0, shear=0.0, perspective=0, flipud=0, fliplr=0, bgr=0,
                            mosaic=0, mixup=0, copy_paste=0, erasing=0, crop_fraction=0)

# Example 1
# evaluate the model's performance on the validation dataset
val_results = model.val()
print(val_results)
# should be the same as above, right?
model = YOLO("path/to/trainingfolder/weights/best.pt")
val_results_best = model.val()
# ------------------------------------------------------------------------

del model
del model_results
del val_results
clear_memory()

# Training of kfold 2
# ------------------------------------------------------------------------
# load pretrained model
model = YOLO('yolov8x-seg.pt')
# train model on kfold2 (note: config2.yaml here, not config1.yaml)
model_results = model.train(data=os.path.join(ROOT_DIR, 'config2.yaml'), imgsz=512, batch=16,
                            deterministic=True, plots=True,
                            close_mosaic=0,
                            optimizer="SGD",
                            epochs=120,
                            # disable built-in augmentation, instead use Albumentations library
                            augment=False, hsv_h=0, hsv_s=0, hsv_v=0, degrees=0, translate=0,
                            scale=0, shear=0.0, perspective=0, flipud=0, fliplr=0, bgr=0,
                            mosaic=0, mixup=0, copy_paste=0, erasing=0, crop_fraction=0)

# Example 1
# evaluate the model's performance on the validation dataset
val_results = model.val()
print(val_results)
# should be the same as above, right?
model = YOLO("path/to/trainingfolder/weights/best.pt")
val_results_best = model.val()
# ------------------------------------------------------------------------

del model
del model_results
del val_results
clear_memory()

# Training of kfold 3...
```
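One pattern that reliably returns training memory to the OS (a hedged sketch, not Ultralytics-specific advice: the JSON-printing child below is a hypothetical stand-in for something like `subprocess.run([sys.executable, "train_fold.py", str(fold_idx)])`) is to run each fold in a separate child process. Whatever CPython's allocator keeps cached after `del` and `gc.collect()`, the operating system reclaims everything when the child process exits, so fold N+1 always starts with a clean slate:

```python
import json
import subprocess
import sys

def run_fold_in_subprocess(fold_idx):
    # Stand-in for a real `train_fold.py` script: the child would build the
    # YOLO model, call .train()/.val(), and print its metrics as JSON.
    # All of the child's CPU/GPU memory is released when the process exits.
    code = f"import json; print(json.dumps({{'fold': {fold_idx}, 'done': True}}))"
    out = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

results = [run_fold_in_subprocess(i) for i in range(3)]
print(results)
```

The parent only parses the JSON the child prints, so no training state ever lives in the long-running notebook kernel.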
| closed | 2025-01-17T05:02:54Z | 2025-01-17T19:44:50Z | https://github.com/ultralytics/ultralytics/issues/18724 | [
"question",
"segment",
"Notebook"
] | armanivers | 5 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 480 | Saving hparams in model files | Having spent several hours to get the Swedish model (#257) to work, I think it is a good idea to save the hparams along with the models. Maybe even load them at run time from the model files.
Then we can mix and match in the toolbox, and it can check for compatibility, e.g. `speaker_embedding_size` in encoder and synth, `sample_rate` between the synth and vocoder. Then we can make helpful error messages to replace the python exceptions that occur when models are incompatible.
Since we have to re-release models when #472 gets merged to master, it is an opportunity to implement this new checkpoint format. In addition to "model_state" and "optimizer_state", we can also save a dictionary of "model_parameters" which would contain something like:
#### Encoder
* `sample_rate`
* `speaker_embedding_size`
#### Synthesizer
* symbols
* language?
* `speaker_embedding_size`
* `sample_rate`, `n_mels`, `n_fft`, etc.
#### Vocoder
* `sample_rate`, `n_mels`, `n_fft`, etc.
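The compatibility check this metadata would enable could look roughly like the following (a sketch, not an existing API in this repo; the dictionary keys simply mirror the bullet lists above):

```python
def check_compatibility(encoder_params, synth_params, vocoder_params):
    """Return a list of human-readable incompatibility messages (empty = compatible)."""
    errors = []
    if encoder_params["speaker_embedding_size"] != synth_params["speaker_embedding_size"]:
        errors.append("encoder/synthesizer speaker_embedding_size mismatch")
    for key in ("sample_rate", "n_mels", "n_fft"):
        if synth_params[key] != vocoder_params[key]:
            errors.append(f"synthesizer/vocoder {key} mismatch")
    return errors

enc = {"speaker_embedding_size": 256}
syn = {"speaker_embedding_size": 256, "sample_rate": 16000, "n_mels": 80, "n_fft": 800}
voc = {"sample_rate": 22050, "n_mels": 80, "n_fft": 800}
print(check_compatibility(enc, syn, voc))  # ['synthesizer/vocoder sample_rate mismatch']
```

The toolbox could call this when models are loaded and show the returned messages instead of letting a raw tensor-shape exception surface.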
I'll put this in a new issue and see if anyone wants this feature.
_Originally posted by @blue-fish in https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472#issuecomment-671403282_ | closed | 2020-08-10T14:54:59Z | 2020-08-11T13:08:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/480 | [
"enhancement"
] | ghost | 2 |
pydantic/FastUI | pydantic | 234 | Single value literal doesn't work in select form | To reproduce this issue, just change the ToolEnum to Literal of ToolEnum.hammer in demo source code.

Error trace
```
File "/Users/manimozaffar/Desktop/fastui/src/python-fastui/fastui/json_schema.py", line 157, in json_schema_obj_to_fields
yield from json_schema_any_to_fields(value, loc + [key], title, key in required, defs)
File "/Users/manimozaffar/Desktop/fastui/src/python-fastui/fastui/json_schema.py", line 166, in json_schema_any_to_fields
if schema_is_field(schema):
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/manimozaffar/Desktop/fastui/src/python-fastui/fastui/json_schema.py", line 379, in schema_is_field
return schema['type'] in {'string', 'number', 'integer', 'boolean'}
~~~~~~^^^^^^^^
KeyError: 'type'
``` | open | 2024-03-04T10:02:26Z | 2024-05-02T18:39:35Z | https://github.com/pydantic/FastUI/issues/234 | [] | ManiMozaffar | 0 |
predict-idlab/plotly-resampler | plotly | 157 | Look into github code scanning | As `LGTM` code quality is nonexistent, we should look into new alternatives for code quality badges. | closed | 2023-01-11T11:09:17Z | 2023-07-24T11:47:39Z | https://github.com/predict-idlab/plotly-resampler/issues/157 | [
"enhancement"
] | jonasvdd | 0 |
litestar-org/litestar | api | 3,397 | Bug: Using asyncio.create_subprocess_shell / _exec in lifespan raises NotImplementedError | ### Description
Not a whole lot more to say. I want to have a long running process that updates some data periodically in the background. With FastAPI I could use the lifespan-function as presented below, but with Litestar the following exception is raised:
```
| Traceback (most recent call last):
| File "C:\<disguised>\app.py", line 112, in update_data
| data = await exec_shell_command()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\<disguised>\app.py", line 17, in exec_shell_command
| proc = await asyncio.create_subprocess_shell(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Appl\Python\Lib\asyncio\subprocess.py", line 208, in create_subprocess_shell
| transport, protocol = await loop.subprocess_shell(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Appl\Python\Lib\asyncio\base_events.py", line 1661, in subprocess_shell
| transport = await self._make_subprocess_transport(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Appl\Python\Lib\asyncio\base_events.py", line 502, in _make_subprocess_transport
| raise NotImplementedError
| NotImplementedError
+------------------------------------
```
I think it has something to do with the event-loop being used, but I don't know enough to find out the cause fully on my own.
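A likely cause (an assumption based on the traceback, not confirmed): on Windows, asyncio subprocess transports are only implemented by the proactor event loop; a server running the selector loop raises exactly this `NotImplementedError` from `_make_subprocess_transport`. One hedged workaround is to select the proactor policy before the server starts — whether the ASGI server honours it depends on how it creates its loop:

```python
import asyncio
import sys

# Subprocess support needs the ProactorEventLoop on Windows; the
# SelectorEventLoop raises NotImplementedError for subprocess transports.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

async def run_shell(cmd: str) -> str:
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, _ = await proc.communicate()
    return stdout.decode().strip()

print(asyncio.run(run_shell("echo hello")))  # hello
```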
### URL to code causing the issue
_No response_
### MCVE
```python
import asyncio
from contextlib import asynccontextmanager
from typing import AsyncGenerator

from litestar import Litestar, get
import anyio


async def update_data() -> None:
    while True:
        proc = await asyncio.create_subprocess_shell(
            "ls -l",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, _ = await proc.communicate()
        await asyncio.sleep(60)


@asynccontextmanager
async def lifespan(_: Litestar) -> AsyncGenerator[None, None]:
    async with anyio.create_task_group() as tg:
        tg.start_soon(update_data)
        yield
        tg.cancel_scope.cancel()


@get("/")
async def root() -> dict[str, str]:
    return {"Hello": "World"}


app = Litestar(route_handlers=[root], lifespan=[lifespan])
```
### Steps to reproduce
```bash
Just run the code presented above
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
2.8.2
### Platform
- [ ] Linux
- [ ] Mac
- [X] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-04-16T20:48:33Z | 2025-03-20T15:54:36Z | https://github.com/litestar-org/litestar/issues/3397 | [
"Question",
"Compatibility",
"Needs MCVE",
"Needs Response :warning:"
] | tim-hilt | 3 |
zappa/Zappa | django | 503 | [Migrated] Refactor Django/Flask-specific behaviors into separate modules | Originally from: https://github.com/Miserlou/Zappa/issues/1318 by [rgov](https://github.com/rgov)
It's really powerful that Django and Flask apps are automatically detected and adapted to work on AWS out of the box.
The functionality for this is woven throughout `core.py`, `cli.py`, and `utilities.py` which increases complexity for testing, debugging, adding new features, etc.
An improved design would probably involve refactoring the behaviors into separate modules that are less strongly-coupled. With a defined interface for how to tune Zappa for different frameworks, it would be easier to add support for others. Plus, the Zappa core and the framework plugins could be tested separately. | closed | 2021-02-20T09:43:35Z | 2024-04-13T16:36:40Z | https://github.com/zappa/Zappa/issues/503 | [
"good-idea",
"no-activity",
"auto-closed"
] | jneves | 3 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,223 | Config `extraNodeAffinity` and `matchNodePurpose` combines incorrectly (OR instead of AND) | ### Bug description
The below configuration option doesn't do what one would expect and it seems like a bug.
```
singleuser:
extraNodeAffinity:
required:
- matchExpressions:
- key: "kubernetes.azure.com/scalesetpriority"
operator: NotIn
values: [spot]
scheduling:
userPods:
nodeAffinity:
matchNodePurpose: require
```
This generates 2 different match expressions as you can see below. If either of them match the pod is scheduled.
```
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.azure.com/scalesetpriority
operator: In
values:
- spot
- matchExpressions:
- key: hub.jupyter.org/node-purpose
operator: In
values:
- user
```
Naively I would expect this to generate a single match expression that is the union of the two like so:
```
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.azure.com/scalesetpriority
operator: In
values:
- spot
- key: hub.jupyter.org/node-purpose
operator: In
values:
- user
```
I'm running 3.0.2 on AKS 1.27.3. | open | 2023-09-18T13:53:51Z | 2023-09-18T19:03:01Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3223 | [
"bug"
] | jabbera | 5 |
Farama-Foundation/Gymnasium | api | 930 | [Bug Report] Documentation TOCtree spaces/utility link | ### Describe the bug
`spaces/utility funcions` in the TOCtree points to `VectorEnv/utility Functions`

@mgoulao
### Code example
_No response_
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-02-17T11:11:55Z | 2024-02-21T11:46:02Z | https://github.com/Farama-Foundation/Gymnasium/issues/930 | [
"bug"
] | Kallinteris-Andreas | 0 |
browser-use/browser-use | python | 1,069 | Separating out the vision api call | ### Problem Description
Right now the design seems to use the same model for both the text and vision side. There are 2 issues:
1. Vision calls with Anthropic and Open AI are expensive
2. Deepseek-R1 does not seem to come with vision support
### Proposed Solution
I think there should be the flexibility to separate out the vision calls to a different LLM provider and have the rest of the planning and execution done using another LLM provider. This will open up the ability to use powerful open source reasoning models with cheaper vision models such as moondream.
### Alternative Solutions
_No response_
### Additional Context
_No response_ | open | 2025-03-19T09:45:47Z | 2025-03-19T09:45:47Z | https://github.com/browser-use/browser-use/issues/1069 | [
"enhancement"
] | moreshk | 0 |
BayesWitnesses/m2cgen | scikit-learn | 584 | XGBoost exported to C generates wrong indices for input array | As the title says, it seems there is an issue when exporting an XGBoost model to C:
For instance, `f100` should have been just `100`; otherwise the compilation fails.
```
void predict(double * input, double * output) {
double var0;
if ((input[f100]) >= (0.9064144)) {
if ((input[f116]) >= (1.9821854)) {
if ((input[f17]) >= (0.3522055)) {
if ((input[f1]) >= (0.96535176)) {
var0 = 0.04736842;
} else {
var0 = -0.17782988;
}
} else {
var0 = 0.5538461;
}
} else {
``` | open | 2023-09-29T06:35:16Z | 2023-09-29T06:35:16Z | https://github.com/BayesWitnesses/m2cgen/issues/584 | [] | vladBaciu | 0 |
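Until the generator itself is fixed, one possible post-processing workaround for output like the C code in the issue above (a sketch, not an official m2cgen option; the regex assumes every broken index looks like `input[fN]`) is to strip the stray `f` prefix from the emitted indices:

```python
import re

# A fragment of the broken generated C code from the issue:
generated = "if ((input[f100]) >= (0.9064144)) { var0 = input[f17]; }"

# Rewrite input[fN] -> input[N] so the file compiles.
fixed = re.sub(r"input\[f(\d+)\]", r"input[\1]", generated)
print(fixed)  # if ((input[100]) >= (0.9064144)) { var0 = input[17]; }
```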
qubvel-org/segmentation_models.pytorch | computer-vision | 124 | c++ libtorch | I have successfully loaded the traced model in C++ with VS2019, but the program reports an error during prediction. | closed | 2019-12-19T06:49:15Z | 2021-03-10T08:42:55Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/124 | [] | hhhh0 | 3 |
RobertCraigie/prisma-client-py | pydantic | 266 | Add tests for all supported database providers | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We currently do not run integration tests on all of the database providers we support as Prisma tests them themselves, however we should still run our own tests as there may be database specific functionality that causes errors with our client.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Add new integration tests for all supported database providers: https://www.prisma.io/docs/reference/database-reference/supported-databases
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
For potential contributors: https://prisma-client-py.readthedocs.io/en/stable/contributing/contributing/#integration-tests
Please only submit pull requests with *one* database provider at a time. Thank you :) | closed | 2022-02-03T01:21:45Z | 2022-11-06T21:43:40Z | https://github.com/RobertCraigie/prisma-client-py/issues/266 | [
"kind/improvement",
"topic: internal",
"level/intermediate",
"priority/medium"
] | RobertCraigie | 1 |
akfamily/akshare | data-science | 5,748 | AKShare interface issue report | ak.stock_profit_forecast_ths | **Detailed Problem Description**
2. Operating system: 64-bit OS
3. Python version: 3.13
4. AKShare version: 1.16.3
5. Interface name: ak.stock_profit_forecast_ths
Calling code:
df = ak.stock_profit_forecast_ths(symbol="300193", indicator="业绩预测详表-详细指标预测")
print(df)
6. Description of the error:
-> The returned data format differs from the earlier output and from other symbols (亿 = 100 million CNY):
   Research institution   Analyst   Forecast value   Rating
0  东北证券                  刘俊奇      16.28亿           -
-> The earlier (correct) output looked like this:
   Forecast indicator           2021 actual   2022 actual   2023 actual   2024 forecast (mean)   2025 forecast (mean)   2026 forecast (mean)
0  Operating revenue (CNY)      1878.69亿     1889.88亿     2039.79亿     2084.68亿              2209.75亿              2332.71亿
1  Revenue growth rate          11.24%        0.26%         7.82%         2.20%                  5.97%                  5.51%
2  Total profit (CNY)           268.03亿      272.17亿      328.16亿      366.15亿               395.25亿               425.80亿
3  Net profit (CNY)             230.64亿      245.07亿      290.17亿      318.54亿               343.73亿               370.04亿
4  Net profit growth rate       4.01%         6.26%         18.41%        10.28%                 8.00%                  7.56%
5  Cash flow per share (CNY)    0.32          5.09          10.02         7.73                   6.84                   8.29
6  Net assets per share (CNY)   17.53         17.18         20.74         24.39                  28.49                  32.81
7  Return on equity             21.34%        24.19%        26.53%        22.85%                 21.13%                 19.82%
8  P/E ratio (dynamic)          10.25         9.35          7.94          7.26                   6.73                   6.26
7. Expected correct result: data in the original format.
| closed | 2025-03-01T00:48:46Z | 2025-03-01T08:06:14Z | https://github.com/akfamily/akshare/issues/5748 | [
"bug"
] | LIKEHAM | 1 |
vi3k6i5/flashtext | nlp | 145 | Do different languages affect the extraction results | eg:
different language:

same language:

Is there any way I can deal with this problem? thanks~ | open | 2024-01-09T08:46:46Z | 2024-01-09T08:46:46Z | https://github.com/vi3k6i5/flashtext/issues/145 | [] | RicardoScofileld | 0 |
zihangdai/xlnet | nlp | 256 | Number of training epochs in original publication | Hi,
I'm trying to collect batch sizes and number of training epochs for a couple of model publications for a small meta study. I'm not completely sure if I interpret the XLNet publication correctly.
I'm referring to following 2 paragraphs:
> Following BERT [10], we use the BooksCorpus [40] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [26], ClueWeb 2012-B (extended from [5]), and Common Crawl [6] for pretraining. We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 110GB text respectively. After tokenization with SentencePiece [17], we obtain 2.78B, 1.09B, 4.75B, 4.30B, and 19.97B subword pieces for Wikipedia, BooksCorpus, Giga5, ClueWeb, and Common Crawl respectively, which are 32.89B in total.
> Our largest model XLNet-Large has the same architecture hyperparameters as BERT-Large, which results in a similar model size. During pretraining, we always use a full sequence length of 512. Firstly, to provide a fair comparison with BERT (section 3.2), we also trained XLNet-Large-wikibooks on BooksCorpus and Wikipedia only, where we reuse all pretraining hyper-parameters as in the original BERT. Then, we scale up the training of XLNet-Large by using all the datasets described above. Specifically, we train on 512 TPU v3 chips for 500K steps with an Adam weight decay optimizer, linear learning rate decay, and a batch size of 8192, which takes about 5.5 days.
Can I therefore say the number of epochs was:
500000 steps * 8192 sequences/step * 512 subword pieces/sequence / 32.89*10^9 subword pieces ~= 64 epochs? | open | 2020-01-04T13:22:55Z | 2020-01-04T13:22:55Z | https://github.com/zihangdai/xlnet/issues/256 | [] | jjedele | 0 |
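The arithmetic in the question above checks out (the step count, batch size, sequence length, and 32.89B-token corpus size are all taken from the quoted paragraphs):

```python
steps = 500_000
batch_size = 8192        # sequences per step
seq_len = 512            # subword pieces per sequence
corpus = 32.89e9         # total subword pieces in the pretraining data

epochs = steps * batch_size * seq_len / corpus
print(round(epochs, 1))  # 63.8, i.e. roughly 64 epochs
```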
itamarst/eliot | numpy | 79 | Document the release process | This will be easier once #73 is implemented:
1. Tag release.
2. `python setup.py sdist upload`
3. Update on RTD.
Probably should share credentials for above with someone as part of this issue, though.
| open | 2014-05-15T18:26:35Z | 2018-09-22T20:59:13Z | https://github.com/itamarst/eliot/issues/79 | [] | itamarst | 0 |
plotly/dash-core-components | dash | 288 | [Dev] - Replace `self.wait_for_element_by_css_selector` with selenium's methods | I wrote some testing utils on top of selenium to do things like "wait for an element to appear". they're kind of hacky and sometimes they seem unreliable. So, sometimes we end up putting `time.sleep` in our tests instead of some proper wait for statement.
Turns out that selenium has its own "wait for" methods. We should replace our utils with these new methods and start using these methods in our new tests.
Here is an example of the official selenium "wait" API:
https://github.com/plotly/dash-component-boilerplate/blob/1d2ee1d9fc1b6fa834fda3dc38064297f8154085/tests/test_render.py#L4-L23
cc @plotly/dash | open | 2018-08-28T22:31:22Z | 2018-08-28T22:57:19Z | https://github.com/plotly/dash-core-components/issues/288 | [] | chriddyp | 1 |
pyjanitor-devs/pyjanitor | pandas | 464 | [BUG] Problem building docs on Windows using "make html" | # Brief Description
Following the instructions on the [contributing](https://pyjanitor.readthedocs.io/contributing.html) page fails when running `make html`, raising a "symbolic link privilege not held" error.
# System Information
<!-- System information helps us. To keep things simple, just let us know the OS and Python version first.
You can provide the optional information later. -->
- Operating system: Windows <!-- delete the appropriate ones -->
- OS details (optional): 10 Enterprise <!-- e.g. version, or Linux distro -->
- Python version (required): Python: 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)]
# Minimally Reproducible Code
<!-- If you provide minimal code that reproduces the problem, this makes it easier for us to debug what's going on.
Minimal code should be trivially copy/pastable into a Python interpreter in its entirety. Be sure to include imports.
-->
after activating the env and traversing to the ..\pyjanitor-dev\docs folder
`make html`
# Error Messages
<!-- If you get an error message, please paste it between the backticks here. -->
```
Traceback (most recent call last):
File "C:\conda3x64\envs\pyjanitor-dev\lib\site-packages\sphinx\config.py", line 361, in eval_config_file
execfile_(filename, namespace)
File "C:\conda3x64\envs\pyjanitor-dev\lib\site-packages\sphinx\util\pycompat.py", line 86, in execfile_
exec(code, _globals)
File "c:\workspace\cjmayers\CODE\git_workspaces_external\pyjanitor\docs\conf.py", line 31, in <module>
notebooks.symlink_to("../examples/notebooks")
File "C:\conda3x64\envs\pyjanitor-dev\lib\pathlib.py", line 1330, in symlink_to
self._accessor.symlink(target, self, target_is_directory)
OSError: symbolic link privilege not held
```
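One hedged workaround (a sketch, not a fix that was actually merged into `docs/conf.py`): fall back to copying the directory when symlink creation is not permitted, which is the usual situation on Windows without Developer Mode or administrator rights:

```python
import shutil
import tempfile
from pathlib import Path

def link_or_copy(link: Path, target: Path) -> str:
    """Prefer a symlink; copy instead where symlinks require extra privileges."""
    try:
        link.symlink_to(target, target_is_directory=target.is_dir())
        return "symlink"
    except OSError:
        # e.g. "symbolic link privilege not held" on Windows
        shutil.copytree(target, link)
        return "copy"

# Demo with a throwaway directory standing in for ../examples/notebooks:
with tempfile.TemporaryDirectory() as tmp:
    notebooks = Path(tmp) / "notebooks"
    notebooks.mkdir()
    how = link_or_copy(Path(tmp) / "docs_notebooks", notebooks)
    print(how)  # "symlink" on most systems, "copy" where it is restricted
```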
| closed | 2019-07-14T19:52:20Z | 2019-07-14T22:41:43Z | https://github.com/pyjanitor-devs/pyjanitor/issues/464 | [
"bug"
] | cjmayers | 3 |
gevent/gevent | asyncio | 1,515 | TypeError: prepare watchers are not currently supported in libuv. If you need them, please contact the maintainers. | I'm trying to use `prepare`+`check` handlers to execute code around the event loop, but it seems the `libuv` wrapper doesn't currently support it. Any chance this could be added?
I actually just want to know if I have CPU starvation, and the idea was to measure the time spent blocking on the polling backend. The approach I took was to install a `prepare` and a `check` handler to get the time before and after the poll; however, this doesn't work on `libev` because of the order in which the `check` watchers are executed (I tried changing the priority without success). I tried switching to libuv as a test, since the order of the `check` watchers seems to be better defined in that library. Perhaps there is a better way of measuring the time spent waiting on the poll function?
Edit: I'm using the monitoring thread; the problem is that no single thread is using the CPU for an extended period of time, so I want to check whether the collection of all threads is, and measuring the time spent waiting on the poll backend seems like a nice proxy for that.
"Type: Question",
"Loop: libuv"
] | hackaugusto | 5 |
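The measurement idea in this question can be prototyped independently of the event-loop library: time the gap between a before-poll ("prepare") callback and an after-poll ("check") callback. The sketch below only illustrates the bookkeeping; the class and method names are made up and would still need wiring to real watchers:

```python
import time

class PollTimer:
    """Accumulates time spent between a 'prepare' callback (fired just
    before the loop blocks in poll) and a 'check' callback (fired just
    after it wakes up)."""

    def __init__(self):
        self.blocked = 0.0   # total seconds spent blocked in the poll
        self._t0 = None

    def prepare(self):
        # to be called right before the backend poll
        self._t0 = time.monotonic()

    def check(self):
        # to be called right after the backend poll returns
        if self._t0 is not None:
            self.blocked += time.monotonic() - self._t0
            self._t0 = None
```

A small `blocked` fraction of wall-clock time would then indicate CPU starvation: the loop rarely gets to rest in poll.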
amdegroot/ssd.pytorch | computer-vision | 232 | How can I evaluate/test on COCO data? | I finished training on COCO. But how can I evaluate/test on COCO? | open | 2018-09-04T04:35:53Z | 2019-07-08T03:42:17Z | https://github.com/amdegroot/ssd.pytorch/issues/232 | [] | hailey94 | 2
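For COCO, evaluation is usually done by exporting detections to the COCO results JSON format and scoring them with pycocotools' `COCOeval`, which matches detections to ground truth by intersection-over-union. A dependency-free sketch of that core matching criterion (illustration only, not the repository's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```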
litestar-org/litestar | api | 3,949 | Enhancement: Using installed debugger post mortem | ### Summary
First of all, thank you all for this amazing project. I was wondering if you would consider replacing `pdb.post_mortem` with whichever supported debugger package is already installed. This way, we can continue with the terminals we are used to.
In my mind, it is something like this:
middleware/_internal/__init__.py :
```python
def get_post_mortem():
    for package in ["pdbr", "pudb", "ipdb", "pdbpp"]:
        try:
            module = __import__(package, fromlist=["post_mortem"])
            return module.post_mortem
        except ImportError:
            continue
    import pdb
    return pdb.post_mortem
```
middleware/_internal/exceptions/middleware.py :
```python
if litestar_app.pdb_on_exception:
    from .. import get_post_mortem
    get_post_mortem()()
```
### Basic Example

### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | open | 2025-01-13T15:56:08Z | 2025-01-21T13:56:49Z | https://github.com/litestar-org/litestar/issues/3949 | [
"Enhancement"
] | cansarigol | 4 |
DistrictDataLabs/yellowbrick | matplotlib | 803 | Matplotlib>= 2.0 has default property cycle in Hex, while color_palette() and get_color_cycle() return RGB | **Example**
The Anscombe example subplots in anscombe.py are correctly generated but there is a warning from matplotlib (3.0.3). For context, the argument to the `c` parameter is a 3-tuple.
`'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.`
The call in anscombe.py is
`ax.scatter(x, y, c=color)`
@DistrictDataLabs/team-oz-maintainers
| open | 2019-03-31T23:50:52Z | 2019-08-28T23:50:24Z | https://github.com/DistrictDataLabs/yellowbrick/issues/803 | [
"type: technical debt"
] | nickpowersys | 2 |
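The fix matplotlib's warning suggests is to pass the single colour as a one-row list / 2-D array, so it cannot be mistaken for per-point numeric values to be colour-mapped. A minimal sketch (the RGB values here are placeholders, not yellowbrick's palette):

```python
color = (0.12, 0.47, 0.71)   # RGB 3-tuple as returned by a palette

ambiguous = color            # scatter may read this as 3 per-point values
unambiguous = [color]        # one row: a single RGB colour for all points

# so in anscombe.py the call would become:
#   ax.scatter(x, y, c=[color])
```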
coqui-ai/TTS | pytorch | 4,159 | [Bug] Incorrect pronunciation | Mistake, sorry. Please remove it. | closed | 2025-02-25T10:06:20Z | 2025-02-25T10:09:33Z | https://github.com/coqui-ai/TTS/issues/4159 | [
"bug"
] | rudolphreti | 0 |
fastapi/sqlmodel | sqlalchemy | 289 | Switch from `str` to `EmailStr` errors `alembic revision --autogenerate` with exit code 1 and no stack trace | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from pydantic import EmailStr
from sqlmodel import AutoString, Column, Field, SQLModel
class User(SQLModel, table=True):
    """User database model."""

    __tablename__ = "users"

    id: int | None = Field(default=None, primary_key=True)
    email: str
    email2: str = Field(sa_column=Column("email2", AutoString, nullable=False))
```
### Description
When I use the `str` type alembic runs just fine. But when I change the type to pydantic's `EmailStr` type alembic errors somehow.
Since a type annotation coming from pydantic causes an error with alembic when used in a SQLModel, I guess there is some communication/translation error between SQLModel and alembic.
I looked into the code of sqlmodel and here are my findings:
- The function [get_sqlalchemy_type](https://github.com/tiangolo/sqlmodel/blob/main/sqlmodel/main.py#L377) should return the correct type for `EmailStr`, as it subclasses `str`
- When looking at [get_column_from_field](https://github.com/tiangolo/sqlmodel/blob/main/sqlmodel/main.py#L419) all the options setting is skipped when I set `sa_column`. So for `email2` I set a column with a type, but the type annotation still somehow causes the error.
Maybe it is somehow related to #212.
### Reproduction steps
* Setup alembic like described [here](https://testdriven.io/blog/fastapi-sqlmodel/)
* Put code snippet in a python file and import it in alembic's migriations/env.py file
* Run `alembic revision --autogenerate -m "init"`
* See no error and successfully created migration script in migrations/versions directory
* Change any or both `email: str` to `email: EmailStr` / `email2: str = ...` to `email2: EmailStr = ...`
* Run `alembic revision --autogenerate -m "init"` again
* See alembic exit with exit code 1 and no stack trace
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.1
### Additional Context
_No response_ | open | 2022-03-29T12:54:36Z | 2024-04-11T14:24:26Z | https://github.com/fastapi/sqlmodel/issues/289 | [
"question"
] | Cielquan | 3 |
iMerica/dj-rest-auth | rest-api | 56 | Rest-auth: Exclamation mark comes before password |
I am using rest-auth for authentication. After registration, when I try to log in it shows
{
    "non_field_errors": [
        "Unable to log in with provided credentials."
    ]
}
and when I looked at the user in the admin panel, an '!' mark came before the password, for example password = '!72wlGF0RiGRraz69sveb63FUrebNkAW9xmOoL16C'. Please help | closed | 2020-05-02T17:28:00Z | 2020-05-04T03:10:37Z | https://github.com/iMerica/dj-rest-auth/issues/56 | [] | Pranay9752 | 2
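For context on the symptom: in Django, a stored password beginning with `'!'` marks an unusable password; `set_unusable_password()` stores `'!'` followed by random characters, which typically means the user was created without a valid (hashed) password, and credential login then always fails. A sketch mirroring Django's own check in `django.contrib.auth.hashers`:

```python
UNUSABLE_PASSWORD_PREFIX = "!"  # the same marker Django uses

def is_password_usable(encoded):
    """A password starting with '!' was set via set_unusable_password()."""
    return encoded is None or not encoded.startswith(UNUSABLE_PASSWORD_PREFIX)
```

So the registration path here is most likely never hashing and saving the submitted password, rather than the login endpoint being broken.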
aiogram/aiogram | asyncio | 1,367 | web_app example "Unknown error" | Hello,
I simply tried running the provided [web_app example](https://github.com/aiogram/aiogram/tree/dev-3.x/examples/web_app)
When I press Send Hello world I receive "Unknown error"
I noticed in the code
```
$('#btn_status').text('Sending...').removeClass('ok err').show();
$.ajax('/demo/sendMessage', {
    type: 'POST',
    data: {
        _auth: initData,
        msg_id: msg_id || '',
        with_webview: !initDataUnsafe.receiver && with_webview ? 1 : 0
    },
    dataType: 'json',
    success: function (result) {
        $('button').prop('disabled', false);
        if (result.response) {
            if (result.response.ok) {
                $('#btn_status').html('Message sent successfully!').addClass('ok').show();
            } else {
                $('#btn_status').text(result.response.description).addClass('err').show();
                alert(result.response.description);
            }
        } else {
            $('#btn_status').text('Unknown error').addClass('err').show();
            alert('Unknown error');
        }
    },
```
But I'd like to understand the cause, as I'm guessing this is not expected to happen.
I tested on both my phone and my computer.
Thank you! | closed | 2023-11-16T23:34:41Z | 2024-12-02T09:10:07Z | https://github.com/aiogram/aiogram/issues/1367 | [] | adriangalilea | 3 |
biolab/orange3 | data-visualization | 6,380 | Signals in example workflows are identified by strings | Before releasing the next version, somebody should load and save example workflows, so the signals are identified by id's and can be loaded by Slovenian Orange. And (s)he should do so on the latest master which includes #6346, Data Table with a single input. | closed | 2023-03-31T10:15:17Z | 2023-04-12T09:10:13Z | https://github.com/biolab/orange3/issues/6380 | [] | janezd | 0 |
agronholm/anyio | asyncio | 414 | Pass kwargs to run_sync? | It would be nice if [to_process.run_sync](https://anyio.readthedocs.io/en/stable/api.html#anyio.to_process.run_sync), [to_thread.run_sync](https://anyio.readthedocs.io/en/stable/api.html#anyio.to_thread.run_sync), and [from_thread.run_sync](https://anyio.readthedocs.io/en/stable/api.html#anyio.from_thread.run_sync) could accept kwargs.
Is there a reason for not supporting it, or was it just not implemented until now? | closed | 2022-01-16T17:20:08Z | 2022-01-16T17:43:49Z | https://github.com/agronholm/anyio/issues/414 | [] | davidbrochart | 4 |
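Until kwargs are supported natively, the pattern anyio's documentation points to is binding keyword arguments with `functools.partial` before handing the callable to `run_sync`. A minimal sketch:

```python
import functools

def greet(name, *, punctuation="!"):
    return f"Hello, {name}{punctuation}"

# run_sync only forwards positional arguments, so bind kwargs up front:
bound = functools.partial(greet, "world", punctuation="?")
result = bound()  # this is what run_sync would invoke in the worker

# with anyio (not executed here) it would look like:
#   await anyio.to_thread.run_sync(functools.partial(greet, "world", punctuation="?"))
```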
tqdm/tqdm | pandas | 1,122 | tqdm v4.56 not updated for jupyterlab 3.0.x | tqdm is not functioning at all in JupyterLab 3.0.x: the bar appears, but there is no movement or iteration count. | closed | 2021-02-05T19:10:15Z | 2021-02-06T01:00:31Z | https://github.com/tqdm/tqdm/issues/1122 | [] | VanWieren | 0
facebookresearch/fairseq | pytorch | 4,761 | How can I add a histogram of weights to tensorboard | Hi
I would like to add a vector of weights to tensorboard summary during training. However, I am having a hard time figuring out how to do that. From what I understand, logging to tensorboard is handled by `fairseq.logging.progress_bar.TensorboardProgressBarWrapper._log_to_tensorboard` which receives the output of the criterion's `forward` function. However, there is no way to `add_histogram` instead, as all output stats are either passed to TB with `add_scalar` or ignored.
Is there an underlying reason to preventing adding histograms that I am missing?
If not, what is my best course of action for adding this functionality? I guess it starts with subclassing the criterion to return the correct additional quantities to log and subclassing `TensorboardProgressBarWrapper` and overwite `_log_to_tensorboard`.
Also, how can I make sure that the histogram values are only used in TB and not in stdout? Finally, is there support for a custom progress bar, or should I modify the train script as well?
Thank you very much
| open | 2022-10-06T08:33:52Z | 2022-10-06T08:36:23Z | https://github.com/facebookresearch/fairseq/issues/4761 | [
"question",
"needs triage"
] | qmeeus | 0 |
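One shape the subclassing plan could take, written here with stand-in classes so the snippet is self-contained: the real fairseq classes and signatures differ, and the `hist_` key-prefix convention is an assumption for illustration. Routing prefixed stats to `add_histogram` inside the wrapper also keeps them out of the scalar path, and therefore out of stdout:

```python
class TensorboardProgressBarWrapper:
    """Stand-in for fairseq's wrapper: logs every stat as a scalar."""

    def __init__(self, writer):
        self.writer = writer

    def _log_to_tensorboard(self, stats, tag=None, step=None):
        for key, value in stats.items():
            self.writer.add_scalar(key, value, step)


class HistogramTBWrapper(TensorboardProgressBarWrapper):
    """Sends stats whose key starts with 'hist_' to add_histogram and
    keeps them out of the scalar (and hence stdout) path."""

    def _log_to_tensorboard(self, stats, tag=None, step=None):
        scalars = {k: v for k, v in stats.items() if not k.startswith("hist_")}
        hists = {k: v for k, v in stats.items() if k.startswith("hist_")}
        super()._log_to_tensorboard(scalars, tag, step)
        for key, values in hists.items():
            self.writer.add_histogram(key, values, step)
```

The criterion would then return the weight vector under a `hist_...` key so that only this wrapper ever sees it.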
marcomusy/vedo | numpy | 1,070 | Multi-panel plot in Jupyter Lab | I am trying to create a multi-panel interactive plot in Jupyter Lab in Ubuntu 20. The code visualizes multiple meshes and points in the space on each panel. The goal is to be able to visualize and interact with all the panels while the points update, i.e. rotate them at the same time.
I tried running it with the vtk default_backend, but I can't interact with it, and I cannot close it, which is problematic. That said, it works at displaying the changing points, and I can interact with it after all updates are done, but I can't interact during the updates.

```
import os
import vedo

vedo.settings.default_backend = 'vtk'

path = "some/path/meshes/"
mesh_file_names = os.listdir(path)
num_meshes = len(mesh_file_names)

# this is one instance of the class Plotter
vp1 = vedo.Plotter(N=num_meshes, title='Mesh Particle Visualizer')

meshes = []
for i in range(num_meshes):
    fname = os.path.join(path, mesh_file_names[i])
    mesh_i = vedo.Mesh(fname)
    mesh_i.alpha(0.8)
    meshes.append(mesh_i)
    vp1.show(os.path.basename(fname), at=i)
    vp1.show(meshes[i], at=i)

# Adding points
for i in range(num_meshes):
    points = vedo.Points(some_pts[i], c=colors, r=5)  # some_pts, colors defined elsewhere
    vp1.add(points, at=i)
```
Then I update the points multiple times using
```
import time

# Updating points
for _ in range(num_iterations):
    for i in range(num_meshes):
        some_pts[i] = some_change(some_pts[i])
    time.sleep(100)
    vp1.render().reset_camera()
```
I thought the k3d default_backend would help me circumvent the window-not-closing issue by having the visualization in the notebook output, but I'm getting the following error when running vp1.add(points, at=i):
```
File ~/anaconda3/envs/some_conda_env/lib/python3.9/site-packages/vedo/plotter.py:815, in Plotter.at(self, nren, yren)
812 vedo.logger.error(f"at({nren, yren}) is malformed!")
813 raise RuntimeError
--> 815 self.renderer = self.renderers[nren]
816 self.camera = self.renderer.GetActiveCamera()
817 return self
IndexError: list index out of range
```
I believe this is because the k3d backend does not support multi-panel visualization. The Vedo documentation says that Vedo can run on other backends like ipyvtklink and trame. Does anybody know if this will work for me?
Update: I conda-installed ipyvtklink and set the backend to ipyvtk; no visualization showed, and there was no error either. Still can't figure it out.
All in all, I'm unsure how to get this to work. Again, my goal is to be able to visualize N meshes and M changing points in their panels while interacting with them during the changes. Would also be great if I could close the window by clicking the x of the vtkplotter. Any help or pointers would be appreciated! | closed | 2024-03-06T21:28:55Z | 2024-05-15T02:06:13Z | https://github.com/marcomusy/vedo/issues/1070 | [] | HeavenlyBerserker | 9 |
keras-team/keras | machine-learning | 20,604 | RFC consider pre-commit for linters | Would you consider adding `pre-commit` config to the repo? This wouldn't mean people have to enable it, but would make it possible for people who are happy to use it, to enable it and have a nicer time dealing with linting issues.
It could be a variation of this:
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: check-yaml
      - id: end-of-file-fixer
      - id: trailing-whitespace
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.8.0
    hooks:
      - id: ruff
        args: ["--fix", "--output-format=full", "--select=I"]
      - id: ruff-format
```
To enable, the user would do:
```sh
pip install pre-commit
pre-commit install
``` | open | 2024-12-06T10:00:45Z | 2025-03-06T09:42:36Z | https://github.com/keras-team/keras/issues/20604 | [
"type:feature"
] | adrinjalali | 0 |
cupy/cupy | numpy | 8,184 | cupy.ReductionKernel broke from cupy v12 to cupy v13 | ### Description
In our code we use a ReductionKernel to calculate the square sum over a 3D array. This seems to have broken with the update from cupy v12.3.0 to v13.0.0.
### To Reproduce
We use a kernel very similar to what is available in the [cupy docs](https://docs.cupy.dev/en/stable/user_guide/kernel.html#reduction-kernels):
```py
import cupy as cp

square_sum_kernel = cp.ReductionKernel(
    'T x',  # input params
    'T y',  # output params
    'x * x',  # pre-processing expression
    'a + b',  # reduction operation
    'y = a',  # post-reduction output processing
    '0',  # identity value
    'square sum'  # kernel name
)
x = cp.ones((300,) * 3)
xx = square_sum_kernel(x)
```
This fails with the following error:
```
---------------------------------------------------
--- JIT compile log for cupy_jitify_exercise ---
---------------------------------------------------
cub/util_cpp_dialect.cuh(143): warning #161-D: unrecognized #pragma
std/barrier(16): catastrophic error: #error directive: "CUDA synchronization primitives are only supported for sm_70 and up."
1 catastrophic error detected in the compilation of "cupy_jitify_exercise".
Compilation terminated.
---------------------------------------------------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[9], line 1
----> 1 xx = square_sum_kernel(x)
File cupy/_core/_reduction.pyx:828, in cupy._core._reduction.ReductionKernel.__call__()
File cupy/_core/_reduction.pyx:370, in cupy._core._reduction._AbstractReductionKernel._call()
File cupy/_core/_cub_reduction.pyx:689, in cupy._core._cub_reduction._try_to_call_cub_reduction()
File cupy/_core/_cub_reduction.pyx:526, in cupy._core._cub_reduction._launch_cub()
File cupy/_core/_cub_reduction.pyx:461, in cupy._core._cub_reduction._cub_two_pass_launch()
File cupy/_util.pyx:64, in cupy._util.memoize.decorator.ret()
File cupy/_core/_cub_reduction.pyx:240, in cupy._core._cub_reduction._SimpleCubReductionKernel_get_cached_function()
File cupy/_core/_cub_reduction.pyx:223, in cupy._core._cub_reduction._create_cub_reduction_function()
File cupy/_core/core.pyx:2254, in cupy._core.core.compile_with_cache()
File /data2/mchaillet/programs/tm_gpu_test/env/lib/python3.12/site-packages/cupy/cuda/compiler.py:484, in _compile_module_with_cache(source, options, arch, cache_dir, extra_source, backend, enable_cooperative_groups, name_expressions, log_stream, jitify)
480 return _compile_with_cache_hip(
481 source, options, arch, cache_dir, extra_source, backend,
482 name_expressions, log_stream, cache_in_memory)
483 else:
--> 484 return _compile_with_cache_cuda(
485 source, options, arch, cache_dir, extra_source, backend,
486 enable_cooperative_groups, name_expressions, log_stream,
487 cache_in_memory, jitify)
File /data2/mchaillet/programs/tm_gpu_test/env/lib/python3.12/site-packages/cupy/cuda/compiler.py:562, in _compile_with_cache_cuda(source, options, arch, cache_dir, extra_source, backend, enable_cooperative_groups, name_expressions, log_stream, cache_in_memory, jitify)
560 if backend == 'nvrtc':
561 cu_name = '' if cache_in_memory else name + '.cu'
--> 562 ptx, mapping = compile_using_nvrtc(
563 source, options, arch, cu_name, name_expressions,
564 log_stream, cache_in_memory, jitify)
565 if _is_cudadevrt_needed(options):
566 # for separate compilation
567 ls = function.LinkState()
File /data2/mchaillet/programs/tm_gpu_test/env/lib/python3.12/site-packages/cupy/cuda/compiler.py:319, in compile_using_nvrtc(source, options, arch, filename, name_expressions, log_stream, cache_in_memory, jitify)
316 with open(cu_path, 'w') as cu_file:
317 cu_file.write(source)
--> 319 return _compile(source, options, cu_path,
320 name_expressions, log_stream, jitify)
321 else:
322 cu_path = '' if not jitify else filename
File /data2/mchaillet/programs/tm_gpu_test/env/lib/python3.12/site-packages/cupy/cuda/compiler.py:284, in compile_using_nvrtc.<locals>._compile(source, options, cu_path, name_expressions, log_stream, jitify)
280 def _compile(
281 source, options, cu_path, name_expressions, log_stream, jitify):
283 if jitify:
--> 284 options, headers, include_names = _jitify_prep(
285 source, options, cu_path)
286 else:
287 headers = include_names = ()
File /data2/mchaillet/programs/tm_gpu_test/env/lib/python3.12/site-packages/cupy/cuda/compiler.py:233, in _jitify_prep(source, options, cu_path)
231 if not _jitify_header_source_map_populated:
232 from cupy._core import core
--> 233 jitify._init_module()
234 jitify._add_sources(core._get_header_source_map())
235 _jitify_header_source_map_populated = True
File cupy/cuda/jitify.pyx:212, in cupy.cuda.jitify._init_module()
File cupy/cuda/jitify.pyx:233, in cupy.cuda.jitify._init_module()
File cupy/cuda/jitify.pyx:209, in cupy.cuda.jitify._init_cupy_headers()
File cupy/cuda/jitify.pyx:192, in cupy.cuda.jitify._init_cupy_headers_from_scratch()
File cupy/cuda/jitify.pyx:264, in cupy.cuda.jitify.jitify()
RuntimeError: Runtime compilation failed
```
Moreover, these run fine:
```
xx = square_sum_kernel(x, axis=0)
xx = square_sum_kernel(x, axis=1)
xx = square_sum_kernel(x, axis=(0,1))
xx = square_sum_kernel(x, axis=(0,2))
```
But these fail:
```
xx = square_sum_kernel(x, axis=2)
xx = square_sum_kernel(x, axis=(1,2))
```
### Installation
Source (`pip install cupy`)
### Environment
```
OS : Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.28
Python Version : 3.12.1
CuPy Version : 13.0.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.26.4
SciPy Version : 1.12.0
Cython Build Version : 0.29.37
Cython Runtime Version : None
CUDA Root : /opt/apps/cuda-11.8
nvcc PATH : /opt/apps/cuda-11.8/bin/nvcc
CUDA Build Version : 11080
CUDA Driver Version : 11080
CUDA Runtime Version : 11080 (linked to CuPy) / 11080 (locally installed)
cuBLAS Version : (available)
cuFFT Version : 10900
cuRAND Version : 10300
cuSOLVER Version : (11, 4, 1)
cuSPARSE Version : (available)
NVRTC Version : (11, 8)
Thrust Version : 200200
CUB Build Version : 200200
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce GTX 1080 Ti
Device 0 Compute Capability : 61
Device 0 PCI Bus ID : 0000:02:00.0
Device 1 Name : NVIDIA GeForce GTX 1080 Ti
Device 1 Compute Capability : 61
Device 1 PCI Bus ID : 0000:03:00.0
Device 2 Name : NVIDIA GeForce GTX 1080 Ti
Device 2 Compute Capability : 61
Device 2 PCI Bus ID : 0000:82:00.0
Device 3 Name : NVIDIA GeForce GTX 1080 Ti
Device 3 Compute Capability : 61
Device 3 PCI Bus ID : 0000:83:00.0
```
### Additional Information
I here show the report for v13, but when forcing installation of v12 with `pip install cupy=12.3.0` the code that I show all runs fine. | closed | 2024-02-14T11:39:40Z | 2024-02-29T10:29:49Z | https://github.com/cupy/cupy/issues/8184 | [
"cat:bug",
"prio:high"
] | McHaillet | 4 |
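The traceback fails inside `_try_to_call_cub_reduction`, i.e. the CUB-accelerated path that v13 takes for full-array and trailing-axis reductions, which matches exactly the axis combinations that fail above. A hedged workaround (an assumption, not verified on this setup) is to disable that accelerator, e.g. `export CUPY_ACCELERATORS=""` before importing cupy, or to express the reduction with plain ops such as `(x * x).sum()`. While experimenting, the expected value is easy to cross-check in pure Python:

```python
def square_sum(values):
    """Reference for what square_sum_kernel computes: the sum of x*x."""
    total = 0.0
    for v in values:
        total += v * v
    return total

# For cp.ones((300,)*3) every element is 1.0, so the result is 300**3:
expected_full = 300 ** 3 * 1.0
```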
pbugnion/gmaps | jupyter | 318 | MapOptions Parameter | Hi,
Is it possible to add a MapOptions parameter? If it's already added, how can I use it? I didn't find it, or couldn't set it up.
I'm trying to remove all map controls (street view, zoom, map style) from my map. | open | 2019-09-04T17:19:53Z | 2019-09-04T17:19:53Z | https://github.com/pbugnion/gmaps/issues/318 | [] | marcusvcr | 0
biolab/orange3 | scikit-learn | 6,778 | More detailed Workflow Info | **What's your use case?**
Having a description is cool, but it's not easy to build monitoring of your assets when all the information is in the same field.
**What's your proposed solution?**
Different fields such as:
- author(s)
- organisation
- version
- link to some documentation
**Are there any alternative solutions?**
Making a template for documentation. | closed | 2024-03-29T14:09:52Z | 2024-04-12T15:02:37Z | https://github.com/biolab/orange3/issues/6778 | [] | simonaubertbd | 1
qubvel-org/segmentation_models.pytorch | computer-vision | 249 | prediction of a multiclass | Hi. I'm trying to train multiclass segmentation with an example from [repository](https://github.com/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb).
At inference, the sigmoid is calculated for a single class, when the tensor looks like [1, w, h]:
```
mask_ = torch.from_numpy(logits[0]).sigmoid()
mask = utils.detach(mask_ > threshold).astype("float")
```
How can I calculate a multiclass mask when the tensor looks like [9, w, h]?
My example [here](https://github.com/stanislavkuskov/otus_cv_cource/blob/master/src/otus_project_dev/face_segmentation.ipynb)
| closed | 2020-09-06T21:49:12Z | 2020-11-12T13:00:16Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/249 | [] | stanislavkuskov | 5 |
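For a `[C, h, w]` output the usual reduction is not per-class sigmoid + threshold but softmax over the class axis followed by argmax (in torch, roughly `logits.softmax(0).argmax(0)`; argmax of the raw logits gives the same labels, since softmax is monotonic). A dependency-free sketch of the same computation:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def multiclass_mask(logits):
    """logits: nested [C][H][W] floats -> [H][W] mask of class indices."""
    C, H, W = len(logits), len(logits[0]), len(logits[0][0])
    mask = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            scores = [logits[c][y][x] for c in range(C)]
            # softmax is monotonic, so argmax of raw scores is identical
            mask[y][x] = max(range(C), key=lambda c: scores[c])
    return mask
```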
sczhou/CodeFormer | pytorch | 235 | Missing File Output | I usually perform enhancements and always succeed, but when I try to enhance a long-duration video (over 2 minutes), sometimes the resulting output video is missing even though the enhancement process is completed. Has anyone ever experienced this? | open | 2023-05-30T09:45:18Z | 2023-05-30T09:46:04Z | https://github.com/sczhou/CodeFormer/issues/235 | [] | vianseto | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 197 | There is an error in the documentation | In the documentation, under
- Quick Start
- apt commands
- Installing packages
there is an error:
>检查室友有损坏的依赖包 (typo: 室友 "roommate" should be 是否 "whether", i.e. "check whether there are broken dependency packages") | closed | 2021-12-06T16:38:50Z | 2022-10-04T10:35:38Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/197 | [
"question"
] | yzyyz1387 | 2 |
PaddlePaddle/PaddleHub | nlp | 1,402 | Questions about PaddleHub | 1. If I download a PaddleHub pretrained model locally, do transfer learning on it, and then deploy it with hub serving, will the deployed model be the one obtained after training?
2. Can the best_model obtained from PaddleHub transfer learning be used directly for hub serving deployment? If so, how?
3. If best_model cannot be used directly for deployment, what steps are needed before it can be deployed?
I am using PaddleHub 2.0. | closed | 2021-05-10T00:44:16Z | 2021-05-13T09:48:06Z | https://github.com/PaddlePaddle/PaddleHub/issues/1402 | [
"serving"
] | Gray-web | 3 |
OFA-Sys/Chinese-CLIP | computer-vision | 262 | load_from_name function logic problem | Given the logic of load_from_name, passing the name parameter can only download the weights; it cannot load local weights. | open | 2024-02-28T11:31:20Z | 2024-02-28T11:31:20Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/262 | [] | JohnnMa | 0
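One way the resolution logic could be fixed, sketched with made-up helper names (this is not Chinese-CLIP's actual code): treat `name` as a local checkpoint path first, and only fall back to downloading when it matches a known model name:

```python
import os

def resolve_checkpoint(name, model_urls, download_fn):
    """Hypothetical resolution order: an existing local file wins; a
    known model name is downloaded; anything else is an error.
    (`model_urls` and `download_fn` are placeholders, not the library API.)"""
    if os.path.isfile(name):
        return name  # load local weights directly
    if name in model_urls:
        return download_fn(model_urls[name])
    raise RuntimeError(
        f"{name} is neither a local checkpoint path nor a known model name"
    )
```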