| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
litestar-org/litestar | api | 3,644 | Bug: openapi parameter order doesn't match the order in the path | ### Description
Unconsumed parameters [appear to be added last to the openapi spec](https://github.com/litestar-org/litestar/blob/ffaf5616b19f6f0f4128209c8b49dbcb41568aa2/litestar/_openapi/parameters.py#L226), which causes strange parameter ordering in other generated code. It looks like some effort was made to account for this at https://github.com/litestar-org/litestar/blame/ffaf5616b19f6f0f4128209c8b49dbcb41568aa2/tests/unit/test_openapi/test_schema.py#L613. I'm wondering if this is really the desired behavior. I would have expected the order in the openapi spec to match the order in the path. As it stands, it means that consuming a dependency will change the openapi spec and code generated from it.
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.9.1
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-07-24T03:36:27Z | 2025-03-20T15:54:50Z | https://github.com/litestar-org/litestar/issues/3644 | [
"Bug :bug:",
"OpenAPI"
] | ashanbrown | 0 |
joeyespo/grip | flask | 215 | markdown commenting | Grip doesn't seem to observe markdown commenting (text not to be included in the generated doc)
The most universally accepted syntax does not seem to work:
"[//]: # "
"(empty line)[comment]: # "
I verified the first of those actually does work on github
http://stackoverflow.com/questions/4823468/comments-in-markdown
| closed | 2016-10-30T11:10:59Z | 2016-10-30T19:54:19Z | https://github.com/joeyespo/grip/issues/215 | [
"not-a-bug"
] | chrisamow | 3 |
huggingface/datasets | deep-learning | 7,430 | Error in code "Time to slice and dice" from course "NLP Course" | ### Describe the bug
When we execute code
```
frequencies = (
    train_df["condition"]
    .value_counts()
    .to_frame()
    .reset_index()
    .rename(columns={"index": "condition", "condition": "frequency"})
)
frequencies.head()
```
the output should look like this:

| condition | frequency |
|---|---|
| birth control | 27655 |
| depression | 8023 |
| acne | 5209 |
| anxiety | 4991 |
| pain | 4744 |

but the actual result is different:

| frequency | count |
|---|---|
| birth control | 27655 |
| depression | 8023 |
| acne | 5209 |
| anxiety | 4991 |
| pain | 4744 |

This is not correct. The correct code is:
```
frequencies = (
    train_df["condition"]
    .value_counts()
    .to_frame()
    .reset_index()
    .rename(columns={"index": "condition", "count": "frequency"})
)
```
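For context, in pandas ≥ 2.0 `Series.value_counts()` names its result column `count`, which is why the rename target changed. A minimal stand-alone check with a tiny made-up series (not the course dataset):

```python
import pandas as pd

# Hypothetical miniature stand-in for train_df["condition"]
s = pd.Series(["birth control", "birth control", "depression"], name="condition")

freq = (
    s.value_counts()   # result is named "count" in pandas >= 2.0
    .to_frame()
    .reset_index()     # columns: ["condition", "count"]
    .rename(columns={"count": "frequency"})
)
print(list(freq.columns))  # ['condition', 'frequency']
```

On pandas < 2.0 the counts column inherits the series name instead, which is exactly what the original course snippet assumed.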
### Steps to reproduce the bug
```
frequencies = (
    train_df["condition"]
    .value_counts()
    .to_frame()
    .reset_index()
    .rename(columns={"index": "condition", "condition": "frequency"})
)
frequencies.head()
```
### Expected behavior
| condition | frequency |
|---|---|
| birth control | 27655 |
| depression | 8023 |
| acne | 5209 |
| anxiety | 4991 |
| pain | 4744 |
### Environment info
Google Colab | closed | 2025-02-28T11:36:10Z | 2025-03-05T11:32:47Z | https://github.com/huggingface/datasets/issues/7430 | [] | Yurkmez | 2 |
Yorko/mlcourse.ai | numpy | 753 | There might be an error in Topic 5, Part 1: Bagging 4. Out-of-Bag Error subheading | [link_to_notebook](https://github.com/Yorko/mlcourse.ai/blob/main/jupyter_english/topic05_ensembles_random_forests/topic5_part1_bagging.ipynb)
In Topic 5, Part 1: Bagging, under the heading 4. Out-of-Bag Error, it starts as:
_> Looking ahead, in case of Random Forest, there is no need to use cross-validation or hold-out samples in order to get an unbiased error estimation. Why? Because, in ensemble techniques, the error estimation takes place internally._
From what I understood, this should be attributed to bagging, not Random Forest specifically. Because bagged trees use bootstrapping as the sampling method, approximately 1/3 of the data is not used by any single tree, so it acts like an internal validation set. Random Forest, by contrast, is about randomly selecting a subset of features so that stronger features won't dominate the first splits of every tree.
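The "approximately 1/3" figure is easy to verify with a quick simulation: the expected out-of-bag fraction of a bootstrap sample of size n is (1 − 1/n)^n → e⁻¹ ≈ 0.368. A stand-alone sketch, independent of any ML library:

```python
import random

random.seed(0)
n = 10_000  # hypothetical dataset size

# Draw one bootstrap sample (n draws with replacement) and count
# how many of the original indices were never selected.
sampled = {random.randrange(n) for _ in range(n)}
oob_fraction = 1 - len(sampled) / n
print(round(oob_fraction, 3))  # close to e**-1 ~ 0.368
```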
If I am wrong, can you please update me? Thanks. | closed | 2023-08-04T14:44:54Z | 2024-08-19T16:16:00Z | https://github.com/Yorko/mlcourse.ai/issues/753 | [] | fatih-boyar | 1 |
streamlit/streamlit | streamlit | 10,884 | implement Backdrop component to block user interaction before some long tasks are finished | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Display a full-screen Backdrop to prevent the user from interacting with the current page until a certain operation is finished. Here is what it looks like: https://mui.com/material-ui/react-backdrop/?spm=5176.28103460.0.0.18b3451euvGgfP
### Why?
Developers may want to stop users from interacting with the app until a long-running task on the current page is finished. A common UX design for this situation is to show a [Backdrop](https://mui.com/material-ui/react-backdrop/?spm=5176.28103460.0.0.18b3451euvGgfP) component while the task is running. Many community discussions mention similar requirements, for example
* https://discuss.streamlit.io/t/disable-the-entire-page-until-the-end-of-the-script-execution/84619
* https://discuss.streamlit.io/t/is-that-a-way-to-block-the-streamlit-app-re-run-before-the-previous-run-finished/7905
* https://discuss.streamlit.io/t/disable-buttons-checkboxes-and-other-widgets-during-code-execution/44037
* https://discuss.streamlit.io/t/disable-interactables-after-suitable-interaction/45604/2
I think it is worth implementing this feature in streamlit instead of as a third party component.
### How?
Potential solutions include adding extra options or APIs to existing components, or implementing a new component.
Personally, I would prefer adding extra options to `st.progress` and/or `st.spinner` to support this feature.
For example, implement a new option named `backdrop`, defaulting to `False`, in `st.spinner`,
so that developers can use the following code to block user interaction until a long-running task finishes:
```python
with st.spinner(backdrop=True):
    while not task.finished:
        sleep(5)
```
In the case of `st.progress`,
```python
my_bar = st.progress(0, text=progress_text, backdrop=True)

for percent_complete in range(100):
    time.sleep(0.01)
    my_bar.progress(percent_complete + 1, text=progress_text)

time.sleep(1)
my_bar.empty()
```
### Additional Context
_No response_ | open | 2025-03-24T07:32:37Z | 2025-03-24T07:34:09Z | https://github.com/streamlit/streamlit/issues/10884 | [
"type:enhancement"
] | link89 | 1 |
aimhubio/aim | tensorflow | 3,133 | Import an offline wandb run | ## Is it possible to import an offline wandb run directly to aim without starting a wandb server?
| closed | 2024-04-16T09:23:35Z | 2024-06-03T09:50:30Z | https://github.com/aimhubio/aim/issues/3133 | [
"type / question"
] | edwardsp | 1 |
modelscope/modelscope | nlp | 813 | KeyError: 'speaker-diarization-inference is not in the pipelines registry group speaker-diarization. Please make sure the correct version of ModelScope library is used.' | Running the example at https://modelscope.cn/models/iic/speech_diarization_eend-ola-en-us-callhome-8k/summary
Error message:
KeyError: 'speaker-diarization-inference is not in the pipelines registry group speaker-diarization. Please make sure the correct version of ModelScope library is used.'
Version info: modelscope '1.12.0', funasr '1.0.19'
Model related: @wenmengzhou @tastelikefeet
Pipeline related: @Firmament-cyou @wenmengzhou
| closed | 2024-03-28T08:46:26Z | 2024-05-19T01:50:49Z | https://github.com/modelscope/modelscope/issues/813 | [
"Stale"
] | yangyyt | 2 |
pennersr/django-allauth | django | 3,047 | ACCOUNT_EMAIL_VERIFICATION = "mandatory" not preventing unverified email login | I just found out that a registered email that has not yet been verified is allowed to log in, even though ACCOUNT_EMAIL_VERIFICATION is set to "mandatory" in the settings. I don't know if this is a bug in the latest allauth, but I am certain this was working fine before.
Here is my code.
```python
#Settings.py
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
ACCOUNT_EMAIL_REQUIRED = True
SOCIALACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_UNIQUE_EMAIL = True
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_AUTHENTICATION_METHOD = "email"
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = False
ACCOUNT_EMAIL_VERIFICATION = "mandatory"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL = None
ACCOUNT_EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL = None
ACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS = 3
REST_FRAMEWORK = {
    "EXCEPTION_HANDLER": "users.exceptions.custom_exception_handler",
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework.authentication.SessionAuthentication",
        "knox.auth.TokenAuthentication",
    ),
}
```
NB: I am using django-rest-knox to log in and generate a token, which I don't think has anything to do with ACCOUNT_EMAIL_VERIFICATION in the settings, as other settings are working just fine.
```python
from allauth.account.utils import perform_login
from knox.views import LoginView as KnoxLoginView
from knox.models import AuthToken
from rest_framework import permissions
from rest_framework.authentication import SessionAuthentication
from rest_framework.authtoken.serializers import AuthTokenSerializer
class LoginView(KnoxLoginView):
    permission_classes = (permissions.AllowAny,)
    authentication_classes = [SessionAuthentication]

    def post(self, request, format=None):
        standard_data = {**request.data,
                         "username": request.data["email"].lower()}
        serializer = AuthTokenSerializer(data=standard_data)
        serializer.is_valid(raise_exception=True)
        user = serializer.validated_data["user"]
        perform_login(request, user, "none")
        response = super(LoginView, self).post(request, format=None)
        return response
```
Please can someone help me with this? | closed | 2022-03-07T23:28:27Z | 2023-06-14T20:05:53Z | https://github.com/pennersr/django-allauth/issues/3047 | [] | eakenbor | 4 |
jupyter-book/jupyter-book | jupyter | 1,549 | Incomplete documentation on publishing to Read The Docs | ### Describe the problem
The Jupyter Book documentation covers also [publishing to Read The Docs](https://jupyterbook.org/publish/readthedocs.html). But going through the suggested steps makes the build on `readthedocs.io` fail with an error about a missing dependency, `ModuleNotFoundError: No module named 'sphinx_togglebutton'`.
The issue can be resolved by adding two steps to the procedure in the Jupyter Book documentation, which I figured out by comparing [my test repo](https://github.com/pamoroso/jbrtd-test) with [a sample repo](https://github.com/astrojuanlu/jupyterbook-on-read-the-docs) linked from a [Read The Docs blog post](https://blog.readthedocs.com/jupyter-book-read-the-docs/).
The first step is to add a [`requirements.txt`](https://github.com/astrojuanlu/jupyterbook-on-read-the-docs/blob/main/book/requirements.txt) file to the directory of the documentation repo that holds the manuscript files; it should include the following dependency:
```
# requirements.txt
https://github.com/executablebooks/jupyter-book/archive/refs/heads/master.zip # After merging https://github.com/executablebooks/jupyter-book/pull/1422
```
The other step is to add to the repo’s root directory the file [`.readthedocs.yaml`](https://github.com/astrojuanlu/jupyterbook-on-read-the-docs/blob/main/.readthedocs.yaml) with the following directive:
```
# .readthedocs.yaml
python:
  install:
    - requirements: docs/requirements.txt
```
### Link to your repository or website
https://github.com/pamoroso/jbrtd-test
### Steps to reproduce
1. `jupyter-book config sphinx docs`
2. `sphinx-build docs docs/_build/html -b html`
### The version of Python you're using
3.8.12
### Your operating system
Ubuntu Linux
### Versions of your packages
```
Jupyter Book : 0.12.1
External ToC : 0.2.3
MyST-Parser : 0.15.2
MyST-NB : 0.13.1
Sphinx Book Theme : 0.1.7
Jupyter-Cache : 0.4.3
NbClient : 0.5.9
```
### Additional context
The Python and operating system versions I provided above refer to my local [build environment](https://replit.com/@PaoloAmoroso/jbrtd-test) running on [Replit](https://replit.com). However, as noted, it’s `readthedocs.io` that carries out the build of [my published Jupyter Book documentation](https://jbrtd-test.readthedocs.io).
By the way, Jupyter Book is an awesome system. Thanks all for the beautiful tool!
| open | 2021-11-28T12:07:51Z | 2021-11-28T12:11:27Z | https://github.com/jupyter-book/jupyter-book/issues/1549 | [
"bug"
] | pamoroso | 1 |
seleniumbase/SeleniumBase | pytest | 2,774 | Fetch Requests Data When Open a Page | I want to fetch all requests similar https://www.dilatoit.com/2020/12/17/how-to-capture-http-requests-using-selenium.html
```
from seleniumwire import webdriver # Import from seleniumwire
# Create a new instance of the Firefox driver
driver = webdriver.Firefox()
# Go to the Google home page
driver.get('https://www.google.com')
# Access and print requests via the `requests` attribute
for request in driver.requests:
if request.response:
print(
request.url,
request.response.status_code,
request.response.headers['Content-Type'])
```
How can I do this with SeleniumBase?
I tried to use
```
for request in sb.driver.requests:
    if request.response:
        print(f"{request.url}\n{request.response.status_code}\n{request.response.headers['Content-Type']}")
```
But it gives me this error:
`AttributeError: 'Chrome' object has no attribute 'requests'` | closed | 2024-05-14T17:53:35Z | 2024-05-14T18:48:42Z | https://github.com/seleniumbase/SeleniumBase/issues/2774 | [
"question"
] | adarmawan117 | 1 |
dynaconf/dynaconf | django | 676 | [RFC] Allow validators to operate on list items - was: `AttributeError: 'BoxList' object has no attribute 'get'`on calling validate() | settings.toml
```toml
[[PROVIDERS]]
PROVIDER_NAME = "A"
TYPE = "1"
[[PROVIDERS]]
PROVIDER_NAME = "B"
TYPE = 1
```
app.py
```python
from dynaconf import Dynaconf, Validator
settings = Dynaconf()
settings.validators.register(Validator('PROVIDERS.TYPE', is_type_of=str))
settings.validators.validate()
```
throws `AttributeError: 'BoxList' object has no attribute 'get'`
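Concretely, the expected per-item validation amounts to this plain-Python sketch (hypothetical, not dynaconf's API):

```python
providers = [
    {"PROVIDER_NAME": "A", "TYPE": "1"},
    {"PROVIDER_NAME": "B", "TYPE": 1},
]

def validate_providers(items):
    """Check TYPE on every item of the list, as 'PROVIDERS.TYPE' should."""
    errors = []
    for i, item in enumerate(items):
        if not isinstance(item.get("TYPE"), str):
            errors.append(f"PROVIDERS[{i}].TYPE must be str")
    return errors

print(validate_providers(providers))  # ['PROVIDERS[1].TYPE must be str']
```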
Expected behaviour: dynaconf should run the defined validator on every item in the PROVIDERS list of dicts. | open | 2021-10-12T16:26:37Z | 2023-03-25T07:09:14Z | https://github.com/dynaconf/dynaconf/issues/676 | [
"wontfix",
"Not a Bug",
"RFC"
] | sla-te | 4 |
kochlisGit/ProphitBet-Soccer-Bets-Predictor | seaborn | 87 | Running a main.py getting an error with import tf_utils | 
| open | 2024-08-07T12:48:08Z | 2024-08-07T12:48:08Z | https://github.com/kochlisGit/ProphitBet-Soccer-Bets-Predictor/issues/87 | [] | konli90 | 0 |
joke2k/django-environ | django | 403 | "Invalid line" error when line starts with spaces and then a comment follows | We use comments, nested with 4 spaces. This works fine in environ 0.4.5 , but now generates an error in environ 0.9.0:
> Invalid line: # Domain, without prefix ('www.') and suffix ('.nl'/'.com')
In both .env and the error message, four spaces precede the hashtag. Sorry, I did not succeed in finding the right way to enter four spaces in this edit field.
Thanks for looking into this!
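For reference, the tolerant behaviour this report expects amounts to ignoring leading whitespace before the comment check. A sketch, not django-environ's actual parser:

```python
def is_comment(line: str) -> bool:
    """Treat a line as a comment even when it is indented."""
    return line.lstrip().startswith("#")

lines = [
    "DEBUG=on",
    "    # Domain, without prefix ('www.') and suffix ('.nl'/'.com')",
    "DOMAIN=example",
]
kept = [l for l in lines if l.strip() and not is_comment(l)]
print(kept)  # ['DEBUG=on', 'DOMAIN=example']
```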
| closed | 2022-06-27T13:33:51Z | 2023-07-06T21:21:13Z | https://github.com/joke2k/django-environ/issues/403 | [
"bug"
] | wimfeijen | 3 |
ets-labs/python-dependency-injector | asyncio | 118 | Add validation of provided type for Singleton provider | Possible syntax:
``` python
class ServiceProvider(providers.Singleton):
    """Service provider."""

    provided_type = BaseService
    """Provided type.

    :type: type | None
    """
```
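The proposed check could behave like this self-contained sketch (illustrative only, not the library's implementation):

```python
class Singleton:
    """Minimal stand-in: validates the provided class against provided_type."""

    provided_type = None  # subclasses may pin an expected base class

    def __init__(self, provides, *args, **kwargs):
        if self.provided_type is not None and not issubclass(provides, self.provided_type):
            raise TypeError(
                f"{provides!r} is not a subclass of {self.provided_type!r}"
            )
        self._provides, self._args, self._kwargs = provides, args, kwargs
        self._instance = None

    def __call__(self):
        if self._instance is None:
            self._instance = self._provides(*self._args, **self._kwargs)
        return self._instance


class BaseService: ...
class Service(BaseService): ...

class ServiceProvider(Singleton):
    provided_type = BaseService

provider = ServiceProvider(Service)
assert provider() is provider()  # singleton semantics preserved

try:
    ServiceProvider(dict)  # not a BaseService subclass
except TypeError as exc:
    print("rejected:", exc)
```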
| closed | 2015-12-11T17:59:14Z | 2015-12-13T21:32:35Z | https://github.com/ets-labs/python-dependency-injector/issues/118 | [
"feature"
] | rmk135 | 0 |
awesto/django-shop | django | 835 | Update version announcement in README.md | ### Version 1.1 has been released!
... a few minor versions ago.
Just a reminder before the next release, whenever it comes. | closed | 2020-10-28T04:23:04Z | 2020-10-28T07:48:15Z | https://github.com/awesto/django-shop/issues/835 | [] | greyhare | 0 |
deepspeedai/DeepSpeed | pytorch | 6,507 | [BUG] Why is LoRA much slower than Freeze? | Model:Qwen2-72B
Machine: 8 * A100 80G
Environment: Deepspeed zero 3
# Freeze Method Log:
- Set trainable layers: 77,78,79
- trainable params: 2633054208 || all params: 72706203648 || trainable%: 3.6215
- epochs:10
- train_runtime = 20:36:14.47
- train_samples_per_second = 1.051
- train_steps_per_second = 0.008
# LoRA Method Log:
- trainable params: 16384000 || all params: 72722587648 || trainable%: 0.0225
- epochs:6
- train_runtime = 1 day, 8:57:44.99
- train_samples_per_second = 0.394
- train_steps_per_second = 0.002
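As a sanity check, the reported trainable percentages are consistent with the listed parameter counts (plain arithmetic, independent of DeepSpeed):

```python
freeze_trainable, freeze_total = 2_633_054_208, 72_706_203_648
lora_trainable, lora_total = 16_384_000, 72_722_587_648

freeze_pct = 100 * freeze_trainable / freeze_total
lora_pct = 100 * lora_trainable / lora_total

print(round(freeze_pct, 4))  # ~3.6215
print(round(lora_pct, 4))    # ~0.0225
```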
In my experiment, LoRA method is much slower than Freeze method. Is this normal and why? | closed | 2024-09-09T08:35:48Z | 2024-09-09T08:58:00Z | https://github.com/deepspeedai/DeepSpeed/issues/6507 | [
"bug",
"training"
] | gugugu-469 | 1 |
AirtestProject/Airtest | automation | 920 | test | :bulb:**Related project:**
**Title:** test
**AirtestIDE version:** 1
**Script not run with a local Python environment**
**Error description:**
Bug info
**Related screenshots:**
None
**Error log:**
**Connected device info:**
**Minimal code to reproduce this bug:**
```
None
``` | closed | 2021-06-17T06:25:03Z | 2021-06-17T09:44:16Z | https://github.com/AirtestProject/Airtest/issues/920 | [] | yimelia | 0 |
pytest-dev/pytest-django | pytest | 904 | How to hide the migrations output? | If I run one test via PyCharm I see the migrations output twice:
```
/home/guettli/projects/lala-env/bin/python /snap/pycharm-professional/230/plugins/python/helpers/pycharm/_jb_pytest_runner.py --target test_models.py::test_address_is_complete
Testing started at 11:42 ...
Launching pytest with arguments test_models.py::test_address_is_complete in /home/guettli/projects/lala-env/src/lala/lala/tests
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.0, py-1.10.0, pluggy-0.13.1 -- /home/guettli/projects/lala-env/bin/python
cachedir: .pytest_cache
django: settings: mysite.settings (from ini)
rootdir: /home/guettli/projects/lala-env/src/lala, configfile: pytest.ini
plugins: django-4.1.0
collecting ... collected 1 item
test_models.py::test_address_is_complete Operations to perform:
Synchronize unmigrated apps: allauth, colorfield, debug_toolbar, google, messages, staticfiles
Apply all migrations: account, admin, auth, contenttypes, lala, sessions, sites, socialaccount
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying account.0001_initial... OK
Applying account.0002_email_max_length... OK
... [cut] ....
Creating test database for alias 'default' ('test_lala')...
Got an error creating the test database: database "test_lala" already exists
Destroying old test database for alias 'default' ('test_lala')...
FAILED [100%]
lala/tests/test_models.py:18 (test_address_is_complete)
user = <User: Dr. Foo>
def test_address_is_complete(user):
address = user.address
> assert address.is_complete
E assert False
E + where False = <Address: Address object (1)>.is_complete
test_models.py:21: AssertionError
Destroying test database for alias 'default' ('test_lala')...
Assertion failed
Assertion failed
=================================== FAILURES ===================================
___________________________ test_address_is_complete ___________________________
user = <User: Dr. Foo>
def test_address_is_complete(user):
address = user.address
> assert address.is_complete
E assert False
E + where False = <Address: Address object (1)>.is_complete
test_models.py:21: AssertionError
---------------------------- Captured stdout setup -----------------------------
Operations to perform:
Synchronize unmigrated apps: allauth, colorfield, debug_toolbar, google, messages, staticfiles
Apply all migrations: account, admin, auth, contenttypes, lala, sessions, sites, socialaccount
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying account.0001_initial... OK
Applying account.0002_email_max_length... OK
... [cut] ...
---------------------------- Captured stderr setup -----------------------------
Creating test database for alias 'default' ('test_lala')...
Got an error creating the test database: database "test_lala" already exists
Destroying old test database for alias 'default' ('test_lala')...
--------------------------- Captured stderr teardown ---------------------------
Destroying test database for alias 'default' ('test_lala')...
=========================== short test summary info ============================
FAILED test_models.py::test_address_is_complete - assert False
============================== 1 failed in 2.89s ===============================
Process finished with exit code 1
Assertion failed
Assertion failed
```
How can I hide the output generated by the migrations?
And why do I see the exception twice? | open | 2021-02-07T10:48:54Z | 2023-10-01T07:50:29Z | https://github.com/pytest-dev/pytest-django/issues/904 | [
"needs-info"
] | guettli | 2 |
lucidrains/vit-pytorch | computer-vision | 152 | Training vit on Imagenet 1k got bad performance. | I am using vit to train ImageNet 1k from scratch. The accuracy of SOTA is about 70% to 80%. But I can only reach 30%. I don't know why it doesn't work. I use the following configuration.
``` python
model = ViT(
    image_size=224,
    patch_size=32,
    num_classes=args['n_class'],
    dim=768,
    depth=args['depth'],
    heads=12,
    mlp_dim=3072,
    dropout=0.1,
    emb_dropout=0.1
)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    weight_decay=0.0001
)
scheduler = CosineAnnealingLR(optimizer, T_max=1270, eta_min=1e-5)
```
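For reference, `CosineAnnealingLR` follows the standard schedule η_t = η_min + ½(η_max − η_min)(1 + cos(πt/T_max)). A stand-alone sketch of that formula (not PyTorch's implementation), using the values from the config above:

```python
import math

def cosine_annealing_lr(t, t_max=1270, eta_max=1e-3, eta_min=1e-5):
    """Learning rate at step t under cosine annealing."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / t_max))

print(cosine_annealing_lr(0))     # starts at the base lr (~1e-3)
print(cosine_annealing_lr(1270))  # decays to eta_min (~1e-5)
```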
The `batch_size` is 1024, and I adjust the learning rate after each batch. | open | 2021-08-31T02:32:15Z | 2022-04-08T09:58:40Z | https://github.com/lucidrains/vit-pytorch/issues/152 | [] | songlei00 | 2 |
rio-labs/rio | data-visualization | 17 | Add Security policy | closed | 2024-05-17T09:33:03Z | 2024-05-30T12:15:06Z | https://github.com/rio-labs/rio/issues/17 | [
"documentation"
] | Sn3llius | 0 | |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 709 | test results are blurred and seem worse than the training results | Thanks for sharing such a wonderful project! When I trained the model on facades, the training results looked good to the naked eye, but when I tested the model, the results were blurrier, with some noise. Is that normal? | closed | 2019-07-18T05:41:07Z | 2019-07-30T03:06:45Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/709 | [] | hyang23333 | 11 |
explosion/spaCy | data-science | 12,411 | Displacy visualiser only sometimes shows labels | In many cases, I do not see the label at the beginning of a span in displacy. This should not be an issue with a particular label since I sometimes see it when using different text examples.
Is there something I should look out for to avoid this?
```
colors = {#omitted in this example}
options = {"spans_key": "sentences", "colors": colors}
displacy.serve(doc, style="span", options=options)
```

Thanks!
| open | 2023-03-13T21:15:01Z | 2023-04-20T14:14:12Z | https://github.com/explosion/spaCy/issues/12411 | [
"feat / visualizers"
] | goonhoon | 8 |
marimo-team/marimo | data-visualization | 3,920 | Error in the config file | ### Describe the bug
Hi team,
Just a small bug: when you run `marimo config describe`, the experimental settings are listed under AI with value `dict[str, any]`. However, the config won't parse if you set that in the TOML, and if you follow the template (where experimental is in its own section), it doesn't work.
### Environment
<details>
```
Replace this line with the output of marimo env. Leave the backticks in place.
```
</details>
### Code to reproduce
_No response_ | closed | 2025-02-26T02:38:25Z | 2025-02-26T15:54:38Z | https://github.com/marimo-team/marimo/issues/3920 | [
"bug"
] | arthrod | 0 |
scanapi/scanapi | rest-api | 488 | Remove `if not matches:` from StringEvaluator | ## Refactor
### Description
Remove these unnecessary lines:
- https://github.com/scanapi/scanapi/blob/main/scanapi/evaluators/string_evaluator.py#L27-L28
The `matches` variable is a `callable_iterator` object, so it is always truthy. We can remove the check: when nothing matches the pattern, the loop body simply never runs:
```python
-> if not matches:
(Pdb) ll
 23     @classmethod
 24     def _evaluate_env_var(cls, sequence):
 25         matches = cls.variable_pattern.finditer(sequence)
 26         import pdb
 27
 28         pdb.set_trace()
 29  ->     if not matches:
 30             return sequence
 31
 32         for match in matches:
 33             variable_name = match.group("variable")
 34
 35             if any(letter.islower() for letter in variable_name):
 36                 continue
 37
 38             try:
 39                 variable_value = os.environ[variable_name]
 40             except KeyError as e:
 41                 raise BadConfigurationError(e)
 42
 43             sequence = cls.replace_var_with_value(
 44                 sequence, match.group(), variable_value
 45             )
 46
 47         return sequence
(Pdb) sequence
'no env var'
(Pdb) matches
<callable_iterator object at 0x1112855b0>
```
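The truthiness claim is easy to confirm in isolation (the pattern below is illustrative, not scanapi's exact regex):

```python
import re

variable_pattern = re.compile(r"\$\{\{(?P<variable>\w+)\}\}")  # illustrative pattern

matches = variable_pattern.finditer("no env var")
assert bool(matches)        # the iterator object itself is always truthy
assert list(matches) == []  # ...even though it yields nothing
print("the `if not matches:` branch can never be taken")
```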
Removing the two lines should also increase the coverage. | closed | 2021-08-09T19:17:04Z | 2021-08-10T12:50:53Z | https://github.com/scanapi/scanapi/issues/488 | [
"Good First Issue",
"Refactor",
"Code Quality"
] | camilamaia | 0 |
developmentseed/lonboard | jupyter | 669 | Depend on geoarrow-rust? | There are probably a few places where `geoarrow-rust` could be useful to Lonboard.
One is getting the total bounds of the input: https://github.com/developmentseed/lonboard/blob/667dfedb633f41c89e70b491f5a961a702c3d884/lonboard/_geoarrow/ops/bbox.py#L36-L98
That apparently takes two seconds with 12.5M points, which seems awfully slow: https://github.com/xarray-contrib/xdggs/pull/67#discussion_r1786659267
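For comparison, a vectorized NumPy reduction over coordinate arrays is the usual fast baseline for a total-bounds computation (a generic sketch, not Lonboard's or geoarrow-rust's code):

```python
import numpy as np

def total_bounds(xs: np.ndarray, ys: np.ndarray):
    """Return (minx, miny, maxx, maxy) over flat coordinate arrays."""
    return float(xs.min()), float(ys.min()), float(xs.max()), float(ys.max())

xs = np.array([0.0, 2.5, -1.0])
ys = np.array([5.0, -3.0, 4.0])
print(total_bounds(xs, ys))  # (-1.0, -3.0, 2.5, 5.0)
```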
The main blocker here is that we want to ensure that geoarrow-rust is stable enough to depend on here in Lonboard. | open | 2024-10-03T18:44:56Z | 2024-10-03T18:45:22Z | https://github.com/developmentseed/lonboard/issues/669 | [] | kylebarron | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,707 | Error while using the Demudder in the latest beta version | I get this error while using Demudder. I tried all demudding options (there are three) and also tried renaming the file to a shorter version, in case UVR was struggling with it as it was a long file name.
The model I used was the v1E one.
Last Error Received:
Process: MDX-Net
Missing file error raised. Please address the error and try again.
If this error persists, please contact the developers with the error details.
Raw Error Details:
FileNotFoundError: "[WinError 2] Cannot find the specified file"
Traceback Error: "
File "UVR.py", line 9274, in process_start
File "separate.py", line 858, in seperate
File "separate.py", line 399, in final_process
File "separate.py", line 468, in write_audio
File "separate.py", line 441, in save_with_message
File "separate.py", line 414, in save_audio_file
File "separate.py", line 1616, in save_format
File "pydub\audio_segment.py", line 808, in from_wav
File "pydub\audio_segment.py", line 728, in from_file
File "pydub\utils.py", line 274, in mediainfo_json
File "subprocess.py", line 951, in __init__
File "subprocess.py", line 1420, in _execute_child
"
Error Time Stamp [2025-01-21 11:25:43]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: MB-Roformer-Inst-v1-E
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
is_demud: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: True
is_use_torch_inference_mode: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Matchering
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_save_to_input_path: False
apollo_overlap: 2
apollo_chunk_size: 5
apollo_model: Choose Model
is_task_complete: False
is_normalization: False
is_use_directml: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: FLAC
wav_type_set: 32-bit Float
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: True
model_sample_mode_duration: 30
demudder_method: Phase Rotate
demucs_stems: All Stems
mdx_stems: All Stems
Patch Version: UVR_Patch_1_21_25_2_28_BETA | open | 2025-01-21T10:27:29Z | 2025-02-02T20:05:25Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1707 | [] | thenormal | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 830 | Working locally with Headless=False on MacOS, but not Ubuntu server error | Hello, amazing job creating this @ultrafunkamsterdam. This is the only tool for me that bypassed a multi-million dollar companies anti-scraping solutions.
It works undetected on my macOS machine with headless=False, but when I ported it over to my Linux server I get the following error:
```
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:36851
from chrome not reachable
```
```python
import undetected_chromedriver as uc

driver = uc.Chrome(use_subprocess=False, version_main=105, headless=False)
driver.get('website.com')
source = driver.page_source
print(source)
```
Tried both `use_subprocess` settings.
Tried setting `browser_executable_path='/usr/bin/google-chrome'` and a bunch of other options with no success. And if I make it headless=True, it doesn't error, but then it is detected as a bot.
`/usr/bin/google-chrome` results in the following:
```
[873747:873747:1009/143322.819557:ERROR:ozone_platform_x11.cc(239)] Missing X server or $DISPLAY
[873747:873747:1009/143322.819756:ERROR:env.cc(255)] The platform failed to initialize. Exiting.
```
Forgive my ignorance, but how do I resolve this error? And any insight on what is different between the headless options that could lead to detection?
Thanks
| open | 2022-10-09T14:37:25Z | 2022-10-10T01:26:33Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/830 | [] | BurnNoticeSpy | 2 |
scanapi/scanapi | rest-api | 455 | Possibly broken super() call | ## Bug report
### Environment
LGTM analysis
### Description of the bug
First argument to super() is not enclosing class
→ https://lgtm.com/projects/g/scanapi/scanapi/snapshot/975eee8439217318dce6c39d70c1d700d6f33bd6/files/scanapi/settings.py?sort=name&dir=ASC&mode=heatmap#xb5202bf2b059205b:1
### Expected behavior?
Rewrite this to just `super()...`.
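For illustration (hypothetical class names, not scanapi's code), the zero-argument form of `super()` resolves the next class in the MRO automatically, so the parent initializer runs without naming any class explicitly:

```python
class Base:
    def __init__(self):
        self.ready = True


class Settings(Base):
    def __init__(self):
        # Zero-argument super() resolves the next class in the MRO,
        # so Base.__init__ runs and sets `ready`.
        super().__init__()


s = Settings()
print(s.ready)  # True
```

Naming the wrong class as the first argument of `super(...)` is what LGTM flags, since it can skip the intended parent initializer in cooperative inheritance.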
| closed | 2021-07-31T08:57:19Z | 2021-08-11T16:51:52Z | https://github.com/scanapi/scanapi/issues/455 | [
"Code Quality"
] | jhermann | 0 |
keras-team/keras | deep-learning | 20,675 | Keras API reference has not been updated yet | Even though Keras 3.7.0 has been released, it seems the API reference has not yet been updated.
For example, I couldn't find the CELU activation function listed on [the activations page.](https://keras.io/api/layers/activations/)
Please feel free to let me know if I have misunderstood something.
Thank you! | closed | 2024-12-20T14:14:30Z | 2024-12-22T05:19:07Z | https://github.com/keras-team/keras/issues/20675 | [] | shashaka | 2 |
pyg-team/pytorch_geometric | pytorch | 9,311 | MoleculeNet's BBBP dataset incorrectly batched | ### 🐛 Describe the bug
While batching the BBBP dataset, there is one graph that is not associated with any node. This causes a discrepancy between the number of graph labels in the batch and the output shape of the downstream model, which affects loss calculations: a shape mismatch is observed.
Minimal code for reproducibility:
```python
import torch
from torch_geometric.loader import DataLoader
from torch_geometric.datasets import MoleculeNet
import random

# Ensure reproducibility
seed = 42
random.seed(seed)
torch.manual_seed(seed)

# Load the BBBP dataset
dataset = MoleculeNet(root='.', name='BBBP')
loader = DataLoader(dataset, batch_size=64, shuffle=True, drop_last=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Check unique graphs in batch match number of graphs in batch
for data in loader:
    print(data.batch.unique(), data.batch.unique().shape, data.num_graphs)
    assert data.batch.unique().shape[0] == data.num_graphs
```
Expected output:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
54, 55, 56, 57, 58, 59, 60, 61, 63]) torch.Size([63]) 64
AssertionError
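For intuition (editor's sketch, not from the report): `data.batch` assigns a graph index to each *node*, so a graph with zero nodes never appears in it, while `num_graphs` still counts it. A dependency-free illustration:

```python
# Hypothetical batch vector for 4 graphs, where graph 2 has no nodes:
batch = [0, 0, 1, 1, 1, 3, 3]   # one entry per node, not per graph
num_graphs = 4                  # a label still exists for every graph

graphs_with_nodes = sorted(set(batch))
print(graphs_with_nodes)        # [0, 1, 3] -- graph 2 never shows up

# Same check as in the reproduction script: it fails for this batch.
assert len(graphs_with_nodes) != num_graphs
```

This mirrors the tensor in the expected output above, where index 62 is absent (the indices jump from 61 to 63).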
### Versions
PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: NVIDIA DGX Server (x86_64)
GCC version: (GCC) 5.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.34
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-162.23.1.el9_1.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.7
/usr/lib64/libcudnn_adv_infer.so.8.9.7
/usr/lib64/libcudnn_adv_train.so.8.9.7
/usr/lib64/libcudnn_cnn_infer.so.8.9.7
/usr/lib64/libcudnn_cnn_train.so.8.9.7
/usr/lib64/libcudnn_ops_infer.so.8.9.7
/usr/lib64/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480CL
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 7
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] torch==2.2.0
[pip3] torch_cluster==1.6.3+pt22cu121
[pip3] torch-geometric==2.3.1
[pip3] torch_scatter==2.1.2+pt22cu121
[pip3] torch_sparse==0.6.18+pt22cu121
[pip3] torch-spline-conv==1.2.2
[pip3] torchaudio==2.2.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.17.0
[pip3] torchviz==0.0.2
[pip3] triton==2.2.0
[pip3] tsne-torch==1.0.1
[conda] numpy 1.21.2 pypi_0 pypi
[conda] torch 2.2.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt22cu121 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt22cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2 pypi_0 pypi
[conda] torchaudio 2.2.0 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.17.0 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
[conda] tsne-torch 1.0.1 pypi_0 pypi | closed | 2024-05-11T20:01:44Z | 2024-05-13T13:31:43Z | https://github.com/pyg-team/pytorch_geometric/issues/9311 | [
"bug"
] | apurvakokate | 1 |
psf/black | python | 4,397 | Psfback
| **Black v24.4.2**
[Playground link](https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ARsAnNdAD2IimZxl1N_WlkPinBFoXIfdFTaTVkGVeHShArYj9yPlDvwBA7LhGo8BvRQqDilPtgsfdKl-ha7EFp0Ma6lY_06IceKiVsJ3BpoICJM9wU1VJLD7l3qd5xTmo78LqThf9uibGWcWCD16LBOn0JK8rhhx_Gf2ClySDJtvm7zQJ1Z-Ipmv9D7I_zhjztfi2UTVsJp7917XToHBm2EoNZqyE8homtGskFIiif5EZthHQvvOj8S2gJx8_t_UpWp1ScpIsD_Xq83LX-B956I_EBIeNoGwZZPFC5zAIoMeiaC1jU-sdOHVucLJM_x-jkzMvK8Utdfvp9MMvKyTfb_BZoe0-FAc2ZVlXEpwYgJVAGdCXv3lQT4bpTXyBwDrDVrUeJDivSSwOvT8tlnuMrXoD1Sk2NZB5SHyNmZsfyAEqLALbUnhkX8hbt5U2yNQRDf1LQhuUIOii6k6H9wnDNRnBiQHUfzKfW1CLiThnuVFjlCxQhJ60u67n3EK38XxHkQdOocJXpBNO51E4-f9z2hj0EDTu_ScuqOiC9cI8qJ4grSZIOnnQLv9WPvmCzx5zib3JacesIxMVvZNQiljq_gL7udm1yeXQjENOrBWbfBEkv1P4izWeAysoJgZUhtZFwKFdoCGt2TXe3xQ-wVZFS5KoMPhGFDZGPKzpK15caQOnWobOHLKaL8eFA-qI44qZrMQ7sSLn04bYeenNR2Vxz7hvK0lJhkgKrpVfUnZrtF-e-ubeeUCThWus4jZbKlFBe2Kroz90Elij_UZBMFCcFo0CfIx5mGlrINrTJLhERszRMMDd39XsBDzpZIYV4TcG7HoMS_IF8aMAAAxI-5uTWXbUQAAY8F7QgAAP01Vc6xxGf7AgAAAAAEWVo=)
## Options
`--line-length=88`
`--safe`
## Input
```python
from seven_dwwarfs import Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey, Doc
x = { 'a':37,'b':42,
'c':927}
x = 123456789.123456789E123456789
if very_long_variable_name is not None and \
very_long_variable_name.field > 0 or \
very_long_variable_name.is_debug:
z = 'hello '+'world'
else:
world = 'world'
a = 'hello {}'.format(world)
f = rf'hello {world}'
if (this
and that): y = 'hello ''world'#FIXME: https://github.com/psf/black/issues/26
class Foo ( object ):
def f (self ):
return 37*-2
def g(self, x,y=42):
return y
def f ( a: List[ int ]) :
return 37-a[42-u : y**3]
def very_important_function(template: str,*variables,file: os.PathLike,debug:bool=False,):
"""Applies `variables` to the `template` and writes to `file`."""
with open(file, "w") as f:
...
# fmt: off
custom_formatting = [
0, 1, 2,
3, 4, 5,
6, 7, 8,
]
# fmt: on
regular_formatting = [
0, 1, 2,
3, 4, 5,
6, 7, 8,
]
```
## Output
```python
from seven_dwwarfs import Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey, Doc
x = {"a": 37, "b": 42, "c": 927}
x = 123456789.123456789e123456789
if (
very_long_variable_name is not None
and very_long_variable_name.field > 0
or very_long_variable_name.is_debug
):
z = "hello " + "world"
else:
world = "world"
a = "hello {}".format(world)
f = rf"hello {world}"
if this and that:
y = "hello " "world" # FIXME: https://github.com/psf/black/issues/26
class Foo(object):
def f(self):
return 37 * -2
def g(self, x, y=42):
return y
def f(a: List[int]):
return 37 - a[42 - u : y**3]
def very_important_function(
template: str,
*variables,
file: os.PathLike,
debug: bool = False,
):
"""Applies `variables` to the `template` and writes to `file`."""
with open(file, "w") as f:
...
# fmt: off
custom_formatting = [
0, 1, 2,
3, 4, 5,
6, 7, 8,
]
# fmt: on
regular_formatting = [
0,
1,
2,
3,
4,
5,
6,
7,
8,
]
```
## Expected | closed | 2024-07-07T00:40:18Z | 2024-07-07T00:40:53Z | https://github.com/psf/black/issues/4397 | [] | omenihuson2 | 0 |
IvanIsCoding/ResuLLMe | streamlit | 2 | Fix Line Overflow in LaTeX templates | Sometimes, if a line is too long it can overflow. Some cases that trigger it:
* Very long given name
* Very long university name
* Lots of skills in a category
The templates that are most affected by it are:
* Deedy
* Plush
* Simple | open | 2023-04-12T23:45:51Z | 2024-03-21T23:28:16Z | https://github.com/IvanIsCoding/ResuLLMe/issues/2 | [
"good first issue"
] | IvanIsCoding | 3 |
quokkaproject/quokka | flask | 145 | Dillinger - The best markdown editor needs to be integrated | https://github.com/joemccann/dillinger/
| closed | 2014-04-19T02:08:38Z | 2015-07-16T02:56:34Z | https://github.com/quokkaproject/quokka/issues/145 | [
"enhancement"
] | rochacbruno | 0 |
huggingface/transformers | machine-learning | 36,337 | Assisted generation slower than with base model alone | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@gante
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Ran the script `https://github.com/gante/huggingface-demos/blob/main/experiments/faster_generation/benchmark_decoder_open.py` as:
`python benchmark_decoder_open.py /path/to/Llama-3.1-8B --aux-model /path/to/Llama-3.2-1B` but assisted generation turned out to be slower than using the base model alone:
```
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████| 4/4 [00:05<00:00, 1.36s/it]
Resolving data files: 100%|██████████████████████████████████████████████████████████| 1024/1024 [00:02<00:00, 357.24it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████| 1024/1024 [00:00<00:00, 38278.20it/s]
ASSISTED model: 100%|█████████████████████████████████████████████████████████████████████| 20/20 [00:58<00:00, 2.92s/it]
Average time per input (ms): 2729.76
Average time per token (ms): 31.12
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████| 4/4 [00:06<00:00, 1.64s/it]
Resolving data files: 100%|██████████████████████████████████████████████████████████| 1024/1024 [00:03<00:00, 335.68it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████| 1024/1024 [00:00<00:00, 37536.86it/s]
ORIGINAL model: 100%|█████████████████████████████████████████████████████████████████████| 20/20 [00:54<00:00, 2.73s/it]
Average time per input (ms): 2137.03
Average time per token (ms): 24.36
Mismatches: 0
```
Not sure if am missing something, or if this is a bug.
### Expected behavior
Some speedup as shown at: https://huggingface.co/blog/dynamic_speculation_lookahead
Target model | Draft (Assistant) model | Task | Speedup - heuristic | Speedup - dynamic
-- | -- | -- | -- | --
meta-llama/Llama-3.1-8B | meta-llama/Llama-3.2-1B | open-ended generation | 1.00x | 1.18x
| open | 2025-02-21T21:51:39Z | 2025-03-24T08:03:33Z | https://github.com/huggingface/transformers/issues/36337 | [
"bug"
] | sahilsuneja1 | 3 |
home-assistant/core | python | 140,389 | PECO does not work with mandatory MFA | ### The problem
The PECO integration (via OPower) can no longer be setup because MFA cannot be disabled and is now mandatory for accounts. For Exelon companies like PECO, the docs currently state that MFA must be disabled for the integration to authenticate.
### What version of Home Assistant Core has the issue?
2025.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
opower
### Link to integration documentation on our website
www.home-assistant.io/integrations/opower/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
Currently, PECO (and possibly all Exelon companies?) only have phone or email based MFA codes, so I'm not sure this issue can be solved. If that's the case, the docs should be changed and this issue kept open for users to collaborate on advocating for better API access. | closed | 2025-03-11T15:24:12Z | 2025-03-24T16:30:43Z | https://github.com/home-assistant/core/issues/140389 | [
"integration: opower"
] | steverep | 4 |
ultralytics/ultralytics | computer-vision | 19,310 | Device selection on export on multi-gpu systems | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
Greetings! 🚀
Sorry for my English. I ran into an issue (latest version, February 19th) when choosing a GPU for export on an NVIDIA multi-GPU setup:
```python
DEVICE0 = "cuda:1"
torch.set_default_device(device=DEVICE0)
with torch.cuda.device(device=DEVICE0):
    model = YOLO("yolo11m.pt")
    model.export(format="engine", half=True, imgsz=TRACK_HW, batch=BATCH_SIZE, dynamic=True, device=DEVICE0)
```
I selected the second (:1) GPU, but got usage on the first (:0) one. nvidia-smi showed a full load on the first GPU with only a small one on the second.
utils/torch_utils.py
```python
if not cpu and not mps and torch.cuda.is_available():  # prefer GPU if available
    devices = device.split(",") if device else "0"  # i.e. "0,1" -> ["0", "1"]
    n = len(devices)  # device count
    if n > 1:  # multi-GPU
        if batch < 1:
            raise ValueError(
                "AutoBatch with batch<1 not supported for Multi-GPU training, "
                "please specify a valid batch size, i.e. batch=16."
            )
        if batch >= 0 and batch % n != 0:  # check batch_size is divisible by device_count
            raise ValueError(
                f"'batch={batch}' must be a multiple of GPU count {n}. Try 'batch={batch // n * n}' or "
                f"'batch={batch // n * n + n}', the nearest batch sizes evenly divisible by {n}."
            )
    space = " " * (len(s) + 1)
    for i, d in enumerate(devices):
        s += f"{'' if i == 0 else space}CUDA:{d} ({get_gpu_info(i)})\n"  # bytes to MB
    arg = "cuda:0"
```
The line that leads to the bug is: `arg = "cuda:0"`
I suppose it should be: `arg = f"cuda:{device}"`
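To see the difference, here is a dependency-free sketch (function names are illustrative, not Ultralytics code) contrasting the released behavior with the proposed fix; the fix below uses the first parsed index so it also covers comma-separated device lists:

```python
def select_arg_released(device: str) -> str:
    """Mimics the quoted branch: the parsed indices are ignored."""
    devices = device.split(",") if device else "0"
    arg = "cuda:0"              # bug: always the first GPU
    return arg


def select_arg_proposed(device: str) -> str:
    """Keeps the requested index, as suggested in the report."""
    devices = device.split(",") if device else "0"
    arg = f"cuda:{devices[0]}"  # e.g. "1" -> "cuda:1"
    return arg


print(select_arg_released("1"))  # cuda:0
print(select_arg_proposed("1"))  # cuda:1
```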
### Environment
Package Version
------------------------- ------------
addict 2.4.0
aiohappyeyeballs 2.4.4
aiohttp 3.11.11
aiosignal 1.3.2
albucore 0.0.23
albumentations 2.0.2
annotated-types 0.7.0
anyio 4.8.0
attrs 25.1.0
bcrypt 4.2.1
certifi 2025.1.31
cffi 1.17.1
chardet 5.2.0
charset-normalizer 3.4.1
click 8.1.8
coloredlogs 15.0.1
contourpy 1.3.1
cryptography 44.0.0
cycler 0.12.1
fastapi 0.115.6
filelock 3.17.0
flatbuffers 25.1.24
fonttools 4.55.7
frozenlist 1.5.0
fsspec 2025.2.0
geographiclib 2.0
greenlet 3.1.1
h11 0.14.0
huggingface-hub 0.27.1
humanfriendly 10.0
idna 3.10
Jinja2 3.1.5
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jwt 1.3.1
kiwisolver 1.4.8
lap 0.5.12
lightning-utilities 0.11.9
MarkupSafe 3.0.2
matplotlib 3.10.0
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
networkx 3.4.2
numpy 2.1.1
nvidia-cublas-cu12 12.6.4.1
nvidia-cuda-cupti-cu12 12.6.80
nvidia-cuda-nvrtc-cu12 12.6.77
nvidia-cuda-runtime-cu12 12.6.77
nvidia-cudnn-cu12 9.5.1.17
nvidia-cufft-cu12 11.3.0.4
nvidia-curand-cu12 10.3.7.77
nvidia-cusolver-cu12 11.7.1.2
nvidia-cusparse-cu12 12.5.4.2
nvidia-cusparselt-cu12 0.6.3
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.6.85
nvidia-nvtx-cu12 12.6.77
onnx 1.17.0
onnxruntime-gpu 1.20.1
onnxslim 0.1.48
opencv-python 4.11.0.86
opencv-python-headless 4.11.0.86
openvino 2025.0.0
openvino-telemetry 2025.0.0
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 24.3.1
propcache 0.2.1
protobuf 5.29.3
psutil 6.1.1
psycopg2-binary 2.9.10
py-cpuinfo 9.0.0
pyarrow 19.0.0
pycparser 2.22
pydantic 2.10.5
pydantic_core 2.27.2
PyJWT 2.10.1
pyparsing 3.2.1
pysrt 1.1.2
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-magic 0.4.27
python-multipart 0.0.20
pytorch-lightning 2.5.0.post0
pytz 2025.1
PyYAML 6.0.2
pyzmq 26.2.0
ray 2.40.0
referencing 0.36.2
requests 2.32.3
rpds-py 0.22.3
safetensors 0.5.2
scipy 1.15.1
seaborn 0.13.2
setuptools 75.8.0
simsimd 6.2.1
six 1.17.0
sniffio 1.3.1
SQLAlchemy 2.0.37
sqlmodel 0.0.22
starlette 0.41.3
stringzilla 3.11.3
sympy 1.13.1
tensorboardX 2.6.2.2
tensorrt 10.7.0.post1
tensorrt_cu12 10.7.0.post1
tensorrt-cu12-bindings 10.7.0.post1
tensorrt-cu12-libs 10.7.0.post1
timm 1.0.14
torch 2.6.0+cu126
torch_tensorrt 2.6.0+cu126
torchaudio 2.6.0+cu126
TorchCodec 0.2.0+cu126
torchmetrics 1.0.3
torchvision 0.21.0+cu126
tqdm 4.67.1
triton 3.2.0
typing_extensions 4.12.2
tzdata 2025.1
ultralytics 8.3.76
ultralytics-thop 2.0.14
urllib3 2.3.0
uvicorn 0.34.0
websockets 14.2
wheel 0.45.1
yarl 1.18.3
### Minimal Reproducible Example
```python
DEVICE0 = "cuda:1"
torch.set_default_device(device=DEVICE0)
with torch.cuda.device(device=DEVICE0):
    model = YOLO("yolo11m.pt")
    model.export(format="engine", half=True, imgsz=TRACK_HW, batch=BATCH_SIZE, dynamic=True, device=DEVICE0)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-19T10:30:55Z | 2025-02-20T18:39:51Z | https://github.com/ultralytics/ultralytics/issues/19310 | [
"exports"
] | liwtw | 4 |
aimhubio/aim | tensorflow | 2,539 | Aim telemetry hangs in air-gapped environment | ## 🐛 Bug
Within an environment with no internet access, the Aim SDK can hang on exit while Segment attempts to flush telemetry data to its servers.
### To reproduce
Create a `run_aim.py` script:
```python
import aim
import logging
logging.basicConfig()
run = aim.Run()
```
and run `python run_aim.py`.
This hangs on exit and produces log messages like this:
```
INFO:backoff:Backing off send_request(...) for 0.8s (requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='api.segment.io', port=443): Max retries exceeded with url: /v1/batch (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe251a0f190>, 'Connection to api.segment.io timed out. (connect timeout=15)')))
INFO:backoff:Backing off send_request(...) for 1.8s (requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='api.segment.io', port=443): Max retries exceeded with url: /v1/batch (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe251a2efa0>, 'Connection to api.segment.io timed out. (connect timeout=15)')))
INFO:backoff:Backing off send_request(...) for 1.4s (requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='api.segment.io', port=443): Max retries exceeded with url: /v1/batch (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe2519ba130>, 'Connection to api.segment.io timed out. (connect timeout=15)')))
```
### Environment
- Aim Version: **3.6.0** (following https://github.com/aimhubio/aim/pull/2490/)
- Python version: 3.8.13
- pip version: 22.2.2
- OS: Ubuntu 20.04.4
### Additional context
Manually disabling Aim telemetry beforehand via `$ aim telemetry off` (within the same Python environment) resolves the issue, but it would be great if this wasn't necessary! | closed | 2023-02-09T16:41:44Z | 2023-03-10T10:20:53Z | https://github.com/aimhubio/aim/issues/2539 | [
"type / bug",
"help wanted",
"phase / review-needed",
"priority / critical-urgent"
] | wlhjason | 5 |
dpgaspar/Flask-AppBuilder | flask | 2,158 | it's possible to change the table name for user/role/permission tables? | By default the models/tables defined in [flask_appbuilder/security/sqla/models.py](https://github.com/dpgaspar/Flask-AppBuilder/blob/master/flask_appbuilder/security/sqla/models.py) have the `ab_` prefix
I am trying to extend the data model (adding some columns) with a subclass - as suggested in [Extending the User Model](https://flask-appbuilder.readthedocs.io/en/latest/security.html#extending-the-user-model) - but I also would like to change the table names, so `ab_user` will became `users`.
It's possible? if I naively set
```python
class MyUser(User):
__tablename__ = 'users'
...
```
I end up with `sqlalchemy.exc.ArgumentError: Can't place __table_args__ on an inherited class with no table.` during the import of the module. | open | 2023-11-06T11:59:39Z | 2023-12-22T06:56:39Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2158 | [] | ZeeD | 2 |
huggingface/diffusers | deep-learning | 10,565 | Different generation with `Diffusers` in I2V tasks for LTX-video | ### Describe the bug
Hello, I encountered an issue with the generation when attempting the I2V task using `Diffusers`. Is there any difference between the `diffusers` implementation and the `LTX-video-inference scripts` in the I2V task?
- The above is the result from `inference.py`, and the following are the results generated with `diffusers`.
- Prompts: `a person`
https://github.com/user-attachments/assets/6e2aeeaf-c52b-402c-ae92-aff2d325464b
https://github.com/user-attachments/assets/59f815ad-1746-4ec5-ae1c-a47dcfa0fd02
https://github.com/user-attachments/assets/8ca3c79b-8003-4fa2-82b1-8ae17beccb9c
- test img

Besides, it seems that the text prompt has a significant impact on the I2V generation with 'diffusers'. Could I be missing any important arguments?
https://huggingface.co/docs/diffusers/api/pipelines/ltx_video
- results
https://github.com/user-attachments/assets/c062c21f-5611-4860-ba17-441dd26a8913
https://github.com/user-attachments/assets/991ec853-ee26-43a7-914b-622d115a9b7f
https://github.com/user-attachments/assets/ff3e7f04-c17d-4f0a-9aba-2db68aae792d
https://github.com/user-attachments/assets/f2699759-c36e-4839-bddd-37b84a85e2c7
### Reproduction
- for LTX-video generation
https://github.com/Lightricks/LTX-Video/blob/main/inference.py
```
python inference.py \
--ckpt_path ./pretrained_models/LTX-Video \
--output_path './samples' \
--prompt "A person." \
--input_image_path ./samples/test_cases.png \
--height 512 \
--width 512 \
--num_frames 49 \
--seed 42
```
- for `diffusers` generation: it seems that the negative prompts are causing the issues. However, even when I remove them, the results are still not satisfactory.
```python
import argparse
import torch
from diffusers import LTXVideoTransformer3DModel
from diffusers import LTXImageToVideoPipeline
from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKLLTXVideo
from diffusers.utils import export_to_video, load_image, load_video
from moviepy import VideoFileClip, AudioFileClip
import numpy as np
from pathlib import Path
import os
import imageio
from einops import rearrange
from PIL import Image
import random


def seed_everething(seed: int):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)


def generate_video(args):
    pipe = LTXImageToVideoPipeline.from_pretrained(args.ltx_model_path, torch_dtype=torch.bfloat16)
    pipe.to("cuda")
    negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
    image = load_image(args.validation_image)
    prompt = "A person."
    negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
    generator = torch.Generator(
        device="cuda" if torch.cuda.is_available() else "cpu"
    ).manual_seed(42)
    video = pipe(
        image=image,
        prompt=prompt,
        guidance_scale=3,
        # stg_scale=1,
        generator=generator,
        callback_on_step_end=None,
        negative_prompt=negative_prompt,
        width=512,
        height=512,
        num_frames=49,
        num_inference_steps=50,
        decode_timestep=0.05,
        decode_noise_scale=0.025,
    ).frames[0]
    export_to_video(video, args.output_file, fps=24)
```
- for demo images with different text prompts
https://huggingface.co/docs/diffusers/api/pipelines/ltx_video
```python
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = LTXImageToVideoPipeline.from_pretrained("./pretrained_models/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = load_image("samples/image.png")
prompt = "A young girl stands."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipe(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
modified_prompt = "-".join(prompt.split()[:14])
export_to_video(video, f"samples/test_out/demo-{modified_prompt}.mp4", fps=24)
```
### Logs
```shell
```
### System Info
torch 2.4.1
torchao 0.7.0
torchvision 0.19.1
diffusers 0.32.1
python 3.10
### Who can help?
_No response_ | open | 2025-01-14T03:24:06Z | 2025-03-14T15:04:00Z | https://github.com/huggingface/diffusers/issues/10565 | [
"bug",
"stale"
] | Kaihui-Cheng | 8 |
mithi/hexapod-robot-simulator | dash | 30 | Check stability in ik_solver module | - Check pose stability. The inverse kinematics solver doesn't yet actually check if the center of gravity is inside its support polygon.
| closed | 2020-04-11T09:57:34Z | 2020-04-23T16:12:04Z | https://github.com/mithi/hexapod-robot-simulator/issues/30 | [
"feature request"
] | mithi | 1 |
PokeAPI/pokeapi | api | 817 | Pokemon IDs #980 and #987 Does Not Exist | https://pokeapi.co/api/v2/pokemon-species/980 and https://pokeapi.co/api/v2/pokemon-species/987 does not exist. This moves every pokemon after it into an even more wrong id than they all already are. There already is an issue for all of the wrong IDs, but this is a lapse in pokemon existing at all.
| closed | 2023-01-13T02:38:15Z | 2023-01-17T12:58:51Z | https://github.com/PokeAPI/pokeapi/issues/817 | [] | NathanGuidry | 0 |
x-tabdeveloping/topicwizard | dash | 36 | Add Querying Documents interactively based on topic axes. | Imagine the following scenario:
You're a restaurant branch, you run S3 on reviews you got from your customers, and you get a handful of interesting axes.
You would most likely want to query documents based on values on this axis to see for instance which reviews are negative in valence and talk about the food.
A good solution to this would be an interactive little app, where you can add or remove sliders over different topic axes, and the app would show documents that rank closest to the set values on the given axes.
All axes not added would be ignored.
I think this would be immensely useful so we should totally implement it. | open | 2024-04-14T12:47:35Z | 2024-04-14T12:47:36Z | https://github.com/x-tabdeveloping/topicwizard/issues/36 | [] | x-tabdeveloping | 0 |
marcomusy/vedo | numpy | 1,231 | Normals documentation and initialisation | Hello,
I've been working with vedo for a while, and one challenge I encountered was understanding how the `vertex_normals` and `cell_normals` properties function. It took me some time to figure out that these properties are not automatically computed and require explicit initialization.
The documentation currently states: "Check out also `compute_normals()` and `compute_normals_with_pca()`." However, it does not explicitly mention that one of these methods must be called for the `vertex_normals` or `cell_normals` properties to be correctly set. This requirement is only implied through examples, which may not be immediately clear to users. I believe it would be beneficial to make this point more explicit in the documentation.
Additionally, I wonder whether this initialization process could be handled more intuitively. Specifically, why can't the normals be computed automatically when the `vertex_normals` or `cell_normals` properties are accessed for the first time? By default, `compute_normals()` could be called lazily when these properties are retrieved. This approach would require less boilerplate code, and a more intuitive and user-friendly interface.
To summarise:
1. **Implicit Normal Initialization** - I think that calling `vertex_normals` or `cell_normals` should initialise the normals implicitly, if they are not already initialised.
2. **Improved Documentation** - If implicit initialisation is not desirable, still the documentation should be explicit about the fact that `compute_normals()` or `compute_normals_with_pca()` **must be called before** accessing normals.
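The lazy-initialization idea in point 1 can be sketched with a plain Python property (a toy stand-in, not vedo's actual classes; the normal values here are placeholders):

```python
class Mesh:
    """Toy stand-in for a mesh object (not vedo's real class)."""

    def __init__(self, points):
        self.points = points
        self._vertex_normals = None

    def compute_normals(self):
        # Placeholder computation; real code would average face normals.
        self._vertex_normals = [(0.0, 0.0, 1.0) for _ in self.points]
        return self

    @property
    def vertex_normals(self):
        # Lazy initialization: compute on first access if not yet set.
        if self._vertex_normals is None:
            self.compute_normals()
        return self._vertex_normals


m = Mesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
print(len(m.vertex_normals))  # 3 -- computed implicitly on first access
```

An explicit `compute_normals()` call would still be available for users who want different parameters; the property only supplies a sensible default.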
I would be happy to implement that myself. I would be happy to hear thoughts. | closed | 2025-03-12T16:28:25Z | 2025-03-15T14:18:33Z | https://github.com/marcomusy/vedo/issues/1231 | [] | CorpsSansOrganes | 1 |
microsoft/nni | machine-learning | 5,423 | define HPO process in one file | Hi, in the example [Port PyTorch Quickstart to NNI](https://nni.readthedocs.io/en/stable/tutorials/hpo_quickstart_pytorch/model.html), the search space and experiment are defined in one file, and the trial is defined in another file.
Is there a way to put them together in a single .py file, just like NAS (defining a fit function and using FunctionalEvaluator and RetiariiExperiment)?
Thanks a lot. | closed | 2023-03-07T12:44:30Z | 2023-03-10T05:53:19Z | https://github.com/microsoft/nni/issues/5423 | [] | heibaidaolx123 | 2 |
yunjey/pytorch-tutorial | deep-learning | 160 | where is models folder located | closed | 2019-03-02T14:09:37Z | 2019-03-02T20:01:57Z | https://github.com/yunjey/pytorch-tutorial/issues/160 | [] | anjalinagel12 | 0 | |
deepfakes/faceswap | machine-learning | 1,351 | TypeError: 'type' object is not subscriptable | (faceswap3.9) zcb@zcb:~/ys1/faceswap-master$ python faceswap.py -h
Setting Faceswap backend to CPU
Traceback (most recent call last):
File "faceswap.py", line 12, in <module>
from lib.cli import args as cli_args # pylint:disable=wrong-import-position
File "/home/zcb/ys1/faceswap-master/lib/cli/args.py", line 14, in <module>
from lib.gpu_stats import GPUStats
File "/home/zcb/ys1/faceswap-master/lib/gpu_stats/__init__.py", line 9, in <module>
from ._base import set_exclude_devices, GPUInfo
File "/home/zcb/ys1/faceswap-master/lib/gpu_stats/_base.py", line 11, in <module>
_EXCLUDE_DEVICES: list[int] = []
TypeError: 'type' object is not subscriptable
| closed | 2023-09-18T03:29:56Z | 2023-09-18T07:49:05Z | https://github.com/deepfakes/faceswap/issues/1351 | [] | oolYang | 7 |
erdewit/ib_insync | asyncio | 383 | Empty Ticker.ticks | Hello, I have the following function:
```
def get_underlying_price(
    symbol: str, ib: IB, currency: str, position: ib_insync.PortfolioItem
) -> float:
    contract = get_contract(symbol, position)
    print(f"{contract=}")
    ib.reqMktData(contract, "", False, False)
    ticker = ib.ticker(contract)
    ib.sleep(5)
    print(f"{ticker=}")
    print(f"{ticker.marketPrice()=}")
    print(ticker.ticks)
    ib.cancelMktData(contract)
```
And an output always this:
```
contract=Stock(symbol='AAPL', exchange='SMART', currency='USD')
ticker=Ticker(contract=Stock(symbol='AAPL', exchange='SMART', currency='USD'))
ticker.marketPrice()=nan
[]
```
So, I always get an empty list in `ticker.ticks`, but I can get data using 'BidAsk' and `last`
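A hedged sketch of the usual pattern (an assumption based on ib_insync's event model: `Ticker.ticks` only holds the ticks of the most recent update batch and is cleared between batches, so it is normally read inside a `pendingTickersEvent` handler; the handler itself is plain Python):

```python
def collect_ticks(tickers):
    """Gather (symbol, price, size) rows from one pendingTickersEvent batch."""
    rows = []
    for t in tickers:
        for tick in t.ticks:  # non-empty only while the batch is being delivered
            rows.append((t.contract.symbol, tick.price, tick.size))
    return rows

# Wiring it up against a live/paper connection (ib, contract as in the question):
# ib.pendingTickersEvent += lambda tickers: print(collect_ticks(tickers))
# ib.reqMktData(contract, "", False, False)
# ib.sleep(5)
```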
Please help,
Thank you. | closed | 2021-06-19T18:35:50Z | 2021-07-29T14:55:29Z | https://github.com/erdewit/ib_insync/issues/383 | [] | dmitriiweb | 1 |
gyli/PyWaffle | matplotlib | 11 | block present when value is 0 | Hi! Thanks for creating PyWaffle and the useful examples.
I started using it today and everything is great, except that sometimes I have a value of 0 for some items. But when the waffle chart is generated, a block is still present for the category with a value of 0.
<img width="849" alt="Screen Shot 2019-08-30 at 4 53 12 PM" src="https://user-images.githubusercontent.com/5349064/64050860-46a75a00-cb47-11e9-805f-5827373dd322.png">
Could you look into making it so that there is no block for a label if the value is 0?
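Until that is supported, a hedged user-side workaround is to drop zero-valued categories (and their legend labels) before handing the dict to `Waffle`:

```python
data = {"Cat A": 30, "Cat B": 16, "Cat C": 0}

nonzero = {k: v for k, v in data.items() if v > 0}
# plt.figure(FigureClass=Waffle, rows=5, values=nonzero,
#            labels=list(nonzero))  # no block (or legend entry) for "Cat C"
```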
| closed | 2019-08-30T20:59:58Z | 2019-09-16T08:14:12Z | https://github.com/gyli/PyWaffle/issues/11 | [] | glaubius | 1 |
15r10nk/inline-snapshot | pytest | 127 | `--snap-fix` CLI argument | As per #123, my bad for creating an issue with multiple points.
Please can we have a new `--snap-fix` or `--sfix` argument (or something else concise) which implies both `--inline-snapshot=fix` and `--inline-snapshot=create`.
I'd also be fine if `--sfix` was a concise version of `--inline-snapshot=fix`, and there was a config setting to make `--inline-snapshot=fix` imply `--inline-snapshot=create`. | closed | 2024-11-05T12:02:20Z | 2024-11-11T09:18:28Z | https://github.com/15r10nk/inline-snapshot/issues/127 | [] | samuelcolvin | 6 |
Colin-b/pytest_httpx | pytest | 82 | Add a way to reset HTTPXMock._requests | I started using httpx_mock and tried to mock multiple requests to one resource.
Then I expected `httpx_mock.reset()` to reset the state of HTTPXMock. However, it resets only the callbacks; `self._requests` is left unchanged.
Generally:
```py
def test_something(httpx_mock: HTTPXMock):
    # custom setup that has to be here, but includes mocked requests
    httpx_mock.reset()
    for i in range(5):
        httpx.request(...)
    assert len(httpx_mock.get_requests()) == 5  # fails, because of the mocked requests in the "setup" part. There is no way to reset the list of requests
```
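A hedged workaround until `reset()` covers this; note that it reaches into the private `_requests` attribute, which is an assumption on my side and may break across versions:

```python
def reset_recorded_requests(httpx_mock):
    """Clear recorded requests without touching registered responses/callbacks."""
    httpx_mock._requests.clear()  # private attribute: may change between versions
```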
Is there a reason why .reset() is not resetting requests list? | closed | 2022-08-22T11:13:23Z | 2022-11-03T21:06:28Z | https://github.com/Colin-b/pytest_httpx/issues/82 | [
"enhancement"
] | Normale | 2 |
stanfordnlp/stanza | nlp | 618 | [QUESTION] How to update models after upgrading library? | Hi
I upgraded today to 1.2 to use the new UD 2.7 models. However, after upgrading the library, it seems as if you cannot have different model versions of the same language. As a result, I had to delete the language folder in `stanza_resources` before I could download the new models. Is this intended behavior? It would be cool if the version of models could be satisfied and downloaded as you require.
Thanks | closed | 2021-02-08T16:53:31Z | 2021-02-26T07:52:52Z | https://github.com/stanfordnlp/stanza/issues/618 | [
"question"
] | BramVanroy | 2 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 379 | 小白求助,用的方法二,在打开localhost的时候显示failed to load resource怎么解决 | 小白求助,用的方法二,在打开localhost的时候显示failed to load resource怎么解决
| closed | 2024-05-02T03:26:11Z | 2024-06-14T08:41:27Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/379 | [
"enhancement"
] | xhj0214 | 5 |
newpanjing/simpleui | django | 253 | Adding simpleui without a web server | I thought for a while about whether to write this up. Given how detailed simpleui's documentation is, I still felt this issue was worth mentioning.
Scenario and problem: with no web server and with Django debug mode turned off, the directory exists and the files have been cloned, but the admin page still reports that files cannot be found. The run command can be changed to:
python manage.py runserver --insecure ip:port
Really sorry, I did not notice the discussion area at the bottom. | closed | 2020-04-15T15:17:50Z | 2020-04-15T15:22:10Z | https://github.com/newpanjing/simpleui/issues/253 | [
"enhancement"
] | neoshui | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 261 | [BUG] Briefly and clearly describe the problem | The Shortcut no longer works; it keeps prompting for an update even though it is already the latest version | closed | 2023-08-31T18:30:21Z | 2023-09-05T12:06:03Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/261 | [] | kkkouo | 1 |
keras-team/keras | pytorch | 20,423 | AttributeError: 'KerasHistory' object has no attribute 'layer' | I'm encountering the error "AttributeError: 'KerasHistory' object has no attribute 'layer'"
while working with a Keras model.
I'm trying to access layer information, but it seems I'm referencing the wrong object. The TensorFlow version is 2.17.0. I tried changing the name `layer` to `operation`, but it's not working.
This is the code:
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.layers import Input, ZeroPadding2D, Conv2D, MaxPooling2D, BatchNormalization, Activation, Add, AveragePooling2D, Flatten, Dense, Dropout
input_shape = (96, 96, 1)
X_input = Input(input_shape)
X = ZeroPadding2D((3,3))(X_input)
X = Conv2D(64, (7,7), strides= (2,2), name = 'conv1', kernel_initializer= glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis =3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3,3), strides= (2,2))(X)
X = res_block(X, filter= [64,64,256], stage= 2)
X = res_block(X, filter= [128,128,512], stage= 3)
X = AveragePooling2D((2,2), name = 'Averagea_Pooling')(X)
X = Flatten()(X)
X = Dense(4096, activation = 'relu')(X)
X = Dropout(0.2)(X)
X = Dense(2048, activation = 'relu')(X)
X = Dropout(0.1)(X)
X = Dense(30, activation = 'relu')(X)
model_1_facialKeyPoints = Model( inputs= X_input, outputs = X)
model_1_facialKeyPoints.summary()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-366-fd266d53d661> in <cell line: 34>()
32
33
---> 34 model_1_facialKeyPoints = Model( inputs= X_input, outputs = X)
35 model_1_facialKeyPoints.summary()
4 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/keras/engine/functional.py in _validate_graph_inputs_and_outputs(self)
692 # Check that x is an input tensor.
693 # pylint: disable=protected-access
--> 694
695 layer = x._keras_history.layer
696 if len(layer._inbound_nodes) > 1 or (
AttributeError: 'KerasHistory' object has no attribute 'layer' | closed | 2024-10-29T03:37:36Z | 2024-10-29T18:43:01Z | https://github.com/keras-team/keras/issues/20423 | [
"type:Bug"
] | Neta-Robinzon-Butbul | 3 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 410 | Audioread "no backend" issue | This comes up very frequently for Windows users. `pip install -r requirements.txt` does not install a backend for audioread (a dependency of librosa). This causes an exception when loading any file that requires conversion, such as .mp3.
Can we make use of soundfile to load and convert audio files, instead of librosa.load? We already have soundfile in requirements.txt and it automatically installs all prerequisites on Windows and macOS. Linux users need to install `libsndfile1`.
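A minimal sketch of the proposed loader (assumptions: mono downmix is acceptable and the keyword-argument `librosa.resample` signature from librosa 0.8+ is available; imports are lazy so only the call sites need the libraries installed):

```python
def load_audio(path, target_sr):
    import soundfile as sf  # no audioread backend required
    import librosa          # used only for resampling
    wav, sr = sf.read(path, dtype="float32")
    if wav.ndim > 1:        # downmix multi-channel to mono
        wav = wav.mean(axis=1)
    if sr != target_sr:
        wav = librosa.resample(wav, orig_sr=sr, target_sr=target_sr)
    return wav
```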
This will prevent users from encountering the "no backend" error. Alternatively we can add a step to README.md asking the user to install ffmpeg before running the toolbox. But this seems more elegant if it works. | closed | 2020-07-09T03:08:10Z | 2020-07-10T08:25:43Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/410 | [] | ghost | 3 |
Yorko/mlcourse.ai | data-science | 667 | The slack team url at mlcourse.ai#community gives 404 error | The link mentioned at "Discussions are held in the #mlcourse_ai channel of the [OpenDataScience (ods.ai)](https://ods.ai/en/) Slack team." returns a 404 error | closed | 2020-06-02T20:20:42Z | 2020-06-06T07:57:35Z | https://github.com/Yorko/mlcourse.ai/issues/667 | [
"minor_fix"
] | sidgupta234 | 1 |
microsoft/nlp-recipes | nlp | 361 | [BUG] We should have a separate package yaml file for the utils_nlp package | ### Description
Right now all dependencies are generated by running the "generate_conda_file.py" script to produce the .yaml file, but it would be better if we had a separate dependency file for the utils_nlp package.
When we release utils_nlp as a pip-installable package, users expect to have all dependencies listed somewhere. On my Windows machine (with GPU), even though the pytorch-pretrained-bert and dask packages are listed in the nlp_gpu.yaml file, they were not installed properly. I had to install those packages manually to run the notebook "entailment_multinli_bert".
### How do we replicate the bug?
### Expected behavior (i.e. solution)
### Other Comments
| closed | 2019-08-22T19:37:36Z | 2019-11-25T17:48:40Z | https://github.com/microsoft/nlp-recipes/issues/361 | [
"bug",
"bug-bash"
] | kehuangms | 1 |
AirtestProject/Airtest | automation | 704 | @logwrap raise json exception | Tried to use something like this:
```python
from airtest.core.helper import (G, delay_after_operation, import_device_cls,
                                 logwrap, set_logdir, using, log)

class My_class():
    @logwrap
    def myfunction(self, myargements):
        ...  # some actions

Exemple = My_class()
Exemple.myfunction(argements)
```
Everytime get this:
Traceback (most recent call last):
File "D:\python\lib\site-packages\airtest\cli\runner.py", line 65, in runTest
six.reraise(*sys.exc_info())
File "D:\python\lib\site-packages\six.py", line 693, in reraise
raise value
File "D:\python\lib\site-packages\airtest\cli\runner.py", line 61, in runTest
exec(compile(code.encode("utf-8"), pyfilepath, 'exec'), self.scope)
File "D:\AirtestIDE_2019-09-11_py3_win64\projects\DressUp\AirTestScripts\untitled.air\untitled.py", line 42, in <module>
click("NPC")
File "D:\python\lib\site-packages\airtest\utils\logwraper.py", line 71, in wrapper
res = f(*args, **kwargs)
File "D:\AirtestIDE_2019-09-11_py3_win64\projects\DressUp\AirTestScripts\untitled.air\untitled.py", line 41, in click
click(name)
File "D:\AirtestIDE_2019-09-11_py3_win64\projects\DressUp\AirTestScripts\untitled.air\untitled.py", line 40, in click
Alt.click(name)
File "D:\python\lib\site-packages\airtest\utils\logwraper.py", line 79, in wrapper
logger.log('function', fndata)
File "D:\python\lib\site-packages\airtest\utils\logwraper.py", line 50, in log
log_data = json.dumps({'tag': tag, 'depth': depth, 'time': time.time(), 'data': data}, default=self._dumper)
File "D:\python\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "D:\python\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "D:\python\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not type
I expected that my function would appear in the HTML report.
**python version:** `python3.6`
**airtest version:** `1.2.3`
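For context, the traceback shows `json.dumps` rejecting the log payload because a dict key is a class object. This is not Airtest's actual fix, just a hedged illustration of how a logger could sanitize such keys before dumping:

```python
import json

def safe_dumps(data):
    """Stringify non-JSON dict keys recursively, then dump with default=repr."""
    def fix(obj):
        if isinstance(obj, dict):
            return {
                k if isinstance(k, (str, int, float, bool, type(None))) else str(k): fix(v)
                for k, v in obj.items()
            }
        if isinstance(obj, (list, tuple)):
            return [fix(v) for v in obj]
        return obj
    return json.dumps(fix(data), default=repr)
```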
| closed | 2020-03-13T10:43:37Z | 2020-03-30T07:03:43Z | https://github.com/AirtestProject/Airtest/issues/704 | [] | farosep | 1 |
strawberry-graphql/strawberry | django | 3,396 | Can Strawberry created dataclasses be `frozen` by default? | ## Feature Request Type
- [ ] Core functionality
- [X] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Need
Since [1.1.328](https://github.com/microsoft/pyright/releases/tag/1.1.328) the Pyright type checker has become more strict w.r.t. detecting incompatible variable overrides.
Specifically code like this:
```python
class MyParent:
    my_field: int | str

class MyChild(MyParent):
    my_field: int  # `reportIncompatibleVariableOverride` detected here
```
Now triggers a type error, because of this failure scenario:
```python
def mutate_parent(p: MyParent) -> None:
    p.my_field = "foo"

def process_child(c: MyChild) -> None:
    mutate_parent(c)
    c.my_field  # the typechecker believes the type is `int`, but the actual value at runtime is `"foo"` and the type is `str`
```
However, if `MyParent` was a frozen dataclass everything would be fine – because Pyright would be convinced that the value isn't mutated when processing the value using the wider parent type.
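A plain-dataclass sketch of the argument (the Pyright behavior is the claim made above; only the runtime effect of `frozen=True` is demonstrated here):

```python
from dataclasses import dataclass, FrozenInstanceError
from typing import Union

@dataclass(frozen=True)
class MyParent:
    my_field: Union[int, str]

@dataclass(frozen=True)
class MyChild(MyParent):
    my_field: int  # narrowing is safe: nothing can mutate via the parent type

def mutate_parent(p: MyParent) -> None:
    p.my_field = "foo"  # flagged by the type checker, FrozenInstanceError at runtime
```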
A scenario where this is useful is the interaction between GraphQL types and the interfaces they implement. It's common for a GraphQL interface to be wider than the specific type that implements it. In order to represent this through Python inheritance, any such field needs to be implemented as a Python method, rather than an instance/class variable. This is because the return types of the former are interpreted as immutable by Pyright.
**Therefore, I suggest that dataclasses created to represent Strawberry GraphQL types are frozen dataclasses.** I don't think there is any real use case for mutating field values in an already constructed Python object.
## Implementation
I'm sure the solution is somewhere in [PEP 681](https://peps.python.org/pep-0681/), especially because there are plenty of occurrences of the term `frozen` in its text. However, it's not immediately clear to me how this should be done – I've never written a dataclass transform myself.
Happy to dig into it a little bit myself, if nobody else has an immediate idea how to do it though. :) | open | 2024-02-25T12:14:47Z | 2025-03-20T15:56:37Z | https://github.com/strawberry-graphql/strawberry/issues/3396 | [] | kkom | 3 |
xinntao/Real-ESRGAN | pytorch | 672 | Question about the discriminator loss | Hello, first of all thank you very much for your excellent work. I have a question for you: what does the discriminator loss curve look like during training? I transferred this discriminator model to a VAE reconstruction application, and during training, regardless of whether your pretrained discriminator weights are loaded, the GAN loss stays essentially unchanged. Is this normal? Looking forward to your reply, many thanks! | open | 2023-08-08T09:32:34Z | 2024-04-25T12:56:54Z | https://github.com/xinntao/Real-ESRGAN/issues/672 | [] | alexzdy | 4 |
nerfstudio-project/nerfstudio | computer-vision | 3,535 | nerfacto automatically downscaling to max 1500 pixels | When I run the given nerfacto command (following the documentation exactly), it downscales to max 1500 pixels ( 50 x 30) and thus the scene is basically unrecognizable. I tried changing this by setting --downscale-factor 1 but it isn't working (stuck in training) and --downscale-factor 2 also gives max 1500 pixels. If I need at least 720p what should I do?
| open | 2024-11-27T17:11:16Z | 2024-11-27T17:11:16Z | https://github.com/nerfstudio-project/nerfstudio/issues/3535 | [] | leyuheon1 | 0 |
graphql-python/graphene-django | graphql | 798 | Updating the Docs with a `diff` view | Hey,
It would be super cool if you could add a `diff` view in the docs for the https://docs.graphene-python.org/projects/django/en/latest/tutorial-plain/#getting-single-objects code block.
Plus, you could also add the filename to be changed, since there are two `schema.py` files, one in *cookbook* and one in *ingredients*.
You could update the Sphinx config to highlight diffs so they display like this (example):
```diff
 import graphene
 from graphene_django.types import DjangoObjectType
 from cookbook.ingredients.models import Category, Ingredient

 class CategoryType(DjangoObjectType):
     class Meta:
         model = Category

 class IngredientType(DjangoObjectType):
     class Meta:
         model = Ingredient

 class Query(object):
+    category = graphene.Field(CategoryType,
+                              id=graphene.Int(),
+                              name=graphene.String())
     all_categories = graphene.List(CategoryType)
+    ingredient = graphene.Field(IngredientType,
+                                id=graphene.Int(),
+                                name=graphene.String())
     all_ingredients = graphene.List(IngredientType)

     def resolve_all_categories(self, info, **kwargs):
+        return Category.objects.all()
+
+    def resolve_all_ingredients(self, info, **kwargs):
+        return Ingredient.objects.all()
+
+    def resolve_category(self, info, **kwargs):
+        id = kwargs.get('id')
+        name = kwargs.get('name')
+
+        if id is not None:
+            return Category.objects.get(pk=id)
+
+        if name is not None:
+            return Category.objects.get(name=name)
+
+        return None
+
+    def resolve_ingredient(self, info, **kwargs):
+        id = kwargs.get('id')
+        name = kwargs.get('name')
+
+        if id is not None:
+            return Ingredient.objects.get(pk=id)
+
+        if name is not None:
+            return Ingredient.objects.get(name=name)
+
+        return None
```
This way, people new to graphene could understand how the file should be changed.
Thanks and I totally love this project :heart: | closed | 2019-10-13T07:29:33Z | 2019-10-31T23:31:31Z | https://github.com/graphql-python/graphene-django/issues/798 | [] | athul | 2 |
encode/databases | sqlalchemy | 274 | Create a standard, developer- and user-friendly feedback system after a query is executed | For commands like `execute` and `execute_many`: if the query is an insert, return the new id of the inserted row.
If the query is an update, return the number of updated rows. | closed | 2020-12-11T10:15:51Z | 2021-03-25T15:05:57Z | https://github.com/encode/databases/issues/274 | [] | abhijitgujar86 | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 847 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_f | ### Check before submitting issues
- [X] Make sure to pull the latest code, as some issues and bugs have been fixed.
- [X] Due to frequent dependency updates, please ensure you have followed the steps in our [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/FAQ) AND searched for similar issues and did not find a similar problem or solution
- [X] Third-party plugin issues - e.g., [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), we recommend checking the corresponding project for solutions
- [X] Model validity check - Be sure to check the model's [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md). If the model is incorrect, we cannot guarantee its performance
### Type of Issue
Model training and fine-tuning
### Base Model
LLaMA-7B
### Operating System
Linux
### Describe your issue in detail
```
Thanks for your work
I run command:
. run_pt.sh
But I get an error when running it. I searched around, but all the replies say it has already been fixed; I reinstalled with another version and it still does not work.
I merged new tokens. Does that cause these errors, and do I need to set use_cache=False?
```
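For reference, a commonly suggested mitigation for this error when gradient checkpointing is combined with PEFT/LoRA (an assumption about this setup, not something confirmed in the report): make sure the inputs to the checkpointed blocks require grad.

```python
# Before trainer.train(), roughly (Transformers API assumed):
#   model.config.use_cache = False      # caching is incompatible with checkpointing
#   model.enable_input_require_grads()  # Transformers helper; roughly equivalent to
#                                       # registering this hook on the embeddings:

def make_inputs_require_grad(module, inputs, output):
    output.requires_grad_(True)

# model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
```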
### Dependencies (must be provided for code-related issues)
_No response_
### Execution logs or screenshots
```
[WARNING|logging.py:305] 2023-09-24 16:30:30,071 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
[WARNING|logging.py:305] 2023-09-24 16:30:30,086 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
Traceback (most recent call last):
File "run_clm_pt_with_peft.py", line 642, in <module>
main()
File "run_clm_pt_with_peft.py", line 610, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1536, in train
return inner_training_loop(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 2665, in training_step
self.accelerator.backward(loss)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/accelerator.py", line 1838, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/utils/deepspeed.py", line 167, in backward
self.engine.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1923, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1958, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Traceback (most recent call last):
File "run_clm_pt_with_peft.py", line 642, in <module>
main()
File "run_clm_pt_with_peft.py", line 610, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1536, in train
return inner_training_loop(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 2665, in training_step
self.accelerator.backward(loss)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/accelerator.py", line 1838, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/utils/deepspeed.py", line 167, in backward
self.engine.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1923, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1958, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
wandb: Waiting for W&B process to finish... (failed 1).
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/tupk/tupk/nlp/Chinese-LLaMA-Alpaca/scripts/training/wandb/offline-run-20230924_163000-ydl05elp
wandb: Find logs at: ./wandb/offline-run-20230924_163000-ydl05elp/logs
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3137437) of binary: /home/tupk/anaconda3/envs/nlp/bin/python
Traceback (most recent call last):
File "/home/tupk/anaconda3/envs/nlp/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_clm_pt_with_peft.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2023-09-24_16:30:35
host : ai-gpu-server
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 3137438)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-09-24_16:30:35
host : ai-gpu-server
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 3137437)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
``` | closed | 2023-09-24T09:37:36Z | 2023-10-25T12:37:43Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/847 | [] | phamkhactu | 2 |
microsoft/JARVIS | pytorch | 22 | Can Nvidia's 40 series graphics card be used for the current project? | Can Nvidia's 40 series graphics card be used for the current project?
As far as I can see, the system requirements are as follows:
Ubuntu 16.04 LTS
NVIDIA GeForce RTX 3090 * 1
RAM > 24GB | closed | 2023-04-04T11:56:24Z | 2023-04-06T19:41:52Z | https://github.com/microsoft/JARVIS/issues/22 | [] | GothicFox | 5 |
indico/indico | flask | 6,653 | Abstract submission notifications come from the no-reply email, making users think they can't reply, even if reply-to is set | **Describe the bug**
This seems to be working as intended, but the way it works is very confusing for our event managers, as well as people submitting abstracts.
If you set up an abstract notification and set a reply-to, the From address is still the no-reply user, which is confusing to the user. We have set things up so that they can respond to the notification, but they can't.
**To Reproduce**
Steps to reproduce the behavior:
1. In indico config set up the no-reply address as an actual no-reply address
2. In a conference turn on Abstract submissions
3. Set up a notification with a reply-to as the conference organizer
4. As a test user, sign-up for the site and submit an abstract
5. As test user receive email notification from "no-reply" but with reply to set as the conference organizer
**Expected behavior**
If reply-to is set, then the email should come from the reply-to set on the notification and not from the "no-reply" email for the Indico site.
**Screenshots**


| open | 2024-12-06T21:45:36Z | 2024-12-06T21:45:36Z | https://github.com/indico/indico/issues/6653 | [
"bug"
] | dwindibank | 0 |
koxudaxi/datamodel-code-generator | fastapi | 2,001 | Missing Imports When Model Is Changed as a Discriminator | **Describe the bug**
In the `Parser.parse` function, the `self.__apply_discriminator_type(model, imports)` call happens `for module, models in module_models`, before `module, models, init, imports, scoped_model_resolver` get added to the `processed_models` array. If certain conditions match, this function call has side effects on the already processed models, changing type annotations in the model but not adjusting the imports, thus leading to incorrect models.
**To Reproduce**
Example schemas
`schema.json`:
```json
{
"properties": {
"inner": {
"discriminator": {
"mapping": {
"a": "./type_1.json",
"A": "./type_1.json"
},
"propertyName": "type_"
},
"oneOf": [
{
"$ref": "./type_1.json"
}
],
"title": "Inner"
}
},
"required": [
"inner"
],
"title": "Response",
"type": "object"
}
```
`type_1.json`:
```json
{
"properties": {
"type_": {
"default": "a",
"enum": ["a", "A"],
"type": "string",
"title": "Type"
}
},
"title": "Type1",
"type": "object"
}
```
Used commandline:
```
$ datamodel-codegen --input folder/where/the/files/are --output /output --output-model-type pydantic_v2.BaseModel
```
**Expected behavior**
I would expect the resulting pydantic files to import everything they use, but for `type_1.json` I get:
```python
from __future__ import annotations
from enum import Enum
from typing import Optional
from pydantic import BaseModel, Field
class Type(Enum):
a = 'a'
A = 'A'
class Type1(BaseModel):
type_: Literal['a', 'A'] = Field(..., title='Type')
```
This model imports `Optional`, which is not used anymore. `Optional` was deleted [here](https://github.com/koxudaxi/datamodel-code-generator/blob/5727116c36563afbadfe593191286aac50a7b354/datamodel_code_generator/parser/base.py#L820), while handling `schema.json`. Before the `__apply_discriminator_type` method call for `schema.json`, the model for `type_1.json` was:
```python
class Type(Enum):
a = 'a'
A = 'A'
class Type1(BaseModel):
type_: Optional[Type] = Field('a', title='Type')
```
and after:
```python
class Type(Enum):
a = 'a'
A = 'A'
class Type1(BaseModel):
type_: Literal['a', 'A'] = Field(..., title='Type')
```
Note that the `Type1` model of `type_1.json` in `processed_models` contains the missing import, but also the stale `Optional` import.
**Version:**
- OS: macOS
- Python version: 3.12.3
- datamodel-code-generator version: using local version with commit [5727116](https://github.com/koxudaxi/datamodel-code-generator/commit/5727116c36563afbadfe593191286aac50a7b354)
**Additional context**
The missing import can easily be added by postprocessing the `processed_models`:
```python
for processed_model in processed_models:
    for model in processed_model.models:
        processed_model.imports.append(model.imports)
```
However, this does not remove the unused model (e.g. `Type` in this example) or unused imports (e.g. `Optional`).
| closed | 2024-06-12T11:54:20Z | 2024-07-01T16:47:10Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2001 | [] | luca-knaack-webcom | 0 |
microsoft/MMdnn | tensorflow | 898 | Does MMDnn support retrain? | | closed | 2020-10-05T15:19:09Z | 2020-10-05T15:20:22Z | https://github.com/microsoft/MMdnn/issues/898 | [] | calvin886 | 0 |
brightmart/text_classification | nlp | 68 | question_id question_string_list | Hello, may I ask what question_id and question_string_list refer to, respectively? Could you briefly describe the format of the prediction file? | open | 2018-07-09T06:46:46Z | 2018-07-10T05:29:33Z | https://github.com/brightmart/text_classification/issues/68 | [] | tangdouer | 2 |
ultralytics/ultralytics | python | 19,527 | About the txt saved | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When setting save_txt=True, the results of the segmentation model sometimes contain only the class and conf but no contour points (like line 4). Is this because the object is too small to extract a contour?

### Additional
_No response_ | closed | 2025-03-05T06:09:54Z | 2025-03-05T17:20:24Z | https://github.com/ultralytics/ultralytics/issues/19527 | [
"question",
"fixed",
"segment"
] | Henry0528 | 4 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,252 | sipbuild.pyproject.PyProjectOptionException during pip install | While running
pip install -r requirements.txt
got
```
╰─> [25 lines of output]
Traceback (most recent call last):
File "/Users/marco/.pyenv/versions/3.11.5/envs/rtvc/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/marco/.pyenv/versions/3.11.5/envs/rtvc/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/marco/.pyenv/versions/3.11.5/envs/rtvc/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/api.py", line 46, in build_wheel
project = AbstractProject.bootstrap('wheel',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/abstract_project.py", line 87, in bootstrap
project.setup(pyproject, tool, tool_description)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/project.py", line 586, in setup
self.apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-install-pm6odyos/pyqt5_381e2a1d4cbc42e5906a2b57d17ae409/project.py", line 63, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/project.py", line 237, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/pyqtbuild/builder.py", line 69, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
[end of output]
```
python 3.11.5 | open | 2023-09-22T21:03:23Z | 2024-06-14T08:18:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1252 | [] | marcobazzani | 1 |
horovod/horovod | deep-learning | 3,626 | Tensorflow 2 Distributed Optimizer | **Is your feature request related to a problem? Please describe.**
It's well established that tf.gradients uses less VRAM than a GradientTape in TensorFlow 2, and we've been using tf.gradients just fine for the past few years. However, Horovod recommends that TensorFlow 2 users use hvd.DistributedGradientTape.
**Describe the solution you'd like**
Utilize hvd distributed optimizer in tensorflow v2. Confirmation that this works, as we haven't found a single tensorflow 2 example that does this.
**Describe alternatives you've considered**
GradientTape: uses precious 4GB more VRAM. That's another 2 batches we can cram in out of 10.
**Additional context**
N/A | open | 2022-07-29T11:00:39Z | 2022-07-29T11:00:56Z | https://github.com/horovod/horovod/issues/3626 | [
"enhancement"
] | Apprisco | 0 |
getsentry/sentry | python | 86,998 | Visual bug with overlapping line charts | It looks like antialiasing is off or something:

https://cleptric.sentry.io/organizations/cleptric/insights/backend?project=4508604100706304&statsPeriod=14d | open | 2025-03-13T16:25:00Z | 2025-03-17T15:24:36Z | https://github.com/getsentry/sentry/issues/86998 | [] | matejminar | 0 |
miguelgrinberg/flasky | flask | 163 | Send email with celery has error | I tried using celery to send the email, but one error goes out:
EncodeError: can't pickle thread.lock objects
And the code of email.py is like this:
```python
# encoding: utf-8
from threading import Thread

from flask import current_app, render_template
from flask.ext.mail import Message

from . import mail, celery


@celery.task
def send_async_email(app, msg):
    with app.app_context():
        mail.send(msg)


def send_email(to, subject, template, **kwargs):
    app = current_app._get_current_object()
    msg = Message(app.config['FLASKY_MAIL_SUBJECT_PREFIX'] + ' ' + subject,
                  sender=app.config['FLASKY_MAIL_SENDER'], recipients=[to])
    msg.body = render_template(template + '.txt', **kwargs)
    msg.html = render_template(template + '.html', **kwargs)
    thr = Thread(target=send_async_email.delay, args=[app, msg])
    thr.start()
    return thr
```
I don't know what's wrong, and could I get some help from you? thx!
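For what it's worth, the error reproduces without Flask or Celery at all: Celery pickles task arguments, and any object that holds a `threading.Lock` (as a Flask app does internally) refuses to pickle. A minimal demonstration:

```python
import pickle
import threading


class HoldsLock:
    """Stand-in for a Flask app object: it owns a threading.Lock internally."""

    def __init__(self):
        self.lock = threading.Lock()


try:
    pickle.dumps(HoldsLock())
    outcome = "picklable"
except TypeError as exc:
    # CPython refuses to pickle lock objects, which is exactly the error above
    outcome = f"not picklable: {exc}"

print(outcome)
```

This suggests passing only serializable data (recipient, subject, rendered bodies) to the task and rebuilding the app context inside it, rather than passing the `app` and `msg` objects themselves.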
| closed | 2016-06-22T08:49:04Z | 2016-06-26T07:07:10Z | https://github.com/miguelgrinberg/flasky/issues/163 | [
"question"
] | 8cbx | 2 |
polakowo/vectorbt | data-visualization | 64 | ADVSTEX with limit entry orders? | Great package, been trying it for about a month and I've found it to be very flexible!
I've been using ADVSTEX and there doesn't seem to be an obvious way to define limit entry orders, and subsequently, filled entries. The class seems to assume that entries are filled order entries or market bought.
The entries I have built for ADVSTEX are signal-based, but I would like to move in the direction above, i.e. being able to reflect limit entry orders that are filled or cancelled (time-based).
Briefly, I think I can rudimentarily get around this by running the entries through a custom indicator to reflect only filled entry order, but that may not easily account for orders that are never filled. It seems that this package can benefit from an integrated function within ADVSTEX itself. Would like to hear your thoughts on this.
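To make the custom-indicator route concrete, here is a rough sketch of the preprocessing I have in mind, written in plain Python outside vectorbt (the fill rule, names, and `ttl` cancellation are my own assumptions, not ADVSTEX behaviour): a limit buy signalled at bar `i` fills at the first later bar whose low trades through the limit price, and expires after `ttl` bars.

```python
def fill_limit_entries(entries, limit_prices, lows, ttl):
    """Turn raw entry signals into filled limit-order entries.

    entries: list[bool] of raw signals
    limit_prices: limit price attached to each signal bar
    lows: bar lows (a limit buy fills when low <= limit price)
    ttl: number of bars the order stays live before it is cancelled
    """
    filled = [False] * len(entries)
    for i, is_signal in enumerate(entries):
        if not is_signal:
            continue
        for j in range(i + 1, min(i + 1 + ttl, len(lows))):
            if lows[j] <= limit_prices[i]:  # first touch fills the order
                filled[j] = True
                break  # unfilled orders simply expire after ttl bars
    return filled


lows = [10.0, 9.8, 9.4, 9.6, 9.1]
entries = [True, False, False, False, False]
limits = [9.5, None, None, None, None]
print(fill_limit_entries(entries, limits, lows, ttl=3))
# [False, False, True, False, False]
```

The resulting boolean array could then be fed to ADVSTEX in place of the raw signals, though orders that never fill would still need separate bookkeeping if you want to report them.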
| closed | 2020-12-17T07:12:14Z | 2021-01-22T16:02:33Z | https://github.com/polakowo/vectorbt/issues/64 | [] | quanatee | 5 |
ranaroussi/yfinance | pandas | 1,329 | Exception: yfinance failed to decrypt Yahoo data response with hardcoded keys, contact developers | I use Python 3.8
Installed yfinance 0.2.4
I'm pulling the info of a ticker using this code:
`info = yf.Ticker(symbol).info`
And I'm getting this error message:
> Exception("Yahoo has again changed data format, yfinance now unsure which key(s) is for decryption: '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> , '80226cfb77c7'-><class 'str'> ,
And it continues with a ton of <class 'str'> like above.
Does someone know how to fix it?
**EDIT from @ValueRaider**
In case you're confused by title not matching this top post - I edited title to reflect the latest error message. Discussion below still very relevent.
EDIT: New discussion starting in #1407 | closed | 2023-01-23T20:11:55Z | 2023-02-10T18:07:16Z | https://github.com/ranaroussi/yfinance/issues/1329 | [] | DolevAlgam | 71 |
Miserlou/Zappa | flask | 1,598 | Tags support bug: Zappa attempts to override existing tags | <!--- Provide a general summary of the issue in the Title above -->
## Context
I'm running zappa with `"tags": { "sometag": "somevalue" }` specified and `s3_bucket` set to a bucket created by another CloudFormation stack.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
CloudFormation adds its own tags to each resource it creates, prefixed with `aws:`, for example `aws:cloudformation:stack-name`. Those tags cannot be removed or changed, but unfortunately Zappa attempts to override all tags on the S3 bucket with the ones specified in `tags` in `zappa_settings.json`
I believe the issue comes from this line: https://github.com/Miserlou/Zappa/blob/master/zappa/core.py#L930
Where Zappa simply tries to override all tags on the bucket, not taking into account any tags that already exist (specifically aws-specific tags).
This results in the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/zappa/cli.py", line 2693, in handle
sys.exit(cli.handle())
File "/usr/local/lib/python3.6/site-packages/zappa/cli.py", line 504, in handle
self.dispatch_command(self.command, stage)
File "/usr/local/lib/python3.6/site-packages/zappa/cli.py", line 551, in dispatch_command
self.update(self.vargs['zip'], self.vargs['no_upload'])
File "/usr/local/lib/python3.6/site-packages/zappa/cli.py", line 889, in update
success = self.zappa.upload_to_s3(self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
File "/usr/local/lib/python3.6/site-packages/zappa/core.py", line 930, in upload_to_s3
self.s3_client.put_bucket_tagging(Bucket=bucket_name, Tagging=tags)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidTag) when calling the PutBucketTagging operation: System Tags cannot be removed by requester
```
## Expected Behavior
<!--- Tell us what should happen -->
Update happens successfully.
## Actual Behavior
<!--- Tell us what happens instead -->
`Oh no! An error occurred! :(`
See traceback above.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Option 1: run `get_bucket_tagging()` first to fetch a list of tags starting with `aws:` and include them in `put_bucket_tagging()` (though AWS may still think you're trying to change their tags?)
Option 2: change default behaviour (or add option) and don't tag S3 bucket, since it's not a resource that's created and directly managed by Zappa.
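To make option 1 concrete, the merge could be factored into a pure helper over boto3's tag-set format (the function name is mine, and my caveat above still applies, since S3 may reject `aws:*` keys being written back):

```python
def merged_tag_set(existing, new_tags):
    """Keep the aws:* system tags already on the bucket and overlay the
    user's zappa_settings tags (replacing any previous user tags, which
    matches Zappa's current semantics).

    existing: boto3 tag set, i.e. [{"Key": ..., "Value": ...}, ...]
    new_tags: plain dict from zappa_settings "tags"
    """
    merged = {t["Key"]: t["Value"] for t in existing if t["Key"].startswith("aws:")}
    merged.update(new_tags)
    return [{"Key": k, "Value": v} for k, v in sorted(merged.items())]


existing = [
    {"Key": "aws:cloudformation:stack-name", "Value": "my-stack"},
    {"Key": "OldTag", "Value": "stale"},
]
print(merged_tag_set(existing, {"MyTag": "MyValue"}))
# [{'Key': 'MyTag', 'Value': 'MyValue'}, {'Key': 'aws:cloudformation:stack-name', 'Value': 'my-stack'}]
```

The result would then be passed as `Tagging={"TagSet": ...}` to `put_bucket_tagging`, assuming S3 tolerates the system tags being echoed back.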
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Create an S3 bucket with CloudFormation
2. Specify that bucket in your `zappa_settings.json` as well as `tags`.
3. Run `zappa deploy`
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.46.2
* Operating System and Python version: Mac OS X, Python 3.6.5
* The output of `pip freeze`: _none_
* Link to your project (optional): -
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "lambda_function.app",
"aws_region": "eu-west-1",
"project_name": "test1234",
"runtime": "python3.6",
"s3_bucket": "my_bucket_created_by_cloudformation",
"slim_handler": true,
"tags": {
"MyTag": "MyValue",
"AnotherTag": "AnotherValue"
}
}
}
```
| open | 2018-08-29T11:28:35Z | 2018-10-07T17:44:31Z | https://github.com/Miserlou/Zappa/issues/1598 | [] | paulina-mudano | 3 |
wagtail/wagtail | django | 12,558 | get_admin_default_ordering documentation | The get_admin_default_ordering method documentation lists available sort orders, but in practice, anything valid for `PageQuerySet.order_by` can be used.
### Pertinent section of the Wagtail docs
https://docs.wagtail.org/en/stable/reference/pages/model_reference.html#wagtail.models.Page.get_admin_default_ordering
### Details
I have tried sorting with other Page model fields, such as `-first_published_at`, using `F` expressions, and even fields from a specific page class, e.g.:
```python
def get_admin_default_ordering(self):
return ["-first_published_at"]
# return [F("first_published_at").desc(nulls_last=True)]
# return ["-blogentrypage__publish_date"]
```
I propose removing the phrase "The following sort orders are available:" from the documentation.
Perhaps adding something like, "The result of `get_admin_default_ordering` is passed to `PageQuerySet.order_by`."
| closed | 2024-11-09T09:33:01Z | 2024-11-11T07:37:12Z | https://github.com/wagtail/wagtail/issues/12558 | [
"Documentation"
] | bmihelac | 4 |
onnx/onnx | machine-learning | 6,578 | Protos documentation page | # Ask a Question
### Question
In the file `onnx/docs/docsgen/source/api/classes` is stated that sequences can contain sequences, maps or tensors but every operator that deals with sequences I have seen constrains the elements of a sequence to be of type tensor. The same goes for maps. Am I missing something?
### Further information
- Relevant Area: documentation
| open | 2024-12-06T20:01:57Z | 2024-12-06T20:30:43Z | https://github.com/onnx/onnx/issues/6578 | [
"question"
] | matscalia | 0 |
streamlit/streamlit | machine-learning | 10,026 | Supress multiselect "Remove an option first". Message displays immediately upon reaching max selection. | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
st.multiselect displays this message immediately upon reaching n = max_selections:

For example, if max_selection = 2, the user will make 1 selection, and then a 2nd selection - this message will pop up immediately upon making the 2nd selection.
### Why?
The current state of the function makes the user feel as though their 2nd selection is raising an error.
### How?
The message should only appear if the user is attempting to make n > max_selections.
### Additional Context
_No response_ | open | 2024-12-16T04:00:23Z | 2024-12-17T03:42:44Z | https://github.com/streamlit/streamlit/issues/10026 | [
"type:enhancement",
"feature:st.multiselect"
] | LarryLoveIV | 3 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 35 | arxiv_abstracts.txt contains only 100 distinct lines | Each line is repeated 72 times. I guess this will cripple the expected performance on the network as there is a really small number of distinct training samples. Is this by design for the course?
| open | 2017-06-27T02:37:01Z | 2017-07-11T21:05:26Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/35 | [] | eduardofv | 1 |
ni1o1/transbigdata | data-visualization | 65 | 计算过程中,数据发生变化 | 大家好,我在用transBigData 聚合集计栅格内数据量 的时候发现,先前都在0~1的数据,计算之后都自动变成1了,怎么在计算过程中保持数据不变呢?

| closed | 2022-11-13T12:43:49Z | 2022-11-23T07:05:17Z | https://github.com/ni1o1/transbigdata/issues/65 | [] | wangleping2000 | 1 |
ccxt/ccxt | api | 24,556 | Rate Limit field not properly defined in documentation | ### Operating System
_No response_
### Programming Languages
_No response_
### CCXT Version
_No response_
### Description
Looking at the chapter devoted to "rate limit" (https://docs.ccxt.com/#/README?id=rate-limit), the most important information is missing: what the `exchange.rateLimit` number actually means!
### Code
```
```
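For context while the docs are missing it: my understanding (worth confirming against the ccxt source) is that `exchange.rateLimit` is the minimum delay, in milliseconds, that the built-in throttler leaves between two consecutive requests. A sketch of that interpretation, with a dummy throttler instead of real HTTP calls:

```python
import time


def throttle(n_calls, rate_limit_ms):
    """Fire n_calls dummy 'requests', sleeping so that consecutive calls
    are at least rate_limit_ms apart, which is what exchange.rateLimit
    appears to drive inside ccxt's built-in throttler."""
    stamps = []
    for _ in range(n_calls):
        if stamps:
            wait = rate_limit_ms / 1000.0 - (time.monotonic() - stamps[-1])
            if wait > 0:
                time.sleep(wait)
        stamps.append(time.monotonic())
    return stamps


stamps = throttle(3, rate_limit_ms=50)
gaps_ms = [(b - a) * 1000 for a, b in zip(stamps, stamps[1:])]
print(all(g >= 49.9 for g in gaps_ms))  # True
```

If that reading is right, the documentation section should simply state it and mention that `enableRateLimit` is what turns the throttler on.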
| open | 2024-12-13T22:05:00Z | 2024-12-28T19:45:08Z | https://github.com/ccxt/ccxt/issues/24556 | [] | telenskyt | 2 |
tflearn/tflearn | data-science | 1,171 | OSS License compatibility question | There’s some possible confusion on the license of your repository when you combine other open-source code.
The module `tflearn/vendor/arg_scope.py` claims its license as **Apache-2.0**. However, the license of your whole project is shown as **the MIT license** in LICENSE, i.e., less strict than Apache-2.0 on license terms, which has impacted the whole license compatibility in your repository and may bring legal and financial risks.
You can select another proper license for your repository, or write a custom license with license exceptions if some license terms couldn’t be summed up consistently
| open | 2023-01-14T05:33:02Z | 2023-01-14T05:33:02Z | https://github.com/tflearn/tflearn/issues/1171 | [] | Ashley123456789 | 0 |
LAION-AI/Open-Assistant | machine-learning | 3,731 | Unable to create new chat | I'm unable to start a new conversation. Nothing happens when I hit the URL https://open-assistant.io/chat. I tried several browsers, believing the problem was with the browser settings, but it happened every time. | closed | 2023-11-15T09:40:35Z | 2023-11-25T07:28:20Z | https://github.com/LAION-AI/Open-Assistant/issues/3731 | [] | ac852321 | 6 |
pennersr/django-allauth | django | 3,380 | Intermittent login with Google errors | We are using allauth library to allow login via Google on our site. We did not override any allauth functionality, or change any of our code around logging in recently. The error message for the intermittent failures is:
```
"exception": "Error retrieving access token: b'{\\n \"error\": \"invalid_grant\",\\n \"error_description\": \"Bad Request\"\\n}'"
```
We are using django-allauth==0.52.0. It seems this is affecting around 30% of users attempting to login with Google.
Any help or additional context would be appreciated. | closed | 2023-08-16T21:26:43Z | 2023-10-11T15:30:52Z | https://github.com/pennersr/django-allauth/issues/3380 | [] | kate-skorija | 5 |
httpie/cli | api | 958 | request response time | I think it would be a good idea to have how long the response of a request took displayed in the output, this could be useful if comparing API's or testing the performance impact of a change to an API. with a little help I'd be willing to help out with this | closed | 2020-07-23T19:19:08Z | 2022-01-24T12:14:59Z | https://github.com/httpie/cli/issues/958 | [] | s1ntaxe770r | 2 |
lanpa/tensorboardX | numpy | 514 | Question in projecting an embedding with different labels | When I projecting an embedding with different labels, for example:
```python
writer.add_embedding(same_embedding, labels_str_two,
tag=f'labels_str_two')
writer.add_embedding(same_embedding, labels_str_one, tag='labels_str_one')
```
I got two different pictures, as shown below. So why are the relative distances between points different when projecting the same embedding with different labels?


| open | 2019-10-03T12:15:55Z | 2019-10-05T17:36:11Z | https://github.com/lanpa/tensorboardX/issues/514 | [] | heslowen | 5 |
harry0703/MoneyPrinterTurbo | automation | 422 | Unable to pull docker image | How did everyone solve this?
```
(base) ➜  MoneyPrinterTurbo git:(main) docker-compose up
Building webui
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon  214.5MB
Step 1/11 : FROM python:3.10-slim-bullseye
3.10-slim-bullseye: Pulling from library/python
f7b75fe1f735: Retrying in 1 second
6fb769904474: Retrying in 1 second
710493390cdc: Retrying in 1 second
750dde19623c: Waiting
96feefb6843c: Waiting
error pulling image configuration: download failed after attempts=6: dial tcp 108.160.162.104:443: i/o timeout
ERROR: Service 'webui' failed to build : Build failed
```
 | closed | 2024-06-22T02:54:24Z | 2024-06-26T03:14:04Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/422 | [] | xushijie | 0 |
plotly/dash | data-visualization | 2,653 | dcc.Dropdown has inconsistent layout flow with other common input components | **Describe your context**
```
dash 2.13.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
Many of the go-to Dash Core Components have the CSS style `display` set to `inline-block`, with a notable exception of `dcc.Dropdown`. This means that without any custom styles, the dropdown component has inconsistent layout flow compared to other input controls it's likely to be found with. I know I can style the component to fix the issue, but this seems like overkill for simple demos where you just need some controls next to each other, and would be confusing for people getting started with Dash.
Here's an example:
```python
from dash import Dash, dcc, html
app = Dash(__name__)
app.layout = html.Div(
children=[
html.Label("Dropdown"),
dcc.Dropdown(),
dcc.DatePickerRange("DatePickerRange"),
html.Label("DatePickerSingle"),
dcc.DatePickerSingle(),
html.Label("Input"),
dcc.Input(),
],
)
app.run(port=8050)
```
Which gives the this layout:

This inconsistent flow layout that comes out of the box is too jarring, even for throwaway demo code, so I inevitably end up adding manual styling just for that one component to normalise things a bit:
```python
app2 = Dash(__name__)
app2.layout = html.Div(
children=[
html.Label("Dropdown"),
dcc.Dropdown(
style={
"display": "inline-block",
"width": 300,
"vertical-align": "middle",
}
),
dcc.DatePickerRange("DatePickerRange"),
html.Label("DatePickerSingle"),
dcc.DatePickerSingle(),
html.Label("Input"),
dcc.Input(),
],
)
app2.run(port=8051)
```

I'm wondering if there would be any appetite for trying to normalise the layout flow for `dcc.Dropdown`? I know there's the impact on the many existing Dash apps out there to consider, but I do think it would make for a better experience, also for people getting started with Dash too.
| open | 2023-10-05T15:35:07Z | 2024-08-13T19:38:28Z | https://github.com/plotly/dash/issues/2653 | [
"bug",
"P3"
] | ned2 | 0 |
axnsan12/drf-yasg | django | 521 | Question: why is only JSON allowed? | Even though DRF is configured to allow json and form data, this library only shows JSON as the only available request type. | closed | 2020-01-04T16:54:57Z | 2020-02-17T01:08:08Z | https://github.com/axnsan12/drf-yasg/issues/521 | [] | cristianocca | 1 |
sanic-org/sanic | asyncio | 2,757 | Circular import in target file accidentally triggers 'No module named ... found' | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
While developing it appears that I accidentally caused a circular import which Python traces back to starting in the file I have my app in. As a result, Python outputs an error such as the following:
```
ImportError: cannot import name 'constants' from partially initialized module 'app' (most likely due to a circular import) (/api/app/__init__.py)
```
In this case my module I pass to the sanic server is `app:app`, from within `/api`.
### Code snippet
_No response_
### Expected Behavior
I had this in the back of my mind the entire time, but found it very difficult to troubleshoot due to Sanic swallowing the error. As a result I ended up down the rabbit hole of suspecting accidental breaking changes and tried commenting out different changes. An hour later I finally found the offending import.
It would help if Sanic continued to output the specific import error, on the off-chance that it isn't an incorrectly setup module. The alternative would be to use more fine-grained `importlib` and manually call some functions rather than use their help functions. As a result there should be a different call which finds the file (an `ImportError` here hints at an incorrectly setup module), than the one which loads it (user error).
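A sketch of that split using only the standard library (this is not Sanic's actual loader code; the names are illustrative): `importlib.util.find_spec` answers "does the module exist?" separately from executing it, so the user's own `ImportError` can propagate untouched:

```python
import importlib
import importlib.util


def load_target(module_name):
    # Step 1: locate only. A None spec really does mean "no module named X".
    if importlib.util.find_spec(module_name) is None:
        raise ModuleNotFoundError(f"No module named {module_name} found.")
    # Step 2: execute. An ImportError raised here comes from the target's own
    # code (e.g. a circular import) and should be reported verbatim.
    return importlib.import_module(module_name)


print(load_target("json").__name__)  # json
```

With this structure, a circular import inside the app module surfaces with its original traceback instead of being misreported as a missing module.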
### How do you run Sanic?
Sanic CLI
### Operating System
Windows (Docker, Python:3.11)
### Sanic Version
23.3
### Additional context
_No response_ | closed | 2023-06-02T15:50:34Z | 2023-07-05T11:38:16Z | https://github.com/sanic-org/sanic/issues/2757 | [
"help wanted",
"beginner",
"feature request"
] | Bluenix2 | 0 |
littlecodersh/ItChat | api | 1,030 | This project no longer works; use gewechat/openwechat instead | Free iPad-protocol solution (recommended): https://github.com/Devo919/Gewechat
Free hook solution: https://github.com/eatmoreapple/openwechat (will kick the desktop client offline)
Take these if you need them. Everyone posting ads below is a scammer or reseller; don't engage with them. | open | 2024-11-30T09:23:55Z | 2025-02-08T00:53:24Z | https://github.com/littlecodersh/ItChat/issues/1030 | [] | ShanHaiYKP | 3 |
browser-use/browser-use | python | 481 | Tor support | ### Problem Description
Sometimes Chromium-based browsers have a lot of issues, and some extensions are now limited due to Manifest V3.
Also, without the Tor network, we often hit rate limits with Google search or public APIs.
### Proposed Solution
Instead of using playwright, switching to selenium would probably offer an easier integration of a firefox-based browser.
I'd love to know your thoughts on a firefox and tor integration, and I might contribute if that's something that interests you.
Cheers
### Alternative Solutions
_No response_
### Additional Context
_No response_ | closed | 2025-01-30T15:58:30Z | 2025-02-21T21:31:10Z | https://github.com/browser-use/browser-use/issues/481 | [
"enhancement"
] | gotyer | 1 |
LibreTranslate/LibreTranslate | api | 354 | ERROR: The Compose file './docker-compose.cuda.yml' is invalid | After a fresh git clone, when running
`docker-compose -f docker-compose.cuda.yml up -d --build`
I get
```
ERROR: The Compose file './docker-compose.cuda.yml' is invalid because:
services.libretranslate-cuda.deploy.resources.reservations value Additional properties are not allowed ('devices' was unexpected)
```
 | closed | 2022-12-02T22:26:17Z | 2022-12-11T18:44:27Z | https://github.com/LibreTranslate/LibreTranslate/issues/354 | [] | Athanaze | 2 |
plotly/dash | flask | 3,132 | `jupyter_mode='external'` doesn't work with `use_pages=True` | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.18.1
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash-colorscales 0.0.4
dash-core-components 2.0.0
dash_cytoscape 1.0.2
dash_daq 0.5.0
dash_html_components 2.0.0
dash_renderer 1.9.1
dash-table 5.0.0
pydash 6.0.2
```
**Describe the bug**
Using both `jupyter_mode='external'` and 'use_pages=True` fails on the `__main__` module lookup when ran inside JupyterLab.
```
File /opt/conda/lib/python3.12/site-packages/dash/dash.py:659, in Dash.init_app(self, app, **kwargs)
656 self._setup_routes()
658 _get_app.APP = self
--> 659 self.enable_pages()
661 self._setup_plotlyjs()
File /opt/conda/lib/python3.12/site-packages/dash/dash.py:2189, in Dash.enable_pages(self)
2187 return
2188 if self.pages_folder:
-> 2189 _import_layouts_from_pages(self.config.pages_folder)
2191 @self.server.before_request
2192 def router():
2193 if self._got_first_request["pages"]:
File /opt/conda/lib/python3.12/site-packages/dash/_pages.py:441, in _import_layouts_from_pages(pages_folder)
438 if "register_page" not in content:
439 continue
--> 441 module_name = _infer_module_name(page_path)
442 spec = importlib.util.spec_from_file_location(module_name, page_path)
443 page_module = importlib.util.module_from_spec(spec)
File /opt/conda/lib/python3.12/site-packages/dash/_pages.py:109, in _infer_module_name(page_path)
106 parent_module = _path_to_module_name(parent_path)
108 module_name = f"{parent_module}.{module}"
--> 109 if _module_name_is_package(CONFIG.name):
110 # Only prefix with CONFIG.name when it's an imported package name
111 module_name = f"{CONFIG.name}.{module_name}"
112 return module_name
File /opt/conda/lib/python3.12/site-packages/dash/_pages.py:90, in _module_name_is_package(module_name)
87 def _module_name_is_package(module_name):
88 return (
89 module_name in sys.modules
---> 90 and Path(sys.modules[module_name].__file__).name == "__init__.py"
91 )
AttributeError: module '__main__' has no attribute '__file__'
```
**Expected behavior**
It works similar to how `use_pages=True` works outside of Jupyter. Not sure how correct the fix is, but I am using this monkey-patch to work around it:
```
from dash import _pages
def _hacked_module_name_is_package(module_name):
return (
module_name in sys.modules
and hasattr(sys.modules[module_name], "__file__")
and Path(sys.modules[module_name].__file__).name == "__init__.py"
)
_pages._module_name_is_package = _hacked_module_name_is_package
```
| open | 2025-01-24T20:23:32Z | 2025-02-03T17:33:59Z | https://github.com/plotly/dash/issues/3132 | [
"bug",
"P2"
] | Aleksei-Poliakov | 0 |
dask/dask | scikit-learn | 10,999 | TypeError: float() argument must be a string or a real number, not 'csr_matrix' | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
During a NLP task to generate a cluster of 6 Million documents with HDBSCAN, dask is not capable of executing the clustering task. It returns the message
```python
TypeError: float() argument must be a string or a real number, not 'csr_matrix'
```
I tried to use the full dataset, but my server runs out of memory, so I used
```python
X_per = da.from_array(X).persist()
```
to load the data and try to get some clusters out. The following example is based in part on the example given in
https://examples.dask.org/machine-learning/text-vectorization.html. The example provided works fine in my environment, but the clustering task does not.
**Minimal Complete Verifiable Example**:
```python
import pandas as pd
fulldf=pd.DataFrame({"_id":{"0":"GSf83ngBH54tuwn8_T8k","1":"Gif83ngBH54tuwn8_T8k","2":"Hyf83ngBH54tuwn8_T8k","3":"Jyf83ngBH54tuwn8_T8k","4":"KSf83ngBH54tuwn8_T8k","5":"Nyf83ngBH54tuwn8_T8k","6":"OSf83ngBH54tuwn8_T8k","7":"Oyf83ngBH54tuwn8_T8k","8":"Pif83ngBH54tuwn8_T8k","9":"Pyf83ngBH54tuwn8_T8k","10":"RCf83ngBH54tuwn8_T8k","11":"Tyf83ngBH54tuwn8_T8k","12":"UCf83ngBH54tuwn8_T8k","13":"USf83ngBH54tuwn8_T8k","14":"Uyf83ngBH54tuwn8_T8k","15":"VCf83ngBH54tuwn8_T8k","16":"WCf83ngBH54tuwn8_T8k","17":"WSf83ngBH54tuwn8_T8k","18":"Wyf83ngBH54tuwn8_T8k","19":"YCf83ngBH54tuwn8_T8k"},"origphrase":{"0":"23 DOSAGE FORMS, COMPOSITION AND PACKAGING ..........................................","1":"24 PART II: SCIENTIFIC INFORMATION ..........................................................................","2":"38 REFERENCES ...........................................................................................................","3":"Oral Delayed","4":"Tablet \\/ 40 mg esomeprazole Crospovidone, hypromellose, hydroxypropyl cellulose, glyceryl monostearate, macrogol, magnesium stearate,","5":"reflux esophagitis ","6":"Co-administration with rilprivirine is contraindicated.","7":"In the presence of any alarm symptom","8":"Pseudomembranous colitis has been reported with nearly all antibacterial agents, including clarithromycin and amoxicillin, and may range in severity from mild to life threatening.","9":"Therefore, it is important to consider this diagnosis in patients who present with diarrhea subsequent to the administration of antibacterial agents.","10":"In moderate to severe cases, consideration should be given to management with fluids and electrolytes, protein supplementation, and treatment with an antibacterial drug clinically effective against Clostridium difficile colitis.","11":"loading dose\\/75mg daily maintenance dose and esomeprazole 40 mg once daily resulting in decreased exposure to the active metabolite of clopidogrel by an average of 40%, and resulting in decreased maximum inhibition of ADP induced platelet aggregation by an average of 14%.","12":"Based on these data, concomitant use of esomeprazole and clopidogrel should be avoided see DRUG INTERACTIONS.","13":"Concomitant use of Proton Pump Inhibitors PPIs with Methotrexate:","14":"A temporary withdrawal of the PPI may be considered in some patients receiving treatments with high dose methotrexate see DRUG INTERACTIONS.","15":"Carcinogenesis and Mutagenesis Long-term toxicity studies of omeprazole, revealed the gastric mucosa as the target organ.","16":"No ECL-cell carcinoids were identified in the carcinogenicity study in mice or in long-term up to 7 years general toxicity studies in dogs.","17":"A vast number of studies have revealed that pronounced and sustained hypergastrinemia is the mechanism behind the development of the gastric ECL-cell carcinoids in the rat.","18":"Partial fundectomy in rats results in hypergastrinemia and gastric ECL-cell carcinoids in the remaining part of the fundic mucosa, towards the end of the rats\\u2019 life span.","19":"The effect of esomeprazole on serum gastrin concentrations was evaluated in approximately 2,700 patients in clinical trials up to 8 weeks and in over 1,300 patients for up to 6-12 months daily doses of either 20 or 40 mg."},"pm":{"0":59893,"1":59893,"2":59893,"3":59893,"4":59893,"5":59893,"6":59893,"7":59893,"8":59893,"9":59893,"10":59893,"11":59893,"12":59893,"13":59893,"14":59893,"15":59893,"16":59893,"17":59893,"18":59893,"19":59893}})
import dask
from dask_ml.feature_extraction.text import CountVectorizer
import dask.bag as db
import sparse
from sklearn.cluster import HDBSCAN
from sklearn.feature_extraction.text import HashingVectorizer
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
import joblib
import sklearn.pipeline
fulldf.origphrase = fulldf.origphrase.astype(str)
cluster = LocalCUDACluster()
client = Client(cluster)
#vectorizer = CountVectorizer()
vectorizer = HashingVectorizer() #TfidfVectorizer(stop_words='english')
corpus = db.from_sequence(fulldf.origphrase , npartitions=500)
X = vectorizer.fit_transform(corpus)
X_per = da.from_array(X).persist()
clus = HDBSCAN(min_cluster_size=20)
with joblib.parallel_backend('dask'):
    yhat = clus.fit_predict(X_per)  # ERROR happens in this line
```
**Anything else we need to know?**:
The error seems related to the fact that I pass a dask array as the parameter.
```python-traceback
TypeError: float() argument must be a string or a real number, not 'csr_matrix'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/erico/lab/packages_dask/sklearn/cluster/_hdbscan/hdbscan.py", line 921, in fit_predict
self.fit(X)
File "/home/erico/lab/packages_dask/sklearn/base.py", line 1474, in wrapper
return fit_method(estimator, *args, **kwargs)
File "/home/erico/lab/packages_dask/sklearn/cluster/_hdbscan/hdbscan.py", line 721, in fit
X = self._validate_data(
File "/home/erico/lab/packages_dask/sklearn/base.py", line 633, in _validate_data
out = check_array(X, input_name="X", **check_params)
File "/home/erico/lab/packages_dask/sklearn/utils/validation.py", line 997, in check_array
array = _asarray_with_order(array, order=order, dtype=dtype, xp=xp)
File "/home/erico/lab/packages_dask/sklearn/utils/_array_api.py", line 521, in _asarray_with_order
array = numpy.asarray(array, order=order, dtype=dtype)
ValueError: setting an array element with a sequence.
```
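A minimal sketch isolating the suspected cause (the chunk contents and shapes here are invented for illustration): calling `numpy.asarray` on a sequence of `csr_matrix` chunks, as sklearn's `check_array` does internally, fails with the same kind of error, whereas stacking the chunks into a single scipy sparse matrix first gives a proper 2-D shape:

```python
import numpy as np
import scipy.sparse as sp

# Two sparse chunks, similar to what HashingVectorizer produces per partition.
chunks = [sp.csr_matrix(np.eye(2)), sp.csr_matrix(np.ones((2, 2)))]

# sklearn's check_array ultimately calls numpy.asarray with a float dtype;
# each element is a csr_matrix object, not a scalar, so the conversion fails.
try:
    np.asarray(chunks, dtype=np.float64)
    conversion_failed = False
except (TypeError, ValueError):
    conversion_failed = True

# Stacking the chunks into one scipy sparse matrix first produces a single
# (n_samples, n_features) object that downstream validation can handle.
X_local = sp.vstack(chunks)
```

If that is indeed the cause, one workaround that may be worth trying is materializing the vectorized data into a single matrix before clustering (e.g. something along the lines of `sp.vstack` over the computed chunks), at the cost of giving up the distributed-memory benefits.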
**Environment**:
- Dask version: '2024.1.1'
- Python version: Python 3.10.13 (main, Oct 4 2023, 08:39:12) [GCC 8.3.0] on linux
- Operating System:``` Linux HRE-GPU-DRONE 5.15.64-1-pve #1 SMP PVE 5.15.64-1 (Thu, 13 Oct 2022 10:30:34 +0200) x86_64 GNU/Linux```
- Install method (conda, pip, source): pip
| closed | 2024-03-13T13:39:14Z | 2024-03-14T13:18:41Z | https://github.com/dask/dask/issues/10999 | [
"needs triage"
] | erico-imgproj | 1 |
plotly/dash | plotly | 3,186 | Make sure that external_stylesheets style doesn't apply to the dev tools | See #3185 for an example of an issue that arises when external stylesheets are loaded and can override the css in dev tools | open | 2025-02-24T21:43:35Z | 2025-02-28T17:39:35Z | https://github.com/plotly/dash/issues/3186 | [
"bug",
"P1",
"cs"
] | marthacryan | 2 |
blb-ventures/strawberry-django-plus | graphql | 157 | Cursor Pagination without relay | Hello and merry xmas.
I just finished an implementation of cursor pagination for a non-relay context, and I encountered a couple of issues I would like to ask about, perhaps getting a few pointers on a better implementation. Pagination was implemented using a modified version of [django-cursor-pagination](https://github.com/photocrowd/django-cursor-pagination): subclassing `StrawberryDjangoField`, implementing a generic type for the pagination result, and adding a custom `field` function that returns an instance of the subclassed `StrawberryDjangoField` and passes options to it.
**Field Implementation:**
```python
@strawberry.type
class PageInfo:
    has_next_page: bool
    has_previous_page: bool
    start_cursor: Optional[str]
    end_cursor: Optional[str]


E = TypeVar("E")


@strawberry.type
class CursorPaginatedList(Generic[E]):
    page_info: PageInfo
    items: List[E]


@strawberry.input
class CursorPaginationInput:
    first: Optional[int] = None
    last: Optional[int] = None
    before: Optional[str] = None
    after: Optional[str] = None


class CursoredGenericDjangoField(_StrawberryDjangoField):
    def __init__(self, cursor_ordering=UNSET, **kwargs):
        self.cursor_pagination = kwargs.pop('cursor_pagination', False)
        # remove other pagination and order if defined
        if self.cursor_pagination:
            kwargs.pop('order')
            kwargs.pop('pagination')
        super().__init__(**kwargs)

    @property
    def arguments(self) -> List[StrawberryArgument]:
        arguments = []
        if self.cursor_pagination:
            arguments.append(argument("cursor_pagination", CursorPaginationInput))
            arguments.append(argument("cursor_ordering", Optional[str]))
        return super().arguments + arguments

    @property
    def is_list(self):
        if self.cursor_pagination is UNSET or not self.cursor_pagination:
            return super().is_list
        return True

    @property
    def type(self) -> Union[StrawberryType, type]:
        return super().type

    @type.setter
    def type(self, type_: Any) -> None:
        super(CursoredGenericDjangoField, self.__class__).type.fset(self, type_)
        # store type and inner type to use in the return of get_result, and so
        # get_queryset can grab any get_queryset overload on the inner type definition
        if type_ is not None and self.cursor_pagination:
            self.paginated_type = type_
            self.inner_type = typing.get_args(type_)[0]

    # resolvers.resolve_result returns a coroutine which needs to be awaited so we
    # can get a list and slice it, so this needs to be async.
    # Could not figure out a better way to do this.
    async def rewrap_result(self, result, **kwargs):
        qls = list(await result)
        cp = kwargs.get('cursor_pagination', None)
        if cp and len(qls):
            page = self.paginator.page_from_list(qls, first=cp.first, last=cp.last, after=cp.after, before=cp.before)
            pi = PageInfo(
                has_next_page=page.has_next,
                has_previous_page=page.has_previous,
                start_cursor=page.paginator.cursor(page.items[0]),
                end_cursor=page.paginator.cursor(page.items[-1]),
            )
            res = self.paginated_type(
                page_info=pi,
                items=page.items,
            )
        else:
            pi = PageInfo(
                has_next_page=False,
                has_previous_page=False,
                start_cursor=None,
                end_cursor=None,
            )
            res = self.paginated_type(
                page_info=pi,
                items=qls,
            )
        return res

    def get_result(
        self,
        source: Optional[models.Model],
        info: Info,
        args: List[Any],
        kwargs: Dict[str, Any],
    ) -> Union[Awaitable[Any], Any]:
        if self.cursor_pagination:
            return self.rewrap_result(super().get_result(source, info, args, kwargs), **kwargs)
        return super().get_result(source, info, args, kwargs)

    def paginate(self, qs, **kwargs):
        cp = kwargs.get('cursor_pagination')
        order = kwargs.get('cursor_ordering', 'id')
        self.paginator = CursorPaginator(qs, ordering=(order, '-id'))
        # Had to modify the paginator to split its original page method into two
        # separate methods, one adding filters to the queryset and another to slice
        # the result, because resolvers.resolve_result returns a coroutine and not
        # a queryset
        page = self.paginator.page_queryset(first=cp.first, after=cp.after, last=cp.last, before=cp.before)
        return page

    def get_queryset(self, queryset, info, order=UNSET, **kwargs):
        if self.cursor_pagination:
            inner_get_queryset = getattr(self.inner_type, "get_queryset", None)
            # Apply the inner get_queryset defined on the paginated type
            if inner_get_queryset:
                queryset = inner_get_queryset(queryset, info, **kwargs)
            # Get the queryset from super now so we get filtering on it
            queryset = super().get_queryset(queryset, info, order=order, **kwargs)
            # Apply pagination and ordering, or just ordering based on one
            # unique field as required for cursor pagination
            if kwargs.get('cursor_pagination', None):
                queryset = self.paginate(queryset, **kwargs)
            else:
                ob = kwargs.get('cursor_ordering', None)
                if ob:
                    queryset = queryset.order_by(ob)
            return queryset
        return super().get_queryset(queryset, info, order=order, **kwargs)


# Re-defining the field function would not be necessary if:
#  1) any number of extra params could be passed to the constructor of the field class
#  2) a field_cls parameter allowed selecting which class gets returned, like with type
def field(
    resolver=None,
    *,
    name: Optional[str] = None,
    field_name: Optional[str] = None,
    is_subscription: bool = False,
    description: Optional[str] = None,
    permission_classes: Optional[List[Type[BasePermission]]] = None,
    deprecation_reason: Optional[str] = None,
    default: Any = dataclasses.MISSING,
    default_factory: Union[Callable[..., object], object] = dataclasses.MISSING,
    metadata: Optional[Mapping[Any, Any]] = None,
    directives: Optional[Sequence[object]] = (),
    pagination: Optional[bool] = UNSET,
    filters: Optional[type] = UNSET,
    order: Optional[type] = UNSET,
    only: Optional[TypeOrSequence[str]] = None,
    select_related: Optional[TypeOrSequence[str]] = None,
    prefetch_related: Optional[TypeOrSequence[PrefetchType]] = None,
    disable_optimization: bool = False,
    init: Literal[True, False, None] = None,
    # This was added to allow cursor_pagination=True (or any other parameter we may
    # require in other subclasses of StrawberryDjangoField) to reach the instance
    **kwargs,
) -> Any:
    f = CursoredGenericDjangoField(
        python_name=None,
        django_name=field_name,
        graphql_name=name,
        type_annotation=None,
        description=description,
        is_subscription=is_subscription,
        permission_classes=permission_classes or [],
        deprecation_reason=deprecation_reason,
        default=default,
        default_factory=default_factory,
        metadata=metadata,
        directives=directives,
        filters=filters,
        pagination=pagination,
        order=order,
        only=only,
        select_related=select_related,
        prefetch_related=prefetch_related,
        disable_optimization=disable_optimization,
        **kwargs,
    )
    if resolver:
        f = f(resolver)
    return f
```
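As an aside on the `type` setter above: the inner-type bookkeeping (plus a subclass check) can be sketched in isolation with `typing.get_origin`/`typing.get_args`. The `CursorPaginatedList` below is a plain stand-in for the strawberry type of the same name, just to keep the sketch self-contained:

```python
from typing import Generic, List, Optional, TypeVar, get_args, get_origin

E = TypeVar("E")


class CursorPaginatedList(Generic[E]):
    """Plain stand-in for the @strawberry.type of the same name."""


def inner_type(type_) -> Optional[type]:
    # Parameterized generics like CursorPaginatedList[SomeType] are not classes,
    # so issubclass() would raise TypeError on them; unwrap with get_origin() first.
    origin = get_origin(type_) or type_
    if isinstance(origin, type) and issubclass(origin, CursorPaginatedList):
        args = get_args(type_)
        return args[0] if args else None
    return None
```

`inner_type(CursorPaginatedList[int])` returns `int`, while non-paginated annotations such as `List[int]` return `None`, so the same helper doubles as a way to detect paginated types without an explicit flag.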
**Usage:**
```python
@strawberry.type
class Query:
    paginated_list: module.CursorPaginatedList[SomeType] = module.field(cursor_pagination=True, filters=SomeFilter)
```
Alternatively, `CursorPaginatedList` could be subclassed to add more fields to it.
**Issues:**
1. Is there any way to make the `field` function in this package more "generic", so it can instantiate any subclass of `StrawberryDjangoField`, passing it any number of extra arguments and leaving it to the user to pop them so they don't get passed down the inheritance chain?
2. `resolvers.resolve_result` returns a coroutine, which forced me to use the workarounds outlined in the comments and to modify the paginator, splitting the filtering of the queryset from the slicing of the result. Can you suggest any workaround for this?
3. My initial instinct was to detect whether the type is a subclass of `CursorPaginatedList`, rather than having to pass `cursor_pagination=True` to the constructor, but I could not find a way to detect the type within `__init__` in order to disable the default order and pagination. I also couldn't find a reliable way to detect whether a type is a subclass, for that matter. Any suggestions?
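Regarding question 2, one pattern that might avoid both the forced `async def` and the paginator split is to branch on `inspect.isawaitable` and only create a coroutine when upstream actually handed one back. This is a sketch under the assumption that the post-processing itself is synchronous; it has not been verified against strawberry's internals:

```python
import asyncio
import inspect


def rewrap(result, postprocess):
    # If upstream returned an awaitable (as resolvers.resolve_result can),
    # await it inside a small coroutine; otherwise post-process directly.
    if inspect.isawaitable(result):
        async def _wrapped():
            return postprocess(await result)
        return _wrapped()
    return postprocess(result)


# Synchronous path: no event loop involved.
sync_out = rewrap([3, 1, 2], sorted)


async def _fake_resolver():
    return [3, 1, 2]

# Asynchronous path: the caller awaits (or runs) the returned coroutine as usual.
async_out = asyncio.run(rewrap(_fake_resolver(), sorted))
```

With something like this, `get_result` could return either a plain value or an awaitable, matching whatever the resolver produced.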
**Enhancements:**
1. While reading through the code to get hints for my implementation, I noticed that it's very tightly coupled with the relay implementation, which made it rather hard to figure out which parts needed to be re-implemented. Perhaps the relay implementation should be completely separate?
2. `StrawberryDjangoField` is a subclass of `strawberry_django.field.StrawberryDjangoField`, which uses multiple inheritance over the pagination, filter and ordering classes. It would be nice if a bare, minimally working version of `StrawberryDjangoField` were provided, so a user can do their own composition on the field, choosing to use, not use, or perhaps replace some of these classes.
Thanks in advance for any comments or suggestions you may have on this. | closed | 2022-12-25T21:01:12Z | 2023-06-15T21:35:30Z | https://github.com/blb-ventures/strawberry-django-plus/issues/157 | [
"question"
] | m4riok | 2 |